Data mining with sparse grids using simplicial basis functions

ABSTRACT
Recently we presented a new approach [18] to the classification problem arising in data mining. It is based on the regularization network approach but, in contrast to other methods which employ ansatz functions associated to data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [49]. Thus, only $O(h_n^{-1} n^{d-1})$ instead of $O(h_n^{-d})$ grid points and unknowns are involved. Here d denotes the dimension of the feature space and $h_n = 2^{-n}$ gives the mesh size. We use the sparse grid combination technique [28] where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows us to handle more dimensions and the algorithm needs fewer operations per data point. We describe the sparse grid combination technique for the classification problem, give implementational details and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. Finally we report on the quality of the classifier built by our new method on data sets with up to 10 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods.

1. INTRODUCTION
Data mining is the process of finding patterns, relations and trends in large data sets. Examples range from scientific applications like the post-processing of data in medicine or the evaluation of satellite pictures to financial and commercial applications, e.g. the assessment of credit risks or the selection of customers for advertising campaign letters. For an overview on data mining and its various tasks and approaches see [5, 12].
In this paper we consider the classification problem arising in data mining. Given is a set of data points in a d-dimensional feature space together with a class label. From this data, a classifier must be constructed which allows us to predict the class of any newly given data point for future decision making. Widely used approaches are, among others, decision tree induction, rule learning, adaptive multivariate regression splines, neural networks, and support vector machines. Interestingly, some of these techniques can be interpreted in the framework of regularization networks [21]. This approach allows a direct description of the most important neural networks and it also allows for an equivalent description of support vector machines and n-term approximation schemes [20]. Here, the classification of data is interpreted as a scattered data approximation problem with certain additional regularization terms in high-dimensional spaces.
In [18] we presented a new approach to the classification problem. It is also based on the regularization network approach but, in contrast to the other methods which employ mostly global ansatz functions associated to data points, we use an independent grid with associated local ansatz functions in the minimization process. This is similar to the numerical treatment of partial differential equations. Here, a uniform grid would result in $O(h_n^{-d})$ grid points, where d denotes the dimension of the feature space and $h_n = 2^{-n}$ gives the mesh size. Therefore the complexity of the problem would grow exponentially with d and we encounter the curse of dimensionality. This is probably the reason why conventional grid-based techniques have not been used in data mining up to now.
However, there is the so-called sparse grids approach which makes it possible to cope with the complexity of the problem to some extent. This method has been originally developed for the solution of partial differential equations [2, 8, 28, 49] and is now also used successfully for integral equations [14, 27], interpolation and approximation [3, 26, 39, 42], eigenvalue problems [16] and integration problems [19]. In the information based complexity community it is also known as 'hyperbolic cross points' and the idea can even be traced back to [41]. For a d-dimensional problem, the sparse grid approach employs only $O(h_n^{-1} n^{d-1})$ grid points in the discretization. The accuracy of the approximation however is nearly as good as for the conventional full grid methods, provided that certain additional smoothness requirements are fulfilled. Thus a sparse grid discretization method can also be employed for higher-dimensional problems. The curse of dimensionality of conventional 'full' grid methods affects sparse grids much less.
In this paper, we apply the sparse grid combination technique [28] to the classification problem. For that the regularization network problem is discretized and solved on a certain sequence of conventional grids with uniform mesh sizes in each coordinate direction. In contrast to [18], where d-linear functions stemming from a tensor-product approach were used, we now apply linear basis functions based on a simplicial discretization. In comparison, this approach allows the processing of more dimensions and needs fewer operations per data point. The sparse grid solution is then obtained from the solutions on the different grids by linear combination. Thus the classifier is built on sparse grid points and not on data points. A discussion of the complexity of the method shows that the method scales linearly with the number of instances, i.e. the amount of data to be classified. Therefore, our method is well suited for realistic data mining applications where the dimension of the feature space is moderately high (e.g. after some preprocessing steps) but the amount of data is very large. Furthermore the quality of the classifier built by our new method seems to be very good. Here we consider standard test problems from the UCI repository and problems with huge synthetic data sets in up to 10 dimensions. It turns out that our new method achieves correctness rates which are competitive with those of the best existing methods. Note that the combination method is simple to use and can be parallelized in a natural and straightforward way.
The remainder of this paper is organized as follows: In Section 2 we describe the classification problem in the framework of regularization networks as minimization of a (quadratic) functional. We then discretize the feature space and derive the associated linear problem. Here we focus on grid-based discretization techniques. Then, we introduce the sparse grid combination technique for the classification problem and discuss its properties. Furthermore, we present a new variant based on a discretization by simplices and discuss complexity aspects. Section 3 presents the results of numerical experiments conducted with the sparse grid combination method, demonstrates the quality of the classifier built by our new method and compares the results with the ones from [18] and with the ones obtained with different forms of SVMs [33]. Some final remarks conclude the paper.
2. THE PROBLEM
Classification of data can be interpreted as a traditional scattered data approximation problem with certain additional regularization terms. In contrast to conventional scattered data approximation applications, we now encounter quite high-dimensional spaces. To this end, the approach of regularization networks [21] gives a good framework. This approach allows a direct description of the most important neural networks and it also allows for an equivalent description of support vector machines and n-term approximation schemes [20].

Consider the given set of already classified data (the training set)

$S = \{(x_i, y_i) \in \mathbb{R}^d \times \mathbb{R}\}_{i=1}^{M}.$

Assume now that these data have been obtained by sampling of an unknown function f which belongs to some function space V defined over $\mathbb{R}^d$. The sampling process was disturbed by noise. The aim is now to recover the function f from the given data as well as possible. This is clearly an ill-posed problem since there are infinitely many possible solutions. To get a well-posed, uniquely solvable problem we have to assume further knowledge on f. To this end, regularization theory [43, 47] imposes an additional smoothness constraint on the solution of the approximation problem, and the regularization network approach considers the variational problem
$\min_{f \in V} R(f)$

with

$R(f) = \frac{1}{M} \sum_{i=1}^{M} C(f(x_i), y_i) + \lambda \Phi(f). \qquad (1)$

Here, $C(\cdot, \cdot)$ denotes an error cost function which measures the interpolation error and $\Phi(f)$ is a smoothness functional which must be well defined for $f \in V$. The first term enforces closeness of f to the data, the second term enforces smoothness of f, and the regularization parameter $\lambda$ balances these two terms. Typical examples are

$C(x, y) = |x - y|$ or $C(x, y) = (x - y)^2$

and

$\Phi(f) = \|P f\|_2^2$ with $Pf = \nabla f$ or $Pf = \Delta f$,

with $\nabla$ denoting the gradient and $\Delta$ the Laplace operator. The value of $\lambda$ can be chosen according to cross-validation techniques [13, 22, 37, 44] or to some other principle, such as structural risk minimization [45]. Note that we find exactly this type of formulation in the case of scattered data approximation methods, see [1, 31], where the regularization term is usually physically motivated.
2.1 Discretization

We now restrict the problem to a finite dimensional subspace $V_N \subset V$. The function f is then replaced by

$f_N(x) = \sum_{j=1}^{N} \alpha_j \varphi_j(x). \qquad (2)$

Here the ansatz functions $\{\varphi_j\}_{j=1}^{N}$ should span $V_N$ and preferably should form a basis for $V_N$. The coefficients $\{\alpha_j\}_{j=1}^{N}$ denote the degrees of freedom. Note that the restriction to a suitably chosen finite-dimensional subspace involves some additional regularization (regularization by discretization) which depends on the choice of $V_N$.

In the remainder of this paper, we restrict ourselves to the choice

$C(f_N(x_i), y_i) = (f_N(x_i) - y_i)^2$

and

$\Phi(f_N) = \|P f_N\|_{L_2}^2 \qquad (3)$

for some given linear operator P. This way we obtain from the minimization problem a feasible linear system. We thus have to minimize

$R(f_N) = \frac{1}{M} \sum_{i=1}^{M} (f_N(x_i) - y_i)^2 + \lambda \|P f_N\|_{L_2}^2, \qquad (4)$

with $f_N$ in the finite dimensional space $V_N$. We plug (2) into (4) and obtain after differentiation with respect to $\alpha_k$

$0 = \frac{\partial R(f_N)}{\partial \alpha_k} = \frac{2}{M} \sum_{i=1}^{M} \Big( \sum_{j=1}^{N} \alpha_j \varphi_j(x_i) - y_i \Big) \varphi_k(x_i) + 2 \lambda \sum_{j=1}^{N} \alpha_j (P\varphi_j, P\varphi_k)_{L_2}, \quad k = 1, \dots, N. \qquad (5)$

This is equivalent to

$\lambda M \sum_{j=1}^{N} \alpha_j (P\varphi_j, P\varphi_k)_{L_2} + \sum_{j=1}^{N} \alpha_j \sum_{i=1}^{M} \varphi_j(x_i)\varphi_k(x_i) = \sum_{i=1}^{M} y_i \varphi_k(x_i), \quad k = 1, \dots, N. \qquad (6)$

In matrix notation we end up with the linear system

$(\lambda C + B \cdot B^T)\,\alpha = B y. \qquad (7)$

Here C is a square $N \times N$ matrix with entries $C_{j,k} = M \cdot (P\varphi_j, P\varphi_k)_{L_2}$, and B is a rectangular $N \times M$ matrix with entries $B_{j,i} = \varphi_j(x_i)$. The vector y contains the data labels $y_i$ and has length M. The unknown vector $\alpha$ contains the degrees of freedom $\alpha_j$ and has length N.
Depending on the regularization operator we obtain different minimization problems in $V_N$. For example, if we use the gradient $\Phi(f_N) = \|\nabla f_N\|_{L_2}^2$ in the regularization expression in (1), we obtain a Poisson problem with an additional term which resembles the interpolation problem. The natural boundary conditions for such a partial differential equation are Neumann conditions. The discretization (2) gives us then the linear system (7) where C corresponds to a discrete Laplacian. To obtain the classifier $f_N$ we now have to solve this system.
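To illustrate the structure of (7), the following sketch (our own code, not the paper's implementation; phi, C and lam are assumed inputs, and a real grid-based implementation would assemble B sparsely) sets up and solves the linear system with a conjugate gradient method:

    import numpy as np
    from scipy.sparse.linalg import cg

    def solve_regularization_network(phi, C, X, y, lam):
        """Solve (lam*C + B B^T) alpha = B y, i.e. Equation (7).

        phi : list of N callables; phi[j](x) evaluates basis function j
        C   : (N, N) matrix with entries M * (P phi_j, P phi_k)_L2
        X   : (M, d) array of training points; y : (M,) class labels
        """
        B = np.array([[phi_j(x) for x in X] for phi_j in phi])  # N x M
        A = lam * C + B @ B.T            # system matrix of Equation (7)
        alpha, info = cg(A, B @ y)       # conjugate gradient solve
        assert info == 0, "CG did not converge"
        return alpha                     # degrees of freedom of f_N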
2.2 Grid based discrete approximation
Up to now we have not yet been specific about what finite-dimensional subspace $V_N$ and what type of basis functions $\{\varphi_j\}$ we want to use. In contrast to conventional data mining approaches which work with ansatz functions associated to data points we now use a certain grid in the attribute space to determine the classifier with the help of these grid points. This is similar to the numerical treatment of partial differential equations.

For reasons of simplicity, here and in the remainder of this paper, we restrict ourselves to the case $x_i \in [0, 1]^d$. This situation can always be reached by a proper rescaling of the data space. A conventional finite element discretization would now employ an equidistant grid $\Omega_n$ with mesh size $h_n = 2^{-n}$ for each coordinate direction, where n is the refinement level. In the following we always use the gradient $P = \nabla$ in the regularization expression (3). Let j denote the multi-index $(j_1, \dots, j_d) \in \mathbb{N}^d$. A finite element method with piecewise d-linear, i.e. linear in each dimension, test- and trial-functions $\varphi_{n,j}(x)$ on grid $\Omega_n$ now would give

$f_n(x) = \sum_{j} \alpha_{n,j}\, \varphi_{n,j}(x)$

and the variational procedure (4) - (6) would result in the discrete linear system

$(\lambda C_n + B_n B_n^T)\,\alpha_n = B_n y \qquad (8)$

of size $(2^n + 1)^d$ and matrix entries corresponding to (7). Note that $f_n$ lives in the space

$V_n := \mathrm{span}\{\varphi_{n,j},\ j_t = 0, \dots, 2^n,\ t = 1, \dots, d\}.$
The discrete problem (8) might in principle be treated by an appropriate solver like the conjugate gradient method, a multigrid method or some other suitable efficient iterative method. However, this direct application of a finite element discretization and the solution of the resulting linear system by an appropriate solver is clearly not possible for a d-dimensional problem if d is larger than four. The number of grid points is of the order $O(h_n^{-d}) = O(2^{nd})$ and, in the best case, the number of operations is of the same order. Here we encounter the so-called curse of dimensionality: The complexity of the problem grows exponentially with d. At least for d > 4 and a reasonable value of n, the arising system cannot be stored and solved on even the largest parallel computers today.
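For illustration, the full grid size $(2^n + 1)^d$ quoted above can be checked directly:

    # Unknowns of a full grid with mesh size h_n = 2^-n in d dimensions:
    # (2^n + 1)^d grows exponentially in d (the curse of dimensionality).
    def full_grid_points(n, d):
        return (2**n + 1) ** d

    print(full_grid_points(4, 2))    # 289: easily tractable
    print(full_grid_points(4, 10))   # about 2.0e12: hopeless to store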
2.3 The sparse grid combination technique

Therefore we proceed as follows: We discretize and solve the problem on a certain sequence of grids $\Omega_l = \Omega_{l_1, \dots, l_d}$ with uniform mesh sizes $h_t = 2^{-l_t}$ in the t-th coordinate direction. These grids may possess different mesh sizes for different coordinate directions. To this end, we consider all grids $\Omega_l$ with

$l_1 + \dots + l_d = n + (d - 1) - q, \qquad q = 0, \dots, d - 1, \qquad l_t > 0.$

For the two-dimensional case, the grids needed in the combination formula of level 4 are shown in Figure 1. The finite element approach with piecewise d-linear test- and trial-functions

$\varphi_{l,j}(x) := \prod_{t=1}^{d} \varphi_{l_t, j_t}(x_t)$

on grid $\Omega_l$ now would give

$f_l(x) = \sum_{j} \alpha_{l,j}\, \varphi_{l,j}(x)$

and the variational procedure (4) - (6) would result in the discrete system

$(\lambda C_l + B_l B_l^T)\,\alpha_l = B_l y$

with the matrices $C_l$ (of size $N_l \times N_l$) and $B_l$ (of size $N_l \times M$), and the unknown vector $\alpha_l$ of length $N_l$. We then solve these problems by a feasible method. To this end we use here a diagonally preconditioned conjugate gradient algorithm. But also an appropriate multigrid method with partial semi-coarsening can be applied. The discrete solutions $f_l$ are contained in the spaces $V_l$ of piecewise d-linear functions on grid $\Omega_l$.

Figure 1: Combination technique with level n = 4 in two dimensions
Note that all these problems are substantially reduced in size in comparison to (8). Instead of one problem with size $\dim(V_n) = O(h_n^{-d}) = O(2^{nd})$, we now have to deal with $O(d\, n^{d-1})$ problems of size $\dim(V_l) = O(h_n^{-1}) = O(2^n)$. Moreover, all these problems can be solved independently, which allows for a straightforward parallelization on a coarse grain level, see [23]. There is also a simple but effective static load balancing strategy available [25].

Finally we linearly combine the results $f_l(x) = \sum_j \alpha_{l,j} \varphi_{l,j}(x)$ from the different grids $\Omega_l$ as follows:

$f_n^{(c)}(x) := \sum_{q=0}^{d-1} (-1)^q \binom{d-1}{q} \sum_{l_1 + \dots + l_d = n + (d-1) - q} f_l(x). \qquad (13)$

The resulting function $f_n^{(c)}$ lives in the sparse grid space $V_n^{(s)}$. This space has $\dim(V_n^{(s)}) = O(h_n^{-1} (\log(h_n^{-1}))^{d-1})$. It is spanned by a piecewise d-linear hierarchical tensor product basis, see [8].
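As an illustration of (13), the following sketch (our own, with a hypothetical solve_on_grid callback standing in for the solution of the discrete problem on one grid) enumerates the grids of the combination technique and forms the combined solution:

    from itertools import product
    from math import comb

    def combination_technique(n, d, solve_on_grid):
        """Sparse grid combination formula (13).

        solve_on_grid(l) must return the discrete solution f_l (a callable)
        on the grid with level vector l = (l_1, ..., l_d), l_t >= 1.
        """
        terms = []                            # (coefficient, f_l) pairs
        for q in range(d):                    # q = 0, ..., d-1
            coeff = (-1) ** q * comb(d - 1, q)
            for l in product(range(1, n + 1), repeat=d):
                if sum(l) == n + (d - 1) - q:
                    terms.append((coeff, solve_on_grid(l)))
        return lambda x: sum(c * f(x) for c, f in terms)

For d = 2 and n = 4 this yields the grids of Figure 1: the grids with $l_1 + l_2 = 5$ enter with coefficient +1 and those with $l_1 + l_2 = 4$ with coefficient -1.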
Note that the summation of the discrete functions from different spaces $V_l$ in (13) involves d-linear interpolation which resembles just the transformation to a representation in the hierarchical basis. For details see [24, 28, 29]. However we never explicitly assemble the function $f_n^{(c)}$ but instead keep the solutions $f_l$ on the different grids $\Omega_l$ which arise in the combination formula. Now, any linear operation F on $f_n^{(c)}$ can easily be expressed by means of the combination formula acting directly on the functions $f_l$, i.e.

$F(f_n^{(c)}) = \sum_{q=0}^{d-1} (-1)^q \binom{d-1}{q} \sum_{l_1 + \dots + l_d = n + (d-1) - q} F(f_l).$

Figure 2: Two-dimensional sparse grid (left) and three-dimensional sparse grid (right)

Therefore, if we now want to evaluate a newly given set of data points $\{\tilde{x}_i\}$ (the test or evaluation set) by

$\tilde{y}_i := f_n^{(c)}(\tilde{x}_i),$

we just form the combination of the associated values for $f_l$ according to (13). The evaluation of the different $f_l$ in the test points can be done completely in parallel; their summation needs basically an all-reduce/gather operation.
For second order elliptic PDE model problems, it was proven that the combination solution $f_n^{(c)}$ is almost as accurate as the full grid solution $f_n$, i.e. the discretization error satisfies

$\|e_n^{(c)}\| = O(h_n^2 \log(h_n^{-1})^{d-1}),$

provided that a slightly stronger smoothness requirement on f than for the full grid approach holds. We need the seminorm

$|f| := \Big\| \frac{\partial^{2d} f}{\partial x_1^2 \cdots \partial x_d^2} \Big\|$

to be bounded. Furthermore, a series expansion of the error is necessary for the combination technique. Its existence was shown for PDE model problems in [10].
The combination technique is only one of the various methods to solve problems on sparse grids. Note that there exist also finite difference [24, 38] and Galerkin finite element approaches [2, 8, 9] which work directly in the hierarchical product basis on the sparse grid. But the combination technique is conceptually much simpler and easier to implement. Moreover it allows the reuse of standard solvers for its different subproblems and is straightforwardly parallelizable.
2.4 Simplicial basis functions

So far we only mentioned d-linear basis functions based on a tensor-product approach; this case was presented in detail in [18]. But on the grids of the combination technique linear basis functions based on a simplicial discretization are also possible. For that we use the so-called Kuhn's triangulation [15, 32] for each rectangular block, see Figure 3. Now, the summation of the discrete functions for the different spaces $V_l$ in (13) only involves linear interpolation.
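The practical appeal of Kuhn's triangulation is that locating the simplex containing a point, and evaluating the d + 1 linear basis functions there, reduces to sorting the point's local coordinates. A minimal sketch (our own notation, not the paper's code):

    import numpy as np

    def kuhn_simplex_weights(x_local):
        """Vertices and barycentric weights for a point in [0,1]^d.

        The Kuhn simplex containing x is fixed by the decreasing order
        pi of its coordinates; its vertices are v_0 = 0 and
        v_k = v_{k-1} + e_{pi(k)}, and the weights are the successive
        differences of the sorted coordinates (they sum to 1).
        """
        x = np.asarray(x_local, dtype=float)
        d = x.size
        order = np.argsort(-x)                     # descending coordinates
        xs = np.concatenate(([1.0], x[order], [0.0]))
        weights = xs[:-1] - xs[1:]                 # d + 1 weights, >= 0
        vertices = np.zeros((d + 1, d))
        for k in range(1, d + 1):
            vertices[k] = vertices[k - 1]
            vertices[k, order[k - 1]] = 1.0
        return vertices, weights

The returned weights are exactly the values at x of the d + 1 nodal basis functions of the simplex, i.e. the nonzero entries of one column of the data matrix $B_l$.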
Table 1: Complexities of the storage, the assembly and the matrix-vector multiplication for the different matrices arising in the combination method on one grid $\Omega_l$ for both discretization approaches. $C_l$ and $G_l$ can be stored together in one matrix structure.

                         d-linear basis functions                   linear basis functions
                         C_l         G_l             B_l
  storage                O(3^d N)    O(3^d N)        O(2^d M)       O((2^d ...
  assembly               O(3^d N)    O(d 2^{2d} M)   O(d 2^d M)     O((2^d ...
  mv-multiplication      O(3^d N)    O(3^d N)        O(2^d M)       O((2^d ...
Figure 3: Kuhn's triangulation of a three-dimensional unit cube

The theoretical properties of this variant of the sparse grid technique still have to be investigated in more detail. However the results which are presented in Section 3 warrant its use. We see, if at all, just slightly worse results with linear basis functions than with d-linear basis functions, and we believe that our new approach results in the same approximation order.
Since in our new variant of the combination technique the overlap of supports, i.e. the regions where two basis functions are both non-zero, is greatly reduced due to the use of a simplicial discretization, the complexities scale significantly better. This concerns both the costs of the assembly and the storage of the non-zero entries of the sparsely populated matrices from (8), see Table 1. Note that for general operators P the complexities for $C_l$ scale with $O(2^d N)$. But for our choice of $P = \nabla$, zero-entries arise which need not be considered and which further reduce the complexities, see Table 1 (right), column $C_l$. The actual iterative solution process (by a diagonally preconditioned conjugate gradient method) scales independently of the number of data points for both approaches.
Note however that both the storage and the run time complexities still depend exponentially on the dimension d. Presently, due to the limitations of the memory of modern workstations (512 MByte - 2 GByte), we therefore can only deal with the case d <= 8 for d-linear basis functions and d <= 11 for linear basis functions. A decomposition of the matrix entries over several computers in a parallel environment would permit more dimensions.
3. NUMERICAL RESULTS

We now apply our approach to different test data sets. Here we use both synthetic data and real data from practical data mining applications. All the data sets are rescaled to $[0, 1]^d$. To evaluate our method we give the correctness rates on testing data sets, if available, or the ten-fold cross-validation results otherwise. For further details and a critical discussion on the evaluation of the quality of classification algorithms see [13, 37].

Figure 4: Spiral data set, sparse grid with level 5 (top left) to 8 (bottom right)
3.1 Two-dimensional problems

We first consider synthetic two-dimensional problems with small sets of data which correspond to certain structures.

3.1.1 Spiral

The first example is the spiral data set, proposed by Alexis Wieland of MITRE Corp [48]. Here, 194 data points describe two intertwined spirals, see Figure 4. This is surely an artificial problem which does not appear in practical applications. However it serves as a hard test case for new data mining algorithms. It is known that neural networks can have severe problems with this data set and some neural networks cannot separate the two spirals at all [40].
In Table 2 we give the correctness rates achieved with the leave-one-out cross-validation method, i.e. a 194-fold cross-validation. The best testing correctness was achieved on level 8 with 89.18% in comparison to 77.20% in [40].

In Figure 4 we show the corresponding results obtained with our sparse grid combination method for the levels 5 to 8. With level 7 the two spirals are clearly detected and resolved. Note that here 1281 grid points are contained in the sparse grid. For level 8 (2817 sparse grid points) the shape of the two reconstructed spirals gets smoother and the reconstruction gets more precise.

Table 2: Leave-one-out cross-validation results for the spiral data set

  level   lambda    training correctness   testing correctness
  9       0.0006    100.00 %               88.14 %

Table 3: Results for the Ripley data set

          linear basis                        d-linear basis   best possible %
  level   ten-fold test %   lambda   test data %   test data %   linear   d-linear
  9       87.7              0.0015   90.1          90.9          91.1     91.0
3.1.2 Ripley

This data set, taken from [36], consists of 250 training data and 1000 test points. The data set was generated synthetically and is known to exhibit 8% error. Thus no better testing correctness than 92% can be expected.

Since we now have training and testing data, we proceed as follows: First we use the training set to determine the best regularization parameter $\lambda$ by ten-fold cross-validation. The best test correctness rate and the corresponding $\lambda$ are given for different levels n in the first two columns of Table 3. With this $\lambda$ we then compute the sparse grid classifier from the 250 training data. Column three of Table 3 gives the result of this classifier on the (previously unknown) test data set. We see that our method works well. Already level 4 is sufficient to obtain results of 91.4%. The reason is surely the relative simplicity of the data, see Figure 5. Just a few hyperplanes should be enough to separate the classes quite properly. We also see that there is not much need to use any higher levels; on the contrary, there is even an overfitting effect visible in Figure 5.

In column 4 we show the results from [18], where we achieved almost the same results with d-linear functions.
To see what kind of results could be possible with a more sophisticated strategy for determining $\lambda$ we give in the last two columns of Table 3 the testing correctness which is achieved for the best possible $\lambda$. To this end we compute for all (discrete) values of $\lambda$ the sparse grid classifiers from the 250 data points and evaluate them on the test set. We then pick the best result. We clearly see that there is not much of a difference. This indicates that the approach to determine the value of $\lambda$ from the training set by cross-validation works well. Again we have almost the same results with linear and d-linear basis functions. Note that a testing correctness of 90.6% and 91.1% was achieved in [36] and [35], respectively, for this data set.

Figure 5: Ripley data set, combination technique with linear basis functions. Left: level 4, Right: level 8
3.2 6-dimensional problems

3.2.1 BUPA Liver

The BUPA Liver Disorders data set from the Irvine Machine Learning Database Repository [6] consists of 345 data points with 6 features and a selector field used to split the data into 2 sets with 145 instances and 200 instances respectively. Here we have no test data and therefore can only report our ten-fold cross-validation results.

We compare with our d-linear results from [18] and with the two best results from [33], the therein introduced smoothed support vector machine (SSVM) and the classical support vector machine (SVM $\|\cdot\|_2^2$) [11, 46]. The results are given in Table 4.

As expected, our sparse grid combination approach with linear basis functions performs slightly worse than the d-linear approach. The best test result was 69.60% on level 4. The new variant of the sparse grid combination technique performs only slightly worse than the SSVM, whereas the d-linear variant performs slightly better than the support vector machines. Note that the results for other SVM approaches like the support vector machine using the 1-norm approach (SVM $\|\cdot\|_1$) were reported to be somewhat worse in [33].
Table 4: Results for the BUPA liver disorders data set

                                     linear               d-linear
                                     lambda     %         lambda     %
  level 1   10-fold train. corr.     0.012      76.00     0.020      76.00
            10-fold test. corr.                 69.00                67.87
  level 2   10-fold train. corr.     0.040      76.13     0.10       77.49
            10-fold test. corr.                 66.01                67.84
  level 3   10-fold train. corr.     0.165      78.71     0.007      84.28
            10-fold test. corr.                 66.41                70.34
  level 4   10-fold train. corr.     0.075      92.01     0.0004     90.27
            10-fold test. corr.                 69.60                70.92

  For comparison with other methods [33]:
                                     SSVM       SVM ||.||_2^2
  10-fold train. correctness         70.37      70.57
  10-fold test. correctness          70.33      69.86
3.2.2 Synthetic massive data set in 6D

To measure the performance on a massive data set we produced with DatGen [34] a 6-dimensional test case with 5 million training points and 20,000 points for testing. We used the call datgen -r1 -X0/100,R,O:0/100,R,O:0/100,R,O: -O5020000 -p -e0.15.

The results are given in Table 5. Note that already on level 1 a testing correctness of over 90% was achieved with just $\lambda = 0.01$. The main observation on this test case concerns the execution time, measured on a Pentium III 700 MHz machine. Besides the total run time, we also give the CPU time which is needed for the computation of the matrices $G_l$.
We see that with linear basis functions really huge data sets of 5 million points can be processed in reasonable time. Note that more than 50% of the computation time is spent on the data matrix assembly alone and, more importantly, that the execution time scales linearly with the number of data points. The latter is also the case for the d-linear functions, but, as mentioned, this approach needs more operations per data point and results in a much longer execution time; compare also Table 5. Especially the assembly of the data matrix needs more than 96% of the total run time for this variant. For our present example the linear basis approach is about 40 times faster than the d-linear approach on the same refinement level, e.g. for level 2 we need 17 minutes in the linear case and 11 hours in the d-linear case. For higher dimensions the factor will be even larger.
3.3 10-dimensional problems

3.3.1 Forest cover type

The forest cover type data set comes from the UCI KDD Archive [4]; it was also used in [30], where an approach similar to ours was followed. It consists of cartographic variables for 30 x 30 meter cells, and a forest cover type is to be predicted. The 12 originally measured attributes resulted in 54 attributes in the data set; besides 10 quantitative variables there are 4 binary wilderness areas and 40 binary soil type variables. We only use the quantitative variables. The class label has 7 values: Spruce/Fir, Lodgepole Pine, Ponderosa Pine, Cottonwood/Willow, Aspen, Douglas-fir and Krummholz. Like [30] we only report results for the classification of Ponderosa Pine, which has 35754 instances out of the total 581012.

Since far less than 10% of the instances belong to Ponderosa Pine we weigh this class with a factor of 5, i.e. Ponderosa Pine has a class value of 5, all others of -1, and the threshold value for separating the classes is 0. The data set was randomly separated into a training set, a test set, and an evaluation set, all similar in size.
In [30] only results up to 6 dimensions could be reported. In Table 6 we present our results for the 6 dimensions chosen there, i.e. the dimensions 1, 4, 5, 6, 7, and 10, and for all 10 dimensions as well. To give an overview of the behavior over several $\lambda$'s we present for each level n the overall correctness results, the correctness results for Ponderosa Pine and the correctness result for the other class for three values of $\lambda$. We then give results on the evaluation set for a chosen $\lambda$.

We see in Table 6 that already with level 1 we have a testing correctness of 93.95% for the Ponderosa Pine in the 6-dimensional version. Higher refinement levels do not give better results. The result of 93.52% on the evaluation set is almost the same as the corresponding testing correctness. Note that in [30] a correctness rate of 86.97% was achieved on the evaluation set.

The usage of all 10 dimensions improves the results slightly; we get 93.81% as our evaluation result on level 1. As before, higher refinement levels do not improve the results for this data set.

Note that the forest cover example is sound enough as an example of classification, but it might strike forest scientists as being amusingly superficial. It has been known for years that the dynamics of forest growth can have a dominant effect on which species is present at a given location [7], yet there are no dynamic variables in the classifier. This one can see as a warning that it should never be assumed that the available data contains all the relevant information.
3.3.2 Synthetic massive data set in 10D

To measure the performance on a still higher dimensional massive data set we produced with DatGen [34] a 10-dimensional test case with 5 million training points and 50,000 points for testing. We used the call datgen -r1 -X0/200,R,O:

Like in the synthetic 6-dimensional example the main observations concern the run time, measured on a Pentium III 700 MHz machine. Besides the total run time, we also give the CPU time which is needed for the computation of the matrices $G_l$. Note that the highest amount of memory needed (for level 2 in the case of 5 million data points) was 500 MBytes, about 250 MBytes for the matrix and about 250 MBytes for keeping the data points in memory.
Table 5: Results for a 6D synthetic massive data set

                            training      testing       total        data matrix   # of
            # of points     correctness   correctness   time (sec)   time (sec)    iterations
  linear basis functions
  level 1   500,000         90.5          90.5          25           8              25
            5 million       90.5          90.6          242          77             28
  level 2   500,000         91.2          91.1          110          55             204
            5 million       91.1          91.2          1086         546            223
  level 3   50,000          92.2          91.4          48           23             869
            500,000         91.7          91.7          417          226            966
            5 million       91.6          91.7          4087         2239           1057
  d-linear basis functions
  level 1   500,000         90.7          90.8          597          572            91
            5 million       90.7          90.7          5897         5658           102
  level 2   500,000         91.5          91.6          4285         4168           656
            5 million       91.4          91.5          42690        41596          742
More than 50% of the run time is spent on the assembly of the data matrix, and the time needed for the data matrix scales linearly with the number of data points, see Table 7. The total run time seems to scale even better than linearly.
4. CONCLUSIONS

We presented the sparse grid combination technique with linear basis functions based on simplices for the classification of data in moderate-dimensional spaces. Our new method gave good results for a wide range of problems. It is capable of handling huge data sets with 5 million points and more. The run time scales only linearly with the number of data points. This is an important property for many practical applications where often the dimension of the problem can be substantially reduced by certain preprocessing steps but the number of data points can be extremely huge. We believe that our sparse grid combination method possesses great potential in such practical application problems.

We demonstrated for the Ripley data set how the best value of the regularization parameter $\lambda$ can be determined. This is also of practical relevance.

A parallel version of the sparse grid combination technique reduces the run time significantly, see [17]. Note that our method is easily parallelizable already on a coarse grain level. A second level of parallelization is possible on each grid of the combination technique with the standard techniques known from the numerical treatment of partial differential equations.

Since not necessarily all dimensions need the maximum refinement level, a modification of the combination technique with regard to different refinement levels in each dimension along the lines of [19] seems to be promising.

Note furthermore that our approach delivers a continuous classifier function which approximates the data. It therefore can be used without modification for regression problems as well. This is in contrast to many other methods like e.g. decision trees. Also more than two classes can be handled by using isolines with just different values.

Finally, for reasons of simplicity, we used the operator $P = \nabla$. But other differential operators (e.g. $P = \Delta$) can be employed here with their associated regular finite element ansatz functions.
5. ACKNOWLEDGEMENTS
Part of the work was supported by the German Bundesministerium für Bildung und Forschung (BMB+F) within the project 03GRM6BN. This work was carried out in cooperation with Prudential Systems Software GmbH, Chemnitz. The authors thank one of the referees for his remarks on the forest cover data set.
6. REFERENCES
Adaptive Verfahren f
The UCI KDD archive.
UCI repository of machine learning databases.
Some ecological consequences of a computer model of forest growth.
Tensor product approximation spaces for the e
Learning from Data - Concepts, Theory, and Methods.
Data Mining Methods for Knowledge Discovery.
Approximate statistical tests for comparing supervised classification learning algorithms.
Information Complexity of Multivariate Fredholm Integral Equations in Sobolev Classes.
Simplizialzerlegungen von beschränkter Flachheit.
On the computation of the eigenproblems of hydrogen and helium in strong magnetic and electric fields with the sparse grid combination technique.
On the parallelization of the sparse grid approach for data mining.
Data mining with sparse grids.
Numerical Integration using Sparse Grids.
An equivalence between sparse approximation and support vector machines.
Regularization theory and neural networks architectures.
Generalized cross validation as a method for choosing a good ridge parameter.
The combination technique for the sparse grid solution of PDEs on multiprocessor machines.
Adaptive sparse grid multilevel methods for elliptic PDEs based on finite differences.
Optimized tensor-product approximation spaces.
Sparse grids for boundary integral equations.
A combination technique for the solution of sparse grid problems.
High dimensional smoothing based on multilevel analysis.
Grundlagen der geometrischen Datenverarbeitung.
Some combinatorial lemmas in topology.
SSVM: A smooth support vector machine for classification.
A program that creates structured data.
Bayesian neural networks for classification.
Neural networks and related methods for classification.
On comparing classifiers.
Die Methode der Finiten Differenzen
Interpolation on sparse grids and Nikol'skij-Besov spaces of dominating mixed smoothness.
2D spiral pattern recognition with possibilistic measures.
Quadrature and interpolation formulas for tensor products of certain classes of functions.
Approximation of functions with bounded mixed derivative.
Solutions of ill-posed problems.
Estimation of dependences based on empirical data.
The Nature of Statistical Learning Theory.
Spline models for observational data.
Spiral data set.
Sparse grids.
Keywords: data mining; classification; approximation; simplicial discretization; sparse grids; combination technique
Mining time-changing data streams

ABSTRACT
Most statistical and machine-learning algorithms assume that the data is a random sample drawn from a stationary distribution. Unfortunately, most of the large databases available for mining today violate this assumption. They were gathered over months or years, and the underlying processes generating them changed during this time, sometimes radically. Although a number of algorithms have been proposed for learning time-changing concepts, they generally do not scale well to very large databases. In this paper we propose an efficient algorithm for mining decision trees from continuously-changing data streams, based on the ultra-fast VFDT decision tree learner. This algorithm, called CVFDT, stays current while making the most of old data by growing an alternative subtree whenever an old one becomes questionable, and replacing the old with the new when the new becomes more accurate. CVFDT learns a model which is similar in accuracy to the one that would be learned by reapplying VFDT to a moving window of examples every time a new example arrives, but with O(1) complexity per example, as opposed to O(w), where w is the size of the window. Experiments on a set of large time-changing data streams demonstrate the utility of this approach.

1. INTRODUCTION
Modern organizations produce data at unprecedented rates; among large retailers, e-commerce sites, telecommunications providers, and scientific projects, rates of gigabytes per day are common. While this data can contain valuable knowledge, its volume increasingly outpaces practitioners' ability to mine it. As a result, it is now common practice either to mine a subsample of the available data or to mine for models drastically simpler than the data could support. In some cases, the volume and time span of accumulated data is such that just storing it consistently and reliably for future use is a challenge. Further, even when storage is not problematic, it is often difficult to gather the data in one place, at one time, in a format appropriate for mining. For all these reasons, in many areas the notion of mining a fixed-sized database is giving way to the notion of mining an open-ended data stream as it arrives. The goal of our research is to help make this possible with a minimum of effort for the data mining practitioner. In a previous paper [9] we presented VFDT, a decision tree induction system capable of learning from high-speed data streams in an incremental, anytime fashion, while producing models that are asymptotically arbitrarily close to those that would be learned by traditional decision tree induction systems.
Most statistical and machine-learning algorithms, including VFDT, make the assumption that training data is a random sample drawn from a stationary distribution. Unfortunately, most of the large databases and data streams available for mining today violate this assumption. They exist over months or years, and the underlying processes generating them change during this time, sometimes radically. For example, a new product or promotion, a hacker's attack, a holiday, changing weather conditions, changing economic conditions, or a poorly calibrated sensor could all lead to violations of this assumption. For classification systems, which attempt to learn a discrete function given examples of its inputs and outputs, this problem takes the form of changes in the target function over time, and is known as concept drift. Traditional systems assume that all data was generated by a single concept. In many cases, however, it is more accurate to assume that data was generated by a series of concepts, or by a concept function with time-varying parameters. Traditional systems learn incorrect models when they erroneously assume that the underlying concept is stationary when in fact it is drifting.
One common approach to learning from time-changing data is to repeatedly apply a traditional learner to a sliding window of w examples: as new examples arrive they are inserted into the beginning of the window, a corresponding number of examples is removed from the end of the window, and the learner is reapplied [27]. As long as w is small relative to the rate of concept drift, this procedure assures availability of a model reflecting the current concept generating the data. If the window is too small, however, this may result in insufficient examples to satisfactorily learn the concept. Further, the computational cost of reapplying a learner may be prohibitively high, especially if examples arrive at a rapid rate and the concept changes quickly. A naive implementation of this scheme is sketched below.
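A minimal sketch of this baseline (our own code; learn_tree stands in for an arbitrary batch learner):

    from collections import deque

    def windowed_relearner(stream, w, learn_tree):
        """Track drift naively: relearn from scratch on a sliding window
        of the w most recent examples -- O(w) work per new example."""
        window = deque(maxlen=w)      # oldest examples fall off the end
        for example in stream:
            window.append(example)
            yield learn_tree(list(window))   # full relearn every arrival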
To meet these challenges we propose the CVFDT system, which is capable of learning decision trees from high-speed, time-changing data streams. CVFDT works by efficiently keeping a decision tree up-to-date with a window of examples. In particular, it is able to keep its model consistent with a window using only a constant amount of time for each new example (more precisely, time proportional to the number of attributes in the data and the depth of the induced tree). CVFDT grows an alternate subtree whenever an old one seems to be out-of-date, and replaces the old one when the new one becomes more accurate. This allows it to make smooth, fine-grained adjustments when concept drift occurs. In effect, CVFDT is able to learn a nearly equivalent model to the one VFDT would learn if repeatedly reapplied to a window of examples, but in O(1) time instead of O(w) time per new example.

In the next section we discuss the basics of the VFDT system, and in the following section we introduce the CVFDT system. We then present a series of experiments on synthetic data which demonstrate how CVFDT can outperform traditional systems on high-speed, time-changing data streams. Next, we apply CVFDT to mining the stream of web page requests for the entire University of Washington campus. We conclude with a discussion of related and future work.
2. THE VFDT SYSTEM

The classification problem is generally defined as follows. A set of N training examples of the form (x, y) is given, where y is a discrete class label and x is a vector of d attributes, each of which may be symbolic or numeric. The goal is to produce from these examples a model $y = f(x)$ which will predict the classes y of future examples x with high accuracy. For example, x could be a description of a client's recent purchases, and y the decision to send that customer a catalog or not; or x could be a record of a cellular-telephone call, and y the decision whether it is fraudulent or not. One of the most effective and widely-used classification methods is decision tree learning [4, 20]. Learners of this type induce models in the form of decision trees, where each node contains a test on an attribute, each branch from a node corresponds to a possible outcome of the test, and each leaf contains a class prediction. The label $y = DT(x)$ for an example x is obtained by passing the example down from the root to a leaf, testing the appropriate attribute at each node and following the branch corresponding to the attribute's value in the example. A decision tree is learned by recursively replacing leaves by test nodes, starting at the root. The attribute to test at a node is chosen by comparing all the available attributes and choosing the best one according to some heuristic measure. Classic decision tree learners like C4.5 [20], CART, SLIQ [17], and SPRINT [24] use every available training example to select the best attribute for each split.
Table 1: The VFDT Algorithm.

Inputs: S is a stream of examples,
        X is a set of symbolic attributes,
        G(.) is a split evaluation function,
        delta is one minus the desired probability of choosing the correct attribute at any given node,
        tau is a user-supplied tie threshold,
        n_min is the # examples between checks for growth.
Output: HT is a decision tree.

Procedure VFDT(S, X, G, delta, tau)
  Let HT be a tree with a single leaf l_1 (the root).
  Let X_1 = X ∪ {X_∅}.
  Let G_1(X_∅) be the G obtained by predicting the most frequent class in S.
  For each class y_k
    For each value x_ij of each attribute X_i ∈ X
      Let n_ijk(l_1) = 0.
  For each example (x, y) in S
    Sort (x, y) into a leaf l using HT.
    For each x_ij in x such that X_i ∈ X_l
      Increment n_ijy(l).
    Label l with the majority class among the examples seen so far at l.
    Let n_l be the number of examples seen at l.
    If the examples seen so far at l are not all of the same class and n_l mod n_min is 0, then
      Compute G_l(X_i) for each attribute X_i ∈ X_l - {X_∅} using the counts n_ijk(l).
      Let X_a be the attribute with highest G_l.
      Let X_b be the attribute with second-highest G_l.
      Compute epsilon using Equation 1.
      If ((G_l(X_a) - G_l(X_b) > epsilon) or (G_l(X_a) - G_l(X_b) <= epsilon < tau)) and X_a != X_∅, then
        Replace l by an internal node that splits on X_a.
        For each branch of the split
          Add a new leaf l_m, and let X_m = X_l - {X_a}.
          Let G_m(X_∅) be the G obtained by predicting the most frequent class at l_m.
          For each class y_k and each value x_ij of each attribute X_i ∈ X_m - {X_∅}
            Let n_ijk(l_m) = 0.
  Return HT.
This policy is necessary when data is scarce, but it has two problems when training examples are abundant: it requires that all examples be available for consideration throughout the entire run, which is problematic when the data does not fit in RAM or on disk, and it assumes that the process generating examples remains the same during the entire period over which the examples are collected and mined.
In previous work [9] we presented the VFDT (Very Fast Decision Tree learner) system, which is able to learn from abundant data within practical time and memory constraints. It accomplishes this by noting, with Catlett [5] and others [12, 19], that it may be sufficient to use a small sample of the available examples when choosing the split attribute at any given node. Thus, only the first examples to arrive on the data stream need to be used to choose the split attribute at the root; subsequent ones are passed through the induced portion of the tree until they reach a leaf, are used to choose a split attribute there, and so on recursively. To determine the number of examples needed for each decision, VFDT uses a statistical result known as Hoeffding bounds or additive Chernoff bounds [13]. After n independent observations of a real-valued random variable r with range R, the Hoeffding bound ensures that, with confidence $1 - \delta$, the true mean of r is at least $\bar{r} - \epsilon$, where $\bar{r}$ is the observed mean of the samples and

$\epsilon = \sqrt{\frac{R^2 \ln(1/\delta)}{2n}}. \qquad (1)$
This is true irrespective of the probability distribution that generated the observations. Let $G(X_i)$ be the heuristic measure used to choose test attributes (we use information gain). After seeing n samples at a leaf, let $X_a$ be the attribute with the best heuristic measure and $X_b$ be the attribute with the second best. Let $\Delta G = G(X_a) - G(X_b)$ be a new random variable, the difference between the observed heuristic values. Applying the Hoeffding bound to $\Delta G$, we see that if $\Delta G > \epsilon$ (as calculated by Equation 1 with a user-supplied $\delta$), we can confidently say that the difference between $G(X_a)$ and $G(X_b)$ is larger than zero, and select $X_a$ as the split attribute. [1, 2]

Table 1 contains pseudo-code for VFDT's core algorithm. The counts $n_{ijk}$ are the sufficient statistics needed to compute most heuristic measures; if other quantities are required, they can be similarly maintained. When the sufficient statistics fill the available memory, VFDT reduces its memory requirements by temporarily deactivating learning in the least promising nodes; these nodes can be reactivated later if they begin to look more promising than currently active nodes. VFDT employs a tie mechanism which precludes it from spending inordinate time deciding between attributes whose practical difference is negligible. That is, VFDT declares a tie and selects $X_a$ as the split attribute any time $\Delta G \le \epsilon < \tau$ (where $\tau$ is a user-supplied tie threshold). Pre-pruning is carried out by considering at each node a "null" attribute $X_\emptyset$ that consists of not splitting the node. Thus a split will only be made if, with confidence $1 - \delta$, the best split found is better according to G than not splitting. Notice that the tests for splits and ties are only executed once for every $n_{min}$ (a user-supplied value) examples that arrive at a leaf. This is justified by the observation that VFDT is unlikely to make a decision after any given example, so it is wasteful to carry out these calculations for each one of them. The pseudo-code shown is only for symbolic attributes; we are currently developing its extension to numeric ones. The sequence of examples S may be infinite, in which case the procedure never terminates, and at any point in time a parallel procedure can use the current tree HT to make class predictions.

[1] This is valid as long as G (and therefore $\Delta G$) can be viewed as an average over all examples seen at the leaf, which is the case for most commonly-used heuristics. For example, if information gain is used, the quantity being averaged is the reduction in the uncertainty regarding the class membership of the example.

[2] In this paper we assume that the third-best and lower attributes have sufficiently smaller gains that their probability of being the true best choice is negligible. We plan to lift this assumption in future work. If the attributes at a given node are (pessimistically) assumed independent, it simply involves a Bonferroni correction to $\delta$ [18].
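The split decision just described can be condensed into a few lines (a sketch of our own, not VFDT's actual implementation; for information gain over c classes the range is R = log2(c)):

    import math

    def hoeffding_epsilon(R, delta, n):
        """Equation 1: with confidence 1 - delta, the true mean of a
        variable with range R lies within epsilon of the mean of n
        observations."""
        return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

    def choose_split(gains, R, delta, n, tau):
        """gains: attribute -> observed G at a leaf. Returns the split
        attribute, or None if more examples are needed."""
        (best, g1), (_, g2) = sorted(gains.items(), key=lambda kv: -kv[1])[:2]
        eps = hoeffding_epsilon(R, delta, n)
        if g1 - g2 > eps or eps < tau:    # confident winner, or a tie
            return best
        return None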
Using off-the-shelf hardware, VFDT is able to learn as fast as data can be read from disk. The time to incorporate an example is O(ldvc), where l is the maximum depth of HT, d is the number of attributes, v is the maximum number of values per attribute, and c is the number of classes. This time is independent of the total number of examples already seen (assuming the size of the tree depends only on the "true" concept, and not on the dataset). Because of the use of Hoeffding bounds, these speed gains do not necessarily lead to a loss of accuracy. It can be shown that, with high confidence, the core VFDT system (without ties or deactivations due to memory constraints) will asymptotically induce a tree arbitrarily close to the tree induced by a traditional batch learner. Let $DT_\infty$ be the tree induced by a version of VFDT using infinite data to choose each node's split attribute, $HT_\delta$ be the tree learned by the core VFDT system given an infinite data stream, and p be the probability that an example passed through $DT_\infty$ to level i will fall into a leaf at that point. Then the probability that an arbitrary example will take a different path through $DT_\infty$ and $HT_\delta$ is bounded by $\delta/p$ [9]. A corollary of this result states that the tree learned by the core VFDT system on a finite sequence of examples will correspond to a subtree of $DT_\infty$ with the same bound of $\delta/p$. See Domingos and Hulten [9] for more details on VFDT and this $\delta/p$ bound.
3. THE CVFDT SYSTEM

CVFDT (Concept-adapting Very Fast Decision Tree learner) is an extension to VFDT which maintains VFDT's speed and accuracy advantages but adds the ability to detect and respond to changes in the example-generating process. Like other systems with this capability, CVFDT works by keeping its model consistent with a sliding window of examples. However, it does not need to learn a new model from scratch every time a new example arrives; instead, it updates the sufficient statistics at its nodes by incrementing the counts corresponding to the new example, and decrementing the counts corresponding to the oldest example in the window (which now needs to be forgotten). This will statistically have no effect if the underlying concept is stationary. If the concept is changing, however, some splits that previously passed the Hoeffding test will no longer do so, because an alternative attribute now has higher gain (or the two are too close to tell). In this case CVFDT begins to grow an alternative subtree with the new best attribute at its root. When this alternate subtree becomes more accurate on new data than the old one, the old subtree is replaced by the new one.
Table 2: The CVFDT algorithm.

Inputs: S is a sequence of examples,
        X is a set of symbolic attributes,
        G(.) is a split evaluation function,
        delta is one minus the desired probability of choosing the correct attribute at any given node,
        tau is a user-supplied tie threshold,
        w is the size of the window,
        n_min is the # examples between checks for growth,
        f is the # examples between checks for drift.
Output: HT is a decision tree.

Procedure CVFDT(S, X, G, delta, tau, w, n_min, f)
  /* Initialize */
  Let HT be a tree with a single leaf l_1 (the root).
  Let ALT(l_1) be an initially empty set of alternate trees for l_1.
  Let X_1 = X ∪ {X_∅}.
  Let G_1(X_∅) be the G obtained by predicting the most frequent class in S.
  Let W be the window of examples, initially empty.
  For each class y_k
    For each value x_ij of each attribute X_i ∈ X
      Let n_ijk(l_1) = 0.
  /* Process the examples */
  For each example (x, y) in S
    Sort (x, y) into a set of leaves L using HT and all trees in ALT of any node (x, y) passes through.
    Let ID be the maximum id of the leaves in L.
    Add ((x, y), ID) to the beginning of W.
    If |W| > w, then
      Let ((x_w, y_w), ID_w) be the last element of W.
      ForgetExample(HT, n, (x_w, y_w), ID_w)
      Let W = W with ((x_w, y_w), ID_w) removed.
    CVFDTGrow(HT, n, G, (x, y), delta, n_min, tau)
    If there have been f examples since the last checking of alternate trees
      CheckSplitValidity(HT, n, delta)
  Return HT.
Table 3: The CVFDTGrow procedure.

Procedure CVFDTGrow(HT, n, G, (x, y), delta, n_min, tau)
  Sort (x, y) into a leaf l using HT.
  Let P be the set of nodes traversed in the sort.
  For each node l_p in P
    For each x_ij in x such that X_i ∈ X_{l_p}
      Increment n_ijy(l_p).
    For each tree T_a in ALT(l_p)
      CVFDTGrow(T_a, n, G, (x, y), delta, n_min, tau)
  Label l with the majority class among the examples seen so far at l.
  Let n_l be the number of examples seen at l.
  If the examples seen so far at l are not all of the same class and n_l mod n_min is 0, then
    Compute G_l(X_i) for each attribute X_i ∈ X_l - {X_∅} using the counts n_ijk(l).
    Let X_a be the attribute with highest G_l.
    Let X_b be the attribute with second-highest G_l.
    Compute epsilon using Equation 1 and delta.
    If ((G_l(X_a) - G_l(X_b) > epsilon) or (G_l(X_a) - G_l(X_b) <= epsilon < tau)) and X_a != X_∅, then
      Replace l by an internal node that splits on X_a.
      For each branch of the split
        Add a new leaf l_m, and let X_m = X_l - {X_a}.
        Let ALT(l_m) = {}.
        Let G_m(X_∅) be the G obtained by predicting the most frequent class at l_m.
        For each class y_k and each value x_ij of each attribute X_i ∈ X_m - {X_∅}
          Let n_ijk(l_m) = 0.
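Unlike VFDT, CVFDTGrow updates the statistics of every node on the example's path and forwards the example to each node's alternate trees. A compressed sketch of that recursion, under an assumed node interface (counts, alternates, is_leaf, maybe_split, child_for; all names ours):

    def add_to_node(node, x, y):
        """Incorporate example (x, y) into a (sub)tree and its alternates;
        every node on the path keeps sufficient statistics, not only the
        leaves, so stale splits can be detected later."""
        for i, v in enumerate(x):                  # the counts n_ijk
            node.counts[(i, v, y)] = node.counts.get((i, v, y), 0) + 1
        for alt in node.alternates:                # alternates learn too
            add_to_node(alt, x, y)
        if node.is_leaf():
            node.maybe_split()                     # Hoeffding test
        else:
            add_to_node(node.child_for(x), x, y)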
Table 2 contains a pseudo-code outline of the CVFDT algorithm. CVFDT does some initializations, and then processes examples from the stream S indefinitely. As each example (x, y) arrives, it is added to the window [3], an old example is forgotten if needed, and (x, y) is incorporated into the current model. CVFDT periodically scans HT and all alternate trees looking for internal nodes whose sufficient statistics indicate that some new attribute would make a better test than the chosen split attribute. An alternate subtree is started at each such node.

[3] The window is stored in RAM if resources are available, otherwise it will be kept on disk.
Table 3 contains pseudo-code for the tree-growing portion of the CVFDT system. It is similar to the Hoeffding Tree algorithm, but CVFDT monitors the validity of its old decisions by maintaining sufficient statistics at every node in HT (instead of only at the leaves like VFDT). Forgetting an old example is slightly complicated by the fact that HT may have grown or changed since the example was initially incorporated. Therefore, nodes are assigned a unique, monotonically increasing ID as they are created. When an example is added to W, the maximum ID of the leaves it reaches in HT and all alternate trees is recorded with it. An example's effects are forgotten by decrementing the counts in the sufficient statistics of every node the example reaches in HT whose ID is less than or equal to the stored ID. See the pseudo-code in Table 4 for more detail about how CVFDT forgets examples.

Table 4: The ForgetExample procedure.

Procedure ForgetExample(HT, n, (x, y), ID_w)
  Sort (x, y) through HT, traversing only nodes with id <= ID_w.
  Let P be the set of nodes traversed in the sort.
  For each node l in P
    For each x_ij in x such that X_i ∈ X_l
      Decrement n_ijk(l).
    For each tree T_alt in ALT(l)
      ForgetExample(T_alt, n, (x, y), ID_w)
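A companion sketch to the one above: forgetting retraces the path and decrements, skipping any node whose ID exceeds the ID stored with the example, since such nodes were created later and never counted it (same assumed node interface as before):

    def forget_example(node, x, y, max_id):
        """Undo example (x, y)'s increments along its path."""
        if node.node_id > max_id:
            return                          # node did not exist back then
        for i, v in enumerate(x):
            node.counts[(i, v, y)] -= 1     # reverse the old increment
        for alt in node.alternates:
            forget_example(alt, x, y, max_id)
        if not node.is_leaf():
            forget_example(node.child_for(x), x, y, max_id)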
CVFDT periodically scans the internal nodes of HT looking
for ones where the chosen split attribute would no longer
be selected; that is, where G(Xa) G(X b ) and > .
When it nds such a node, CVFDT knows that it either
initially made a mistake splitting on Xa (which should happen
less than -% of the time), or that something about the
process generating examples has changed. In either case,
CVFDT will need to take action to correct HT . CVFDT
grows alternate subtrees to changed subtrees of HT , and
only modies HT when the alternate is more accurate than
the original. To see why this is needed, let l be a node
where change was detected. A simple solution is to replace l with a leaf predicting the most common class in l's sufficient statistics. This policy assures that HT is always as
current as possible with respect to the process generating
examples. However, it may be too drastic, because it initially
forces a single leaf to do the job previously done by a
whole subtree. Even if the subtree is outdated, it may still
be better than the best single leaf. This is particularly true
when l is at or near the root of HT, as it will result in drastic short-term reductions in HT's predictive accuracy, which is clearly not acceptable when a parallel process is using HT to make critical decisions.
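Equation 1, referred to throughout the pseudo-code, is the Hoeffding bound that VFDT uses to decide when enough examples have been seen; assuming that form, the validity test amounts to a few lines (a sketch, with our own function names):

    import math

    def hoeffding_bound(R, delta, n):
        """Equation 1: with probability 1 - delta, the true mean of a random
        variable of range R is within epsilon of the mean of n observations."""
        return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

    def split_still_valid(g_a, g_b, R, delta, n):
        """The split on Xa remains valid unless the best other attribute's
        gain g_b exceeds g_a by more than epsilon."""
        return g_b - g_a <= hoeffding_bound(R, delta, n)

    # e.g., for information gain on a two-class problem, R = log2(2) = 1
    print(hoeffding_bound(R=1.0, delta=1e-6, n=5000))   # about 0.037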
Each internal node in HT has a list of alternate subtrees
being considered as replacements for the subtree rooted at
the node. Table 5 contains pseudo-code for the CheckSplitValidity procedure. CheckSplitValidity starts an alternate subtree whenever it finds a new winning attribute at a node; that is, when there is a new best attribute whose gain exceeds the current split attribute's by ΔG > ε, or when ε < τ and ΔG >= τ/2. This is very similar to the procedure used to choose initial splits, except that the tie criterion is tighter to avoid excessive alternate tree creation. CVFDT supports
a parameter which limits the total number of alternate trees
being grown at any one time. Alternate trees are grown
the same way HT is, via recursive calls to the CVFDT pro-
cedures. Periodically, each node with a non-empty set of
alternate subtrees, ltest, enters a testing mode to determine if it should be replaced by one of its alternate subtrees. Once in this mode, ltest collects the next m training examples that arrive at it and, instead of using them to grow its children or alternate trees, uses them to compare the accuracy of the subtree it roots with the accuracies of all of its alternate
subtrees. If the most accurate alternate subtree is more accurate than ltest's subtree, ltest is replaced by the alternate. During the test phase, CVFDT also prunes alternate subtrees that are not making progress (i.e., whose accuracy is not in-
Table 5: The CheckSplitValidity procedure.
Procedure CheckSplitValidity(HT, n, δ)
For each node l in HT that is not a leaf
    For each tree Talt in ALT(l)
        CheckSplitValidity(Talt, n, δ)
    Let Xa be the split attribute at l.
    Let Xn be the attribute with the highest Gl other than Xa.
    Let Xb be the attribute with the highest Gl other than Xn.
    If Gl(Xn) - Gl(Xa) >= 0 and no tree in ALT(l) already splits on Xn at its root
        Compute ε using Equation 1 and δ.
        If (Gl(Xn) - Gl(Xa) > ε) or (ε < τ and Gl(Xn) - Gl(Xa) >= τ/2), then
            Let lnew be an internal node that splits on Xn.
            Let ALT(l) = ALT(l) ∪ {lnew}.
            For each branch of the split
                Add a new leaf lm to lnew, and let ALT(lm) = {}.
                Let Gm(X∅) be the G obtained by predicting the most frequent class at lm.
                For each class yk and each value xij of each attribute Xi, let nijk(lm) = 0.
creasing over time). For each alternate subtree l_alt^i of ltest, CVFDT remembers the smallest accuracy difference ever achieved between the two, Δmin(ltest, l_alt^i). CVFDT prunes any alternate whose current test-phase accuracy difference is at least Δmin(ltest, l_alt^i).
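A minimal sketch of this test phase, under the reading of the pruning rule given above (all identifiers are ours, and classify/accuracy stand in for whatever the real system uses):

    def accuracy(tree, examples):
        if not examples:
            return 0.0
        return sum(tree.classify(x) == y for x, y in examples) / len(examples)

    def run_test_mode(l_test, alternates, examples, min_gap):
        """Compare the subtree rooted at l_test with its alternates on the m
        collected examples; swap in a better alternate, prune stalled ones."""
        base = accuracy(l_test, examples)
        kept = []
        for alt in alternates:
            gap = base - accuracy(alt, examples)     # > 0: alternate still trails
            best = min_gap.get(id(alt))
            if best is not None and gap >= best:     # gap stopped shrinking: prune
                continue
            min_gap[id(alt)] = gap if best is None else min(best, gap)
            kept.append(alt)
        best_alt = max(kept, key=lambda a: accuracy(a, examples), default=None)
        if best_alt is not None and accuracy(best_alt, examples) > base:
            return best_alt, []                      # the alternate replaces l_test
        return l_test, kept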
One window size w will not be appropriate for every concept and every type of drift; it may be beneficial to dynamically change w during a run. For example, it may make sense to shrink w when many of the nodes in HT become questionable at once, or in response to a rapid change in data rate, as these events could indicate a sudden concept change. Similarly, some applications may benefit from an increase in w when there are few questionable nodes, because this may indicate that the concept is stable, which is a good time to learn a more detailed model. CVFDT is able to dynamically adjust the size of its window in response to user-supplied events. Events are specified in the form of hook functions which monitor S and HT and can call the SetWindowSize function when appropriate, as sketched below. CVFDT changes the window size by updating w and immediately forgetting any examples that no longer fit in W.
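A minimal sketch of the resizing behavior (only SetWindowSize is named in the text; the deque bookkeeping and the forget callback are our own framing):

    from collections import deque

    class Window:
        def __init__(self, w):
            self.w = w
            self.examples = deque()          # (x, y, max_node_id) triples

        def set_window_size(self, new_w, forget):
            """Update w and immediately forget examples that no longer fit."""
            self.w = new_w
            while len(self.examples) > self.w:
                x, y, max_id = self.examples.popleft()
                forget(x, y, max_id)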
We now discuss a few of the properties of the CVFDT system and briefly compare it with VFDT-Window, a learner that reapplies VFDT to W for every new example. CVFDT requires memory proportional to O(ndvc), where n is the number of nodes in CVFDT's main tree and all alternate trees, d is the number of attributes, v is the maximum number of values per attribute, and c is the number of classes.
The window of examples can be in RAM or can be stored on disk at the cost of a few disk accesses per example. (When RAM is short, CVFDT is more aggressive about pruning unpromising alternate subtrees.) Therefore, CVFDT's memory requirements are dominated by the sufficient statistics and are independent of the total number
of examples seen. At any point during a run, CVFDT will have available a model which reflects the current concept generating W. It is able to keep this model up-to-date in time proportional to O(lc dvc) per example, where lc is the length of the longest path an example will have to take through HT times the number of alternate trees. VFDT-Window requires O(lv dvcw) time to keep its model up-to-date for every new example, where lv is the maximum depth of HT. VFDT-Window is thus a factor of w lv / lc worse than CVFDT; empirically, we observed lc to be smaller than lv in all of our experiments. Despite this large time difference, CVFDT's drift mechanisms allow it to produce a model of similar accuracy. The structure of the models induced by the two may, however, be significantly different, for the following reason.
VFDT-Window uses the information from each training example at one place in the tree it induces: the leaf where the example falls when it arrives. This means that VFDT-Window uses the first examples from W to make a decision at its root, the next to make a decision at the first level of the tree, and so on. After an initial building phase, CVFDT will have a fully induced tree available. Every new example is passed through this induced tree, and the information it contains is used to update statistics at every node it passes through. This difference can be an advantage for CVFDT, as it allows the induction of larger trees with better probability estimates at the leaves. It can also be a disadvantage, and VFDT-Window may be more accurate when there is a large concept shift part-way through W. This is because VFDT-Window's leaf probabilities will be set by examples near the end of W, while CVFDT's will reflect all of W.
Also notice that, even when the structure of the induced tree does not change, CVFDT and VFDT-Window can outperform VFDT simply because their leaf probabilities (and therefore class predictions) are updated faster, without the "dead weight" of all the examples that fell into leaves before the current window.
4. EMPIRICAL STUDY
We conducted a series of experiments comparing CVFDT
to VFDT and VFDT-Window. Our goals were to evaluate
CVFDT's ability to scale up, to evaluate CVFDT's ability
to deal with varying levels of drift, and to identify and characterize
the situations where CVFDT outperforms the other
systems.
4.1 Synthetic Data
The experiments with synthetic data used a changing concept based on a rotating hyperplane. A hyperplane in d-dimensional space is the set of points x that satisfy

    w1 x1 + w2 x2 + ... + wd xd = w0    (2)

where xi is the ith coordinate of x. Examples for which w1 x1 + ... + wd xd >= w0 are labeled positive, and examples for which w1 x1 + ... + wd xd < w0 are labeled negative. Hyperplanes are useful for simulating time-changing concepts because we can change the orientation and position of the hyperplane in a smooth manner by changing the relative size of the weights.
In particular, sorting the weights by their magnitudes provides
a good indication of which dimensions contain the
most information; in the limit, when all but one of the
weights are zero, the dimension associated with the non-zero
weight is the only one that contains any information about
the concept. This allows us to control the relative information
content of the attributes, and thus change the optimal
order of tests in a decision tree representing the hyperplane,
by simply changing the relative sizes of the weights. We sought a concept that maintained the advantages of a hyperplane, but where the weights could be randomly modified without potentially causing the decision frontier to move outside the range of the data. To meet these goals we used a series of alternating class bands separated by parallel hyperplanes. We start with a reference hyperplane whose weights are initialized to 0.2, except for w0, which is 0.25d. To label an example, we substitute its coordinates into the left-hand side of Equation 2 to obtain a sum s. If |s| <= 0.1 w0 the example is labeled positive, otherwise if |s| <= 0.2 w0 the example is labeled negative, and so on. Examples were generated uniformly in a d-dimensional unit hypercube (with the value of each xi ranging over [0, 1]). They were then labeled using the concept, and their continuous attributes were uniformly discretized into five bins. Noise was added
by randomly switching the class labels of p% of the examples. Unless otherwise stated, each experiment used the following settings: five million training examples; a window of w = 100,000 examples kept on disk; no memory limits; no pre-pruning; and a test set of 50,000 examples. Nodes entered alternate-tree test mode after 9,000 examples and used test samples of 1,000 examples. All runs were done on a 1 GHz Pentium III machine with 512 MB of RAM, running Linux.
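A sketch of this generator; the band widths, the 0.2/0.25d weight initialization, the class noise, and the five-bin discretization follow the text, while reading s as the left-hand side of Equation 2 minus w0 is our interpretation:

    import random

    def make_example(weights, w0, d, noise_p=0.05):
        x = [random.random() for _ in range(d)]            # uniform in the unit hypercube
        s = sum(w * xi for w, xi in zip(weights, x)) - w0  # our reading of "the sum s"
        band = int(abs(s) / (0.1 * w0))                    # class bands of width 0.1 * w0
        label = 1 if band % 2 == 0 else 0                  # alternating class bands
        if random.random() < noise_p:                      # p% class noise
            label = 1 - label
        return [min(int(xi * 5), 4) for xi in x], label    # five uniform bins

    d = 50
    weights = [0.2] * d
    w0 = 0.25 * d
    print(make_example(weights, w0, d))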
The first series of experiments compares the ability of CVFDT and VFDT to deal with large concept-drifting datasets. Concept drift was added to the datasets in the following
manner. Every 50,000 examples, w1 was modified by adding 0.01d times a drift direction to it, and the test set was relabeled with the updated concept (with p% noise as before). The drift direction was initially 1 and was multiplied by -1 at 5% of the drift points, and also just before w1 fell below 0 or rose above 0.25d. Figure 1
compares the accuracy of the algorithms as a function of
d, the dimensionality of the space. The reported values are
obtained by testing the accuracy of the learned models every
10,000 examples throughout the run and averaging these
results. Drift level, reported on the minor axis, is the average
percentage of the test set that changes label at each
point the concept changes. CVFDT is substantially more
accurate than VFDT, by approximately 10% on average,
and CVFDT's performance improves slightly with increasing
d.
Figure 2 compares the average size of the models
induced during the run shown in Figure 1 (the reported values
are generated by averaging after every 10,000 examples,
as before). CVFDT's trees are substantially smaller than
VFDT's, and the advantage is consistent across all the values
of d we tried. This simultaneous accuracy and size advantage
derives from the fact that CVFDT's tree is built on
the 100,000 most relevant examples, while VFDT's is built
on millions of outdated examples.
We next carried out a more detailed evaluation of
CVFDT's concept drift mechanism. Figure 3 shows a detailed
view of one of the runs from Figures 1 and 2, the one for d = 50. The minor axis shows the portion of the test
Figure 1: Error rates as a function of the number of attributes.
Figure 2: Tree sizes as a function of the number of attributes.
Figure 3: Error rates of learners as a function of the number of examples seen.
set that is labeled negative at each test point (computed
before noise is added to the test set) and is included to illustrate
the concept drift present in the dataset. CVFDT is
able to quickly respond to drift, while VFDT's error rate often
rises drastically before reacting to the change. Further,
VFDT's error rate seems to peak at worse values as the run
goes on, while CVFDT's error peaks seem to have constant
height. We believe this happens because VFDT has more
trouble responding to drift when it has induced a larger tree
and must replicate corrections across more outdated struc-
ture. CVFDT does not face this problem because it replaces
subtrees when they become outdated. We gathered some
detailed statistics about this run. CVFDT took 4.3 times
longer than VFDT (5.7 times longer if including time to do
the disk I/O needed to keep the window on disk). VFDT's
average memory allocation over the course of the run was 23
MB while CVFDT's was 16.5 MB. The average number of
nodes in VFDT's tree was 2696 and the average number in
CVFDT's tree was 677, of which 132 were in alternate trees
and the remainder were in the main tree.
Next we examined how CVFDT responds to changing levels of concept drift on five datasets, with drift added using a parameter D. Every 75,000 examples, D of the concept hyperplane's weights were selected at random and updated as before (each selected weight's drift direction has a 25% chance of flipping sign, chosen to prevent too many weights from drifting in the same pattern). Figure 4
shows the comparison on these datasets. CVFDT substantially
outperformed VFDT at every level of drift. Notice
that VFDT's error rate approaches 50% for D > 2, and
that the variance in VFDT's data points is large. CVFDT's
error rate seems to grow smoothly with increasing levels of concept change, suggesting that its drift adaptations are robust and effective.
We wanted to gain some insight into the way CVFDT starts new alternate subtrees, prunes existing ones, and replaces portions of HT with alternates. For this purpose, we instrumented one of the runs of CVFDT from Figure 4 to output a token in response to each of these events. We aggregated the events in chunks of 100,000 training examples, and generated data points for all non-zero values. Figure 5 shows the results of this experiment. There are a large number of events during the run; for example, many alternate subtrees were swapped into HT. Most of the swaps seem to occur when the examples in the test set are changing labels quickly.
We also wanted to see how well CVFDT would compare to a system using traditional drift-tracking methods. We thus compared CVFDT, VFDT, and VFDT-Window. We simulated VFDT-Window by running VFDT on W once every 100,000 examples instead of for every example. The dataset for the experiment used the same drift settings used to generate Figure 4. Figure 6 shows the results. CVFDT's error rate was the same as VFDT-Window's, except for a brief period during the middle of the run when class labels were changing most rapidly. CVFDT's average error rate for the run was 16.3%, VFDT's was 19.4%, and VFDT-Window's was 15.3%. The difference in runtimes was very large. VFDT took about 10 minutes, CVFDT took about 46 minutes, and we estimate that VFDT-Window would have taken 548 days to do its complete run if applied to every new example. Put another way, VFDT-Window provides a 4% accuracy gain compared
Figure 4: Error rates as a function of the amount of concept drift.
Figure 5: CVFDT's drift characteristics.
Figure 6: Error rates over time of CVFDT, VFDT, and VFDT-Window.
to VFDT, at a cost of increasing the running time by a factor
of 17,000. CVFDT provides 75% of VFDT-Window's
accuracy gain, and introduces a time penalty of less than
0.1% of VFDT-Window's.
CVFDT's alternate trees and additional sufficient statistics do not use too much RAM. For example, none of CVFDT's runs ever grew to more than 70 MB. We never observed CVFDT to use more RAM than VFDT; in fact it often used as little as half the RAM of VFDT. The systems' RAM requirements are dominated by the sufficient statistics, which are kept at the leaves in VFDT, and at every node in CVFDT. We observed that VFDT often had twice as many leaves as there were nodes in CVFDT's tree and all alternate trees combined. This is what we expected: VFDT considers many more examples and is forced to grow larger trees to make up for the fact that its early decisions become incorrect due to concept drift. CVFDT's alternate tree pruning mechanism seems to be effective at trading memory for smooth transitions between concepts. Further, there is room for more aggressive pruning if CVFDT exhausts available RAM. Exploring this tradeoff is an area for future work.
4.2 Web Data
We are currently applying CVFDT to mining the stream
of Web page requests emanating from the whole University
of Washington main campus. The nature of the data
is described in detail in Wolman et al. [29]. In our experiments
so far we have used a one-week anonymized trace of all
the external web accesses made from the university campus.
There were 23,000 active clients during this one-week trace period, and the entire university population is estimated at 50,000 people (students, faculty and staff). The trace contains millions of requests, which arrive at a peak rate of 17,400 per minute. The size of the compressed trace file is about 20 GB. (The log is from May 1999; traffic in May 2000 was more than double this size.) Each request is tagged with an anonymized organization ID that associates the request with one of the organizations (colleges, departments, etc.) within the
university. One purpose this data can be used for is to improve
Web caching. The key to this is predicting as accurately
as possible which hosts and pages will be requested in
the near future, given recent requests. We applied decision-tree
learning to this problem in the following manner. We
split the campus-wide request log into a series of equal time slices; in the experiments we report, each time slice is an hour. For each organization Oi and each of the 244k hosts Hj appearing in the logs, we maintained a count Cijt of how many times the organization accessed the host in time slice Tt. We discretized these counts into four buckets, representing "no requests," "1-12 requests," "13-25 requests" and "26 or more requests." Then for each time slice and host accessed in that time slice (Tt, Hj), we generated an example whose attributes are the organizations' discretized counts for that host and whose class is 1 if Hj is requested in time slice Tt+1 and 0 if it is not. This can be carried out in real time using modest resources by keeping statistics on the last and current time slices, Ct-1 and Ct, in memory, only keeping counts for hosts that actually appear in a time slice (we never needed more than 30k counts), and outputting the examples for Ct-1 as soon as Ct is complete. Using this procedure we obtained a dataset
containing 1.89 million examples, 60.9% of which were labeled with the most common class (that the host did not appear again in the next time slice).
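A sketch of this feature construction; the bucket boundaries follow the text, and all identifiers are illustrative:

    def bucket(count):
        if count == 0:
            return 0          # "no requests"
        if count <= 12:
            return 1          # "1-12 requests"
        if count <= 25:
            return 2          # "13-25 requests"
        return 3              # "26 or more requests"

    def make_examples(counts_prev, counts_cur, organizations):
        """counts_prev/counts_cur map (org, host) -> request count in time
        slices T(t) and T(t+1); emit one example per host accessed in T(t)."""
        hosts = {h for (_, h) in counts_prev}
        for h in hosts:
            attrs = [bucket(counts_prev.get((o, h), 0)) for o in organizations]
            label = int(any(counts_cur.get((o, h), 0) > 0 for o in organizations))
            yield attrs, label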
Our exploration was designed to determine if CVFDT's concept drift features would provide any benefit to this application. As each example arrived, we tested the accuracy of the learners' models on it, and then allowed the learners to update their models with the example. We kept statistics about how the aggregated accuracies changed over time. VFDT and CVFDT were both run with the same parameter settings. VFDT achieved 72.7% accuracy over the whole dataset and CVFDT achieved 72.3%. However, CVFDT's aggregated accuracy was higher for the first 70% of the run, at times by as much as 1.0%. CVFDT's accuracy fell behind only near the end of the run, for (we believe) the following reason. Its drift tracking kept it ahead throughout the first part of the run, but its window was too small for it to learn as detailed a model of the data as VFDT did by the end. This experiment shows that the data does indeed contain concept drift, and that CVFDT's ability to respond to the drift gives it an advantage over VFDT. The next step is to run CVFDT with different, perhaps dynamic, window sizes to further evaluate the nature of the drift. We also plan to evaluate CVFDT over traces longer than a week.
5. RELATED WORK
Schlimmer and Granger's [23] STAGGER system was one of the first to explicitly address the problem of concept drift. Salganicoff [21] studied drift in the context of nearest-neighbor learning. Widmer and Kubat's [27] FLORA system
used a window of examples, but also stored old concept
descriptions and reactivated them if they seemed to be appropriate
again. All of these systems were only applied to
small databases (by today's standards). Kelly, Hand, and
Adams [14] addressed the issue of drifting parameters in
probability distributions. Theoretical work on concept drift
includes [16] and [3].
Ganti, Gehrke, and Ramakrishnan's [11] DEMON framework is designed to help adapt incremental learning algorithms to work effectively with time-changing data streams. DEMON differs from CVFDT by assuming data arrives periodically, perhaps daily, in large blocks, while CVFDT deals with each example as it arrives. The framework uses off-line processing time to mine interesting subsets of the available data blocks.
In earlier work [12] Gehrke, Ganti, and Ramakrishnan
presented an incremental decision tree induction algorithm,
BOAT, which works in the DEMON framework. BOAT is
able to incrementally maintain a decision tree equivalent to
the one that would be learned by a batch decision tree induction
system. When the underlying concept is stable, BOAT
can perform this maintenance extremely quickly. When drift
is present, BOAT must discard and regrow portions of its
induced tree. This can be very expensive when the drift is
large or aects nodes near the root of the tree. CVFDT
avoids the problem by using alternate trees and removing
the restriction that it learn exactly the tree that a batch
system would. A comparison between BOAT and CVFDT
is an area for future work.
There has been a great deal of work on incrementally
maintaining association rules. Cheung, Han, Ng, and Wong
and Fazil, Tansel, and Arkun [2] propose algorithms for
maintaining sets of association rules when new transactions
are added to the database. Sarda and Srinivas [22] have also
done some work in the area. DEMON's contribution [11] is
particularly relevant, as it addresses association rule maintenance specifically in the high-speed data stream domain
where blocks of transactions are added and deleted from the
database on a regular basis.
Aspects of the concept drift problem are also addressed
in the areas of activity monitoring [10], active data mining
[1] and deviation detection [6]. The main goal here is to
explicitly detect changes, rather than simply maintain an
up-to-date concept, but techniques for the latter can obviously
help in the former.
Several pieces of research on concept drift and context-sensitive
learning are collected in a special issue of the journal
Machine Learning [28]. Other relevant research appeared
in the ICML-96 Workshop on Learning in Context-Sensitive
Domains [15], the AAAI-98 Workshop on AI Approaches
to Time-Series Problems [8], and the NIPS-2000
Workshop on Real-Time Modeling for Complex Learning
Tasks [26]. Turney [25] maintains an online bibliography on
context-sensitive learning.
6. FUTURE WORK
We plan to apply CVFDT to more real-world problems;
its ability to adjust to concept changes should allow it to
perform very well on a broad range of tasks. CVFDT may
be a useful tool for identifying anomalous situations. Currently
CVFDT discards subtrees that are out-of-date, but some concepts change periodically and these subtrees may become useful again; identifying these situations and taking advantage of them is another area for further study. Other
areas for study include: comparisons with related systems;
continuous attributes; weighting examples; partially forgetting
examples by allowing their weights to decay; simulating
weights by subsampling; and controlling the weight decay
function according to external information about drift.
7. CONCLUSION
This paper introduced CVFDT, a decision-tree induction
system capable of learning accurate models from the most
demanding high-speed, concept-drifting data streams.
CVFDT is able to maintain a decision-tree up-to-date with
a window of examples by using a small, constant amount
of time for each new example that arrives. The resulting
accuracy is similar to what would be obtained by reapplying
a conventional learner to the entire window every time a new
example arrives. Empirical studies show that CVFDT is effectively able to keep its model up-to-date with a massive
data stream even in the face of large and frequent concept
shifts. A preliminary application of CVFDT to a real world
domain shows promising results.
8. ACKNOWLEDGMENTS
This research was partly supported by a gift from the Ford
Motor Company, and by NSF CAREER and IBM Faculty
awards to the third author.
9. REFERENCES
--R
Active data mining.
Learning changing concepts by exploiting the structure of change.
Megainduction: Machine Learning on Very Large Databases.
Mining surprising patterns using temporal description length.
Maintenance of discovered association rules in large databases: An incremental updating technique.
Mining high-speed data streams
Activity monitoring: Noticing interesting changes in behavior.
DEMON: Mining and monitoring evolving data.
BOAT: optimistic decision tree construction.
The impact of changing populations on classifier performance.
The complexity of learning according to two models of a drifting environment.
SLIQ: A fast scalable classifier for data mining.
Decision theoretic subsampling for induction on large databases.
An adaptive algorithm for incremental mining of association rules.
SPRINT: A scalable parallel classifier for data mining.
Learning in the presence of concept drift and hidden contexts.
Special issue on context sensitivity and concept drift.
--TR
C4.5: programs for machine learning
Learning in the presence of concept drift and hidden contexts
BOAT: optimistic decision tree construction
Activity monitoring
An efficient algorithm to update large itemsets with early pruning
The impact of changing populations on classifier performance
The Complexity of Learning According to Two Models of a Drifting Environment
Mining high-speed data streams
Learning Changing Concepts by Exploiting the Structure of Change
Maintenance of Discovered Association Rules in Large Databases
Mining Surprising Patterns Using Temporal Description Length
An Adaptive Algorithm for Incremental Mining of Association Rules
DEMON
--CTR
Ying Yang , Xindong Wu , Xingquan Zhu, Combining proactive and reactive predictions for data streams, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Charu C. Aggarwal , Jiawei Han , Jianyong Wang , Philip S. Yu, On demand classification of data streams, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Francisco Ferrer-Troyano , Jesus S. Aguilar-Ruiz , Jose C. Riquelme, Incremental rule learning based on example nearness from numerical data streams, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Francisco Ferrer-Troyano , Jesus S. Aguilar-Ruiz , Jose C. Riquelme, Data streams classification by incremental rule learning with parameterized generalization, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Joong Hyuk Chang , Won Suk Lee, Finding recently frequent itemsets adaptively over online transactional data streams, Information Systems, v.31 n.8, p.849-869, December 2006
João Gama , Pedro Medas , Pedro Rodrigues, Learning decision trees from dynamic data streams, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
João Gama , Ricardo Rocha , Pedro Medas, Accurate decision trees for mining high-speed data streams, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Yi Zhang , Xiaoming Jin, An automatic construction and organization strategy for ensemble learning on data streams, ACM SIGMOD Record, v.35 n.3, p.28-33, September 2006
Francisco Ferrer-Troyano , Jesús S. Aguilar-Ruiz , José C. Riquelme, Prototype-based mining of numeric data streams, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Brain Babcock , Mayur Datar , Rajeev Motwani , Liadan O'Callaghan, Maintaining variance and k-medians over data stream windows, Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.234-243, June 09-11, 2003, San Diego, California
Francisco Ferrer-Troyano , Jesús S. Aguilar-Ruiz , José C. Riquelme, Discovering decision rules from numerical data streams, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Nilesh Dalvi , Pedro Domingos , Mausam , Sumit Sanghai , Deepak Verma, Adversarial classification, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
M. Otey , S. Parthasarathy , A. Ghoting , G. Li , S. Narravula , D. Panda, Towards NIC-based intrusion detection, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
George Forman, Tackling concept drift by temporal inductive transfer, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Shi Zhong, Efficient streaming text clustering, Neural Networks, v.18 n.5-6, p.790-798, June 2005
Wei Fan, StreamMiner: a classifier ensemble-based engine to mine concept-drifting data streams, Proceedings of the Thirtieth international conference on Very large data bases, p.1257-1260, August 31-September 03, 2004, Toronto, Canada
Geoff Hulten , Pedro Domingos, Mining complex models from arbitrarily large databases in constant time, Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, July 23-26, 2002, Edmonton, Alberta, Canada
Anand Narasimhamurthy , Ludmila I. Kuncheva, A framework for generating data to simulate changing environments, Proceedings of the 25th conference on Proceedings of the 25th IASTED International Multi-Conference: artificial intelligence and applications, p.384-389, February 12-14, 2007, Innsbruck, Austria
Orna Raz , Philip Koopman , Mary Shaw, Semantic anomaly detection in online data sources, Proceedings of the 24th International Conference on Software Engineering, May 19-25, 2002, Orlando, Florida
Yunyue Zhu , Dennis Shasha, Efficient elastic burst detection in data streams, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Rouming Jin , Gagan Agrawal, Efficient decision tree construction on streaming data, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Graham Cormode , Mayur Datar , Piotr Indyk , S. Muthukrishnan, Comparing data streams using Hamming norms (how to zero in), Proceedings of the 28th international conference on Very Large Data Bases, p.335-345, August 20-23, 2002, Hong Kong, China
Jimeng Sun , Dacheng Tao , Christos Faloutsos, Beyond streams and graphs: dynamic tensor analysis, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Wei-Guang Teng , Ming-Syan Chen , Philip S. Yu, A regression-based temporal pattern mining scheme for data streams, Proceedings of the 29th international conference on Very large data bases, p.93-104, September 09-12, 2003, Berlin, Germany
Haixun Wang , Jian Yin , Jian Pei , Philip S. Yu , Jeffrey Xu Yu, Suppressing model overfitting in mining concept-drifting data streams, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Hillol Kargupta , Byung-Hoon Park , Sweta Pittie , Lei Liu , Deepali Kushraj , Kakali Sarkar, MobiMine: monitoring the stock market from a PDA, ACM SIGKDD Explorations Newsletter, v.3 n.2, January 2002
Wei Fan, Systematic data selection to mine concept-drifting data streams, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Lilian Harada, Detection of complex temporal patterns over data streams, Information Systems, v.29 n.6, p.439-459, September 2004
Themistoklis Palpanas , Dimitris Papadopoulos , Vana Kalogeraki , Dimitrios Gunopulos, Distributed deviation detection in sensor networks, ACM SIGMOD Record, v.32 n.4, December
Joong Hyuk Chang , Won Suk Lee, Efficient mining method for retrieving sequential patterns over online data streams, Journal of Information Science, v.31 n.5, p.420-432, October 2005
Sreenivas Gollapudi , D. Sivakumar, Framework and algorithms for trend analysis in massive temporal data sets, Proceedings of the thirteenth ACM international conference on Information and knowledge management, November 08-13, 2004, Washington, D.C., USA
Kevin B. Pratt , Gleb Tschapek, Visualizing concept drift, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
João Gama , Ricardo Fernandes , Ricardo Rocha, Decision trees for mining data streams, Intelligent Data Analysis, v.10 n.1, p.23-45, January 2006
Charu C. Aggarwal, On Change Diagnosis in Evolving Data Streams, IEEE Transactions on Knowledge and Data Engineering, v.17 n.5, p.587-600, May 2005
Daniel Kifer , Shai Ben-David , Johannes Gehrke, Detecting change in data streams, Proceedings of the Thirtieth international conference on Very large data bases, p.180-191, August 31-September 03, 2004, Toronto, Canada
Mohamed Medhat Gaber , Shonali Krishnaswamy , Arkady Zaslavsky, Cost-efficient mining techniques for data streams, Proceedings of the second workshop on Australasian information security, Data Mining and Web Intelligence, and Software Internationalisation, p.109-114, January 01, 2004, Dunedin, New Zealand
Graham Cormode , Mayur Datar , Piotr Indyk , S. Muthukrishnan, Comparing Data Streams Using Hamming Norms (How to Zero In), IEEE Transactions on Knowledge and Data Engineering, v.15 n.3, p.529-540, March
Malu Castellanos , Fabio Casati , Umeshwar Dayal , Ming-Chien Shan, A Comprehensive and Automated Approach to Intelligent Business Processes Execution Analysis, Distributed and Parallel Databases, v.16 n.3, p.239-273, November 2004
Joong Hyuk Chang , Won Suk Lee, estWin: Online data stream mining of recent frequent itemsets by sliding window method, Journal of Information Science, v.31 n.2, p.76-90, April 2005
Tho Manh Nguyen , Josef Schiefer , A. Min Tjoa, Sense & response service architecture (SARESA): an approach towards a real-time business intelligence solution and its use for a fraud detection application, Proceedings of the 8th ACM international workshop on Data warehousing and OLAP, November 04-05, 2005, Bremen, Germany
Haixun Wang , Wei Fan , Philip S. Yu , Jiawei Han, Mining concept-drifting data streams using ensemble classifiers, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Yixin Chen , Guozhu Dong , Jiawei Han , Benjamin W. Wah , Jianyong Wang, Multi-dimensional regression analysis of time-series data streams, Proceedings of the 28th international conference on Very Large Data Bases, p.323-334, August 20-23, 2002, Hong Kong, China
Jiawei Han , Yixin Chen , Guozhu Dong , Jian Pei , Benjamin W. Wah , Jianyong Wang , Y. Dora Cai, Stream Cube: An Architecture for Multi-Dimensional Analysis of Data Streams, Distributed and Parallel Databases, v.18 n.2, p.173-197, September 2005
Weng-Keen Wong , Andrew Moore , Gregory Cooper , Michael Wagner, What's Strange About Recent Events (WSARE): An Algorithm for the Early Detection of Disease Outbreaks, The Journal of Machine Learning Research, 6, p.1961-1998, 12/1/2005
Yasushi Sakurai , Spiros Papadimitriou , Christos Faloutsos, BRAID: stream mining through group lag correlations, Proceedings of the 2005 ACM SIGMOD international conference on Management of data, June 14-16, 2005, Baltimore, Maryland
Chang-Tien Lu , Yufeng Kou , Jiang Zhao , Li Chen, Detecting and tracking regional outliers in meteorological data, Information Sciences: an International Journal, v.177 n.7, p.1609-1632, April, 2007
Liang Huai Yang , Mong Li Lee , Wynne Hsu, Finding hot query patterns over an XQuery stream, The VLDB Journal The International Journal on Very Large Data Bases, v.13 n.4, p.318-332, December 2004
Marcus A. Maloof , Ryszard S. Michalski, Incremental learning with partial instance memory, Artificial Intelligence, v.154 n.1-2, p.95-126, April 2004
Jürgen Beringer , Eyke Hüllermeier, Online clustering of parallel data streams, Data & Knowledge Engineering, v.58 n.2, p.180-204, August 2006
Spiros Papadimitriou , Jimeng Sun , Christos Faloutsos, Streaming pattern discovery in multiple time-series, Proceedings of the 31st international conference on Very large data bases, August 30-September 02, 2005, Trondheim, Norway
Brian Babcock , Shivnath Babu , Mayur Datar , Rajeev Motwani , Jennifer Widom, Models and issues in data stream systems, Proceedings of the twenty-first ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 03-05, 2002, Madison, Wisconsin
Zhiyuan Chen , Chen Li , Jian Pei , Yufei Tao , Haixun Wang , Wei Wang , Jiong Yang , Jun Yang , Donghui Zhang, Recent progress on selected topics in database research: a report by nine young Chinese researchers working in the United States, Journal of Computer Science and Technology, v.18 n.5, p.538-552, September
single-pass mining of path traversal patterns over streaming web click-sequences, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.10, p.1474-1487, 14 July 2006
Mohamed Medhat Gaber , Arkady Zaslavsky , Shonali Krishnaswamy, Mining data streams: a review, ACM SIGMOD Record, v.34 n.2, June 2005
Shivnath Babu , Jennifer Widom, Continuous queries over data streams, ACM SIGMOD Record, v.30 n.3, September 2001
Lukasz Golab , M. Tamer Özsu, Issues in data stream management, ACM SIGMOD Record, v.32 n.2, p.5-14, June 2003
Venkatesh Ganti , Johannes Gehrke , Raghu Ramakrishnan, Mining data streams under block evolution, ACM SIGKDD Explorations Newsletter, v.3 n.2, January 2002 | data streams;incremental learning;concept drift;decision trees;subsampling;hoeffding bounds |
502532 | Robust space transformations for distance-based operations. | For many KDD operations, such as nearest neighbor search, distance-based clustering, and outlier detection, there is an underlying k-D data space in which each tuple/object is represented as a point in the space. In the presence of differing scales, variability, correlation, and/or outliers, we may get unintuitive results if an inappropriate space is used. The fundamental question that this paper addresses is: "What then is an appropriate space?" We propose using a robust space transformation called the Donoho-Stahel estimator. In the first half of the paper, we show the key properties of the estimator. Of particular importance to KDD applications involving databases is the stability property, which says that in spite of frequent updates, the estimator does not: (a) change much, (b) lose its usefulness, or (c) require re-computation. In the second half, we focus on the computation of the estimator for high-dimensional databases. We develop randomized algorithms and evaluate how well they perform empirically. The novel algorithm we develop called the Hybrid-random algorithm is, in most cases, at least an order of magnitude faster than the Fixed-angle and Subsampling algorithms. | INTRODUCTION
For many KDD operations, such as nearest neighbor search,
distance-based clustering, and outlier detection, there is an
underlying k-D data space in which each tuple/object is
represented as a point in the space. Oftentimes, the tuple t = (v1, ..., vk) is represented simply as the point pt = (v1, ..., vk) in the k-D space. More formally, the transformation from the tuple t to the point pt is the identity matrix. We begin by arguing that the identity transformation
is not appropriate for many distance-based operations,
particularly in the presence of variability, correlation, outliers, and/or differing scales. Consider a dataset with the following attributes:
- systolic blood pressure (typical range: 100-160 mm of mercury, with mean = 120)
- body temperature (measured in degrees Celsius, with a very small standard deviation, e.g., 1-2 degrees for sick patients)
- age (range: 20-50 years of age in this example)
Note that different attributes have different scales and units (e.g., mm of Hg vs. degrees Celsius), and different variability (e.g., high variability for blood pressure vs. low variability for body temperature). Also, attributes may be correlated (e.g., age and blood pressure), and there may be outliers.
Example Operation 1 (nearest neighbor search).
Consider a nearest neighbor search using the Euclidean distance
function in the original data space, i.e., the identity
transformation. The results are likely to be dominated by
blood pressure readings, because their variability is much
higher than that of the other attributes. Consider the query point (blood pressure = 120, temperature = 37, age = 35). Using Euclidean distance, the point (120, 40, 35) is nearer to the query point than (130, 37, 35) is. But, in terms of similarity/dissimilarity, this finding is not very meaningful
because, intuitively, a body temperature of 40 degrees is
far away from a body temperature of 37 degrees; in fact, a
person with a body temperature of 40 degrees needs medical
attention immediately!
A simple fix to the above problem is to somehow weight the various attributes. One common approach is to apply a "normalization" transformation, such as normalizing each attribute into the range [0, 1]. This is usually not a satisfactory solution, because a single outlier (e.g., an extreme blood pressure reading) could cause virtually all other values to be contained in a small subrange, again making the nearest neighbor search produce less meaningful results.
Another common fix is to apply a "standardization" transformation, such as subtracting the mean from each attribute and then dividing by its standard deviation. While this transformation is superior to the normalization transformation, outliers may still be too influential in skewing the mean and the standard deviation. Equally importantly, this transformation does not take into account possible correlation between attributes. For example, older people tend to have higher blood pressure than younger people. This means that we could be "double counting" when determining distances.
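The difference can be made concrete with a small sketch contrasting classical standardization with a median/MAD version (the 1.4826 factor, which also appears in the Fixed-angle algorithm of Section 2, makes the MAD comparable to the standard deviation under normality; the data values here are invented):

    import statistics

    def classical_standardize(xs):
        mu, sigma = statistics.mean(xs), statistics.stdev(xs)
        return [(x - mu) / sigma for x in xs]

    def robust_standardize(xs):
        m = statistics.median(xs)
        mad = 1.4826 * statistics.median([abs(x - m) for x in xs])
        return [(x - m) / mad for x in xs]

    bp = [118, 121, 119, 122, 120, 300]      # one outlying blood pressure reading
    print(classical_standardize(bp)[-1])     # ~2.0: the outlier inflates sigma, masking itself
    print(robust_standardize(bp)[-1])        # ~81: the outlier stands out clearly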
Example Operation 2 (Data Mining Operations).
Data clustering is one of the most studied operations in data
mining. As an input to a clustering algorithm, a distance function is specified. Although some algorithms can deal with non-metric distance functions (e.g., CLARANS [20]), most algorithms require metric ones. Among those, a subclass of algorithms that has received a lot of attention recently is the class of density-based algorithms (e.g., DBSCAN [11] and DENCLUE [14]). The density of a region is computed based on the number of points contained in a fixed-size neighborhood. Thus, density calculation can be viewed as a fixed-radius search. Hence, all the concerns raised above for nearest neighbor search apply just the same for density-based clustering.
Outlier detection is another important operation in data mining, particularly for surveillance applications. Many outlier detection algorithms are distance- or density-based [16, 21, 7]. Again, the issues of differing scale, variability, correlation, and outliers could seriously affect the effectiveness of those algorithms. At first glance, the statement that outliers could impact the effectiveness of outlier detection algorithms may seem odd. But if attention is not paid to outliers, it is possible that the outliers may affect the quantities used to scale the data, effectively masking (hiding) themselves [3].
Contributions of this Paper The fundamental question addressed in this paper is: "What is an appropriate space in the presence of differing scale, variability, correlation, and outliers?" So far, we have seen that the spaces associated with the identity, normalization, and standardization transformations are inadequate. In this paper, we focus on robust space transformations, or robust estimators, so that for distance computation, all points in the space are treated "fairly". Specifically:
Among many robust space estimators that have been studied in statistics, we propose using the Donoho-Stahel estimator (DSE). In Section 3, we show two important properties of the DSE. The first is the Euclidean property. It says that while inappropriate in the original space, the Euclidean distance function becomes reasonable in the DSE transformed space.
The second, and arguably the more important, property is the stability property. It says that the transformed space is robust against updates. That is, in spite of frequent updates, the transformed space does not lose its usefulness and requires no re-computation. Stability is a particularly meaningful property for KDD applications. If an amount of effort x was spent to set up an index in the transformed space, we certainly would not like to spend another amount x after every single update to the database. In Section 3, we give experimental results showing that the DSE transformed space is so stable that it can easily withstand adding many more tuples to the database (e.g., 50% of the database size).
Having shown its key properties, in the second half of this paper, we focus on the computation of the DSE for high-dimensional (e.g., 10 attributes) databases. The original DSE algorithm was defined independently by both Donoho and Stahel [25]; we refer to it as the Fixed-angle algorithm. In Section 4, we show that the original algorithm does not scale well with dimensionality. Stahel also proposed a version of the algorithm which uses subsampling (i.e., taking samples of samples) [25]. However, the number of subsamples to be used in order to obtain good results is not well known. We follow the work of Rousseeuw on least median of squares [22], and come up with a heuristic that seems to work well, as shown in Section 6. For comparison purposes, we have implemented this algorithm, applied some heuristics (e.g., number of subsamples), and evaluated its effectiveness and efficiency.
Last but not least, in Section 5, we develop a new algorithm, which we refer to as the Hybrid-random algorithm, for computing the DSE. Our experimental results show that the Hybrid-random algorithm is at least an order of magnitude more efficient than the Fixed-angle and Subsampling algorithms. Furthermore, to support the broader claim that the DSE transformation should be used for KDD operations, the Hybrid-random algorithm can run very efficiently (e.g., compute the estimator for 100,000 5-D tuples in tens of seconds of total time).
Related Work Space transformations have been studied
in the database and KDD literature. However, they are from
the class of distance-preserving transformations (e.g., [12]),
where the objective is to reduce the dimensionality of the
space. As far as space transformations go, our focus is not so
much on preserving distances, but on providing robustness
and stability.
Principal component analysis (PCA) is useful for data reduction, and is well-studied in the statistics literature [17, 15, 10]. The idea is to find linear combinations of the attributes, while either maximizing or minimizing the variability. Unfortunately, PCA is not robust, since a few outliers can radically affect the results. Outliers can also be masked (hidden by other points). Moreover, PCA lacks the stability requirements that we desire (cf. Section 3). SVD is not robust either; it, too, may fail to detect outliers due to masking.
Many clustering algorithms have been proposed in recent
years, and most are distance-based or density-based [11, 26,
1, 14]. The results presented in this paper will improve the effectiveness of all these algorithms in producing more meaningful clusters.
Outlier detection has received considerable attention in
recent years. Designed for large high-dimensional datasets,
the notion of DB-outliers introduced in [16] is distance-
based. A variation of this notion is considered in [21]. The
notion of outliers studied in [7] is density-based. Again, all
of these notions and detection algorithms will benet from
the results presented in this paper.
Developing effective multi-dimensional indexing structures is the subject of numerous studies [13, 4, 6]. However, this paper is not about indexing structures. Instead, we focus on determining an appropriate space within which an index is to be created.
In [24], nearest neighbor search based on quadratic form
distance functions is considered, that is, distances computed
using some matrix A. That study assumes prior knowledge
of A. For some applications, A may be data-independent
and well-known. For example, for a distance between two
color histograms, each entry in A represents the degree of
perceptual similarity between two colors [12]. However, for
most applications, it is far from clear what a suitable A
could be. The focus of this paper is to propose a meaningful
way of picking A in a data-dependent fashion.
2. BACKGROUND: DONOHO-STAHEL
If two similar attributes are being compared, and those attributes
are independent and have the same scale and vari-
ability, then all points within distance D of a point P lie
within the circle of radius D centered at P . In the presence
of differing scales, variability, and correlation, all points
within distance D of a point P lie within an ellipse. If there
is no correlation, then the major and minor axes of the ellipse
lie on the standard coordinate axes; but, if there is
correlation, then the ellipse is rotated through some angle θ. (See Fig. 4 later.) This generalizes to higher dimen-
sions. In 3-D, the ellipsoid resembles a football, with the
covariance determining the football's size, and the correlation
determining its orientation.
An estimator A, also called a scatter matrix, is a k x k square matrix, where k is the dimensionality of the original data space. An estimator is related to an ellipsoid as follows. Suppose x and y are k-dimensional column vectors. The Euclidean distance between x and y can be expressed as d(x, y) = sqrt((x - y)^T (x - y)), where T denotes the transpose operator. A quadratic form distance function can be expressed as dA(x, y) = sqrt((x - y)^T A (x - y)). If x^T A x > 0 for all x != 0, A is called a positive definite matrix, and x^T A x = c yields an ellipsoid.
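For concreteness, a small sketch of a quadratic form distance; the matrix A below is an invented example of a positive definite matrix:

    import numpy as np

    def quadratic_form_distance(x, y, A):
        d = np.asarray(x, float) - np.asarray(y, float)
        return float(np.sqrt(d @ A @ d))

    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])       # positive definite: encodes scale and correlation
    print(quadratic_form_distance([0, 0], [1, 1], np.eye(2)))  # Euclidean: sqrt(2)
    print(quadratic_form_distance([0, 0], [1, 1], A))          # 2.0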
Donoho-Stahel Estimator and Fixed-angle Algorithm
The DSE is a robust multivariate estimator of location and scatter. Essentially, it is an "outlyingness-weighted" mean and covariance, which downweights any point that is many robust standard deviations away from the sample in some univariate projection [19, 22]. This estimator also possesses desirable statistical properties such as affine equivariance, which not all transformations (e.g., principal component analysis) possess.
The DSE is our estimator of choice, although there are many estimators to choose from [22]. For example, we chose the DSE over the Minimum Volume Ellipsoid estimator because the DSE is easier to compute, scales better, and has much less bias (especially for dimensions > 2) [18]. In our extended work, we consider other robust estimators, but the one that seems to perform the best (based on numerous simulations and analyses) is the DSE. In the interest of space, we only deal with the DSE in this paper. Although the application we focus on here is outlier detection, we add that the DSE is a general transformation that is useful for many applications, such as those described in Section 1.
Fig. 1 gives a skeleton of the initial algorithm proposed by Stahel [25] for computing the estimator in 2-D. Let us step through the algorithm to understand how the estimator is defined. The input is a dataset containing N 2-D points of the form yi = (yi1, yi2). In step 1, we iterate through the unit circle, to consider a large number of possible angles/directions θ on which to project. We iterate through 180 degrees rather than 360 degrees since the 180-360 degree range is redundant. Hereafter, we call this algorithm
The Fixed-angle Donoho-Stahel Algorithm
1. For θ = 0 to π (i.e., 0 <= θ < π), using some small increment (e.g., 1 degree), do:
   (a) For i = 1, ..., N: compute the projection x_i(θ) = y_i · u(θ), where u(θ) = (cos θ, sin θ) is the unit vector.
   (b) Compute m(θ) = median{x_i(θ) : 1 <= i <= N}.
   (c) Compute MAD(θ). (The MAD is defined to be 1.4826 * (median_i |x_i(θ) - m(θ)|).)
   (d) For i = 1, ..., N: compute d_i(θ) = |x_i(θ) - m(θ)| / MAD(θ).
2. For i = 1, ..., N: compute d_i = max over all θ of d_i(θ).
3. Compute the robust multivariate centre t_R = (Σ_i w(d_i) y_i) / (Σ_i w(d_i)), where the weighting function w(t) is defined as follows: w(t) = 1 if t <= 2.5, and w(t) = (2.5/t)^2 otherwise.
4. Compute the robust covariance matrix Σ_R = (Σ_i w(d_i) (y_i - t_R)(y_i - t_R)^T) / (Σ_i w(d_i)).
5. Return the Donoho-Stahel estimator of location and scatter: (t_R, Σ_R).
Figure 1: The DSE Fixed-angle Algorithm for 2-D
the Fixed-angle algorithm.
For each θ, each point is projected onto the line corresponding to rotating the x-axis by θ, giving the value x_i(θ). Mathematically, this is given by the dot product between y_i and u(θ), which is the unit vector (cos θ, sin θ). We call u the projection vector.
In step 1(b), we compute m(θ), which is the median of all the x_i(θ) values. MAD is an acronym for median absolute deviation from the median. It is a better estimator of scatter than the standard deviation in the presence of outliers. Finally, step 1(d) yields d_i(θ), which measures how outlying the projection of y_i is with respect to θ. Note that d_i(θ) is analogous to classical standardization, where each value x_i(θ) is standardized to (x_i(θ) - μ)/σ, with μ and σ being the mean and the standard deviation of the x_i(θ), respectively. By replacing the mean with the median, and the standard deviation with the MAD, d_i(θ) is more robust against the influence of outliers than the value obtained by classical standardization.
Robustness is achieved by first identifying outlying points, and then downweighting their influence. Step 1 computes, for each point and each angle θ, the degree of outlyingness of the point with respect to θ. As a measure of how outlying each point is over all possible angles, step 2 computes, for each point, the maximum degree of outlyingness over all possible θ's. In step 3, if this maximum degree for a point is too high (our threshold is 2.5), the influence of this point is weakened by a decreasing weight function. Finally, with all points weighted accordingly, the location center t_R and the covariance matrix Σ_R are computed.
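A compact NumPy rendering of Figure 1 for 2-D data; the vectorization over angles is ours, and the weight function assumes the piecewise form given above:

    import numpy as np

    def donoho_stahel_2d(Y, step_deg=1.0):
        """Y: N x 2 data matrix; returns (t_R, Sigma_R)."""
        thetas = np.deg2rad(np.arange(0.0, 180.0, step_deg))
        U = np.stack([np.cos(thetas), np.sin(thetas)])      # 2 x A projection vectors
        X = Y @ U                                           # N x A projections x_i(theta)
        m = np.median(X, axis=0)                            # m(theta)
        mad = 1.4826 * np.median(np.abs(X - m), axis=0)     # MAD(theta)
        d = np.max(np.abs(X - m) / mad, axis=1)             # max outlyingness per point
        w = np.where(d <= 2.5, 1.0, (2.5 / d) ** 2)         # downweight outlying points
        t_R = (w[:, None] * Y).sum(axis=0) / w.sum()
        Z = Y - t_R
        Sigma_R = (w[:, None] * Z).T @ Z / w.sum()
        return t_R, Sigma_R

    rng = np.random.default_rng(0)
    Y = rng.normal(size=(1000, 2))
    Y[:10] += 20.0                                          # plant a few outliers
    print(donoho_stahel_2d(Y))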
3. KEY PROPERTIES OF THE DSE
In this section, we examine whether the estimator is useful
for distance-based operations in KDD applications. In
Section 6, we provide experimental results showing the difference the estimator can make. But first, in this section,
we conduct a more detailed examination of the properties
of the estimator. We show that the estimator possesses the
Euclidean property and the stability property, both of which
are essential for database applications.
Euclidean Property In this section, we show that once
the DSE transformation has been applied, the Euclidean
distance function becomes readily applicable. This is what
we call the Euclidean property.
Lemma 1. The Donoho-Stahel estimator of scatter, Σ_R, is a positive definite matrix.
The proof is omitted for brevity. According to standard matrix algebra [2], the key implication of the above lemma is that the matrix Σ_R can be decomposed into Σ_R = Q Λ Q^T, where Λ is a diagonal matrix whose entries are the eigenvalues, and Q is the matrix containing the eigenvectors of Σ_R. This decomposition is critical to the following lemma. It says that the quadratic form distance wrt Σ_R between two vectors x and y is the same as the Euclidean distance between the transformed vectors in the transformed space.
Lemma 2. Let x, y be two vectors in the original space. Suppose they are transformed into the space described by Σ_R, i.e., x_R = Λ^(1/2) Q^T x and y_R = Λ^(1/2) Q^T y. Then, the quadratic form distance wrt Σ_R is equal to the Euclidean distance between x_R and y_R.
Proof: dΣ_R(x, y)^2 = (x - y)^T Σ_R (x - y) = (x - y)^T Q Λ Q^T (x - y) = (Λ^(1/2) Q^T (x - y))^T (Λ^(1/2) Q^T (x - y)) = (x_R - y_R)^T (x_R - y_R).
The proof is rather standard, but we include it to provide a context for these comments:
For each vector x in the original space (or tuple in the
relation), each vector is transformed only once, i.e.,
Future operations do not require any
extra transformations. For example, for indexing, all
tuples are transformed once and can be stored in an
indexing structure. When a query point z is given, z
is similarly transformed to zR . From that point on,
the Euclidean distance function can be used for the
transformed vectors (e.g., xR and zR ).
Furthermore, many existing distance-based structures
are the most e-cient or eective when dealing with
Euclidean-based calculations. Examples include R-trees
and variants for indexing [13, 4], and the outlier
detection algorithm studied in [16].
The key message here is that space transformation wrt Σ̂_R is by itself not expensive to compute, and can bring further efficiency and effectiveness to subsequent processing.
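To make the Euclidean property concrete, the following NumPy sketch (ours, not the paper's implementation) applies the one-time transformation x -> Λ^(-1/2) Q^T x and numerically checks Lemma 2 on an arbitrary positive definite matrix:

import numpy as np

def dse_transform(X, sigma_r):
    # sigma_r = Q diag(evals) Q^T; map each row x of X to Lambda^(-1/2) Q^T x
    evals, Q = np.linalg.eigh(sigma_r)
    return (np.diag(evals ** -0.5) @ Q.T @ X.T).T

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
sigma = A @ A.T + 3 * np.eye(3)          # stand-in positive definite matrix
x, y = rng.standard_normal(3), rng.standard_normal(3)
d_quad = (x - y) @ np.linalg.inv(sigma) @ (x - y)   # quadratic form distance
xR, yR = dse_transform(np.vstack([x, y]), sigma)
assert np.isclose(d_quad, np.sum((xR - yR) ** 2))   # equals Euclidean distance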
Stability Property The second property we analyze here for the DSE concerns stability. A transformation is stable if the transformed space does not lose its usefulness, even in the presence of frequent updates. This is an important issue for database applications. If an amount of effort x was spent in setting up an index in the transformed space, we certainly would not like to spend another amount x after every single update to the index. In statistics, there is the notion of a breakdown point of an estimator, which quantifies the proportion of the dataset that can be contaminated without causing the estimator to become "arbitrarily absurd" [27]. But we do not pursue this formal approach regarding breakdown points; instead, we resort to experimental evaluation.
In our experiments, we used a real dataset D and computed the DSE Σ̂_R(D). We then inserted or deleted tuples from D, thereby changing D to D_new. To measure stability, we compared the matrix Σ̂_R(D) with Σ̂_R(D_new). In the numerical computation domain, there are a few heuristics for measuring the difference between matrices, but there is no universally agreed-upon metric [9]. To make our comparison more intuitive, we instead picked a distance-based operation, outlier detection, and compared the results. Section 6 gives the details of our experiments, but in brief, we proceeded as follows: (a) We used the old estimator Σ̂_R(D) to transform the space for D_new and then found all the outliers in D_new; and (b) We used the updated estimator Σ̂_R(D_new) to transform the space for D_new, and then found all the outliers in D_new.
To measure the difference between the two sets of detected outliers, we use standard precision and recall [23], and we define: (i) the answer set as the set of outliers found by a given algorithm, and (ii) the target set as the "official" set of outliers that are found using a sufficiently exhaustive search (i.e., using the Fixed-angle algorithm with a relatively small angular increment). Precision is the percentage of the answer set that is actually found in the target set. Recall is the percentage of the target set that is in the answer set. Ideally, we want 100% precision and 100% recall.
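In code, the two measures reduce to simple set operations; the example numbers come from the 25% Inserts row of Fig. 2 below.

def precision_recall(answer, target):
    # answer: outliers found by an algorithm; target: the "official" set
    answer, target = set(answer), set(target)
    hits = len(answer & target)
    return hits / len(answer), hits / len(target)

# 17 detected with the old estimator, of which 15 are among the 15 official
# outliers: precision = 15/17 = 88.2%, recall = 15/15 = 100%.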
Fig. 2 shows the results when there were 25%, 50%, 75% and 100% new tuples added to D, and when 25%, 50% and 75% of the tuples in D were deleted from D. The new tuples were randomly chosen and followed the distribution of the tuples originally in D. The deleted tuples were randomly chosen from D. The second and third columns show the number of outliers found, with the third column giving the "real" answer, and the second column giving the "approximated" answer using the old transformation. The fourth and fifth columns show the precision and recall. They clearly show that the DSE transformation is stable. Even a 50% change in the database does not invalidate the old transformation, and re-computation appears unnecessary.
For the results shown in Fig. 2, the newly added tuples followed the same distribution as the tuples originally in D. For the results shown in Fig. 3, we tried a more drastic scenario: the newly added tuples, called junk tuples, followed a totally different distribution. This is reflected by the relatively higher numbers in the second and third columns of Fig. 3. Nevertheless, despite the presence of tuples from two distributions, the precision and recall figures are still close to 100%. This again shows the stability of the DSE.
% Change        # Outliers using Σ̂_R(D)   # Outliers using Σ̂_R(D_new)   Precision   Recall
25% Inserts     17                          15                             88.2%       100%
50% Inserts     17                          -                              -           -
75% Inserts     -                           -                              -           -
100% Inserts    37                          29                             78.4%       100%
25% Deletes     13                          13                             100%        100%
50% Deletes     -                           -                              -           -
75% Deletes     15                          19                             100%        78.9%

Figure 2: Precision and Recall: Same Distribution
% Junk Inserted   # Outliers using Σ̂_R(D)   # Outliers using Σ̂_R(D_new)   Precision   Recall
25%               53                          52                             94.3%       96.2%
37.5%             74                          70                             91.9%       97.1%
50%               95                          90                             92.6%       97.8%
62.5%             108                         100                            90.7%       98.0%

Figure 3: Precision and Recall: Drastically Different Distribution
4. K-D SUBSAMPLING ALGORITHM
In the previous section, we showed that the DSE possesses the desirable Euclidean and stability properties for KDD applications. The remaining question is whether the associated cost is considerable. Let us consider how to compute Σ̂_R efficiently, for k > 2 dimensions.
Complexity of the Fixed-angle Algorithm Recall that Fig. 1 gives the 2-D Fixed-angle algorithm proposed by Donoho and Stahel. The extension of this algorithm to 3-D and beyond is straightforward. Instead of using a unit circle, we use a unit sphere in 3-D. Thus, there are two angles, θ1 and θ2, through which to iterate. Similarly, in k-D, we deal with a unit hypersphere, and there are k − 1 angles through which to iterate: θ1, ..., θk−1.
To understand the performance of the Fixed-angle algorithm, we conduct a complexity analysis. In step 1 of Fig. 1, each angle requires finding the median of N values, where N is the size of the dataset. Finding the median takes O(N) time, which is the time that a selection algorithm needs to partition an array to find the median entry. (Note that sorting is not needed.) Thus, in 2-D, if there are a increments to iterate through, the complexity of the first step is O(aN). For k-D, there are k − 1 angles to iterate through. If there are a increments for each of these angles, the total complexity of the first step is O(a^(k−1) kN).
In step 2, in the k-D case, there are a^(k−1) projection vectors to evaluate per point. Thus, the complexity of this step is O(a^(k−1) N). Step 3 finds a robust center, which can be done in O(kN) time. Step 4 sets up the k × k robust covariance matrix, which takes O(k²N) time. Hence, the total complexity of the Fixed-angle algorithm is O(a^(k−1) kN). Suffice it to say that running this basic k-D algorithm is impractical for larger values of a and k.
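The O(N) median step can be realized with a selection-based partition rather than a sort, e.g. via NumPy's introselect (a sketch; for even N it returns the upper median):

import numpy as np

def median_select(x):
    x = np.asarray(x)
    mid = len(x) // 2
    return float(np.partition(x, mid)[mid])  # partial partition, no full sort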
Intuition behind the Subsampling Algorithm in 2-D The first two steps of the Fixed-angle algorithm compute, for each point y_i, the degree of "outlyingness" d_i. The value of d_i is obtained by taking the maximum value of d_i(θ) over all θ's, where d_i(θ) measures how outlying the projection of y_i is wrt θ. In the Fixed-angle algorithm, there is an exhaustive enumeration of θ's. For high dimensions, this approach is infeasible.
Let us see if there is a better way to determine "good" projection vectors. Consider points A, B, and C in Fig. 4(a), which shows a 2-D scenario involving correlated attributes. Fig. 4(b) shows the projection of points onto a line orthogonal to the major axis of the ellipse. (Not all points are projected in the figure.) Note that B's projection appears to belong to the bulk of the points projected down from the ellipse; it does not appear to be outlying at all in this projection. Also, although A is outlying on the projection line, C is not. In Fig. 4(c), A and C are not outlying, but B clearly is. Fig. 4(d) shows yet another projection.
Algorithm Subsampling
1. For i = 1, ..., m, where m is the number of iterations chosen, do:
(a) Select k − 1 random points from the dataset. Together with the origin, these points form a hyperplane through the origin, and a subspace V.
i. Compute a basis for the orthogonal complement of V.
ii. Choose a unit vector u_i from the orthogonal complement to use as vector u in the Fixed-angle algorithm.
(b) For each point, compute its projection onto u_i.
(c) (continue with step 1(b) and beyond of the Fixed-angle algorithm shown in Fig. 1, where u_i takes the role of θ)

Figure 5: DSE Subsampling Algorithm for k-D
As can be seen, the projection vectors which are chosen greatly influence the values for d_i in the Donoho-Stahel algorithm. In applying subsampling, our goal is to use lines orthogonal to the axes of an ellipse (or ellipsoid, in k-D), to improve our odds of obtaining a good projection line. While there may be better projection vectors in which to identify outliers, these are good choices. There is an increased chance of detecting outliers using these orthogonal lines because many outliers are likely to stand out after the orthogonal projection (see Fig. 4). Non-outlying points, especially those within the ellipsoids, are unlikely to stand out because they project to a common or relatively short interval on the line.
If we knew what the axes of the ellipse were, then there would be no need to do subsampling. However, since: (a) we do not know the parameters of the ellipsoid, and (b) in general, there will be too many points and too many dimensions involved in calculating the parameters of the ellipsoid, we use the following approach called subsampling. In 2-D, the idea is to first pick a random point P from the set of N input points. Then compute a line orthogonal to the line joining P and the origin. Note that, with reasonable probability, we are likely to pick a point P in the ellipse, and the resulting orthogonal line may approximate one of the axes of the ellipse. This is the essence of subsampling.
More Details of the Subsampling Algorithm in k-D In k-D, we first find a random sample of k − 1 points. Together with the origin, they form a subspace V. Next, we need to find a subspace that is orthogonal to V, which is called the orthogonal complement of V [2]. From this point on, everything else proceeds as in the Fixed-angle algorithm. Fig. 5 outlines the Subsampling algorithm for k-D DSE computation.
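A hedged sketch of step 1(a) in NumPy: the paper computes the orthogonal complement with Gauss-Jordan elimination, whereas here the complement basis is read off an SVD, which is equivalent for this purpose.

import numpy as np

def complement_unit_vector(points, rng=np.random.default_rng()):
    # points: (k-1) x k array of sample points that, with the origin, span V
    P = np.asarray(points, dtype=float)
    _, _, vt = np.linalg.svd(P)               # rows of vt: orthonormal basis
    basis = vt[np.linalg.matrix_rank(P):]     # rows spanning V's complement
    return basis[rng.integers(len(basis))]    # one unit vector u_i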
One key detail of Fig. 5 deserves elaboration: how to compute m. To determine m, we begin by analyzing the probability of getting a "bad" subsample. For each subsample, k − 1 points are randomly chosen. A subsample is likely to be good if all k − 1 points are within the ellipsoid. Let ε (a user-chosen parameter) be the fraction of points outside the ellipsoid. Typically, ε varies between 0.01 and 0.5; the bigger the value, the more conservative or demanding the user is on the quality of the subsamples.
Figure 4: Bivariate Plots Showing the Effect of Different Projection Lines. (a) Data points only. (b) Projection onto a line orthogonal to the major axis of the ellipse. (c) Projection onto a line orthogonal to the minor axis. (d) Projection onto another line.
Let us say that m is the smallest number of subsamples such that there is at least a 95% probability that we get at least one good subsample out of the m subsamples. Given ε, the probability of getting a "good" subsample is the probability of picking all k − 1 random points within the ellipsoid, which is (1 − ε)^(k−1). Conversely, the probability of getting a bad subsample is 1 − (1 − ε)^(k−1). Thus, the probability of all m subsamples being bad is (1 − (1 − ε)^(k−1))^m. Hence, we can determine a base value of m by solving the following inequality for m: 1 − (1 − (1 − ε)^(k−1))^m ≥ 0.95. For example, with ε = 0.5 and k = 5, this gives m = 47. In Section 6, we show how the quality of the estimator varies with m.
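The inequality is straightforward to solve numerically; the sketch below reproduces the base value m = 47 used in Section 6 (assuming ε = 0.5 and k = 5):

import math

def base_subsample_count(eps, k, target=0.95):
    # smallest m with P(at least one good subsample) >= target
    p_good = (1.0 - eps) ** (k - 1)
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_good))

print(base_subsample_count(0.5, 5))   # -> 47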
Complexity of the Subsampling Algorithm In k-D, we can determine a basis for the orthogonal complement of a hyperplane through the origin and through k − 1 non-zero points in O(k³) time, using Gauss-Jordan elimination [2, 9]. Using this basis, we simply pick any unit vector u as our projection vector, and then continue with the basic Fixed-angle algorithm. Recall from Section 4 that the basic algorithm runs in O(a^(k−1) kN) time for step 1 and O(k²N) time for the remaining steps. For the Subsampling algorithm, however, we perform a total of m iterations, where each iteration consists of k − 1 randomly selected points, and thus choosing the m projection vectors in step 1 takes O(mk³) time. Thus, following the analysis in Section 4, the entire algorithm runs in O(mk³ + mkN + k²N) time.
5. K-D RANDOMIZED ALGORITHMS
The Subsampling algorithm is more scalable with respect to k than the Fixed-angle algorithm is, but the mk³ complexity factor is still costly when the number of subsamples m is large (i.e., for a high quality estimator). Thus, in this section, we explore how the k-D DSE estimator can be computed more efficiently. First, we implement a simple alternative to the Fixed-angle algorithm, called Pure-random. After evaluating its strengths and weaknesses, we develop a new algorithm called Hybrid-random, which combines part of the Pure-random algorithm with part of the Subsampling algorithm. In Section 6, we provide experimental results, showing effectiveness and efficiency.
Pure-random Algorithm Recall from Fig. 1 that in the Fixed-angle algorithm, the high complexity is due to the a^(k−1) factor, where a^(k−1) denotes the number of projection unit vectors examined. However, for any given projection unit vector, the complexity of step 1 reduces drastically to O(kN). It is certainly possible for an algorithm to do well if it randomly selects r projections to examine, and if some of those projections happen to be "good" or influential projections. A skeleton of this algorithm, called Pure-random, is presented in Fig. 6.

Algorithm Pure-random
1. For i = 1, ..., r, where r is the number of projection vectors chosen, do:
(a) Select a k-D projection unit vector u_i randomly (i.e., pick k − 1 random angles)
(b) For each point, compute its projection onto u_i.
(c) (continue with step 1(b) and beyond of the Fixed-angle algorithm, where u_i takes the role of θ)

Figure 6: DSE Pure-random Algorithm for k-D

Following the analysis shown in Section 4, it is easy to see that the complexity of the Pure-random algorithm is O(rkN). Note that randomization is also used in the Subsampling algorithm. But there, each random "draw" is a subspace V formed by k − 1 points from the dataset, from which the orthogonal complement of V is computed. In the Pure-random case, however, each random draw is a projection vector. In order for the Pure-random algorithm to produce results comparable to that of the Subsampling algorithm, it is very likely that r ≫ m.
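One simple way to realize step 1(a) of Fig. 6 is shown below; normalizing a standard Gaussian draw yields a direction uniform on the unit hypersphere, which is equivalent to (and easier than) picking k − 1 random angles.

import numpy as np

def random_unit_vector(k, rng=np.random.default_rng()):
    v = rng.standard_normal(k)
    return v / np.linalg.norm(v)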
Hybrid-random Algorithm Conceptually, the Pure-random algorithm probes the k-D space blindly. This is the reason why the value of r may need to be high for acceptable quality. The question is whether random draws of projection vectors can be done more intelligently. More specifically, are there areas of the k-D space over which the randomization can skip, or equivalently, are there areas on which the randomization should focus?
In a new algorithm that we develop called Hybrid-random, we first apply the Subsampling algorithm for a very small number of subsamples. Consider the orthogonal complement of V that passes through the origin. Imagine rotating this line through a small angle anchored at the origin, thus creating a cone. This rotation yields a "patch" on the surface of a k-D unit hypersphere. From the Fixed-angle algorithm, we know that projection vectors too close to each other do not give markedly different results. So, in the second phase of the Hybrid-random algorithm, we will restrict the random draws of projection vectors to stay clear of previously examined cones/patches.
Using the Euclidean inner product and the Law of Cosines, a collision between two vectors a and b occurs if dist²(a, b) < δ, where δ determines the radius of a patch on the surface of the k-D unit hypersphere. To determine δ, we used the following heuristic. We say that vectors a and b are too close to each other if cos θ ≥ 0.95, where θ is the angle between the vectors. Thus, dist²(a, b) = 2 − 2 cos θ ≤ 2(1 − 0.95) = 0.1; hence, as an upper bound, we use δ = 0.1.
Two observations are in order. First, patches that are too large are counterproductive because many promising projection vectors may be excluded. Second, although increasing the number of patches improves accuracy, favourable results can be obtained with relatively few patches (e.g., 100), as will be shown in Section 6.
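The collision test itself is a one-liner on unit vectors; the default threshold below is the upper bound δ = 0.1 derived above (Fig. 9(d) reports a run with δ = 0.08):

import numpy as np

def too_close(a, b, delta=0.1):
    # dist^2(a, b) = 2 - 2 cos(theta) for unit vectors a, b
    return float(np.sum((a - b) ** 2)) < delta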
Fig. 7 gives a skeleton of the Hybrid-random algorithm. Steps 1 to 3 use the Subsampling algorithm to find some initial projection vectors (including the eigenvectors of the scatter matrix) and keep them in S. In each iteration of step 4, a new random projection vector is generated in such a way that it stays clear of existing projection vectors.
Algorithm Hybrid-random
1. Run the Subsampling algorithm for a small number m of iterations (e.g., m = 47).
2. Compute the k eigenvectors of the resulting scatter matrix. This gives us an approximation for the axes of the ellipsoid.
3. Initialize the set S of previously examined projection vectors to consist of the m projection vectors from step 1 and the k eigenvectors from step 2.
4. For i = 1, ..., r, where r is the number of extra random patches desired, do:
(a) From S, randomly select 2 unique vectors a and b that are at least 2δ radians apart.
(b) Compute a new vector u_i that is a linear combination of a and b. In particular, u_i = λa + (1 − λ)b, where λ is randomly chosen between [δ, 1 − δ].
(c) If u_i is within δ radians from an existing vector in S, then redo the previous step with a new λ. If these two vectors are still too close during the second attempt, then go back to step 4(a).
(d) Normalize u_i so that it is a unit vector, and add it to S.
(e) For each point, compute its projection onto u_i.
(f) (continue with step 1(b) and beyond of the Fixed-angle algorithm, where u_i takes the role of θ)

Figure 7: DSE Hybrid-random Algorithm for k-D
Recall from our earlier discussion that the complexity of the Subsampling algorithm is O(m1·k³ + m1·kN + k²N), where m1 is the number of subsamples taken. As for the Pure-random algorithm, the complexity is O(r1·kN), where r1 is the number of random projections probed. It is easy to see that the Hybrid-random algorithm requires a complexity of O(m2·k³ + (m2 + r2)·kN + k²N). We expect that m2 ≪ m1, and r2 ≪ r1. Experimental results follow.
6. EXPERIMENTAL EVALUATION
Experimental Setup To evaluate the Donoho-Stahel transformation, we picked the distance-based outlier detection operation described in [16]. As explained in Section 3, we use precision and recall [23] to compare the results.
Our base dataset is an 855-record dataset consisting of 1995-96 National Hockey League (NHL) player performance statistics. These publicly available statistics can be downloaded from sites such as the Professional Hockey Server at http://maxwell.uhh.hawaii.edu/hockey/. Since this real-life dataset is quite small, we created a number of synthetic datasets mirroring the distribution of statistics within the NHL dataset. Specifically, we determined the distribution of each attribute in the original dataset by using a 10-partition histogram. Then, we generated datasets containing up to 100,000 tuples whose distribution mirrored that of the base dataset. As an optional preprocessing step, we applied the Box and Cox transformation to normality [8] to find appropriate parameters p and D for the distance-based outliers implementation. Unless otherwise stated, we used a 5-D case of 100,000 tuples as our default, where the attributes are goals, assists, penalty minutes, shots on goal, and games played.
Our tests were run on a Sun Microsystems Ultra-1 processor, running SunOS 5.7, and having 256 MB of main memory. Of the four DSE algorithms presented, only the Fixed-angle algorithm is deterministic. The other three involve randomization, so we used the median results of several runs. Precision was almost always 100%, but recall often varied.
Usefulness of Donoho-Stahel Transformation In the introduction, we motivated the usefulness of the Donoho-Stahel transformation by arguing that the identity transformation (i.e., raw data), as well as the normalization and standardization transformations, may not give good results. In the experiment reported below, we show a more concrete situation based on outlier detection. Based on the 1995-96 NHL statistics, we conducted an experiment using the two attributes: penalty-minutes and goals-scored. We note that the range for penalty-minutes was [0,335], and the range for goals-scored was [0,69].
Fig. 8 compares the top outliers found using the identity, standardization, and Donoho-Stahel transformations. Also shown are the actual penalty-minutes and goals-scored by the identified players. With the identity transformation (i.e., no transformation), players with the highest penalty-minutes dominate. With classical standardization, the dominance shifts to the players with the highest goals-scored (with Matthew Barnaby appearing on both lists). However, in both cases, the identified outliers are "trivial", in the sense that they are merely extreme points for some attribute. Barnaby, May, and Simon were all in the top-5 for penalty-minutes; Lemieux and Jagr were the top-2 for goals-scored.
With the Donoho-Stahel transformation, the identified outliers are a lot more interesting and surprising.
Transformation    Top Outliers Found   Penalty-mins. (raw)   Goals-scored (raw)
Identity          Matthew Barnaby      335                   15
                  Chris Simon          250                   -
Standardization   Matthew Barnaby      335                   15
                  Jaromir Jagr         96                    62
                  Mario Lemieux        54                    69
Donoho-Stahel     Matthew Barnaby      335                   15
                  Donald Brashear      223                   0
                  Jan Caloun           0                     8
                  Joe Mullen           0                     8

Figure 8: Identified Outliers: Usefulness of Donoho-Stahel Transformation
Donald Brashear was not even in the top-15 as far as penalty-minutes goes, and his goals-scored performance was unimpressive, that is, penalty-minutes = 223 and goals-scored = 0. Yet, he has a unique combination. This is because to amass a high number of penalty minutes, a player needs to play a lot, and if he plays a lot, he is likely to score at least some goals. (Incidentally, 0 goals is an extreme univariate point; however, well over 100 players share this value.)
Similar comments apply to Jan Caloun and Joe Mullen; both had 0 penalty-minutes but 8 goals-scored. While their raw figures look unimpressive, the players were exceptional in their own ways.¹ The point is, without an appropriate space transformation, these outliers would likely be missed.
Internal Parameters of the Algorithms Every algorithm presented here has key internal parameters. In the Fixed-angle case, it is the parameter a, the number of angles tested per dimension. For the randomization algorithms, there are m, the number of subsamples, and r, the number of random projection vectors. Let us now examine how the choices of these parameters affect the quality of the estimator computed. Precision and recall will be used to evaluate quality. However, for the results presented below, precision was always at 100%. Thus, we only report the recall values.
The four graphs in Fig. 9 each contrast: (i) CPU times, (ii) recall values, and (iii) number of iterations (or patches used) for one of the four algorithms. The left hand y-axis defines CPU times (in minutes for the top two graphs, and in seconds for the bottom two graphs). The right hand y-axis, in conjunction with the recall curve (see each figure's legend), defines recall values. Note, however, that the recall range varies from one graph to another. Fig. 9(a) measures CPU time in minutes, and shows that the Fixed-angle algorithm can take a long time to finish, especially as the number of random angles a tested increases. The horizontal axis is in tens of thousands of iterations. Recall that a small decrease in the angle increment for each dimension can cause a very large number of additional iterations to occur. For many of our datasets, it was necessary to use very small angle increments (e.g., 75 hours of CPU time, for 100,000 tuples in 5-D), before determining the number of outliers present. We omit these very long runs from our graphs, to allow us to more clearly contrast CPU times and recall values.
Compared to the Fixed-angle algorithm, the Pure-random algorithm achieves a given level of recall more quickly, although, as Fig. 9(b) shows, it can still take a long time to achieve high levels of recall.
Recall that, for the Subsampling algorithm, a key issue was how many subsamples to use. Based on the heuristic presented in Section 4, the base value of m was determined to be 47, and multiples of 47 subsamples were used. From the recall curve in Fig. 9(c), it is clear that below 47 subsamples, the recall value is poor. But even with 3 * 47 = 141 subsamples, the recall value becomes rather acceptable. This is the strength of the Subsampling algorithm, which can give acceptable results in a short time. But, the recall curve has a diminishing rate of return, and it may take a very long time for Subsampling to reach a high level of recall, as confirmed in Fig. 10.
¹ Actually, we did not even hear of Jan Caloun before our experiment. During 1995-96, Caloun played a total of 11 games, and scored 8 goals, almost a goal per game, which is a rarity in the NHL. A search of the World Wide Web reveals that Caloun played a grand total of 13 games in the NHL (11 games in 1995-96, and 2 games in 1996-97) before disappearing from the NHL scene. We also learned that he scored on his first four NHL shots to tie an NHL record.
Since the Hybrid-random algorithm uses the Subsampling algorithm in its first phase (with m = 47 subsamples), it is expected that the Hybrid-random algorithm behaves about as well as the Subsampling algorithm, at the beginning, for mediocre levels of recall, such as 70-75% (cf. Fig. 10). But, as shown in Fig. 9(d), if the Hybrid-random algorithm is allowed to execute longer, it steadily and quickly improves the quality of its computation. Thus, in terms of CPU time, we start with the Subsampling curve, but quickly switch to the Pure-random curve to reap the benefits of a fast algorithm and pruned randomization.
Achieving a Given Rate of Recall The above experiment shows how each algorithm trades off efficiency with quality. Having picked a reasonable set of parameter values for each algorithm, let us now compare the algorithms head-to-head. Specifically, for fixed recall rates, we compare the time taken for each algorithm to deliver that recall rate. Because the run time of the Fixed-angle algorithm is typically several orders of magnitude above the others (for comparable quality), we omit the Fixed-angle algorithm results from now on.
Fig. 10 compares the Hybrid-random algorithm with both the Pure-random and Subsampling algorithms, for higher rates of recall. In general, the Subsampling algorithm is very effective for quick, consistent results. However, to improve further on the quality, it can take a very long time. In contrast, when the Hybrid-random algorithm is allowed to run just a bit longer, it can deliver steady improvement on quality. As a case in point, to achieve about 90% recall in the current example, it takes the Subsampling algorithm almost 14 hours to achieve the same level of recall produced by the Hybrid-random algorithm in about two minutes. Nevertheless, we must give the Subsampling algorithm credit for giving the Hybrid-random algorithm an excellent base from which to start its computation.
In Fig. 10, the Pure-random algorithm significantly outperforms the Subsampling algorithm, but this is not always the case. We expect the recall rate for Pure-random to be volatile, and there are cases where the Pure-random algorithm returns substantially different outliers for large numbers of iterations. The Hybrid-random algorithm tends to be more focused and consistent.
Scalability in Dimensionality and Dataset Size Fig. 11(a) shows scalability in dimensionality for the Subsampling and Hybrid-random algorithms. We used moderate levels of recall (e.g., 75%) and 60,000 tuples for this analysis. High levels of recall would favor the Hybrid-random algorithm. The results shown here are for 282 iterations for the Subsampling algorithm, and 90 patches for the Hybrid-random algorithm. Our experience has shown that these numbers of iterations and patches are satisfactory, assuming we are satisfied with conservative levels of recall. Fig. 11(a) shows that both algorithms scale well, and this confirms our complexity analysis of Section 4.
Fig. 11(b) shows how the Subsampling and Hybrid-random algorithms scale with dataset size, in 5-D, for conservative levels of recall. Again, both algorithms seem to scale well, and again the Hybrid-random algorithm outperforms the Subsampling algorithm. High levels of recall would favor the Hybrid-random algorithm, even more so than shown.
Figure 9: Plots of Run Time and Recall (CPU time against number of iterations or patches, with recall on the right-hand axis; 5-D, 100,000 tuples). (a) Top left: Fixed-angle. (b) Top right: Pure-random. (c) Bottom left: Subsampling. (d) Bottom right: Hybrid-random (delta = 0.0800).
Figure 10: Run Time vs. Recall for Subsampling, Pure-random, and Hybrid-random Algorithms (5-D, 100,000 tuples).
7. SUMMARY AND CONCLUSION
The results returned by many types of distance-based KDD operations/queries tend to be less meaningful when no attention is paid to scale, variability, correlation, and outliers in the underlying data. In this paper, we presented the case for robust space transformations to support operations such as nearest neighbor search, distance-based clustering, and outlier detection. An appropriate space is one that: (a) preserves the Euclidean property, so that efficient Euclidean distance operations can be performed without sacrificing quality and meaningfulness of results, and (b) is stable in the presence of a non-trivial number of updates. We saw that distance operations which ordinarily would be inappropriate when operating on the raw data (and even on normalized or standardized data) are actually appropriate in the transformed space. Thus, the end user sees results which tend to be more intuitive or meaningful for a given application. We presented a data mining case study on the detection of outliers to support these claims.
After considering issues such as effectiveness (as measured by precision and recall, especially the latter) and efficiency (as measured by scalability both in dimensionality and dataset size), we believe that the Hybrid-random algorithm that we have developed in this paper is an excellent choice among the Donoho-Stahel algorithms. In tens of seconds of CPU time, a robust estimator can be computed which not only accounts for scale, variability, correlation, and outliers, but is also able to withstand a significant number of database updates (e.g., 50% of the tuples) without losing effectiveness or requiring re-computation. For many cases involving high levels of recall, the randomized algorithms, and in particular the Hybrid-random algorithm, can be at least an order of magnitude faster (and sometimes several orders of magnitude faster) than the alternatives. In conclusion, we believe that our results have shown that robust estimation has a place in the KDD community, and can find value in many KDD applications.
8. REFERENCES
--R
Automatic Subspace Clustering of High Dimensional Data for Data Mining Applications.
Elementary Linear Algebra: Applications Version.
Outliers in Statistical Data.
LOF: Identifying Density-Based Local Outliers.
An Analysis of Transformations (Box and Cox).
Numerical Analysis.
A fast algorithm for robust principal components based on projection pursuit.
A Density-based Algorithm for Discovering Clusters in Large Spatial Databases with Noise.
R-trees: a dynamic index structure for spatial searching.
Algorithms for Mining Distance-Based Outliers in Large Datasets.
Bias robust estimation of scale.
The behaviour of the Stahel-Donoho robust multivariate estimator.
Robust Regression and Outlier Detection.
Introduction to Modern Information Retrieval.
Breakdown of Covariance Estimators.
STING: A statistical information grid approach to spatial data mining.
High breakdown point estimates of regression by means of the minimization of an efficient scale.
data mining;robust statistics;space transformations;outliers;robust estimators;distance-based operations
502544 | Evaluating the novelty of text-mined rules using lexical knowledge. | In this paper, we present a new method of estimating the novelty of rules discovered by data-mining methods using WordNet, a lexical knowledge-base of English words. We assess the novelty of a rule by the average semantic distance in a knowledge hierarchy between the words in the antecedent and the consequent of the rule - the greater the average distance, the greater the novelty of the rule. The novelty of rules extracted by the DiscoTEX text-mining system on Amazon.com book descriptions was evaluated by both human subjects and by our algorithm. By computing correlation coefficients between pairs of human ratings and between human and automatic ratings, we found that the automatic scoring of rules based on our novelty measure correlates with human judgments about as well as human judgments correlate with one another. @Text mining | Introduction
A data-mining system may discover a large body of rules; however, relatively few of these may convey useful new knowledge to the user. Several metrics for evaluating the "interestingness" of mined rules have been proposed [BA99, HK01]. These metrics can be used to filter out a large percentage of the less interesting rules, thus yielding a more manageable number of higher quality rules to be presented to the user. However, most of these measure simplicity (e.g. rule size), certainty (e.g. confidence), or utility (e.g. support). Another important aspect of interestingness is novelty: does the rule represent an association that is currently unknown. For example, a text-mining system we developed that discovers rules from computer-science job announcements posted to a local newsgroup [NM00] induced the rule: "SQL -> database". A knowledgeable computer scientist may find this rule uninteresting because it conveys a known association. Evaluating the novelty of a rule requires comparing it to an existing body of knowledge the user is assumed to already possess.
For text mining [Hea99, Fel99, Mla00], in which rules consist of words in natural language, a relevant body of common knowledge is basic lexical semantics, i.e. the meanings of words and the semantic relationships between them. A number of lexical knowledge bases are now available. WordNet [Fel98] is a semantic network of about 130,000 English words linked to about 100,000 lexical senses (synsets) that are interconnected by relations such as antonym, generalization (hypernym), and part-of (holonym). We present and evaluate a method for measuring the novelty of text-mined rules using such lexical knowledge.
We define a measure of the semantic distance, d(w_i, w_j), between two words based on the length of the shortest path connecting w_i and w_j in WordNet. The novelty of a rule is then defined as the average value of d(w_i, w_j) over all pairs of words (w_i, w_j), where w_i is in the antecedent and w_j is in the consequent of the rule. Intuitively, the semantic dissimilarity of the terms in a rule's antecedent and in its consequent is an indication of the rule's novelty. For example, "beer -> diapers" would be considered more novel than "beer -> pretzels" since beer and pretzels are both food products and therefore closer in WordNet.
We present an experimental evaluation of this novelty metric by applying it to rules mined from book descriptions extracted from Amazon.com. Since novelty is fundamentally subjective, we compared the metric to human judgments. We have developed a web-based tool that allows human subjects to enter estimates of the novelty of rules. We asked multiple human subjects to score random selections of mined rules and compared the results to those obtained by applying our metric to the same rules. We found that the average correlation between the scoring of our algorithm and that of the human users, using both raw score correlation (Pearson's metric) and rank correlation (Spearman's metric), was comparable to the average score correlation between the human users. This suggests that the algorithm has a rule scoring judgment similar to that of human users.
2 Background
2.1 Text Mining
Traditional data mining algorithms are generally applied to structured databases, but text mining algorithms try to discover knowledge from unstructured or semi-structured textual data, e.g. web-pages. Text mining is a relatively new research area at the intersection of natural language processing, machine learning and information retrieval. Various new useful techniques are being developed by researchers for discovering knowledge from large text corpora, by appropriately integrating methods from these different disciplines. DiscoTEX [NM00] is one such system that discovers prediction rules from natural language corpora using a combination of information extraction and data mining. It learns an information extraction system to transform text into more structured data, and this structured data is then mined for interesting relationships.
For our experiments, we have used rules mined by DiscoTEX from book descriptions extracted from Amazon.com, in the "science", "romance" and "literature" categories. DiscoTEX first extracts a structured template from the Amazon.com book description web-pages. It constructs a template for each book description, with pre-defined slots (e.g. title, author, subject, etc.) that are filled with words extracted from the text. DiscoTEX then uses a rule mining technique to extract prediction rules from this template database. An example extracted rule is shown in Figure 1, with one slot predicted from the other slots. For our purpose, we only use the filler words in the slots, ignoring the slotnames; in our algorithm, the rule in Figure 1 would be used in the form "daring love woman romance historical fiction story read wonderful".
daring, love
woman
romance, historical, fiction
->
story, read, wonderful

Figure 1: DiscoTEX rule mined from Amazon.com "romance" book descriptions
2.2 WordNet
WordNet [Fel98] is an online lexical knowledge-base of 130,000 English words, developed at Princeton University. In WordNet, English nouns, adjectives, verbs and adverbs are organized into synonym sets or synsets, each representing an underlying lexical concept. A synset contains words of similar meaning pertaining to a common semantic concept. But since a word can have different meanings in different contexts, a word can be present in multiple synsets. A synset contains associated pointers representing its relation to other synsets. WordNet supports many pointer types, e.g. antonyms, synonyms, etc. The pointer types we used in our algorithm are explained below:
1. Synonym: This pointer is implicit. Since words in the same synset are synonymous, e.g. life and existence, the synonym of a synset is itself.
2. Antonym: This pointer type refers to another synset that is quite opposite in meaning to the given synset, e.g. front is the antonym of back.
3. Attribute: This pointer type refers to another synset that is implicated by this synset, e.g. benevolence is an attribute of good.
4. Pertainym: This pointer refers to a relation from a noun to an adjective, an adjective to a noun, or an adverb to an adjective, indicating a morphological relation, e.g. alphabetical is a pertainym of alphabet.
5. Similar: This pointer refers to another adjective that is very close in terms of meaning to the current adjective, although not enough to be part of the same synset, e.g. unquestioning is similar to absolute.
6. Cause: This pointer type refers to a cause and effect relation, e.g. kill is a cause of die.
7. Entailment: This pointer refers to the implication of another action, e.g. breathe is an entailment of inhale.
8. Holonym: This pointer refers to a part in a part-whole relation, e.g. chapter is a holonym of text. There are three kinds of holonyms: by member, by substance and by part.
9. Meronym: This pointer refers to a whole in a part-whole relation, e.g. computer is a meronym of cpu. There are three kinds of meronyms: by member, by substance and by part.
10. Hyponym: This pointer refers to a specification of the concept, e.g. fungus is a hyponym of plant.
11. Hypernym: This pointer refers to a generalization of the concept, e.g. fruit is a hypernym of apple.
2.3 Semantic Similarity of Words
Several measures of semantic similarity based on distance between words in WordNet have been used by different researchers. Leacock and Chodorow [LC98] have used the negative logarithm of the normalized shortest path length as a measure of similarity between two words, where the path length is measured as the number of nodes in the path between the two words and the normalizing factor is the maximum depth in the taxonomy. In this metric, the greater the semantic distance between two words in the WordNet hierarchy, the less is their semantic similarity. Lee et al. [LKY93] and Rada et al. [RMBB89] have used conceptual distance, based on an edge counting metric, to measure similarity of a query to documents. Resnick [Res92] observed that two words deep in the WordNet are more closely related than two words higher up in the tree, both pairs having the same path length (number of nodes) between them. Sussna [Sus93] took this into account in his semantic distance measure that uses depth-relative scaling. Hirst et al. [HSO98] classified the relations of WordNet into three broad directional categories and used a distance measure where they took into account not only the path length but also the number of direction changes in the semantic relations along the path. Resnick [Res95] has used an information-based measure instead of path length to measure similarity, where the similarity of two words is estimated from the information content of the least probable class to which both words belong.
3 Scoring the Novelty of Rules
3.1 Semantic Distance Measure
We have defined the semantic distance between two words w_i and w_j as:

d(w_i, w_j) = dist(p) + K * Dir(p), where p is the shortest weighted path connecting w_i and w_j,

dist(p), the sum over all edges e in p of weight(e)/depth(e), is the distance along path p according to our weighting scheme, Dir(p) is the number of direction changes of relations along path p, and K is a suitably chosen constant.
The second component of the formula is derived from the definition of Hirst et al. [HSO98], where the relations of WordNet are divided into three direction classes, "up", "down" and "horizontal", depending on how the two words in the relation are lexically related. Table 1 summarizes the direction information for the relation types we use. The more direction changes in the path from one word to another, the greater the semantic distance between the words, since changes of direction along the path reflect large changes in semantic context.
The path distance component of the above formula is based on the semantic distance definition of Sussna [Sus93]. It is defined as the shortest weighted path between w_i and w_j, where every edge in the path is weighted according to the WordNet relation corresponding to that edge, and is normalized by the depth in the WordNet tree where the edge occurs. We have used 15 different WordNet relations in our framework, and we have assigned different weights to different link types, e.g. hypernym represents a larger semantic change than synonym, so hypernym has a higher weight than synonym. The weights chosen for the different relations are given in Table 1.
One point to note here is that Sussna's definition of semantic distance calculated the weight of an edge between two nouns w_i and w_j as the average of the weights of the two relations w_i ->r w_j and w_j ->r' w_i corresponding to the edge, relation r' being the inverse of relation r. This made the semantic distance between two words a symmetric measure. He had considered the noun hierarchy, where every relation between nouns has an inverse relation. But in our framework, where we have considered all the four types of words in WordNet (nouns, adverbs, adjectives and verbs) and 15 different relation types between these words, all of these relations do not have inverses, e.g. the entailment relation has no direct inverse. So, we have used only the weight of the relation w_i ->r w_j as a measure of the weight of the edge between w_i and w_j. This gives a directionality to our semantic measure, which is also conceptually compatible with the fact that w_i is a word in the antecedent of the rule and w_j is a word in the consequent of the rule.
3.2 Rule Scoring Algorithm
The scoring algorithm of rules according to novelty is outlined in Figure 2. The algorithm calculates the semantic distance d(w_i, w_j) for each pair of words (w_i, w_j), where w_i is in the antecedent and w_j is in the consequent of the rule, based on the length of the shortest path connecting w_i and w_j in WordNet. The novelty of a rule is then calculated as the average value of d(w_i, w_j) over all pairs of words (w_i, w_j).
Relation                                      Direction    Weight
Synonym, Attribute, Pertainym, Similar        Horizontal   0.5
Antonym                                       Horizontal   2.5
Hypernym, (Member|Part|Substance) Meronym     Up           1.5
Hyponym, (Member|Part|Substance) Holonym,     Down         1.5
Cause, Entailment

Table 1: Direction and weight information for the 15 WordNet relations used
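Table 1 translates directly into data, and the two components of d(w_i, w_j) into a few lines of Python. K = 0.5 is an assumed value (the paper only says K is suitably chosen), and counting a change whenever consecutive direction labels differ is our simplification of the Hirst et al. scheme.

WEIGHT = {"synonym": 0.5, "attribute": 0.5, "pertainym": 0.5, "similar": 0.5,
          "antonym": 2.5,
          "hypernym": 1.5, "meronym": 1.5,
          "hyponym": 1.5, "holonym": 1.5,
          "cause": 1.5, "entailment": 1.5}
DIRECTION = {"synonym": "h", "attribute": "h", "pertainym": "h",
             "similar": "h", "antonym": "h",
             "hypernym": "u", "meronym": "u",
             "hyponym": "d", "holonym": "d",
             "cause": "d", "entailment": "d"}
K = 0.5   # assumed; not specified in the text

def path_score(path):
    # path: list of (relation, depth) edges along a WordNet path
    dist = sum(WEIGHT[rel] / depth for rel, depth in path)
    dirs = [DIRECTION[rel] for rel, _ in path]
    changes = sum(1 for a, b in zip(dirs, dirs[1:]) if a != b)
    return dist + K * changes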
The noun hierarchy of the WordNet is disconnected: there are 11 trees with distinct root nodes. The verb hierarchy is also disconnected, with 15 distinct root nodes. For our purpose, following the method of Leacock and Chodorow [LC98], we have connected the 11 root nodes of the noun hierarchy to a single root node R_noun so that a path can always be found between two nouns. Similarly, we have connected the verb root nodes by a single root node R_verb. R_noun and R_verb are further connected to a top-level root node, R_top. This connects all the verbs and nouns in the WordNet database. Adjectives and adverbs are not hierarchically arranged in WordNet, but they are related to their corresponding nouns. In this composite connected hierarchy derived from the WordNet hierarchy, we find the shortest weighted path between two words by performing a branch and bound search.
In this composite word hierarchy, any two words are connected by a path. However, we have used 15 different WordNet relations while searching for the path between two words; this creates a combinatorial explosion while performing the branch and bound search on the composite hierarchy. So, for efficient implementation, we have a user-specified time-limit (set to 3 seconds in our experiments) within which we try to find the shortest path between the words w_i and w_j. If the shortest path cannot be found within the time-limit, the algorithm finds a default path between w_i and w_j by going up the hierarchy from both w_i and w_j, using hypernym links, till a common root node is reached.
The function PathViaRoot in Figure 2 computes the distance of the default path. For nouns and verbs, the PathViaRoot function calculates the distance of the path between the two words as the sum of the path distances of each word to its root. If the R_noun or the R_verb node is a part of this path, it adds a penalty term POSRootPenalty = 3.0 to the path distance. If the R_top node is a part of this path, it adds a larger penalty TopRootPenalty = 4.0 to the path distance. These penalty terms reflect the large semantic jumps in paths which go through the root nodes R_noun, R_verb and R_top.
For each rule in a rule file
    A = set of antecedent words,
    C = set of consequent words
    For each word pair (w_i, w_j) with w_i in A and w_j in C
        If neither w_i nor w_j is a valid word in WordNet
            Score(w_i, w_j) = PathViaRoot(d_avg, d_avg)
        Elseif w_j is not a valid word in WordNet
            Score(w_i, w_j) = PathViaRoot(depth(w_i), d_avg)
        Elseif w_i is not a valid word in WordNet
            Score(w_i, w_j) = PathViaRoot(d_avg, depth(w_j))
        Elseif path not found between w_i and w_j (in user-specified time-limit)
            Score(w_i, w_j) = PathViaRoot(depth(w_i), depth(w_j))
        Else
            Score(w_i, w_j) = d(w_i, w_j)
    Score of rule = Average of all Score(w_i, w_j)
Sort scored rules in descending order

Figure 2: Rule Scoring Algorithm
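A hedged sketch of PathViaRoot: climb hypernym links from each word's depth to its part-of-speech root, charging weight/depth per edge, and add the penalties just described. Whether the two penalties are exclusive or cumulative when a path crosses R_top is our reading of the text.

HYPERNYM_WEIGHT, POS_ROOT_PENALTY, TOP_ROOT_PENALTY = 1.5, 3.0, 4.0

def path_via_root(depth_i, depth_j, same_pos=True):
    cost = sum(HYPERNYM_WEIGHT / d for d in range(1, depth_i + 1))
    cost += sum(HYPERNYM_WEIGHT / d for d in range(1, depth_j + 1))
    cost += POS_ROOT_PENALTY            # path passes through R_noun or R_verb
    if not same_pos:
        cost += TOP_ROOT_PENALTY        # path additionally crosses R_top
    return cost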
If one of the words is an adjective or an adverb, and the shortest path method does not terminate within the specified time-limit, then the algorithm finds the path from the adjective or adverb to the nearest noun, through relations like "pertainym", "attribute", etc. It then finds the default path up the noun hierarchy, and the PathViaRoot function incorporates the distance of the path from the adjective or adverb to the noun form into the path distance measurement.
Some of the words extracted from the rules are not valid words in WordNet, e.g. abbreviations, names like Philip, domain specific terms like booknews, etc. We assigned such words the average depth of a word (d_avg in Figure 2) in the WordNet hierarchy, which was estimated by sampling techniques to be about 6, and then estimated the path distance to the root of the combined hierarchy by using the PathViaRoot function.
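An end-to-end approximation of the scorer can be put together with NLTK's WordNet interface. This is a sketch, not the DiscoTEX code: it replaces the weighted, direction-sensitive distance with the unweighted shortest-path length implied by path_similarity, and uses a constant fallback (roughly 2 * d_avg) where the paper would call PathViaRoot.

from itertools import product
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

D_FALLBACK = 12.0                       # assumed stand-in for PathViaRoot

def word_distance(wi, wj):
    si, sj = wn.synsets(wi), wn.synsets(wj)
    if not si or not sj:
        return D_FALLBACK               # word missing from WordNet
    sims = [a.path_similarity(b) for a, b in product(si, sj)]
    dists = [1.0 / s - 1.0 for s in sims if s]   # recover path lengths
    return min(dists) if dists else D_FALLBACK

def rule_novelty(antecedent, consequent):
    pairs = [(wi, wj) for wi in antecedent for wj in consequent]
    return sum(word_distance(wi, wj) for wi, wj in pairs) / len(pairs)

print(rule_novelty(["astronomy", "science"], ["space"]))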
4 Experimental Results
We performed experiments to compare the novelty judgment of human users to the automatic ratings of our algorithm. The objective here is that if the automatic ratings correlate with human judgments about as well as human judgments correlate with each other, then the novelty metric can be considered successful.

High score (9.5): romance love heart -> midnight
Medium score (5.8): author romance -> characters love
Low score: astronomy science -> space

Figure 3: Examples of rules scored by our novelty measure
4.1 Methodology
For the purpose of our experiments, we took rules generated by DiscoTEX from 9000 Amazon.com book descriptions: 2000 in the "literature" category, 3000 in the "science" category and 4000 in the "romance" category. From the total set of rules, we selected a subset of rules that had less than a total of 10 words in the antecedent and consequent of the rule; this was done so that the rules were not too large for human users to rank. Further pruning was performed to remove duplicate words from the rules. For the Amazon.com book description domain, we also created a stoplist of commonly occurring words, e.g. book, table, index, content, etc., and removed them from the rules. There were 1258 rules in the final pruned rule-set.
We sampled this pruned rule-set to create 4 sets of random rules, each containing 25 rules. We created a web-interface, which the subjects used to rank these rules with scores in the range from 0.0 (least interesting) to 10.0 (most interesting), according to their judgment. The 48 subjects were randomly divided into 4 groups and each group scored one of the rule-sets.
For each of the rule-sets, two types of average correlation were calculated. The first average correlation was measured between the human subjects, to find the correlation in the judgment of novelty between human users. The second average correlation was measured between the algorithm and the users in each group, to find the correlation between the novelty scoring of the algorithm and that of the human subjects. We used both Pearson's raw score correlation metric and Spearman's rank correlation metric to compute the correlation measures.
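Both averages are routine to compute; a sketch with SciPy follows (ratings is a list of per-subject score vectors over one rule-set, algo the algorithm's scores; both are our assumed data layout):

from itertools import combinations
import numpy as np
from scipy.stats import pearsonr, spearmanr

def human_human(ratings):
    pairs = list(combinations(ratings, 2))
    return (np.mean([pearsonr(a, b)[0] for a, b in pairs]),
            np.mean([spearmanr(a, b)[0] for a, b in pairs]))

def algorithm_human(ratings, algo):
    return (np.mean([pearsonr(algo, r)[0] for r in ratings]),
            np.mean([spearmanr(algo, r)[0] for r in ratings]))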
One of the rule-sets was used as a training set, to tune the parameters of the algorithm. The results on the 3 other rule-sets, used as test sets for our experiment, are summarized in Table 2.

           Human-Human Correlation    Algorithm-Human Correlation
           Raw        Rank            Raw        Rank
Group1     -          -               -          -
Group2     -          -               -          -
Group3     -          -               -          -
Average    -          -               -          -

Table 2: Summary of experimental results
4.2 Results and Discussion
Some of the rule scorings generated by our algorithm are shown in Figure 3. The high-scoring rule and the low-scoring rule were rated by the human subjects, on the average, as high-scoring and low-scoring too.
From the results, considering both the raw and the rank correlation measures, we see that the correlation between the human subjects and the algorithm is comparable to that between the human subjects, averaging over the three random rule-sets considered. The average raw correlation values among the human subjects and between the human subjects and the algorithm are both not very high. This is because for some rules, the human subjects differed a lot in their novelty assessment. This is also due to the fact that these are initial experiments, and we are working on improving the methodology. In later experiments, we intend to apply our method to domains where we can expect human users to agree more in their novelty judgment of rules. However, it is important to note that it is very unlikely that these correlations are due to random chance, since both the average raw correlation values are above the minimum significant r at the p < 0.1 level of significance determined by a t-test.
The correlation between the human subjects and the algorithm was low for the first rule-set. For the second and the third rule-sets, the algorithm-human correlation is better than the human-human correlation. On closer analysis of the results of Group1, we noticed that this rule-set contained many rules involving proper names. Our algorithm currently uses only semantic information from WordNet, so its scoring on these rules differed from that of human subjects. For example, one rule many users scored as uninteresting was "ieee society -> science mathematics", but since WordNet does not have an entry for "ieee", our algorithm gave the overall rule a high score. Another rule to which some users gave a low score was "physics science nature -> john wiley publisher sons", presumably based on their background knowledge about publishing houses. In this case, our algorithm found the name John in the WordNet hierarchy (synset lemma: disciple of Jesus), but there was no short path between John and the words in the antecedent of the rule. As a result, the algorithm gave this rule a high score. A point to note here is that some names like Jesus, John, James, etc. have entries in WordNet, but others like Sandra, Robert, etc. do not; this makes it difficult to use any kind of consistent handling of names using filters like name lists.
In the training rule-set, we had also noticed that the rule "sea -> oceanography" had been given a large score by our algorithm, while most subjects in that group had rated that rule as uninteresting. This happened because there is no short path between sea and oceanography in WordNet; these two words are related thematically, and WordNet does not have thematic connections, an issue which is discussed in detail in Section 6.
5 Related Work
Soon after the Apriori algorithm for extracting association rules was proposed, researchers in
the data mining area realized that even modest settings for support and confidence typically
resulted in a large number of rules. As a result, much effort has gone into reducing such rule-sets by
applying both objective and subjective criteria. Klemettinen et al. [KMR+94] proposed the
use of rule templates to describe the structure of relevant rules and constrain the search space.
Another notable attempt in using objective measures was by Bayardo and Agrawal [BA99], who
defined a partial order, in terms of both support and confidence, to identify a smaller set of
rules that were more interesting than the rest. Sahar [Sah99] proposed an iterative elimination
of uninteresting rules, limiting user interaction to a few simple classification questions. Hussain
et al. [HLSL00] developed a method for identifying exception rules, with the interestingness of
a rule being estimated relative to common sense rules and reference rules. In a series of papers,
Tuzhilin and his co-researchers [ST96, PT98, AT99] argued the need for subjective measures of
the interestingness of rules: rules that were not only actionable but also unexpected, in that they
conflicted with the existing system of beliefs of the user, were preferred. Liu et al. [LHMH99]
have further built on this theme, implementing it as an interactive, post-processing routine.
They have also analyzed classification rules, such as those extracted from C4.5, defining a
measure of rule interestingness in terms of the syntactic distance between a rule and a belief. A
rule and a belief are "different" if either the consequents of the rule and the belief are "similar"
but the antecedents are far apart, or vice versa.
In contrast, in this paper we have analyzed information extracted from unstructured or
semi-structured data such as web pages, and extracted rules depicting important relations and
regularities in such data. The nature of these rules, as well as of the prior domain knowledge, is quite
different from those extracted, say, from market baskets. We have proposed an innovative use of
WordNet to estimate the semantic distance between the antecedents and consequents of a rule,
which is used as an indication of the novelty of the rule. Domain-specific concept hierarchies have
previously been used to filter redundant mined rules [HF95, FD95]; however, to our knowledge
they have not been used to evaluate novelty quantitatively, or applied to rules extracted from
text data.
6 Future Work
An important issue that we want to address in future work is the selection of the parameters of
the algorithm, e.g., the weights of the relations and the values of K, POSRootPenalty, and
TopRootPenalty. These constants are currently chosen experimentally. We would like to learn these
parameters automatically from training data using a machine learning technique. The novelty
score could then be adaptively learnt for a particular user and tailored to suit the user's
expectations.
We are using the average of the pairwise word similarity measures as the novelty score of a
rule. The average measure smoothes out the skewing effect due to large distances between any
two pairs of words in a rule. This is fine for most rules, except for some special cases: e.g., if we
have a rule "science → scientific home", then the distance between "science" and "scientific"
is small, but that between "science" and "home" is large. Using the average here gives the whole
rule a medium novelty score, which does not reflect the fact that the part of the rule involving
the words "science" and "home" is highly interesting, while the other part involving the words
"science" and "scientific" is uninteresting. In this case, a combination method like the maximum
might be more useful. A suitable combination of the average and the maximum metrics would
hopefully give a better novelty scoring.
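A minimal sketch of such a combination, in Python (our own illustration, not from the paper; the mixing weight alpha and the helper names are hypothetical, and the pairwise distances are assumed to be precomputed, e.g., from WordNet paths):

    def novelty_score(distances, alpha=0.5):
        """distances: list of d(w_i, w_j) over antecedent/consequent word pairs."""
        if not distances:
            return 0.0
        avg = sum(distances) / len(distances)
        mx = max(distances)
        # Convex combination: the average smoothes skew, while the maximum
        # preserves the most novel (distant) word pair in the rule.
        return alpha * avg + (1.0 - alpha) * mx

    # Example: rule "science -> scientific home"; a small distance for
    # (science, scientific) and a large one for (science, home).
    print(novelty_score([0.1, 0.9]))  # 0.7, higher than the plain average 0.5

With alpha = 1.0 this reduces to the current averaging scheme, so the existing behavior is a special case.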
Unfortunately, WordNet fails to capture all semantic relationships between words, such
as general thematic connections like that between "pencil" and "paper". However, other approaches
to lexical semantic similarity, such as statistical methods based on word co-occurrence
[MS99], can capture such relationships. In these methods, a word is typically represented by a
vector in which each component is the number of times the word co-occurs with another specified
word within a particular corpus. Co-occurrence can be based on appearing within a fixed-size
window of words, or in the same sentence, paragraph, or document. The similarity of two words
is then determined by a vector-space metric such as the cosine of the angle between their corresponding
vectors [MS99]. In techniques such as Latent Semantic Analysis,
the dimensionality of word vectors is first reduced using singular value decomposition (SVD)
in order to produce lexical representations with a small number of highly relevant dimensions.
Such methods have been shown to accurately model human lexical-similarity judgments [LD97].
By utilizing a co-occurrence-based metric for d(w_i, w_j), rules could be ranked by novelty using
statistical lexical knowledge. In the end, some mathematical combination of WordNet and
co-occurrence-based metrics may be the best approach to measuring lexical semantic distance.
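A sketch of the vector-space similarity described above; the toy co-occurrence counts are invented purely for illustration:

    import math

    def cosine(u, v):
        """Cosine of the angle between two co-occurrence vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    # Rows: counts of co-occurrence with some fixed context words, gathered
    # within a fixed-size window over a corpus (toy numbers).
    sea          = [12, 3, 0, 7]
    oceanography = [10, 4, 1, 6]
    print(cosine(sea, oceanography))  # thematically related words score high

Such a metric would score the "sea → oceanography" rule as expected, which the WordNet path-based distance could not.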
To the extent that the names of relations, attributes, and values in a traditional database are
natural-language words (or can be segmented into words), our approach could be applied
to traditional data mining as well as text mining. The algorithm can be easily generalized for
scoring the novelty of other types of rules, e.g., association rules derived from market-basket data.
In that case, we would require a knowledge base for the corresponding domain, e.g., a concept
hierarchy of the company's products. Such domain-specific concept hierarchies and knowledge
bases could be used to find semantic connections between rule antecedents and consequents and
thereby contribute to evaluating novelty.
Finally, the overall interestingness of a rule might be best computed as a suitable mathematical
combination of novelty and more traditional metrics such as confidence and support.
7 Conclusion
This paper proposes a methodology for extracting, analyzing and filtering rules extracted from
unstructured or semi-structured data such as web pages. These rules can underscore novel
and useful relations and regularities in textual sources of information such as web pages, email
and usenet postings. Note that the nature of the rules as well as of the prior domain knowledge is
quite different from those extracted, say, from market baskets. A salient contribution of this
paper is a new approach for measuring the novelty of rules mined from text data, based on the
lexical knowledge in WordNet. This algorithm can also be extended to rules in other domains,
where a domain-specific knowledge hierarchy is available. We have also introduced a systematic
method of empirically evaluating interestingness measures for rules, based on average correlation
statistics, and have successfully shown that the automatic scoring of rules based on our novelty
measure correlates with human judgments about as well as human judgments correlate with
each other.
Acknowledgments
We would like to thank Un Yong Nahm for giving us the DiscoTEX rule sets on which we ran
our experiments. We are grateful to John Didion for providing the JWNL Java interface to
WordNet, which we used to develop the software, and for giving us useful feedback about the
package. We are also grateful to all the people who volunteered to take part in our experiments.
The first author was supported by the Microelectronics and Computer Development (MCD)
Fellowship, awarded by the University of Texas at Austin, while doing this research.
--R
User profiling in personalization applications through rule discovery and validation.
Bayardo Jr. and Agrawal. Mining the most interesting rules.
Indexing by latent semantic analysis.
Knowledge discovery in textual databases (KDT).
WordNet: An Electronic Lexical Database.
Untangling text data mining.
Discovery of multiple-level association rules from large databases
Data Mining: Concepts and Techniques.
Exception rule mining with a relative interestingness measure.
Lexical chains as representations of context for the detection and correction of malapropisms.
Finding interesting rules from large sets of discovered association rules.
Combining local context and WordNet similarity for word sense identification.
Finding interesting patterns using user expectations.
Information retrieval based on a conceptual distance in IS-A hierarchy.
Dunja Mladenić.
A mutually beneficial integration of data mining and information extraction.
A belief-driven method for discovering unexpected patterns
WordNet and distribution analysis: A class-based approach to lexical discovery
Using information content to evaluate semantic similarity in a taxonomy.
Development and application of a metric on semantic nets.
Interestingness via what is not interesting.
What makes patterns interesting in knowledge discovery systems.
Word sense disambiguation for free-text indexing using a massive semantic network
--TR
Word sense disambiguation for free-text indexing using a massive semantic network
Finding interesting rules from large sets of discovered association rules
Foundations of statistical natural language processing
Mining the most interesting rules
Interestingness via what is not interesting
Data mining
What Makes Patterns Interesting in Knowledge Discovery Systems
Finding Interesting Patterns Using User Expectations
Discovery of Multiple-Level Association Rules from Large Databases
Exception Rule Mining with a Relative Interestingness Measure
A Mutually Beneficial Integration of Data Mining and Information Extraction
--CTR
Xin Chen , Yi-fang Brook Wu, Web mining from competitors' websites, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Raz Tamir , Yehuda Singer, On a confidence gain measure for association rule discovery and scoring, The VLDB Journal The International Journal on Very Large Data Bases, v.15 n.1, p.40-52, January 2006
B. Shekar , Rajesh Natarajan, A Framework for Evaluating Knowledge-Based Interestingness of Association Rules, Fuzzy Optimization and Decision Making, v.3 n.2, p.157-185, June 2004
Combining Information Extraction with Genetic Algorithms for Text Mining, IEEE Intelligent Systems, v.19 n.3, p.22-30, May 2004 | wordnet;interesting rules;knowledge hierarchy;novelty;semantic distance |
502555 | Generalized clustering, supervised learning, and data assignment. | Clustering algorithms have become increasingly important in handling and analyzing data. Considerable work has been done in devising effective but increasingly specific clustering algorithms. In contrast, we have developed a generalized framework that accommodates diverse clustering algorithms in a systematic way. This framework views clustering as a general process of iterative optimization that includes modules for supervised learning and instance assignment. The framework has also suggested several novel clustering methods. In this paper, we investigate experimentally the efficacy of these algorithms and test some hypotheses about the relation between such unsupervised techniques and the supervised methods embedded in them. | INTRODUCTION
AND MOTIVATION
Although most research on machine learning focuses on induction
from supervised training data, there are many situations
in which class labels are not available and which thus
require unsupervised methods. One widespread approach
to unsupervised induction involves clustering the training
cases into groups that reflect distinct regions of the decision
space. There exists a large literature on clustering methods
(e.g., Everitt [3]), a long history of their development,
and increasing interest in their application, yet there is still
little understanding of the relation between supervised and
unsupervised approaches to induction.
In this paper, we begin to remedy that oversight by examining
situations in which a supervised induction method
occurs as a subroutine in a clustering algorithm. This suggests
two important ideas. First, one should be able to generate
new clustering methods from existing techniques by
replacing the initial supervised technique with a different
supervised technique. Second, one would expect the resulting
clustering methods to behave well (e.g., form desirable
clusters) in the same domains for which their supervised
components behave well, provided the latter have labeled
training data available.
In the pages that follow, we explore both ideas in the context
of iterative optimization, a common scheme for clustering
that includes K-means and expectation maximization as
special cases. After reviewing this framework in Section 2,
we describe an approach to embedding any supervised algorithm
and its learned classifier in an iterative optimizer, and
in Section 3 we examine four supervised methods for which
we have taken this step. In Section 4, we report on experimental
studies designed to test our hypotheses about the
relations between behavior of the resulting clustering methods
and that of their supervised components. In closing, we
review related work on generative frameworks for machine
learning and consider some directions for future research.
Figure 1: The iterative optimization procedure.
2. GENERALIZED CLUSTERING
Many clustering systems rely on the notion of iterative
optimization. As Figure 1 depicts, such a system iterates
between two steps - class model creation and data reassignment
- until reaching a predetermined iteration limit or until
no further changes occur in reassignments. There are many
variations within this general framework, but the basic idea
is best illustrated with some well-known example methods.
2.1 K-means and EM as Iterative Optimizers
Two clustering algorithms that are popular for their simplicity
and flexibility are K-means [2] and expectation maximization
(EM) [1]. Both methods have been studied experimentally
on many problems and have been used widely
in applied settings. Here we review the algorithms briefly,
note their key similarities, and show how their differences
suggest a more general clustering framework.
The K-means algorithm represents each class by a cen-
troid, which it computes by taking the mean for each attribute
over all the instances belonging to that class. In
geometric terms, this corresponds to finding the center of
mass for the cases associated with that class. Data reassignment
involves assigning each instance to the class of the
closest centroid.
In contrast, EM models each class by a probability distribution
that it extracts from the training data in the class
model creation step. If the data are continuous, each class is
generally modeled by an n-dimensional Gaussian distribution
that consists of a mean and variance for each attribute.
In the discrete case, P(a_j = v_jl | c_k) is extracted for each
possible combination of class c_k, attribute a_j, and attribute
value v_jl. In both cases, when finding these parameters,
the contribution of each instance x_i is weighted by P(c_k | x_i).
Data reassignment is done by recalculating P(c_k | x_i) for each
instance x_i and class c_k using the new class models.
2.2 A General Framework
Although both of the above clustering algorithms incorporate
iterative optimization, they employ different methods
for developing class models. Thus, we can view them
as invoking a different supervised learning technique to distinguish
among the classes. The two algorithms also differ
in how they assign instances to classes: K-means assigns
each instance to a single class, whereas EM uses partial as-
signment, in that each instance is distributed among the
classes. We will refer to the absolute method as the "strict"
paradigm and to the partial method as "weighted".
These observations lead to a general framework for clustering
that involves selecting a supervised learning algorithm
and selecting one of these assignment paradigms. In the
context of K-means and EM, this framework immediately
suggests some variants. By using the weighted paradigm
with the K-means classifier, we obtain a weighted K-means
algorithm. Similarly, combining EM's probabilistic classifier
with the strict paradigm produces a variant in which
each instance is assigned entirely to its most probable class.
This variant has been explored under the name of "strict-
assignment EM", although the partial assignment method
is more commonly used.
Although the classifiers utilized in K-means and EM can
be easily modified to operate with either assignment method,
other supervised algorithms can require more sophisticated
adaptations, as we will see shortly.
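The framework itself can be stated compactly in code. The sketch below is our own illustration, not code from the paper: it assumes a supervised model object with a weighted fit(X, W) method and a class_scores(x) method returning one non-negative score per class.

    import random

    def iterative_optimize(X, model, k, assign="strict", iters=20, seed=0):
        """Generalized clustering: alternate a supervised model-creation step
        with a data (re)assignment step; `assign` selects the paradigm."""
        rng = random.Random(seed)
        # W[i][c] is instance i's weight for class c; start from a random
        # strict assignment.
        W = [[1.0 if rng.randrange(k) == c else 0.0 for c in range(k)]
             for _ in X]
        for _ in range(iters):
            model.fit(X, W)  # supervised step: build class models from weighted data
            scores = [model.class_scores(x) for x in X]
            if assign == "strict":
                # All-or-none assignment, as in K-means.
                W = [[1.0 if c == max(range(k), key=lambda j: s[j]) else 0.0
                      for c in range(k)] for s in scores]
            else:
                # Weighted (partial) assignment, as in EM; scores assumed positive.
                W = [[s[c] / sum(s) for c in range(k)] for s in scores]
        return W

Plugging a prototype model with strict assignment into this loop yields K-means, while a probabilistic model with weighted assignment yields EM; the other combinations are the novel variants discussed above.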
3. SUPERVISED LEARNING METHODS
As we have argued, it should be possible to embed any supervised
learning method within our generalized clustering
framework. However, our evaluation has focused on four
simple induction algorithms that have limited representational
power [6], because the clustering process itself aims
to generate the disjoint decision regions that more powerful
supervised methods are designed to produce. Below we
describe these algorithms in some detail, including the adaptations
we made for use in the weighted paradigm. These
adaptations involve altering model production to take into
account the weights of instances and revising instance reassignment
to generate class weights for every instance, which
are then used to produce the next generation of class models.
3.1 Prototype Modeler
Our first supervised algorithm, which plays a role in K-
means, creates a prototype [13] or centroid for each class by
extracting the mean of each attribute from training cases for
that class. Such a prototype modeler classifies an instance
by selecting the class with the centroid closest to it in n-dimensional
space. Because the distance metric is sensitive
to variations in scale, our version normalizes all data to values
between zero and one before creating the prototypes.
In the weighted paradigm, the mean for each attribute becomes
a weighted average of the training cases. The relative
proximity of each instance to a given centroid determines
the associated weight for that centroid's class; these weights
are normalized over the |C| classes, where |C| is the number of
classes. The new centroid is then composed of the weighted
mean for each attribute, with the mean of attribute a_j for
cluster c_k being calculated by

    mu_jk = ( sum_{i=1}^{|X|} w_ik * x_ij ) / ( sum_{i=1}^{|X|} w_ik ),

where w_ik is the weight of instance x_i for cluster c_k, x_ij is
the value of the jth attribute of instance x_i, and |X| is the
total number of instances.
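A sketch of these two computations (the names are ours; in particular, the inverse-distance form of the proximity weights is an assumption, one plausible reading of "relative proximity", since the original formula is not fully specified here):

    import math

    def proximity_weights(x, centroids):
        """Relative proximity of instance x to each centroid, normalized
        over the |C| classes; inverse-distance weighting is assumed."""
        inv = [1.0 / (math.dist(x, c) or 1e-12) for c in centroids]
        s = sum(inv)
        return [v / s for v in inv]

    def weighted_centroids(X, W, k):
        """Weighted mean per attribute: mu[c][j] =
        sum_i W[i][c] * X[i][j] / sum_i W[i][c]."""
        d = len(X[0])
        cents = []
        for c in range(k):
            tot = sum(W[i][c] for i in range(len(X))) or 1e-12
            cents.append([sum(W[i][c] * X[i][j] for i in range(len(X))) / tot
                          for j in range(d)])
        return cents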
3.2 Naive Bayesian Modeler
We selected naive Bayes [2] as our second induction algo-
rithm. As described in the context of EM, this technique
models each class as a probability distribution described by
for each class ck , attribute a j , and
attribute value v jl . For nominal attributes, naive Bayes represents
as a discrete conditional probability
distribution, which it estimates from counts in the training
data, and it estimates the class probability P (ck ) in a similar
manner. For continuous attributes, it typically uses a conditional
Gaussian distribution that it estimates by computing
the mean and variance for each attribute from training data
for each class. To calculate the relative probability that a
new instance belongs to a given class ck , naive Bayes employs
the expression
Y
which assumes that the distribution of values for each attribute
are independent given the class.
When operating normally as a strict classifier, naive Bayes
returns the class with the highest probability for each instance.
In the weighted case, the conditional distributions
are calculated using a weighted sum rather than a strict
sum, while the posterior P(c_k | x_i), normalized over the classes,
determines the weight used in the data reassignment process.
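A sketch of the weighted naive Bayes step for nominal attributes; the function and parameter names are ours, and the Laplace smoothing (via n_values, the assumed number of distinct values per attribute) is our addition for numerical robustness, not something the paper specifies:

    from collections import defaultdict

    def fit_weighted_nb(X, W, k, n_values):
        """Weighted counts give P(c_k) and P(a_j = v | c_k)."""
        n, d = len(X), len(X[0])
        prior = [sum(W[i][c] for i in range(n)) / n for c in range(k)]
        cond = [[defaultdict(float) for _ in range(d)] for _ in range(k)]
        for i, x in enumerate(X):
            for c in range(k):
                for j, v in enumerate(x):
                    cond[c][j][v] += W[i][c]  # weighted, not strict, count

        def posterior(x):
            """Normalized P(c_k) * prod_j P(a_j = v_j | c_k): the new weights."""
            p = []
            for c in range(k):
                pc = prior[c]
                for j, v in enumerate(x):
                    pc *= (cond[c][j][v] + 1.0) / (sum(cond[c][j].values()) + n_values)
                p.append(pc)
            s = sum(p) or 1.0
            return [v / s for v in p]
        return posterior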
3.3 Perceptron List Modeler
Another simple induction method, the perceptron algorithm
[12], also combines evidence from attributes during
classification, but uses the expression

    o = 1 if sum_m w_m * a_m > threshold, and 0 otherwise,

to assign a test case to the positive (1) or negative (0) class.
Each weight w_m specifies the relative importance of an attribute
m; taken together, these weights determine a hyperplane
that attempts to separate the two classes. The learning
algorithm invokes an error-driven scheme to adjust the
weights associated with each attribute. 1 Because a perceptron
can only differentiate between two classes, we employed
an ordered list of perceptrons that operates much like a decision
list. The algorithm first learns to discriminate between
the majority class and others, generating the first percep-
tron. Instances in the majority class are removed, and the
system trains to distinguish the new majority class from the
rest, producing another perceptron. This process continues
until one class remains, which is treated as a default.
Although the perceptron traditionally assumes all-or-none
assignment, it seems natural to interpret the scaled difference
between the sum and the threshold as a likelihood. The
weighted variant multiplies the update for each attribute
weight by the weight for each instance, so that an instance
with a smaller weight has a smaller effect on learning. To
prevent small weights from causing endless oscillations, it
triggers an updating cycle through the data only if an incorrectly
classified instance has a weight of greater than 0.5,
although all instances are used for the actual update.
In reassignment, the weighted method calculates the difference
v between the instance value and the threshold, scaled
by the sigmoid

    w(v) = 1 / (1 + e^(-5v)),

which bounds the weight size. If an instance
were evaluated as being perfectly at the threshold, the function
would return 0.5. The factor 5 in the exponent of e
distributes the resulting weights over a larger range, so the
algorithm will not give a weight close to 0.5 for all instances;
otherwise the sigmoid is not tight enough to be useful over a
generally small range of values.
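A direct transcription of this weighting function (the function name is ours):

    import math

    def perceptron_weight(v):
        """Sigmoid-scaled signed distance v from the threshold; returns 0.5
        at v = 0, and the factor 5 spreads weights over a wider range."""
        return 1.0 / (1.0 + math.exp(-5.0 * v))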
3.4 Decision Stump Modeler
For our final supervised learning algorithm, we selected
decision-stump induction (e.g., Holte [4]), which differs from
the others in selecting a single attribute to classify instances.
To this end, it uses the information-theoretic measure

    Info(S) = - sum_{k=1}^{|C|} p_k log2 p_k,

where p_k is the frequency of class c_k in a training
set S with |C| classes. If the attribute is continuous, the
algorithm orders its observed values and considers splitting
between each successive pair, selecting the split with the
highest score. The method applies this process recursively to
the values in each subset, continuing until further divisions
gain no more information, as measured by

    Gain(T) = Info(T) - sum_{m=1}^{|P|} (|T_m| / |T|) Info(T_m),

where T is the training set, T_m is a given subset of T, and
|P| is the number of branches. If the attribute is nominal,
the algorithm creates a separate branch for each attribute
value. Each branch of the stump is then associated with the
majority class of those training cases that are sorted to that
branch.

1 For the purposes of this study, we used a fixed learning rate
and a fixed number of iterations through the training data, which did
well on all our classification tasks.
To accommodate weighted assignment, we adjust the equations
above to sum over the weights of instances, rather than
over strict frequencies, and keep simple statistical information
for each branch. The reassignment weight given to each
instance for class c_k is then the weighted frequency of c_k in
that instance's branch, normalized by |B|, where |B| is the
number of instances associated with the branch to which that
instance is sorted.
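A sketch of the weighted class-frequency entropy used for stump splits; strict frequencies become sums of instance weights (the function name is ours):

    import math

    def weighted_info(weights_by_class):
        """Info(S) = -sum_k p_k log2 p_k, with p_k from weighted counts."""
        total = sum(weights_by_class) or 1e-12
        info = 0.0
        for w in weights_by_class:
            if w > 0:
                p = w / total
                info -= p * math.log2(p)
        return info

    # Example: a branch holding weight 3.0 of one class and 1.0 of another.
    print(weighted_info([3.0, 1.0]))  # ~0.811 bits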
4. EXPERIMENTAL STUDIES
We had two intuitions about our clustering framework
that suggested corresponding formal hypotheses. 2 First, we
expected that each algorithm would exhibit a "preference"
for one of the data assignment paradigms by demonstrating
better performance in that paradigm across different data
sets. Second, we anticipated that, across data sets, high
(low) predictive accuracy by a supervised method would be
associated with relatively high (low) accuracy for the corresponding
clustering algorithm. In this section, we describe
our designs for the experiments to test these hypotheses and
the results we obtained.
4.1 Experiments with Natural Data
To test these hypotheses, we ran the generalized clustering
system with each algorithm-paradigm combination on a battery
of natural data sets. We also evaluated each supervised
algorithm independently by training it and measuring its
predictive accuracy on a separate test set. The independent
variables were the assignment paradigm (for the clustering
tests), the supervised learning algorithm, the data set, and
the number of instances used in training. The dependent
variables were the classification accuracies on unseen data.
We used a standard accuracy metric to evaluate both the
supervised classifiers and the clustering algorithms:

    Accuracy = (1 / |T|) * sum_{x in T} delta(x),

where T is the test set, and where delta(x) = 1 if x is classified
correctly and 0 otherwise.
When evaluating accuracy, we trained each classifier on
the labeled data set with the test set removed. Because the
clustering algorithms create their own classes, we added a
step in which each completed cluster is assigned the actual
class of its majority population. For example, if a given
cluster consists of 30 instances that are actually class A and
10 that are actually class B, all instances in the cluster will
be declared members of class A, with an accuracy of 75% for
that cluster. This approach loses detail, but it let us evaluate
each clustering algorithm against the "correct" clusters.

2 Naturally, we also expected that no single algorithm combination
would outperform all others on all data sets, but
this is consistent with general findings in machine learning,
and so hardly deserves the status of an hypothesis.

Table 1: Supervised accuracies on four data sets.

              Prototype  Bayes  Perceptron  Stump
  Promoters        86.0   87.0        76.0   70.0
  Iris             49.3   94.7        46.0   93.3
  Hayes-Roth       32.3   61.5        79.2   43.1
  Glass            84.8   79.0        39.0   97.6
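A sketch of this evaluation step, mirroring the 30/10 example above (the function name is ours):

    from collections import Counter

    def majority_label_accuracy(cluster_ids, true_labels):
        """Assign each cluster its majority true class, then score all
        instances against those cluster-level labels."""
        by_cluster = {}
        for cid, lab in zip(cluster_ids, true_labels):
            by_cluster.setdefault(cid, []).append(lab)
        majority = {cid: Counter(labs).most_common(1)[0][0]
                    for cid, labs in by_cluster.items()}
        correct = sum(majority[cid] == lab
                      for cid, lab in zip(cluster_ids, true_labels))
        return correct / len(true_labels)

    # One cluster with 30 instances of class A and 10 of class B:
    print(majority_label_accuracy([0] * 40, ["A"] * 30 + ["B"] * 10))  # 0.75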
We selected four data sets from the UCI repository - Pro-
moters, Iris, Hayes-Roth, and Glass - that involved different
numbers of classes (two to seven), different numbers of attributes
(five to 57), and different attribute types (nominal,
continuous, or mixed). Another factor in their selection was
that each led to high classification accuracy for one of the
supervised methods but (typically) to lower accuracy for the
others, as the highest value in each row of Table 1 shows. This differentiation
on supervised training data seemed a prerequisite
for testing the predicted correlation between accuracies for
supervised learning and clustering.
Moreover, remember that each of our four supervised methods
has restricted representational power that is generally
limited to one decision region per class. As a result, the
fact that one such method obtains high accuracy in each of
these domains suggests that each of their classes maps onto
a single cluster. This lets us assume that the number of
classes in each data set corresponds to the number of clus-
ters, further increasing the chances of meaningful results.
For each data set, we collected a learning curve using ten-fold
cross-validation, recording results for each increment
of 25 data points. Typically, clustering accuracy ceased to
improve early in the curve, although the supervised accuracy
often continued to increase. The results we report here all
involve accuracy as measured at the last point on each curve.
Table 2: Unsupervised accuracies for two alternative data assignment paradigms (strict/weighted).

              Prototype   Bayes       Perceptron  Stump
  Promoters   62.0/77.0   52.0/41.0   49.0/57.0   19.0/26.0
  Iris        27.3/51.3   83.3/88.0   26.7/32.0   55.3/53.3
  Hayes-Roth  37.7/39.2   30.0/40.0   38.5/38.5   34.6/36.2
  Glass       84.8/51.0   44.8/61.9   26.2/34.3   77.1/74.3
Recall that our first hypothesis predicted each supervised
method would construct more accurate clusters when combined
with its preferred data assignment paradigm. The
results in Table 2, which shows the classification accuracies
for each method-paradigm combination on the four do-
mains, disconfirm this hypothesis. In general, each supervised
algorithm sometimes did better with one assignment
scheme and sometimes with the other, depending on the do-
main. Both naive Bayes and the prototype learner showed
Figure 2: Supervised and unsupervised accuracies, using strict data assignment, for four algorithms on four natural data sets.
large shifts of this sort, though swings for the decision-stump
learner were less drastic. Only the perceptron list method
showed any support for our prediction, favoring weighted assignment
on three data sets and a tied result on the fourth.
After addressing our first hypothesis, we proceeded to test
our second claim, that relatively higher (lower) accuracy in
supervised mode is associated with relatively higher (lower)
accuracy on unsupervised data, i.e., that they are correlated
positively. Our original plan was to measure the unsupervised
accuracy of each learning algorithm when combined
with its preferred data assignment paradigm. Having rejected
the notion of such preference, we resorted instead
to measuring the relation between supervised accuracy and
that achieved by clustering with strict assignment, followed
by a separate measure between the accuracy of supervised
learning and weighted assignment.
To this end, we computed the correlation between the supervised
accuracies using the 16 algorithm-domain combinations
in Table 1 and the analogous strict accuracies from
Table 2. The resulting correlation coefficient, r = 0.74,
was significant at the 0.01 level and explained 55 percent
of the variance. Figure 2 shows that supervised accuracy is
a reasonable predictor of unsupervised accuracy, thus generally
supporting our hypothesis. We also calculated the
correlation between supervised accuracies and the weighted
accuracies from Table 2. Here the correlation was r = 0.66,
which was also significant at the 0.01 level and explained 43
percent of the variance.
4.2 Experiments with Synthetic Data
Our encouraging results with natural data sets show that
our framework has relevance to real-world clustering prob-
lems, but they can give only limited understanding for the
reasons underlying the phenomena. For this reason, we decided
to carry out another study that employed synthetic
data designed to reveal the detailed causes of these effects.
One standard explanation for some induction methods
outperforming others relies on the notion of inductive bias,
which reflects the fact that some formalisms can represent
certain decision regions more easily than others. Since our
four supervised learning methods have quite different inductive
biases, we designed four separate learning tasks, each
intended to be easily learned by one of these methods but
not by others.
Each learning task incorporated two continuous variables
and three classes, with a single contiguous decision region
for each class. Thus, the domain designed with decision
stumps in mind involved splits along one relevant attribute,
the prototype-friendly domain involved three distinct proto-
types, and so forth. The naive Bayesian classifier is difficult
to foil, but for every other supervised method, we had at
least one domain on which it should do relatively poorly.
For each domain, we devised a generator that produced 125
random instances from either a uniform or, for the Bayes-
friendly domain, a Gaussian distribution for every class, creating
the same number of instances for each one.
The geometric metaphor clarifies one reason that a given
method should outperform others in both supervised and
unsupervised mode, but it also suggests a reason why the
correlation between behavior on these two tasks is imper-
fect. Conventional wisdom states that clustering is easy
when clusters are well separated but difficult when they are
not. Thus, our data generator also included a parameter S
that let us vary systematically the separation between the
boundaries of each class. The predictive variables for each
domain ranged from 0 to 1, and we varied the separation
distance S over a corresponding range of values.
Although we expected our synthetic domains to reproduce
the positive correlation we observed with natural data,
we also predicted that cluster separation should influence
this effect. In particular, we thought the correlation would
be lower when the gap was small, since iterative optimization
would have difficulty assigning instances to the "right"
unlabeled classes, whereas supervised learning would have
no such difficulty. However, the correlation should increase
monotonically with cluster distance, since the process of
finding well-separated clusters should then be dominated by
the inductive bias of the supervised learning modules.
Our experimental runs with synthetic data did not support
these predictions. 3 Despite our attempts to design
data sets that would distinguish among the supervised learning
methods, the correlations between supervised and unsupervised
accuracies were considerably lower (for both strict and
weighted assignment) than for our studies with natural domains, though
still marginally significant at the 0.1 level. Moreover, our
experiments showed no evidence that the correlation increases
with cluster separation, for either strict or weighted assignment.
Figure 3, which plots the accuracies for strict unsupervised
learning against supervised accuracy at a fixed cluster separation,
suggests one reason for this negative result.
Apparently, the correlations are being reduced by a "ceiling
effect" in which the supervised accuracies (generally much
higher than for our results on natural domains) show little
variation, whereas the unsupervised accuracies still range
widely. The supervised methods typically learn very accurate
classifiers across all four synthetic domains, even though
3 This study also revealed no evidence for a preferred data assignment
scheme, with the best combinations shifting across
both domain and separation level.
Figure 3: Supervised and unsupervised accuracies, using strict data assignment, for four algorithms with four synthetic data sets.
we did our best to design them otherwise. Analogous plots
for higher values of the separation parameter S show even
stronger versions of this effect, indicating that supervised
induction benefits more from cluster separation than does
unsupervised clustering, which explains why the correlation
does not increase as predicted.
Our expectations rested on the intuition that inductive
bias and cluster separation are the dominant factors in determining
the behavior of an iterative optimizer. From these
negative results, and from the high correlations on natural
domains, we can infer that other factors we did not vary
in this experiment play an equal or more important role.
Likely candidates include the number of relevant attributes,
the number of irrelevant attributes, the amount of attribute
noise, and the number of classes, all of which are known to
affect the predictive accuracy of learned classifiers. These
domain characteristics should be varied systematically in
future studies that draw on synthetic data to explore the
relation between clustering and supervised learning.
5. RELATED WORK
As we noted earlier, there exists a large literature on clustering
that others (e.g., Everitt [3]) have reviewed at length.
Much of this work relies on iterative optimization to group
training cases, and there exist many variants beyond the
K-means and expectation-maximization algorithms familiar
to most readers. For instance, Michalski and Stepp's CLUSTER/2 [11]
used logical rule induction to characterize its
clusters and assign cases to them. More recently, Zhang,
Hsu, and Dayal [15] have described the K-harmonic means
method, which operates like K-means but invokes a different
distance metric that usually speeds convergence. How-
ever, despite this diversity, researchers have not proposed
either theoretical frameworks for characterizing the space of
iterative optimization methods or software frameworks to
support their rapid construction and evaluation.
In the broader arena, there have been some efforts to link
methods for supervised and unsupervised learning. For ex-
ample, Langley and Sage [8] adapted a method for inducing
univariate decision trees to operate on unsupervised data
and thus generate taxonomy, and, more recently, Langley [6]
and Liu et al. [9] have described similar but more sophisticated
approaches. The relationship between supervised and
unsupervised algorithms for rule learning is more transpar-
Martin [10] has reported one approach that adapts supervised
techniques to construct association rules from unlabeled
data. But again, such research has focused on specific
algorithms rather than on general or generative frameworks.
However, other areas of machine learning have seen a few
frameworks of this sort. Langley and Neches [7] developed
Prism, a flexible language for production-system architectures
that supported many combinations of performance and
learning algorithms, and later versions of Prodigy [14] included
a variety of mechanisms for learning search-control
knowledge. For classification problems, Kohavi et al.'s [5]
MLC++ supported a broad set of supervised induction algorithms
that one could invoke with considerable flexibility.
The generative abilities of MLC++ are apparent from its
use for feature selection and its support for novel combinations
of existing algorithms. This effort comes closest to our
own in spirit, both in its goals and its attempt to provide a
flexible software infrastructure for machine learning.
6. CONCLUDING REMARKS
In this paper, we presented a framework for iterative optimization
approaches to clustering that lets one embed any
supervised learning algorithm as a model-construction com-
ponent. This approach produces some familiar clustering
techniques, like K-means and EM, but it also generates some
novel methods that have not appeared in the literature. The
framework also let us evaluate some hypotheses about the
relation between the resulting clustering methods and their
supervised modules, which we tested using both natural and
synthetic data.
Our first hypothesis, that each supervised method had a
preferred data assignment scheme with which it produced
more accurate clusters, was not borne out by the experiments.
Clustering practitioners can continue to combine prototype
learning with strict assignment (giving K-means) and naive
Bayes with weighted assignment (giving EM), but we found
no evidence that these combinations are superior to the al-
ternatives. However, our experiments did support our second
hypothesis by revealing strong correlations between the
accuracy of supervised algorithms on natural data sets and
the accuracy of iterative optimizers in which they were em-
bedded. We augmented these results with experiments on
synthetic data, which gave us control over decision regions
and separation of clusters. These studies also produced positive
correlations between supervised and unsupervised ac-
curacy, but failed to reveal an effect of cluster separation.
Clearly, there remains considerable room for additional
research. The framework supports a variety of new clustering
algorithms, each interesting in its own right but also
important for testing further our hypotheses about relations
between supervised and unsupervised learning. We should
also carry out experiments with synthetic data that vary
systematically other factors that can affect predictive accu-
racy, such as irrelevant features and attribute noise. Finally,
we should explore further the role of cluster separation and
the reason it had no apparent influence in our studies.
Although our specific results are intriguing, we attach
more importance to the framework itself, which supports
a new direction for studies of clustering mechanisms. We
encourage other researchers to view existing techniques as
examples of some generative framework and to utilize that
framework both to explore the space of clustering methods
and to reveal underlying relations between supervised
and unsupervised approaches to induction. Ultimately, this
strategy should produce a deeper understanding of the clustering
process and its role in the broader science of machine
learning.
7.
--R
Maximum likelihood from incomplete data via the EM algorithm.
Pattern Classification and Scene Analysis.
Cluster Analysis.
Very simple classification rules perform well on most commonly used data sets.
Elements of Machine Learning.
Prism user's manual.
Conceptual clustering as discrimination learning.
Clustering through decision tree construction.
Focusing attention for observational learning: The importance of context.
Learning from observation: Conceptual clustering.
Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms.
Categories and Concepts.
Derivational analogy in Prodigy: Automating case acquisition
--TR
Very Simple Classification Rules Perform Well on Most Commonly Used Datasets
Derivational Analogy in PRODIGY
Elements of machine learning
Clustering through decision tree construction
K-Harmonic Means - A Spatial Clustering Algorithm with Boosting
--CTR
Tadashi Nomoto , Yuji Matsumoto, Supervised ranking in open-domain text summarization, Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, July 07-12, 2002, Philadelphia, Pennsylvania
Greg Hamerly , Charles Elkan, Alternatives to the k-means algorithm that find better clusterings, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
Shi Zhong , Joydeep Ghosh, A unified framework for model-based clustering, The Journal of Machine Learning Research, 4, p.1001-1037, 12/1/2003 | iterative optimization;clustering;supervised learning |
502567 | Detecting graph-based spatial outliers. | Identification of outliers can lead to the discovery of unexpected, interesting, and useful knowledge. Existing methods are designed for detecting spatial outliers in multidimensional geometric data sets, where a distance metric is available. In this paper, we focus on detecting spatial outliers in graph-structured data sets. We define statistical tests, analyze the statistical foundation underlying our approach, design several fast algorithms to detect spatial outliers, and provide a cost model for outlier detection procedures. In addition, we provide experimental results from the application of our algorithms on a Minneapolis-St. Paul (Twin Cities) traffic dataset to show their effectiveness and usefulness. | Introduction
Data mining is a process to extract nontrivial, previously unknown, and potentially useful
information (such as knowledge rules, constraints, and regularities) from data in databases [11, 4]. The
explosive growth in data and databases used in business management, government administra-
tion, and scientific data analysis has created a need for tools that can automatically transform
the processed data into useful information and knowledge. Spatial data mining is a process of
discovering interesting and useful but implicit spatial patterns. With the enormous amounts of
spatial data obtained from satellite images, medical images, GIS, etc., it is a nontrivial task for
humans to explore spatial data in detail. Spatial data sets and patterns are abundant in many
application domains related to NASA, the National Imagery and Mapping Agency(NIMA), the
National Cancer Institute(NCI), and the Unite States Department of Transportation(USDOT).
Data Mining tasks can be classified into four general categories: (a) dependency detection
(e.g., association rules) (b) class identification (e.g., classification, clustering) (c) class description
(e.g., concept generalization), and (d) exception/outlier detection [9]. The objective of the
first three categories is to identify patterns or rules from a significant portion of a data set.
On the other hand, the outlier detection problem focuses on the identification of a very small
subset of data objects often viewed as noises, errors, exceptions, or deviations. Outliers have
been informally defined as observations which appear to be inconsistent with the remainder of
that set of data [2], or which deviate so much from other observations so as to arouse suspicions
that they were generated by a different mechanism [6]. The identification of outliers can lead to
the discovery of unexpected knowledge and has a number of practical applications in areas such
as credit card fraud, the performance analysis of athletes, voting irregularities, bankruptcy, and
weather prediction.
Outliers in a spatial data set can be classified into three categories: set-based outliers, multi-dimensional
space-based outliers, and graph-based outliers. A set-based outlier is a data object
whose attributes are inconsistent with attribute values of other objects in a given data set regardless
of spatial relationships. Both multi-dimensional space-based outliers and graph-based
outliers are spatial outliers, that is, data objects that are significantly different in attribute values
from the collection of data objects among spatial neighborhoods. However, multi-dimension
space-based outliers and graph-based outliers are based on different spatial neighborhood defini-
tions. In multi-dimensional space-based outlier detection, the definition of spatial neighborhood
is based on Euclidean distance, while in graph-based spatial outlier detections, the definition is
based on graph connectivity.
Many spatial outlier detection algorithms have been recently proposed; however, spatial
outlier detection remains challenging for various reasons. First, the choice of a neighborhood
is nontrivial. Second,the design of statistical tests for spatial outliers needs to account for the
distribution of the attribute values at various locations as well as the aggregate distribution
of attribute values over the neighborhoods. In addition, the computation cost of determining
parameters for a neighborhood-based test can be high due to the possibility of join computations
In this paper, we formulate a general framework for detecting outliers in spatial graph data
sets, and propose an efficient graph-based outlier detection algorithm. We provide cost models
for outlier detection queries, and compare underlying data storage and clustering methods
that facilitate outlier query processing. We also use our basic algorithm to detect spatial
and temporal outliers in a Minneapolis-St.Paul(Twin Cities) traffic data set, and show the
correctness and effectiveness of our approach.
1.1 An Illustrative Application Domain: Traffic Data Set
In 1995, the University of Minnesota and the Traffic Management Center (TMC) Freeway Operations
group started the development of a database to archive sensor network measurements
from the freeway system in the Twin Cities. The sensor network includes about nine hundred
stations, each of which contains one to four loop detectors, depending on the number of lanes.
Sensors embedded in the freeways monitor the occupancy and volume of traffic on the road.
At regular intervals, this information is sent to the Traffic Management Center for operational
purposes, e.g., ramp meter control, and research on traffic modeling and experiments. Figure 1
shows a map of the stations on highways within the Twin-Cities metropolitan area, where each
polygon represents one station. Interstate freeways include I-35W, I35E, I-94, I-394, I-494, and
I-694. State trunk highways include TH-100, TH-169, TH-212, TH-252, TH-5, TH-55, TH-62,
TH-65, and TH-77. I-494 and I-694 together forming a ring around the Twin Cities. I-94 passes
from East to North-West, while I-35W and I-35E run in a North-South direction. Downtown
Minneapolis is located at the intersection of I-94, I-394, and I-35W, and downtown Saint Paul
is located at the intersection of I-35E and I-94.
Figure 1: Detector map at the station level.

Figure 2(a) demonstrates the relationship between a station and its encompassing detectors.
For each station, there is one detector installed in each lane. The traffic flow information
measured by each detector can then be aggregated to the station level. Figure 2(b) shows the
three basic data tables for the traffic data. The station table stores the geographical location
and some related attributes for each station. The relationship between each detector and its
corresponding station is captured in the detector table. The value table records all the volume
and occupancy information within each 5-minute time slot at each particular station.
Figure 2: Detector-station relationship and basic tables: (a) relationship between detectors and stations; (b) the three basic tables (station, detector, and value).
In this application, each station exhibits both graph and attribute properties. The topological
space is the map, where each station represents a node and the connection between each
station and its surrounding stations can be represented as an edge. The attribute space for
each station is the traffic flow information (e.g., volume, occupancy) stored in the value table.
In this application, we are interested in discovering the location of stations whose measurements
are inconsistent with those of their graph-based spatial neighbors and the time periods
when those abnormalities arise. This outlier detection task is to:
• Build a statistical model for a spatial data set
• Check whether a specific station is an outlier
• Check whether stations on a route are outliers
We use three neighborhood definitions in this application, as shown in Figure 3. First, we
define a neighborhood based on spatial graph connectivity as a spatial graph neighborhood. In
Figure 3, (s_1, t_2) and (s_3, t_2) are the spatial neighbors of (s_2, t_2) if s_1 and s_3 are connected to
s_2 in a spatial graph. Second, we define a neighborhood based on time series as a temporal
neighborhood. In Figure 3, (s_2, t_1) and (s_2, t_3) are the temporal neighbors of (s_2, t_2) if t_1, t_2, and
t_3 are consecutive time slots. In addition, we define a neighborhood based on both space and
time series as a spatial-temporal neighborhood. In Figure 3, (s_1, t_1), (s_1, t_3), (s_3, t_1), and (s_3, t_3),
together with the spatial and temporal neighbors above, are the spatial-temporal neighbors of (s_2, t_2) if s_1 and s_3
are connected to s_2 in a spatial graph, and t_1, t_2, and t_3 are consecutive time slots.
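A sketch of these three neighborhood definitions over (station, time) nodes; the adjacency dictionary adj and the integer time slots are our own assumed representation:

    def spatial_neighbors(s, t, adj):
        """adj maps each station to the set of graph-connected stations."""
        return [(u, t) for u in adj[s]]

    def temporal_neighbors(s, t):
        return [(s, t - 1), (s, t + 1)]  # consecutive 5-minute time slots

    def spatial_temporal_neighbors(s, t, adj):
        """Window over both space and time, excluding (s, t) itself."""
        stations = list(adj[s]) + [s]
        return [(u, tt) for u in stations for tt in (t - 1, t, t + 1)
                if (u, tt) != (s, t)]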
1.2 Problem Formulation
In this section, we formally define the spatial outlier detection problem. Given a spatial frame-work
S for the underlying spatial graph G, an attribute f over S, and neighborhood relationship
R, we can build a model and construct statistical tests for spatial outliers based on a spatial
graph according to the given confidence level threshold. The problem is formally defined as
follows.
Spatial Outlier Detection Problem
Given:
Figure 3: Spatial and temporal outliers in traffic data.

• A spatial graph G = (S, E) is a spatial framework consisting of a set of locations S = {s_1, s_2, ..., s_n}, where E (⊆ S × S) is a collection of edges between locations in S
• A neighborhood relationship R ⊆ S × S consistent with E
• An attribute function f : S → R, where R denotes the set of real numbers
• An aggregate function f_aggr : R^N → R to summarize values of attribute f over a neighborhood relationship R_N ⊆ R
• A confidence level threshold θ

Find: A set O of spatial outliers, O ⊆ S

Objective:
• Correctness: the outliers identified by the method have attribute values significantly different from those of their neighborhoods
• Efficiency: to minimize the computation time

Constraints:
• Attribute values for different locations in S have a normal distribution
• The size of the data set is much greater than the main memory size
• The range of the attribute function f is the set of real numbers
The formulation shows two subtasks in this spatial outlier detection problem: (a) the design
of a statistical model M and a test for spatial outliers, and (b) the design of an efficient computation
method to estimate the parameters of the test, to test whether a specific spatial location is an outlier,
and to test whether the spatial locations on a given path are outliers.
1.3 Paper Scope and Outline
This paper focuses on graph-based spatial outlier detection using a single attribute. Outlier
detection in a multi-dimensional space using multiple attributes is outside the scope of this
paper.
The rest of the paper is organized as follows. Section 2 reviews related work and discusses our
contributions. In Section 3, we propose our graph-based spatial outlier detection algorithm and
discuss its computational complexity. The cost models for different outlier query processing
are analyzed in Section 4. Section 5 presents our experimental design. The experimental
observation and results are shown in Section 6. We summarize our work in Section 7.
2 Related Work and Our Contribution
Many outlier detection algorithms [1, 2, 3, 8, 9, 12, 14, 16] have been recently proposed. As
shown in Figure 4, these methods can be broadly classified into two categories, namely set-based
outlier detection methods and spatial-set-based outlier detection methods. The set-based outlier
detection algorithms [2, 7] consider the statistical distribution of attribute values, ignoring the
spatial relationships among items. Numerous outlier detection tests, known as discordancy
tests [2, 7], have been developed for different circumstances, depending on the data distribution,
the number of expected outliers, and the types of expected outliers. The main idea is to fit the
data set to a known standard distribution, and develop a test based on distribution properties.
Figure 4: Classification of outlier detection methods.
Spatial-set-based outlier detection methods consider both attribute values and spatial rela-
tionships. They can be further grouped into two categories, namely multi-dimensional metric
space-based methods and graph-based methods. The multi-dimensional metric space-based
methods model data sets as a collection of points in a multidimensional space, and provide
tests based on concepts such as distance, density, convex-hull depth. We discuss different example
tests now. Knorr and Ng presented the notion of distance-based outliers [8, 9]. For a
k-dimensional data set T with N objects, an object O in T is a DB(p, D)-outlier if at least
a fraction p of the objects in T lie at a distance greater than D from O. Ramaswamy et al. [13]
proposed a formulation for distance-based outliers based on the distance of a point from its
k-th nearest neighbor. After ranking points by the distance to their k-th nearest neighbor, the top
n points are declared outliers.
where the outlier-degree of an object is determined by taking into account the clustering structure
in a bounded neighborhood of the object, e.g., k nearest neighbors. They formally defined
the outlier factor to capture this relative degree of isolation or outlierness. Their notions of
outliers are based on the same theoretical foundation as density-based cluster analysis [1]. In
computational geometry, some depth-based approaches [14, 12] organize data objects in convex
hull layers in data space according to their peeling depth [12], and outliers are expected to be
found from data objects with a shallow depth value. Conceptually, depth-based outlier detection
methods are capable of processing multidimensional datasets. However, with the best case
computational complexity of \Omega\Gamma N dk=2e ) for computing a convex hull, where N is the number of
objects and k is the dimensionality of the dataset, depth-based outlier detection methods may
not be applicable for high dimensional data sets. Yu et al. [16] introduced an outlier detection
approach, called FindOut, which identifies outliers by removing clusters from the original data.
Its key idea is to apply signal processing techniques to transform the space and find the dense
regions in the transformed space. The remaining objects in the non-dense regions are labeled
as outliers.
Methods for detecting outliers in multi-dimensional Euclidean space have some limitation.
First, multi-dimensional approaches assume that the data items are embedded in a isometric
metric space and do not capture the spatial graph structure. Consider the application domain
of traffic data analysis. A multi-dimensional method may put a detector station in the neighborhood
of another detector even if they are on opposite sides of the highway (e.g., I-35W north
bound at exit 230, and I-35W south bound at exit 230), leading to the potentially incorrect
identification of a bad detector. Second, they do not exploit a priori information about the statistical
distribution of attribute data. Last, they seldom provide a confidence measure of the
discovered outliers.
In this paper, we formulate a general framework for detecting spatial outliers in a spatial data
set with an underlying graph structure. We define neighborhood-based statistics and validate
the statistical distribution. We then design a statistically correct test for discovering spatial
outliers, and develop a fast algorithm to estimate model parameters, as well as to determine
the results of a spatial outlier test on a given item. In addition, we evaluate our method in
Twin Cities traffic data set and show the effectiveness and usefulness of our approach.
3 Our Approach: Spatial Outlier Detection Algorithm
In this section, we list the key design decisions and propose an I/O efficient algorithm for spatial
graph-based outliers.
3.1 Choice of Spatial Statistic
For spatial statistics, several parameters should be pre-determined before running the spatial
outlier test. First, the neighborhood can be selected based on a fixed cardinality or a fixed graph
distance or a fixed Euclidean distance. Second, the choice of neighborhood aggregate function
can be the mean, the variance, or the auto-correlation. Third, the choice for comparing a location with
its neighbors can use either just a number or a vector of attribute values. Finally, the statistic
for base distribution can be selected from various choices.
The statistic we use is S(x) = f(x) − E_{y∈N(x)}(f(y)), where f(x) is the attribute value
for a data record x, N(x) is the fixed-cardinality set of neighbors of x, and E_{y∈N(x)}(f(y)) is the
average attribute value over the neighbors of x. The statistic S(x) thus denotes the difference between
the attribute value of each data object x and the average attribute value of x's neighbors.
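To make the definition concrete, the following is a small Python sketch (ours, not from the paper) of how S(x) could be computed; the names f and neighbors are our own conventions for the attribute values and the neighbor sets N(x).

    def neighborhood_statistic(f, neighbors):
        """Return S(x) = f(x) - average of f over the neighbors of x,
        for every data object x.

        f         : dict mapping node id -> attribute value f(x)
        neighbors : dict mapping node id -> list of neighbor node ids N(x)
        """
        s = {}
        for x, nbrs in neighbors.items():
            avg = sum(f[y] for y in nbrs) / len(nbrs)
            s[x] = f[x] - avg
        return s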
3.2 Characterizing the Distribution of the Statistic
Lemma 1: The statistic S(x) = f(x) − E_{y∈N(x)}(f(y)) is normally distributed if the attribute
value f(x) is normally distributed.
Proof:
Given the definition of neighborhood, for each data record x the average attribute value
E_{y∈N(x)}(f(y)) over its neighbors can be calculated. Since the attribute values f(x) are normally
distributed and an average of normal variables is also normally distributed, the average attribute
value over the neighbors also follows a normal distribution for a fixed-cardinality
neighborhood.
Since the attribute value and the average attribute value over the neighbors are two normal
variables, the difference S(x) between the attribute value of each data object x and the average
attribute value of x's neighbors is also normally distributed. □
3.3 Test for Outlier Detection
The test for detecting an outlier can be described as |S(x) − μ_s| / σ_s > θ. For each data object x
with an attribute value f(x), S(x) is the difference between the attribute value of data object x
and the average attribute value of its neighbors; μ_s is the mean value of all S(x), and σ_s is the
standard deviation of all S(x). The choice of θ depends on the specified confidence interval.
For example, a confidence interval of 95 percent leads to θ ≈ 2.
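As an illustration, the test reduces to a one-line predicate once μ_s and σ_s are known; the sketch below (our own example, not the paper's code) derives θ from the confidence level with scipy instead of hard-coding θ ≈ 2.

    from scipy.stats import norm

    def is_spatial_outlier(s_x, mu_s, sigma_s, confidence=0.95):
        # theta is the two-sided normal quantile; about 1.96 for 95%.
        theta = norm.ppf(1 - (1 - confidence) / 2)
        return abs(s_x - mu_s) / sigma_s > theta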
3.4 Computation of Test Parameters
We now propose an I/O-efficient algorithm to calculate the test parameters, i.e., the mean and standard
deviation of the statistic, as shown in Algorithm 1. The computed mean and standard
deviation can then be used to detect outliers in the incoming data set.
Given an attribute data set V and the connectivity graph G, the TPC algorithm first
retrieves the neighbor nodes from G for each data object x. It then computes the difference between
the attribute value of x and the average of the attribute values of x's neighbor nodes. These
difference values are then stored as a set in AvgDist_Set. Finally, AvgDist_Set is used to
compute the distribution parameters μ_s and σ_s. Note that the data objects are processed on a page basis
to reduce redundant I/O.
3.5 Computation of Test Results
The neighborhood aggregate statistics, e.g., mean and standard deviation, computed by
the TPC algorithm can be used to verify the outliers in an incoming data set. The two verification
procedures are Route Outlier Detection (ROD) and Random Node Verification (RNV).
Test Parameters Computation (TPC) Algorithm
Input: S is the multidimensional attribute space;
D is the attribute data set in S;
F is the distance function in S;
ND is the depth of neighbor;
G = (V, E) is the spatial graph;
Output: (μ_s, σ_s).
for i = 1 to |D| {
  O_i = Get_One_Object(i, D); /* Select each object from D */
  NNS = Find_Neighbor_Nodes_Set(O_i, ND, G); /* Find neighbor nodes of O_i from G */
  Accum_Dist = 0;
  for j = 1 to |NNS| {
    O_k = Get_One_Object(j, NNS); /* Select each object from NNS */
    Accum_Dist += F(O_i, O_k);
  }
  AvgDist = Accum_Dist / |NNS|;
  Add_Element(AvgDist_Set, AvgDist); /* Add the element to AvgDist_Set */
}
μ_s = Mean(AvgDist_Set); /* Compute mean */
σ_s = Standard_Dev(AvgDist_Set); /* Compute standard deviation */
return (μ_s, σ_s).
Algorithm 1: Pseudo-code for test parameters computation
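For reference, a direct in-memory transcription of TPC might look as follows; the page-based I/O handling of the original is deliberately omitted, and the dictionaries f and neighbors are our own assumed representation.

    import statistics

    def tpc(f, neighbors):
        """Test Parameters Computation: estimate (mu_s, sigma_s) of the
        neighborhood difference statistic over all data objects."""
        avg_dist_set = []
        for x, nbrs in neighbors.items():
            accum = sum(f[x] - f[y] for y in nbrs)   # attribute differences
            avg_dist_set.append(accum / len(nbrs))   # equals S(x)
        return statistics.mean(avg_dist_set), statistics.pstdev(avg_dist_set)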
The ROD procedure detects spatial outliers along a user-specified route, as shown in Algorithm 2.
The RNV procedure checks the outlierness of a set of randomly generated nodes. Given a route
RN in the data set D with graph structure G, the ROD algorithm first retrieves the neighboring
nodes from G for each data object x in the route RN; it then computes the difference
S(x) between the attribute value of x and the average of the attribute values of x's neighboring
nodes. Each S(x) can then be tested using the spatial outlier detection test |S(x) − μ_s| / σ_s > θ, where
θ is predetermined by the given confidence interval. The steps to detect outliers in both ROD
and RNV are similar, except that RNV has no shared data access needs across tests for
different nodes. The I/O operations for Find_Neighbor_Nodes_Set() in different iterations are
independent of each other in RNV. We note that the operation Find_Neighbor_Nodes_Set() is
executed once in each iteration and dominates the I/O cost of the entire algorithm. The storage
of the data set should support the I/O-efficient computation of this operation. We discuss the
choices for storage structure and provide an experimental comparison in Sections 5 and 6.
The I/O costs of ROD and RNV are also dominated by the I/O cost of the
Find_Neighbor_Nodes_Set() operation.
4 Analytical Evaluation and Cost Models
In this section, we provide simple algebraic cost models for the I/O cost of the outlier detection
operations, using the Connectivity Residue Ratio (CRR) measure of physical page clustering
methods. The CRR value is defined as
CRR = (number of unsplit edges) / (total number of edges).
Route Outlier Detection (ROD) Algorithm
Input: S is the multidimensional attribute space;
D is the attribute data set in S;
F is the distance function in S;
ND is the depth of neighbor;
G = (V, E) is the spatial graph;
CI is the confidence interval;
(μ_s, σ_s) are the mean and standard deviation calculated in TPC;
RN is the set of nodes in a route;
Output: Outlier_Set.
for each node i in RN {
  O_i = Get_One_Object(i, D); /* Select each object from D */
  NNS = Find_Neighbor_Nodes_Set(O_i, ND, G); /* Find neighbor nodes of O_i from G */
  Accum_Dist = 0;
  for j = 1 to |NNS| {
    O_k = Get_One_Object(j, NNS); /* Select each object from NNS */
    Accum_Dist += F(O_i, O_k);
  }
  AvgDist = Accum_Dist / |NNS|;
  T_value = |AvgDist − μ_s| / σ_s;
  if (Check_Normal_Table(T_value, CI) == True) { /* Check the normal distribution table */
    Add_Element(Outlier_Set, i); /* Add the element to Outlier_Set */
  }
}
return Outlier_Set.
Algorithm 2: Pseudo-code for route outlier detection
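An in-memory analogue of the ROD loop is sketched below; mu_s and sigma_s come from TPC, and theta is fixed from the confidence interval as in Section 3.3. This is again our own sketch, not the paper's implementation.

    def rod(route, f, neighbors, mu_s, sigma_s, theta=2.0):
        """Route Outlier Detection: return the nodes on the route whose
        standardized statistic exceeds theta."""
        outliers = []
        for x in route:
            nbrs = neighbors[x]
            s_x = f[x] - sum(f[y] for y in nbrs) / len(nbrs)
            if abs(s_x - mu_s) / sigma_s > theta:
                outliers.append(x)
        return outliers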
The CRR value is determined by the page clustering method, the data record size, and
the page size. Figure 5 gives an example of a CRR value calculation. The blocking factor, i.e.,
the number of data records within a page, is three, and there are nine data records. The data
records are clustered into three pages. There are a total of nine edges, six of which are unsplit.
The CRR value of this graph can therefore be calculated as CRR = 6/9 ≈ 0.67.
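For illustration, the CRR of a given page assignment can be computed directly from the edge list; here page is assumed to map each node to the disk page storing its record.

    def crr(edges, page):
        """Connectivity Residue Ratio: the fraction of edges whose two
        endpoints are stored on the same page (the "unsplit" edges)."""
        unsplit = sum(1 for u, v in edges if page[u] == page[v])
        return unsplit / len(edges)
    # For the graph of Figure 5 (nine edges, six unsplit) this returns 2/3.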
Table 1 lists the symbols used to develop our cost formulas. α is the CRR value. β denotes
the blocking factor, which is the number of data records that can be stored in one memory
page. γ is the average number of nodes in the neighbor list of a node. N is the total number
of nodes in the data set, L is the number of nodes along a route, and R is the number of nodes
randomly generated by users for spatial outlier verification.
Figure 5: Example of CRR (nine data records clustered into pages A, B, and C)
Table 1: Symbols used in cost analysis

Symbol  Meaning
α       The CRR value
β       Average blocking factor
N       Total number of nodes
L       Number of nodes in a route
R       Number of nodes in a random set
γ       Average number of neighbors for each node
4.1 Cost Modeling for Test Parameters Computation(TPC) Algorithm
The TPC algorithm is a nested-loop index join. Suppose that we use two memory buffers. If one
memory buffer stores the data object x used in the outer loop and the other memory buffer is
reserved for processing the neighbors of x, we get the following cost function to estimate the
number of page accesses:
C_TPC = N/β + N · γ · (1 − α).
The outer loop retrieves all the data records on a page basis, and has an aggregated cost
of N/β. For each node x, on average, γ · α neighbors are in the same page as x, and can be
processed without redundant I/O. Additional data page accesses are needed to retrieve the
other neighbors, and this takes at most γ · (1 − α) data page accesses per node. Thus the
expected total cost for the inner loop is N · γ · (1 − α).
4.2 Cost Modeling for Route Outlier Detection(ROD) algorithm
We get the following cost function to estimate the number of page accesses with two memory
buffers for the ROD algorithm, where one memory buffer is reserved for processing the node x to be
verified and the other is used to process the neighbors of x:
C_ROD = L · (1 − α) + L · γ · (1 − α).
For each node x, on average, its successor node y is on the same page as x with probability
α, and can be processed with no redundant page accesses; the cost to access all the nodes
along a route is therefore L · (1 − α). To process the neighbors of each node, γ · α neighbors are on
the same page as x, and additional data page accesses are needed to retrieve the other
neighbors, which takes at most γ · (1 − α) data page accesses per node.
4.3 Cost Modeling for Random Node Verification(RNV) algorithm
We get the following cost function to estimate the number of page accesses with two memory
buffers for the RNV algorithm, where one memory buffer is reserved for processing the node x to be
verified and the other is used to process the neighbors of x:
C_RNV = R + R · γ · (1 − α).
Since the memory buffer is assumed to be cleared for each consecutive random node, we
need R page accesses to process all these random nodes. For each node x, γ · α neighbors are
on the same page as x, and can be processed without extra I/O. Additional data page accesses
are needed to retrieve the other neighbors, and this takes at most γ · (1 − α) data page
accesses per node. Thus, the expected total cost to process the neighbors of the R nodes is R · γ · (1 − α).
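Under the reading of the cost formulas given above (reconstructed from the surrounding text and best treated as assumptions), the three models can be evaluated as simple functions:

    def cost_tpc(N, alpha, beta, gamma):
        # N / beta page reads for the outer loop, plus about
        # gamma * (1 - alpha) extra reads per node for split neighbors.
        return N / beta + N * gamma * (1 - alpha)

    def cost_rod(L, alpha, gamma):
        return L * (1 - alpha) + L * gamma * (1 - alpha)

    def cost_rnv(R, alpha, gamma):
        return R + R * gamma * (1 - alpha)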
5 Experiment Design
In this section, we describe the layout of our experiments and then illustrate the candidate
clustering methods.
5.1 Experimental Layout
The design of our experiments is shown in Figure 6. Using the Twin Cities Highway Connectivity
Graph (TCHCG), we took data from the TCHCG and physically stored the data set into
data pages using different clustering strategies and page sizes. These data pages were then
processed to generate the global distribution or the sampling distribution, depending on the size of
the data sets.
We compared different data page clustering schemes: CCAM [15], Z-ordering [10], and Cell-tree [5].
Other parameters of interest were the size of the memory buffer, the buffering strategy, the
memory block size (page size), and the number of neighbors. The measures of our experiments
were the CRR values and the I/O cost for each outlier detection procedure.
Figure 6: Experimental layout (a clustering method and page size produce sets of data pages from the Twin Cities Highway Connectivity Graph; together with the buffering strategy, buffer size, and number of neighbors, these feed the Test Parameters Computation (TPC, a nested-loop index join), Route Outlier Detection (ROD), and Random Node Verification (RNV) procedures, which are measured by CRR and I/O cost)
The experiments were conducted on many graphs. We present the results on a representative
graph, a spatial network with 990 nodes that represents the traffic detector stations for
a 20-square-mile section of the Twin Cities area. This data set was provided by the Minnesota
Dept. of Transportation (MnDOT).
We used a common record type for all the clustering methods. Each record contains a node
and its neighbor-list, i.e., a successor-list and a predecessor-list. We also conducted performance
comparisons of the I/O cost for outlier-detection query processing.
5.2 Candidate Clustering Methods
In this section we describe the candidate clustering methods used in the experiments.
Connectivity-Clustered Access Method (CCAM): CCAM [15] clusters the nodes of
the graph via graph partitioning, e.g., Metis. Other graph-partitioning methods can also be
used as the basis of this scheme. In addition, an auxiliary secondary index is used to support
query operations. The choice of secondary index can be tailored to the application. We used
the B+-tree with Z-order in our experiments, since the benchmark graph is embedded in
geographic space. Other access methods such as the R-tree and Grid File can alternatively be
created on top of the data file as secondary indices in CCAM to suit the application.
Linear Clustering by Z-order: Z-order [10] utilizes spatial information while imposing
a total order on the points. The Z-order of a coordinate (x, y) is computed by interleaving
the bits in the binary representations of the two values. Alternatively, Hilbert ordering may be
used. A conventional one-dimensional primary index (e.g., a B+-tree) can be used to facilitate
the search.
Cell Tree: A cell tree [5] is a height-balanced tree. Each cell tree node corresponds not
necessarily to a rectangular box but to a convex polyhedron. A cell tree restricts the polyhedra
to partitions of a BSP (Binary Space Partitioning), to avoid overlaps among sibling polyhedra.
Each cell tree node corresponds to one disk page, and the leaf nodes contain all the information
required to answer a given search query. The cell tree can be viewed as a combination of a
BSP-tree and an R+-tree, or as a BSP-tree mapped onto paged secondary memory.
6 Experimental Results
In this section, we illustrate the outlier examples detected in the traffic data set, present the
results of our experiments, and test the effectiveness of the different page clustering methods.
To simplify the comparison, the I/O cost represents the number of data pages accessed. This
reflects the relative performance of the various methods for very large databases. For smaller
databases, the I/O cost associated with the indices should also be measured. Here we present the
evaluation of the I/O cost for the TPC algorithm; the evaluations of the I/O cost for the RNV and
ROD algorithms are available in the full version of this paper.
6.1 Outliers Detected
We tested the effectiveness of our algorithm on the Twin Cities traffic data set and detected
numerous outliers, as described in the following examples.
Figure 7: Outlier station 139 and its neighbor stations on 1/12/1997 (traffic volume vs. time for stations 138, 139, and 140)
In Figure 7, the abnormal station (Station 139) was detected with volume values significantly
inconsistent with the volume values of its neighboring stations 138 and 140. Note that our basic
algorithm detects outlier stations in each time slot; the detected outlier stations in each time
slot are then aggregated on a daily basis.
Figure 8: An example of outliers (average traffic volume, time vs. station ID, for (a) I-35W North Bound and (b) I-35W South Bound)
Figure 8 shows another example of traffic flow outliers. Figures 8(a) and (b) are the traffic
volume maps for I-35W North Bound and South Bound, respectively, on 1/21/1997. The X-axis
is a 5-minute time slot for the whole day and the Y-axis is the label of the stations installed
on the highway, starting from 1 on the north end to 61 on the south end. The abnormal white
line at 2:45pm and the white rectangle from 8:20am to 10:00am on the X-axis and between
stations 29 to 34 on the Y-axis can be easily observed from both (a) and (b). The white line
at 2:45pm is an instance of temporal outliers, where the white rectangle is a spatial-temporal
outlier. Moreover, station 9 in Figure 8(a) exhibits inconsistent traffic flow compared with its
neighboring stations, and was detected as a spatial outlier.
6.2 Testing Statistical Assumption
In this traffic data set, the volume values of all stations at a given moment are approximately
normally distributed. The histogram of stations over different volumes is shown in Figure 9(a)
with a normal probability distribution superimposed. As can be seen in Figure 9(a), the normal
distribution approximates the volume distribution very well. We calculated the intervals
[μ_v − σ, μ_v + σ], [μ_v − 2σ, μ_v + 2σ], and [μ_v − 3σ, μ_v + 3σ], where μ_v and σ are the mean and standard
deviation of the volume distribution; the expected percentages of measurements falling in these three
intervals under a normal distribution are 68.27%, 95.45%, and 99.73%, respectively. The pattern fits a
normal distribution well, since the observed percentages are 68%, 95%, and 100%.
Moreover, we plot the normal probability plot in Figure 9(b), and it appears linear. Hence the
volume values of all stations at the same time are approximately normally distributed.
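A quick way to reproduce this kind of check is sketched below (volumes is a hypothetical numpy array of station volumes at one time slot): it reports the empirical coverage of the μ ± kσ intervals, and scipy's probplot supplies the data behind a normal probability plot like Figure 9(b).

    import numpy as np
    from scipy import stats

    def interval_coverage(volumes):
        """Fraction of values inside mu +/- k*sigma for k = 1, 2, 3; a
        normal sample should give roughly 68.27%, 95.45%, and 99.73%."""
        mu, sigma = volumes.mean(), volumes.std()
        return [float(np.mean(np.abs(volumes - mu) <= k * sigma))
                for k in (1, 2, 3)]

    # stats.probplot(volumes, dist="norm") returns the (theoretical
    # quantile, ordered value) pairs used for a normal probability plot.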
Given the definition of neighborhood, we then calculate, for each station, the average volume value (v̄)
over its k neighbors according to the topological relationship. Since the volume
values are normally distributed, the average of the normal variables also follows a normal distribution.
Figure 9: Verification of normal distribution for traffic volumes and volume differences over
neighbors ((a) histogram of the traffic volume distribution at 10:00am on 1/15/1997; (b) normal
probability plot for the traffic volume distribution; (c) histogram of the normalized volume
difference over the spatial neighborhood)
Since the volume values and the average volume values over neighborhoods are normally distributed,
the difference (v − v̄) between these volumes and their corresponding average volume
values over neighborhoods is also normally distributed, since the difference of two normal
random variables is always normal, as shown in Figure 9(c). Given the confidence level
100(1 − α)%, we can calculate the confidence interval for the difference distribution, i.e., 100(1 − α)
percent of the difference values lie between −z_{α/2} and z_{α/2} standard deviations
of the mean in the sample space. So we can classify the spatial outliers at the given confidence
level threshold.
6.3 Evaluation of Proposed Cost Model
We evaluated the I/O cost of the different clustering methods for the outlier detection procedures,
namely Test Parameters Computation (TPC), Route Outlier Detection (ROD), and Random
Node Verification (RNV). The experiments used Twin Cities traffic data with a page size of 1K
bytes and two memory buffers. Table 2 shows the number of data page accesses for each
procedure under the various clustering methods. The CRR value for each method is also listed in
the table. The cost function for TPC is C_TPC = N/β + Nγ(1 − α). The cost function for
RNV is C_RNV = R + Rγ(1 − α). The cost function for ROD is C_ROD = L(1 − α) + Lγ(1 − α),
as described in Section 4.2.
Table 2: The actual I/O cost and the predicted cost model for different clustering methods

Clustering  TPC                  RNV                  ROD
Method      Actual  Predicted    Actual  Predicted    Actual  Predicted    CRR
Zord        1263    1269         349     357          78      79           0.31
As shown in Table 2, CCAM produced the lowest number of data page accesses for the
outlier detection procedures. This is to be expected, since CCAM achieved the highest CRR
value.
6.4 Evaluation of I/O cost for TPC algorithm
In this section, we present the results of our evaluation of the I/O cost and CRR value for
alternative clustering methods while computing the test parameters. The parameters of interest
are buffer size, page size, number of neighbors, and neighborhood depth.
6.4.1 The effect of page size and CRR value
Figures 10(a) and (b) show the number of data pages accessed and the CRR values, respectively,
for the different page clustering methods as the page size changes. The buffer size is fixed at
32 Kbytes. As can be seen, a higher CRR value implies a lower number of data page accesses, as
predicted in the cost model. CCAM outperforms the other competitors for all four page sizes,
and CELL has better performance than Z-order clustering.
6.4.2 The effect of neighborhood cardinality
We evaluated the effect of varying the number of neighbors and the depth of neighbors for
different page clustering methods. We fixed the page size at 1K, and the buffer size at 4K,
and used the LRU buffering strategy. Figure 11(a) shows the number of page accesses as the
number of neighbors for each node increases from 2 to 10. CCAM has better performance
than Z-order and CELL. The performance ranking of the page clustering methods remains the
same for different numbers of neighbors. Figure 11(b) shows the number of page accesses as the
Figure 10: Effect of page size on data page accesses and CRR (buffer size = 32K; (a) page accesses, (b) CRR)
neighborhood depth increases from 1 to 5. CCAM has better performance than Z-order and
CELL for all the neighborhood depths.
Figure 11: Effect of neighborhood cardinality on data page accesses (page size = 1K, buffer size = 4K; (a) number of neighbors, (b) neighborhood depth)
7 Conclusions
In this paper, we focused on detecting outliers in spatial graph data sets. We proposed the
notion of a neighbor outlier in graph-structured data sets, designed a fast algorithm to detect
outliers, analyzed the statistical foundation underlying our approach, provided cost models
for the different outlier detection procedures, and compared the performance of our approach
using different data clustering approaches. In addition, we provided experimental results from
the application of our algorithm to the Twin Cities traffic archive to show its effectiveness and
usefulness.
We have evaluated alternative clustering methods for neighbor outlier query processing,
including model construction, random node verification, and route outlier detection. Our experimental
results show that CCAM, which achieves the highest CRR, provides the best overall performance.
Acknowledgment
We are particularly grateful to Professor Vipin Kumar and our Spatial Database Group members, Weili Wu,
Yan Huang, Xiaobin Ma, and Hui Xiong, for their helpful comments and valuable discussions. We would also like
to express our thanks to Kim Koffolt for improving the readability and technical accuracy of this paper.
This work is supported in part by USDOT grant on high performance spatial visualization of traffic data, and
is sponsored in part by the Army High Performance Computing Research Center under the auspices of the Department
of the Army, Army Research Laboratory cooperative agreement number DAAH04-95-2-0003/contract
number DAAH04-95-C-0008, the content of which does not necessarily reflect the position or the policy of the
government, and no official endorsement should be inferred. This work was also supported in part by NSF grant
#9631539.
--R
OPTICS: Ordering points to identify the clustering structure.
Outliers in Statistical Data.
OPTICS-OF: Identifying local outliers.
Advances in Knowledge Discovery and Data Mining.
The Design of the Cell Tree: An Object-Oriented Index Structure for Geometric Databases
Identification of Outliers.
Applied Multivariate Statistical Analysis.
A unified notion of outliers: Properties and computation.
Algorithms for mining distance-based outliers in large datasets
A Class of Data Structures for Associative Searching.
Knowledge Discovery in Databases.
Computational Geometry: An Introduction.
Efficient algorithms for mining outliers from large data sets.
Computing depth contours of bivariate point clouds.
A connectivity-clustered access method for aggregate queries on transportation networks-a summary of results
Finding outliers in very large datasets.
--TR
Computational geometry: an introduction
Applied multivariate statistical analysis
Computing depth contours of bivariate point clouds
OPTICS
Efficient algorithms for mining outliers from large data sets
A class of data structures for associative searching
The Design of the Cell Tree
OPTICS-OF
Algorithms for Mining Distance-Based Outliers in Large Datasets
--CTR
Yan Huang , Hui Xiong , Shashi Shekhar , Jian Pei, Mining confident co-location rules without a support threshold, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
a Linear Semantic Scan Statistic technique for detecting anomalous windows, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Sanjay Chawla , Pei Sun, SLOM: a new measure for local spatial outliers, Knowledge and Information Systems, v.9 n.4, p.412-429, April 2006
Shashi Shekhar , Yan Huang , Judy Djugash , Changqing Zhou, Vector map compression: a clustering approach, Proceedings of the 10th ACM international symposium on Advances in geographic information systems, November 08-09, 2002, McLean, Virginia, USA
Ian Davidson , Goutam Paul, Locating secret messages in images, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Jeffrey Xu Yu , Weining Qian , Hongjun Lu , Aoying Zhou, Finding centric local outliers in categorical/numerical spaces, Knowledge and Information Systems, v.9 n.3, p.309-338, March 2006
Chang-Tien Lu , Yufeng Kou , Jiang Zhao , Li Chen, Detecting and tracking regional outliers in meteorological data, Information Sciences: an International Journal, v.177 n.7, p.1609-1632, April, 2007
Victoria Hodge , Jim Austin, A Survey of Outlier Detection Methodologies, Artificial Intelligence Review, v.22 n.2, p.85-126, October 2004 | outlier detection;spatial graphs;Spatial Data Mining |
502591 | Bipartite graph partitioning and data clustering. | Many data types arising from data mining applications can be modeled as bipartite graphs, examples include terms and documents in a text corpus, customers and purchasing items in market basket analysis and reviewers and movies in a movie recommender system. In this paper, we propose a new data clustering method based on partitioning the underlying bipartite graph. The partition is constructed by minimizing a normalized sum of edge weights between unmatched pairs of vertices of the bipartite graph. We show that an approximate solution to the minimization problem can be obtained by computing a partial singular value decomposition (SVD) of the associated edge weight matrix of the bipartite graph. We point out the connection of our clustering algorithm to correspondence analysis used in multivariate analysis. We also briefly discuss the issue of assigning data objects to multiple clusters. In the experimental results, we apply our clustering algorithm to the problem of document clustering to illustrate its effectiveness and efficiency. | INTRODUCTION
Cluster analysis is an important tool for exploratory data
mining applications arising from many diverse disciplines.
Informally, cluster analysis seeks to partition a given data
set into compact clusters so that data objects within a cluster
are more similar than those in distinct clusters. The literature
on cluster analysis is enormous including contributions
from many research communities. (see [6, 9] for recent surveys
of some classical approaches.) Many traditional clustering
algorithms are based on the assumption that the given
dataset consists of covariate information (or attributes) for
each individual data object, and cluster analysis can be cast
as a problem of grouping a set of n-dimensional vectors each
representing a data object in the dataset. A familiar example
is document clustering using the vector space model
[1]. Here each document is represented by an n-dimensional
vector, and each coordinate of the vector corresponds to a
term in a vocabulary of size n. This formulation leads to
the so-called term-document matrix A = (a_ij) for the representation
of the collection of documents, where a_ij is the
so-called term frequency, i.e., the number of times term i
occurs in document j. In this vector space model terms and
documents are treated asymmetrically with terms considered
as the covariates or attributes of documents. It is also
possible to treat both terms and documents as first-class
citizens in a symmetric fashion, and consider a_ij as the frequency
of co-occurrence of term i and document j, as is done,
for example, in probabilistic latent semantic indexing [12]. 1
In this paper, we follow this basic principle and propose a
new approach to model terms and documents as vertices
in a bipartite graph with edges of the graph indicating the
co-occurrence of terms and documents. In addition we can
optionally use edge weights to indicate the frequency of this
co-occurrence. Cluster analysis for document collections in
this context is based on a very intuitive notion: documents
are grouped by topics, on one hand documents in a topic
tend to more heavily use the same subset of terms which
form a term cluster, and on the other hand a topic usually
is characterized by a subset of terms and those documents
heavily using those terms tend to be about that particular
topic. It is this interplay of terms and documents which
gives rise to what we call bi-clustering, by which terms and
documents are simultaneously grouped into semantically coherent clusters.
1 Our clustering algorithm computes an approximate global
optimal solution, while probabilistic latent semantic indexing
relies on the EM algorithm and therefore might be prone to
local minima even with the help of some annealing process.
Within our bipartite graph model, the clustering problem
can be solved by constructing vertex graph partitions.
Many criteria have been proposed for measuring the quality
of graph partitions of undirected graphs [4, 14]. In this pa-
per, we show how to adapt those criteria for bipartite graph
partitioning and therefore solve the bi-clustering problem.
A great variety of objective functions have been proposed
for cluster analysis without efficient algorithms for finding
the (approximate) optimal solutions. We will show that our
bipartite graph formulation naturally leads to partial SVD
problems for the underlying edge weight matrix which admit
efficient global optimal solutions. The rest of the paper
is organized as follows: in section 2, we propose a new criterion
for bipartite graph partitioning which tends to produce
balanced clusters. In section 3, we show that our criterion
leads to an optimization problem that can be approximately
solved by computing a partial SVD of the weight matrix of
the bipartite graph. In section 4, we make connection of
our approximate solution to correspondence analysis used
in multivariate data analysis. In section 5, we briefly discuss
how to deal with clusters with overlaps. In section 6,
we describe experimental results on bi-clustering a dataset
of newsgroup articles. We conclude the paper in section 7
and give pointers to future research.
2. BIPARTITE GRAPH PARTITIONING
We denote a graph by G(V, E), where V is the vertex
set and E is the edge set of the graph. A graph G(V, E) is
bipartite with two vertex classes X and Y if V = X ∪ Y with
X ∩ Y = ∅, and each edge in E has one endpoint in X and
one endpoint in Y. We consider a weighted bipartite graph
G(X, Y, W) with W = (w_ij), where w_ij denotes the
weight of the edge between vertex i and vertex j. We let w_ij = 0
if there is no edge between vertices i and j. In the context
of document clustering, X represents the set of terms and
Y represents the set of documents, and w_ij can be used to
denote the number of times term i occurs in document j. A
vertex partition of G(X, Y, W), denoted by Π(A, B), is defined
by a partition of the vertex sets X and Y, respectively:
X = A ∪ A^c and Y = B ∪ B^c, where for a set S, S^c denotes its
complement. By convention, we pair A with B, and A^c with
B^c. We say that a pair of vertices x ∈ X and y ∈ Y is
matched with respect to a partition Π(A, B) if there is an
edge between x and y, and either x ∈ A and y ∈ B, or
x ∈ A^c and y ∈ B^c. For any two subsets of vertices S ⊆ X
and T ⊆ Y, define

W(S, T) = Σ_{i∈S, j∈T} w_ij,

i.e., W(S, T) is the sum of the weights of edges with one
endpoint in S and one endpoint in T. The quantity W(S, T)
can be considered as measuring the association between the
vertex sets S and T. In the context of cluster analysis, edge
weights measure the similarity between data objects. To
partition the data objects into clusters, we seek a partition of
G(X, Y, W) such that the association between
unmatched vertices is as small as possible. One possibility
is to consider, for a partition Π(A, B), the following quantity:

cut(A, B) = W(A, B^c) + W(A^c, B).   (1)
Intuitively, choosing Π(A, B) to minimize cut(A, B) will give
rise to a partition that minimizes the sum of all the edge
weights between unmatched vertices. In the context of document
clustering, we try to find two document clusters B
and B^c which have few terms in common; the documents
in B mostly use terms in A and those in B^c use terms
in A^c. Unfortunately, choosing a partition based entirely
on cut(A, B) tends to produce unbalanced clusters, i.e., the
sizes of A and/or B or their complements tend to be small.
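In matrix terms, cut(A, B) is just the sum of two off-diagonal blocks of W; a small numpy sketch of equation (1) (our own illustration, with A and B given as index arrays):

    import numpy as np

    def cut(W, A, B):
        """cut(A, B) = W(A, B^c) + W(A^c, B) for a weight matrix W whose
        rows are indexed by X and columns by Y."""
        m, n = W.shape
        Ac = np.setdiff1d(np.arange(m), A)
        Bc = np.setdiff1d(np.arange(n), B)
        return W[np.ix_(A, Bc)].sum() + W[np.ix_(Ac, B)].sum()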
Inspired by the work in [4, 5, 14], we propose the following
normalized variant of the edge cut in (1):

Ncut(A, B) = cut(A, B) / (W(A, B) + cut(A, B)) + cut(A, B) / (W(A^c, B^c) + cut(A, B)).

The intuition behind this criterion is that we not only want
a partition with a small edge cut, but we also want the two
subgraphs formed between the matched vertices to be as
dense as possible. This latter requirement is partially satisfied
by introducing the normalizing denominators in the
above equation. 2 Our bi-clustering problem is now equivalent
to the following optimization problem:

min_{Π(A,B)} Ncut(A, B),

i.e., finding partitions of the vertex sets X and Y that minimize
the normalized cut of the bipartite graph G(X, Y, W).
3. APPROXIMATE SOLUTIONS USING SINGULAR VECTORS
Given a bipartite graph G(X, Y, W) and the associated
partition Π(A, B), let us reorder the vertices of X and Y
so that vertices in A and B are ordered before vertices in A^c
and B^c, respectively. The weight matrix W can then be written
in the block format

W = [ W11  W12 ; W21  W22 ],   (2)

i.e., the rows of W11 correspond to the vertices in the vertex
set A and the columns of W11 correspond to those in
B. Therefore G(A, B, W11) denotes the weighted bipartite
graph corresponding to the vertex sets A and B. For any
matrix H = (h_ij), let s(H) = Σ_{i,j} h_ij,
i.e., s(H) is the sum of all the elements of H. It is easy to
see from the definition of Ncut that

Ncut(A, B) = (s(W12) + s(W21)) / (s(W11) + s(W12) + s(W21)) + (s(W12) + s(W21)) / (s(W22) + s(W12) + s(W21)).
2 A more natural criterion seems to be cut(A, B)/W(A, B) + cut(A, B)/W(A^c, B^c).
However, it can be shown that it leads to an SVD problem
with the same set of left and right singular vectors.
In order to make connections to SVD problems, we first
consider the case when W is symmetric. 3 It is easy to see
that, with W symmetric (denoting Ncut(A, A) by Ncut(A)),
the normalized cut takes an analogous symmetric form.
Let e be the vector with all its elements equal to 1, and let D be
the diagonal matrix such that De = We. Let y be a vector whose
entries take one constant value for the vertices in A and another
constant value for those in A^c. It is easy to verify that cut(A, A^c)
and the normalizing terms of Ncut(A) can then be expressed as
quadratic forms in y, so that Ncut(A) becomes a ratio of two such
quadratic forms. Notice that (D − W)e = 0; hence, for any scalar s,

(se + y)^T (D − W)(se + y) = y^T (D − W) y.

To cast this ratio in the form of a Rayleigh quotient, we need to
find s such that (se + y)^T D e = 0; it follows from the above equation
that the numerator is unchanged by such a shift, and the required
value is s = −(y^T D e)/(e^T D e). Thus

min_A Ncut(A) = min { y^T (D − W) y / (y^T D y) },

where the minimum on the right is taken over the discrete set of
admissible vectors y. If we drop the discrete constraints on y and let the
elements of y take arbitrary continuous values, then the optimal
y can be approximated by the following relaxed continuous
minimization problem,

min_y { y^T (D − W) y / (y^T D y) }.   (5)
Notice that it follows from We = De that

D^{-1/2} W D^{-1/2} (D^{1/2} e) = D^{1/2} e,

and therefore D^{1/2} e is an eigenvector of D^{-1/2} W D^{-1/2} corresponding
to the eigenvalue 1. It is easy to show that all the
eigenvalues of D^{-1/2} W D^{-1/2} have absolute value at most
1 (see the Appendix). Thus the optimal y in (5) can be
computed as y = D^{-1/2} ŷ, where ŷ is the second largest eigenvector
of D^{-1/2} W D^{-1/2}.
3 A different proof for the symmetric case was first derived
in [14]. However, our derivation is simpler and more transparent
and leads naturally to the SVD problems for the
rectangular case.
Now we return to the rectangular case for the weight matrix
W, and let DX and DY be diagonal matrices such that

DX e = W e,   DY e = W^T e.   (6)

Consider a partition Π(A, B), and define vectors x and y whose
entries take one constant value for the vertices in A (respectively B)
and another for those in A^c (respectively B^c).
Let W have the block form as in (2), and consider the augmented
symmetric matrix assembled from the blocks of W and W^T.
If we interchange the second and third block rows and columns
of this matrix, the diagonal blocks corresponding to W11 and
W22 are brought together, and the normalized cut can be written in
a form that resembles the symmetric case. Proceeding as in the
symmetric derivation, it is then easy to see that

min_{Π(A,B)} Ncut(A, B) = min_{x,y} { 1 − 2 x^T W y / (x^T DX x + y^T DY y) },

where x and y range over the discrete sets of admissible vectors.
In [11], the Laplacian of the augmented matrix is used for partitioning a rectangular
matrix in the context of designing load-balanced
matrix-vector multiplication algorithms for parallel computation.
However, the eigenvalue problem for this Laplacian
does not lead to a simpler singular value problem.
Ignoring the discrete constraints on the elements of x and
y, we have the following continuous maximization problem,

max_{x≠0, y≠0} { 2 x^T W y / (x^T DX x + y^T DY y) }.   (8)

Without the constraints x^T DX e = 0 and y^T DY e = 0, the above
problem is equivalent to computing the largest singular triplet
of D_X^{-1/2} W D_Y^{-1/2} (see the Appendix). From (6), we have

D_X^{-1/2} W D_Y^{-1/2} (D_Y^{1/2} e) = D_X^{1/2} e,

and, similarly to the symmetric case, it is easy to show
that all the singular values of D_X^{-1/2} W D_Y^{-1/2} are at most 1.
Therefore, an optimal pair {x, y} for (8) can be computed
as x = D_X^{-1/2} x̂ and y = D_Y^{-1/2} ŷ, where x̂ and ŷ are the second
largest left and right singular vectors of D_X^{-1/2} W D_Y^{-1/2},
respectively (see the Appendix). With the above discussion,
we can now summarize our basic approach for bipartite
graph clustering, incorporating a recursive procedure.

Algorithm. Spectral Recursive Embedding (SRE)
Given a weighted bipartite graph G(X, Y, E) with
its edge weight matrix W:
1. Compute DX and DY and form the scaled weight
matrix Ŵ = D_X^{-1/2} W D_Y^{-1/2}.
2. Compute the second largest left and right singular
vectors of Ŵ, x̂ and ŷ.
3. Find cut points cx and cy for x = D_X^{-1/2} x̂ and
y = D_Y^{-1/2} ŷ, respectively.
4. Form the partitions A = {i : x_i ≥ cx} and A^c = {i : x_i < cx} of the
vertex set X, and B = {j : y_j ≥ cy} and B^c = {j : y_j < cy} of the
vertex set Y.
5. Recursively partition the sub-graphs G(A, B) and
G(A^c, B^c) if necessary.

Two basic strategies can be used for selecting the cut
points cx and cy. The simplest strategy is to set cx = 0
and cy = 0. Another, more computation-intensive approach
is to base the selection on Ncut: check N equally spaced
splitting points of x and y, respectively, and find the cut points
cx and cy with the smallest Ncut [14].
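One level of SRE can be rendered compactly with scipy's sparse SVD; the sketch below is our own illustration, uses the simple sign cut points cx = cy = 0, and assumes a dense nonnegative weight matrix with no empty rows or columns and more than two rows and columns.

    import numpy as np
    from scipy.sparse.linalg import svds

    def sre_step(W):
        """One Spectral Recursive Embedding step: scale W, take the second
        largest singular pair, and split rows/columns by sign."""
        dx = W.sum(axis=1)                     # DX diagonal
        dy = W.sum(axis=0)                     # DY diagonal
        What = W / np.sqrt(np.outer(dx, dy))   # DX^{-1/2} W DY^{-1/2}
        U, s, Vt = svds(What, k=2)             # two largest singular triplets
        second = np.argsort(-s)[1]             # svds does not sort by size
        x = U[:, second] / np.sqrt(dx)         # x = DX^{-1/2} x_hat
        y = Vt[second] / np.sqrt(dy)           # y = DY^{-1/2} y_hat
        A, B = np.where(x >= 0)[0], np.where(y >= 0)[0]
        return A, B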
Computational complexity. The major computational
cost of SRE is Step 2, computing the left and right singular
vectors, which can be obtained either by the power method
or, more robustly, by the Lanczos bidiagonalization process [8,
Chapter 9]. The Lanczos method is an iterative process for computing
partial SVDs in which each iterative step involves
the computation of two matrix-vector multiplications, Ŵu
and Ŵ^T v, for some vectors u and v. The computational cost
of these is roughly proportional to nnz(Ŵ), the number of
nonzero elements of Ŵ. The total computational cost of
SRE is O(c_sre k_svd nnz(Ŵ)), where c_sre is the level of recursion
and k_svd is the number of Lanczos iteration steps.
In general, k_svd depends on the singular value gaps of Ŵ.
Also notice that nnz(Ŵ) = c̄n, where c̄ is the average
number of terms per document and n is the total number
of documents. Therefore, the total cost of SRE is in general
linear in the number of documents to be clustered.
4. CONNECTIONS TO CORRESPONDENCE ANALYSIS
In its basic form, correspondence analysis is applied to
an m-by-n two-way table of counts W [2, 10, 16]. Let
w = s(W) be the sum of all the elements of W, and let DX and DY
be the diagonal matrices defined in section 3. Correspondence
analysis seeks to compute the largest singular triplets of the
matrix Z = (z_ij) with

z_ij = (w_ij/w − (DX(i, i)/w)(DY(j, j)/w)) / sqrt((DX(i, i)/w)(DY(j, j)/w)).

The matrix Z can be considered as the correlation matrix
of two group indicator matrices for the original W [16]. We
now show that the SVD of Z is closely related to the SVD
of Ŵ = D_X^{-1/2} W D_Y^{-1/2}. In fact, in section 3 we showed
that D_X^{1/2} e and D_Y^{1/2} e are the left and right singular vectors
of Ŵ corresponding to the singular value one, and it is also
easy to show that all the singular values of Ŵ are at most
1. Therefore, the rest of the singular values and singular
vectors of Ŵ can be found by computing the SVD of the
following rank-one modification of Ŵ:

Ŵ − (D_X^{1/2} e)(D_Y^{1/2} e)^T / (‖D_X^{1/2} e‖_2 ‖D_Y^{1/2} e‖_2),

which has (i, j) element

w_ij / sqrt(DX(i, i) DY(j, j)) − sqrt(DX(i, i) DY(j, j)) / w,

and is a constant multiple of the (i, j) element of Z. Therefore, normalized-cut based cluster analysis and correspondence
analysis arrive at the same SVD problems even though
they start with completely different principles. It is worthwhile
to explore more deeply the interplay between these two
different points of view and approaches, for example, using
the statistical analysis of correspondence analysis to provide
a better strategy for selecting cut points and estimating the
number of clusters.
5. PARTITIONS WITH OVERLAPS
So far in our discussion, we have only looked at hard clus-
tering, i.e., a data object belongs to one and only one cluster.
In many situations, especially when there is much overlap
among the clusters, it is more advantageous to allow data
objects to belong to different clusters. For example, in document
clustering, certain groups of words can be shared by
two clusters. Is it possible to model this overlap using our
bipartite graph model and also find efficient approximate
solutions? The answer seems to be yes, but our results at
this point are rather preliminary and we will only illustrate
the possibilities. Our basic idea is that when computing
Ncut(A, B), we should disregard the contributions of the
set of vertices that lies in the overlap. More specifically, let
X = A ∪ OX ∪ Ā and Y = B ∪ OY ∪ B̄, where OX denotes the
Figure 1: Sparsity patterns of a test matrix before clustering (left) and after clustering (right)
overlap between the vertex subsets A ∪ OX and Ā ∪ OX, and
OY the overlap between B ∪ OY and B̄ ∪ OY.
However, we can make Ncut(A, B; Ā, B̄) smaller simply by
putting more vertices in the overlap. Therefore, we need
to balance these two competing quantities, the size of the
overlap and the modified normalized cut, by minimizing

Ncut(A, B; Ā, B̄) + α(|OX| + |OY|),

where α is a regularization parameter. How to find an efficient
method for computing the (approximate) optimal solution
to the above minimization problem still needs to be
investigated. We close this section by presenting an illustrative
example showing that, in some situations, the singular
vectors already automatically separate the overlap sets
while giving the coordinates for carrying out clustering.
Example 1. We construct a sparse m-by-n rectangular
matrix W with the block form (2)
so that W11 and W22 are relatively denser than W12 and
W21. We also add some dense rows and columns to the matrix
W to represent row and column overlaps. The left panel
of Figure 1 shows the sparsity pattern of W̃, a matrix obtained
by randomly permuting the rows and columns of W.
We then compute the second largest left and right singular
vectors of D_X^{-1/2} W̃ D_Y^{-1/2}, say x̂ and ŷ, and then sort the rows
and columns of W̃ according to the values of the entries in
D_X^{-1/2} x̂ and D_Y^{-1/2} ŷ, respectively. The sparsity pattern of
this permuted W̃ is shown on the right panel of Figure 1. As
can be seen, the singular vectors not only do the job of
clustering but at the same time also concentrate the dense
rows and columns at the boundary of the two clusters.
6. EXPERIMENTS
In this section we present our experimental results on clustering
a dataset of newsgroup articles submitted to 20 news-
groups. 5 This dataset contains about 20,000 articles (email
messages) evenly divided among the 20 newsgroups. We list
the names of the newsgroups together with the associated
group labels (the labels will be used in the sequel to identify
the newsgroups).
NG1: alt.atheism
NG2: comp.graphics
NG3: comp.os.ms-windows.misc
NG4: comp.sys.ibm.pc.hardware
NG5: comp.sys.mac.hardware
NG6: comp.windows.x
NG7: misc.forsale
NG8: rec.autos
NG9: rec.motorcycles
NG10: rec.sport.baseball
NG11: rec.sport.hockey
NG12: sci.crypt
NG13: sci.electronics
NG14: sci.med
NG15: sci.space
NG16: soc.religion.christian
NG17: talk.politics.guns
NG18: talk.politics.mideast
NG19: talk.politics.misc
NG20: talk.religion.misc
We used the bow toolkit to construct the term-document
matrix for this dataset, specifically we use the tokenization
option so that the UseNet headers are stripped, and we also
applied stemming [13]. Some of the newsgroups have large
overlaps, for example, the five newsgroups comp.* about
computers. In fact several articles are posted to multiple
newsgroups. Before we apply clustering algorithms to
the dataset, several preprocessing steps need to be consid-
ered. Two standard steps are weighting and feature selec-
tion. For weighting, we considered a variant of the tf.idf weighting
scheme, tf · log2(n/df), where tf is the term frequency
and df is the document frequency, as well as several other variations
listed in [1]. For feature selection, we looked at three
approaches: 1) deleting terms that occur fewer than a certain
number of times in the dataset; 2) deleting terms that occur
in fewer than a certain number of documents in the dataset;
3) selecting terms according to the mutual information of terms
and documents, defined as

I(y) = Σ_x p(x, y) log( p(x, y) / (p(x)p(y)) ),
where y represents a term and x a document [15]. In general
we found that the traditional tf.idf-based weighting
schemes do not improve performance for SRE. One possible
explanation comes from the connection with correspondence
analysis: the raw frequencies are samples of co-occurrence
probabilities, and the pre- and post-multiplication by D_X^{-1/2}
and D_Y^{-1/2} in D_X^{-1/2} W D_Y^{-1/2} automatically takes
weighting into account. We did, however, find that
trimming the raw frequencies can sometimes improve performance
for SRE, especially for the anomalous cases where
some words can occur in certain documents an unusual number
of times, skewing the clustering process.
5 The newsgroup dataset together with the bow toolkit for processing it can be downloaded from http://www.cs.cmu.edu/afs/cs/project/theo-11/www/naive-bayes.html.
Table 1: Comparison of spectral embedding (SRE), PDDP, and K-means (NG1/NG2)

Mixture  SRE            PDDP                      K-means
50/100   90.57 ± 3.11%  86.11 ± 3.94% (86, 5, 9)

Table 2: Comparison of spectral embedding (SRE), PDDP, and K-means (NG10/NG11)

Mixture  SRE  PDDP  K-means
For the purpose of comparison, we consider two other clustering
methods: 1) the K-means method [9]; and 2) the Principal Direction
Divisive Partitioning (PDDP) method [3]. The K-means method
is a widely used cluster analysis tool. The variant we used
employs the Euclidean distance when comparing the dissimilarity
between two documents. When applying K-means,
we normalize the length of each document so that it has
Euclidean length one. In essence, we use the cosine of the
angle between two document vectors when measuring their
similarity. We have also tried K-means without document
length normalization; the results are far worse, and therefore
we will not report the corresponding results. Since the K-means
method is an iterative method, we need to specify a stopping
criterion. For the variant we used, we compare the centroids
between two consecutive iterations and stop when the difference
is smaller than a pre-defined tolerance.
PDDP is another clustering method that utilizes singular
vectors. It is based on the idea of principal component
analysis and has been shown to outperform several standard
clustering methods such as the hierarchical agglomerative algorithm
[3]. First, each document is considered as a multivariate
data point. The set of documents is normalized to have
unit Euclidean length and then centered, i.e., let W be the
term-document matrix and w be the average of the columns
of W. Compute the largest singular value triplet {u, σ, v} of
W − w e^T, and split the set of documents based on their
values of v_i: the simple scheme is to let
those with positive v_i go into one cluster and those with
nonpositive v_i into another cluster. Then the whole process
is repeated on the term-document matrices of the two
clusters, respectively. Although both our clustering method
SRE and PDDP make use of the singular vectors of some
versions of the term-document matrices, they are derived
from fundamentally different principles. PDDP is a feature-based
clustering method, projecting all the data points onto
the one-dimensional subspace spanned by the first principal
axis; SRE is a similarity-based clustering method, in which two co-occurring
variables (terms and documents in the context of
document clustering) are simultaneously clustered. Unlike
SRE, PDDP does not have a well-defined objective function
for minimization. It only partitions the columns of the term-document
matrices, while SRE partitions both its rows and
columns. This has a significant impact on the computational
costs. PDDP, however, has the advantage that it can
be applied to datasets with both positive and negative values,
while SRE can only be applied to datasets with nonnegative
data values.
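For comparison, one PDDP split as described above fits in a few lines; this is our own sketch, with documents as the columns of a dense numpy array W and the centered matrix taken to be W − w e^T.

    import numpy as np
    from scipy.sparse.linalg import svds

    def pddp_split(W):
        """One PDDP split: normalize the columns of W to unit length,
        center them, and split by the sign of the leading right singular
        vector of the centered matrix."""
        A = W / np.linalg.norm(W, axis=0)     # unit-length documents
        w = A.mean(axis=1, keepdims=True)     # column centroid
        U, s, Vt = svds(A - w, k=1)           # largest singular triplet
        v = Vt[0]
        return np.where(v > 0)[0], np.where(v <= 0)[0]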
Example 2. In this example, we examine binary clustering
with uneven clusters. We consider three pairs of newsgroups:
newsgroups 1 and 2 are well-separated, 10 and 11
are less well-separated, and 18 and 19 have a lot of overlap.
We used document frequency as the feature selection criterion
and deleted words that occur in fewer than 5 documents
in each dataset we used. For both K-means and PDDP we
apply tf.idf weighting together with document length normalization
so that each document vector has Euclidean
norm one. For SRE we trim the raw frequencies so that the
maximum is 10. For each newsgroup pair, we select four
types of mixtures of articles from the two newsgroups: x/y indicates
that x articles are from the first group and y articles
are from the second group. The results are listed in Table 1
for groups 1 and 2, Table 2 for groups 10 and 11, and Table 3
for groups 18 and 19. We list the means and standard deviations
for 100 random samples. For PDDP and K-means we
also include a triplet of numbers which indicates how many
of the 100 samples SRE performs better (the first number),
the same (the second number) and worse (the third num-
ber) than the corresponding methods (PDDP or K-means).
We should emphasize that the K-means method can only find
local minima, and the results depend on initial values and
stopping criteria. This is also reflected by the large standard
deviations associated with the K-means method. From the three
tests we can conclude that both SRE and PDDP outperform
the K-means method. The performance of SRE and PDDP is
similar in balanced mixtures, but SRE is superior to PDDP
in skewed mixtures.
Example 3. In this example, we consider an easy multi-cluster
case: we examine the five newsgroups {2, 9, 10, 15, 18}, which
were also considered in [15]. We sample 100 articles from each
newsgroup and use mutual information for feature selection.
We use the minimum normalized cut as the cut point for each level
of the recursion. For one sample, Table 4 gives the confusion
matrix. The accuracy for this sample is 88.2%. We also
tested two other samples, with accuracies 85.4% and 81.2%,
which compare favorably with the accuracies of 59%, 58%, and 53%
reported for three samples in [15]. In
the following we also list the top few words for each cluster,
computed by mutual information.
Table 3: Comparison of spectral embedding (SRE), PDDP, and K-means (NG18/NG19)

Mixture  SRE  PDDP  K-means

Table 4: Confusion matrix for newsgroups {2, 9, 10, 15, 18} (rows: clusters 1-5; columns: mideast, graphics, space, baseball, motorcycles)
cluster 1: armenian israel arab palestinian peopl jew isra iran muslim kill turkis war greek iraqi adl call
cluster 2: imag file bit green gif mail graphic colour group version comput jpeg blue xv ftp ac uk list
cluster 3: univers space nasa theori system mission henri moon cost sky launch orbit shuttl physic work
cluster 4: clutch year game gant player team hirschbeck basebal won hi lost ball defens base run win
cluster 5: bike dog lock ride don wave drive black articl write apr motorcycl ca turn dod insur
7. CONCLUSIONS AND FUTURE WORK
In this paper, we formulate a class of clustering problems
as bipartite graph partitioning problems, and we show
that efficient optimal solutions can be found by computing
the partial singular value decomposition of some scaled
edge weight matrices. However, we have also shown that
there still remain many challenging problems. One area that
needs further investigation is the selection of cut points and
the number of clusters using multiple left and right singular vectors,
and the possibility of adding local refinements to improve
clustering quality. 6 Another area is to find efficient
algorithms for handling overlapping clusters. Finally, the
treatment of missing data under our bipartite graph model,
especially when we apply our spectral clustering methods to
the problem of data analysis for recommender systems, also
deserves further investigation.
6 It will be difficult to use local refinement for PDDP because
it does not have a global objective function for minimization.
8. ACKNOWLEDGMENTS
The work of Hongyuan Zha and Xiaofeng He was supported
in part by NSF grant CCR-9901986. The work of
Xiaofeng He, Chris Ding and Horst Simon was supported
in part by Department of Energy through an LBL LDRD
fund.
9. REFERENCES
--R
Finding Out About: A Cognitive Perspective on Search Engine Technology and the WWW.
Correspondence analysis handbook.
Principal Direction Divisive Partitioning.
Spectral Graph Theory.
An improved spectral bisection algorithm and its application to dynamic load balancing.
Analysis.
Algebraic connectivity of graphs.
Matrix Computations, Second Edition.
Correspondence analysis in practice.
Partitioning sparse rectangular and structurally nonsymmetric matrices for parallel computation.
Probabilistic Latent Semantic Indexing.
A toolkit for statistical language modeling
Normalized cuts and image segmentation.
Modern Applied Statistics with S-plus
--TR
An improved spectral bisection algorithm and its application to dynamic load balancing
Matrix computations (3rd ed.)
Probabilistic latent semantic indexing
Document clustering using word clusters via the information bottleneck method
| singular value decomposition;graph partitioning;correspondence analysis;document clustering;spectral relaxation;bipartite graph
502609 | Using navigation data to improve IR functions in the context of web search. | As part of the process of delivering content, devices like proxies and gateways log valuable information about the activities and navigation patterns of users on the Web. In this study, we consider how this navigation data can be used to improve Web search. A query posted to a search engine together with the set of pages accessed during a search task is known as a search session. We develop a mixture model for the observed set of search sessions, and propose variants of the classical EM algorithm for training. The model itself yields a type of navigation-based query clustering. By implicitly borrowing strength between related queries, the mixture formulation allows us to identify the "highly relevant" URLs for each query cluster. Next, we explore methods for incorporating existing labeled data (the Yahoo! directory, for example) to speed convergence and help resolve low-traffic clusters. Finally, the mixture formulation also provides for a simple, hierarchical display of search results based on the query clusters. The effectiveness of our approach is evaluated using proxy access logs for the outgoing Lucent proxy. | INTRODUCTION
Searching for information on the Web can be tedious. Traditional
search engines like Lycos and Google now routinely
return tens of thousands of resources per query. Navigating
these lists can be time consuming and frustrating. In this
paper, we propose narrowing search results by observing the
browsing patterns of users during search tasks. The data to
support our work comes from devices in the network that
log requests as they serve content. We will focus primarily
on proxy access logs, but it is easy to see how these ideas
will carry over to the kinds of data collected by an Internet
service provider (ISP). From these logs, we first extract the
search path that a user follows. Then, we apply a statistical
model to combine related searches to help provide guidance
to people beginning searches on related topics.
We capture the interesting part of the search path in a
search session, which is a user's query together with the
URLs of the Web pages they visit in response to their query.
Implicit in our approach is a form of query clustering that
combines similar search terms on the basis of the Web pages
visited during a search session. These clusters are then used
to improve the display of search engine results. In this paper,
we illustrate the technique by creating a directory consisting
of two levels; each query is related to one or more groups
of URLs. When a user submits a query to a search engine,
we display the most relevant URLs from the most relevant
directory groups. Here "relevance" is based on the data
gathered from previous searches.
While clustering has been proposed previously in the IR
literature, our use of passively collected data to build search
sessions is new. With data of this kind we have access to
orders of magnitude more searching activity than is possible
with specialty search engines or opt-in systems. In addition
to exploring this new source of search session data, we
also propose techniques for leveraging existing (manually de-
rived) content hierarchies or labeled URLs to improve the
relevance of identified resources. Finally, to make this work
practical in a real proxy, we must consider a number of new
implementation issues, including online versions of our clustering
algorithms.
The balance of the paper is organized as follows. In Section
2 we discuss our search session extractor. In Section 3
we describe a particular mixture model for query cluster-
ing, and illustrate some of the groups it finds. In Section 4
we examine the kind of improvement our recommendations
represent over a standard search engine. Related work is
presented in Section 5, and we conclude with a discussion of
future work in Section 6.
2. SEARCH SESSION EXTRACTION
To formalize the extraction process, we introduce our basic
data element, the "search session." A search session is
the collection of words a user submits to a search engine
(also known as a "query string") together with the URLs
of the Web pages they visit in response to their request.
As users browse the Web, a proxy server will record every
URL they access, including HTML and PDF documents,
embedded GIF and JPEG images, and Java class files. Because
search engines deal primarily in Web pages, we filter
xxx.xxx.xxx.xxx 02/Dec/2000:01:48:55 "GET http://www.google.com/search?q=infocom+2001"
xxx.xxx.xxx.xxx 02/Dec/2000:01:49:03 "GET http://www.ieee-infocom.org/2001/"
xxx.xxx.xxx.xxx 02/Dec/2000:01:49:27 "GET http://www.ieee-infocom.org/2001/program.html"
Figure 1: A subset of the fields available in the proxy server log corresponding to a search session. The fields are: the IP address of the client, a timestamp recording when the request was handled, the HTTP request method, and the URL.
user requests by file type and keep only HTML documents.
Therefore, from this point on we will use the terms "Web
page," "HTML document" and "URL" synonymously. In
Figure
1 we present three lines of a proxy log that represent
a single search session. (The IP address of the user's
computer has been masked.) The query "infocom 2001" is
extracted from the URL in the first line of this figure using
a set of manually-derived rules for www.google.com. We
maintain a list of such rules for each of the most popular
search engines, allowing us to automatically tag and parse
log lines corresponding to search queries. Every time a user
posts a query to one of the search engines in our list, we
initiate a new search session.
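The rule list itself is not reproduced in the paper; the Python sketch below shows one plausible shape for it, where each rule simply names the CGI parameter that carries the query for a given engine host. The RULES contents and the function name are assumptions for illustration, not the authors' implementation.

    import re
    from urllib.parse import urlparse, parse_qs

    # Hypothetical per-engine rules: host -> name of the CGI parameter
    # that carries the query string (the paper's actual rule set is
    # manually derived and not published).
    RULES = {
        "www.google.com": "q",
        "search.yahoo.com": "p",
        "www.altavista.com": "q",
    }

    def extract_query(url):
        # Return the search terms if `url` is a request to a known
        # search engine, otherwise None.
        parts = urlparse(url)
        param = RULES.get(parts.netloc)
        if param is None:
            return None
        terms = parse_qs(parts.query).get(param)
        return terms[0] if terms else None

    # A request matching a rule initiates a new search session.
    print(extract_query("http://www.google.com/search?q=infocom+2001"))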
Once a new session has been initiated, we then examine
the link structure of the URLs subsequently requested by
the user. Those pages that can be traced back to the list of
search results (either directly or indirectly through one or
more links) are included in the search session. We determine
that the session has ended if either the user makes another
request to a search engine, or if an hour has passed since
the last addition to the session. 1 In Figure 2 we list the
search session extracted from the log fragment in Figure 1.
We refer to a search session as completed if it contains at
least one URL. In the remaining cases, we assume the user
examined the search engine results and decided that none
of the Web pages were relevant.
Search session
session id 1001
user id 873
query infocom+2001 1914.59305
urls www.ieee./2001/ 1914.59340 1
Figure 2: Sample search session. The fields include
a session ID, user ID (based on the IP address of
the client), the query and the time it was submitted
(a truncated Julian date), and the requested URLs,
again with timestamps. The last column records
whether or not the URL was linked directly to the
search results page (1 or 0, respectively).
We have implemented a search session extractor that takes
as its input the access log file from a proxy server. By using
a historical log, however, we are forced to "replay" part
of a user's actions to reconstruct the path they followed.
1 Unfortunately, this process is complicated by the fact that
some users maintain several browser windows and launch
multiple concurrent (but related) search sessions. The simplified
algorithm presented here is sufficient for the purposes
of the present paper, but the precise details of our extraction
algorithm are given in [14].
This means we must re-retrieve pages previously visited and
hence the extractor process is usually scheduled when the
server is not busy handling primary requests. This overhead
can be reduced by examining pages as they are being
served by a proxy. The link structure as well as various
other aspects of the requested pages can be extracted by a
background process having access to the proxy's cache. We
are currently building a system of this kind and examining
how data collection impacts proxy performance.
We consider data collected by a proxy server handling all
of the outgoing HTTP requests made by a portion of the
Lucent employees based in Murray Hill, New Jersey. In the
log maintained by this proxy, we find an essentially complete
record of the sites visited by a population of 2,000
researchers, developers and support staff. Between January
and May of 2001, this server logged over 42 million requests.
In Table 1 we list the ten most frequently accessed search engines
by the Lucent employees during this period. 2 We also
recorded 13,657 search sessions, of which 44% were com-
pleted. Roughly 60% of the completed sessions consisted of
only 1 or 2 URLs, with 20% having length 3 or 4. In gen-
eral, the longer the search sessions the larger the percentage
of pages that were not linked directly to the results page.
For sessions of length 5 or more (comprising 20% of the total
number of search sessions), 55% of the pages were not
among the search results seen by the user. This fraction is smaller for sessions of length 3 or 4, and only 12% for those with only 1 or 2 URLs.
Table 1: A list of frequently accessed search engines compiled from the Lucent proxy logs for January 1, 2001 through May 23, 2001.

% of queries   Search engine
53.0           www.google.com
               search.yahoo.com
11.2           www.altavista.com
4.4            hotbot.lycos.com
4.2            www.lycos.com
2.7            search.excite.com
0.3            search.msn.com
0.3            www.northernlight.com
In terms of time, of the queries that were exactly repeated
in our session data set, about 50% were issued within a day
of each other. These tend to be made by the same user, the
only exceptions being queries related to paper deadlines for
2 It is worth noting that www.google.com is much more
popular among this research community than one would
expect from standard ratings offered by Media Metrix
or NetRatings [15] which rank yahoo.com, msn.com and
search.aol.com as the clear market leaders.
Query: bridal+dresses bridesmaid+dress flower+girl+dresses
URLs: www.priscillaofboston.com www.martasbridal.com www.bestbuybridal.com
www.bestbuybridal.com weddingworld.net www.martasbridal.com
weddingworld.net
weddingworld.net/htm/.
www.ldsweddings.com/bridal dress.
www.usedweddingdresses.com
Figure 3: Three search sessions initiated by three different users. The query strings are all related to wedding dresses of some kind, and the URLs visited by the users are similar.
major computer science conferences and searches for online
greeting card services near major holidays. On larger time
scales, 80% of the repeated queries were posted by different
users.
So far, we have focused on proxy logs as the primary
source of data for constructing search sessions. Many search
engines collect abbreviated navigation data to help improve
their service [6]. So-called "click-through" statistics record
the pages that users select from the list of search results. By
design, the redirection mechanism used to collect these data
cannot capture any information about a user's activities beyond
their selections from the search results list. Given the
fact that long search sessions consist mainly of pages not
returned by the search engine, click-through data does not
have the same potential to uncover relevant pages. To see
this, we applied the heuristic relevance measures introduced
at the end of this paper, and found that the "desired pages"
in over half of the long sessions were not linked directly to
the list of search results examined by a user. Even if a page
is included among the search results, click-through statistics
cannot identify requests for the page that are referred by
other URLs linked either directly or indirectly to the search
results list, missing valuable information about the relevance
of sources in the search engine database. For these reasons,
we see our approach as potentially much more powerful than
traditional schemes that rely on tracking click-throughs.
3. CLUSTERING QUERIES
Preliminary data analysis on our collection of search sessions
led to the simple observation that semantically related
query terms often draw users to the same sets of URLs. In
Figure 3 we present three search sessions (each initiated by
different users) that all relate to wedding dresses of different
kinds, and all produce requests for many of the same Web
pages. In this section, we consider improving search results
by first forming groups of queries based on the similarity
of their associated search sessions. In turn, by combining
search sessions with queries in a given group, we can better
identify relevant URLs.
In the IR literature, there are several examples in which
the pattern of retrieved items is used to form so-called "query
clusters," see [16, 17]. In our context, these schemes would
involve the queries submitted by users together with the
top L relevant pages returned by a given search engine. Our
technique is different in that we consider only those pages
that were actually selected by a user during a search task.
This has the effect of reducing spurious associations between
queries. In addition, we include pages that are not listed by
the search engine at all, effectively making up for deficiencies
in spidering. By including user choices, we have developed
an improved search scheme that is similar in spirit to collaborative
filtering. When this kind of approach has been
discussed previously in the IR literature, it has typically
required the user to report details of their search and manually
tag pages according to their relevance, for example, [9].
This level of involvement is unrealistic in the context of Web
searching, and hence the impact of these schemes is somewhat
limited. We avoid such problems by basing our work
on passively collected proxy logs.
Our initial intuition for navigation-based query clustering
came not from the IR literature, but rather from an often-cited
motivation for the popular PageRank algorithm [5].
Broadly speaking, PageRank is based on the amount of time
a "random surfer" would spend on each page. Random
walks on the link structure of the Web are also discussed
in [11]. We felt that actual user data would be a better
measure of importance than this random walk idea.
3.1 The mixture model
In this section, we consider models for both forming query
groups as well as determining the most relevant URLs for
each group. We construct a statistical mixture model to
describe the search session data. This model has as its parameters
the probability that a given query belongs to a
particular group as well as a set of group-specific relevance
weights assigned to collections of URLs. The algorithms we
present all attempt to fit the same model to the data. Our
first approach makes use of the standard EM (Expectation-
Maximization) algorithm to find maximum likelihood estimates
of the model parameters. At the end of the paper, we
discuss various alternatives that work in real time and can
be incorporated in a proxy implementation. To help guide
the cluster process, we also introduce labeled data from an
existing topic hierarchy that contains over 1.2 million Web
sites. We present an ad hoc algorithm for dealing with labeled
pages, and at the end of the paper discuss a more
formal statistical approach that uses the content hierarchy
to specify a prior distribution for the model's parameters.
Let q denote queries and u URLs. Each query q_i is associated with one or more groups, where the subscript i runs from 1 to the number of queries seen in the training data, I. For the moment, we assume that the number of groups, K, is fixed. The group relation is captured by the triple (q_i, k, w_ik), where k denotes a group ID and w_ik is the probability that q_i belongs to group k. Then, for each group, we identify a number of relevant URLs. This is described by the triple (k, u_j, λ_kj), where u_j is a URL and λ_kj is a weight that determines how likely it is that u_j is associated with the queries belonging to group k. We let the index j range from 1 to the number of URLs seen in the training data, J (which might include URLs from a content hierarchy). A query-group triple thus has the form (q_i, k, w_ik), while the associated group-relevance triples take the form (k, u_j, λ_kj).
As mentioned above, sets of such triples constitute the parameters
in a statistical model for the search sessions. These
triples can be used by a search engine to improve page rank-
ings. When a user initiates a new search, we present them
with a display of query groups most related to their search
terms. For each such group, we select the most relevant
URLs arranged in a display like that in Figure 4. Here,
we arrange the query groups and URLs by weight, with the
most relevant appearing at the top. In this example, we
have used the data from a content hierarchy to name the
separate query groups (e.g., Medications in Figure 4). At
the end of the paper, we discuss model extensions that will
allow purely automated group naming.
Finally, we see our clustering as an addition to a standard
page of search results. By presenting the user with
a small, organized set of URLs from our system together
with a spider-based search list, we allow new resources to
be added to our system in a natural way. In fact, estimates
of confidence can be used to suppress our display entirely,
forcing the user to help train the system when we have insufficient
navigation data. Our clustering can also be used
to modify the rankings of results from a traditional search
engine; to guarantee that new resources will be visible to the
user, the modified rankings can be displayed only a fraction
of the time.
Figure 4: Example of cluster results for a query of 'hypertension'.
A mixture model is employed to form both the query
groups as well as the relevance weights. Assume that in our
dataset we have I queries which we would like to assign to
K groups, and, in turn, determine group-specific relevance
weights for each of J URLs. For the moment, we simplify
our data structure and let n ij denote the number of times
the URL u j was selected by some user during a search session
under the query q i . Let n the
vector of counts associated with query q i . We model this
vector as coming from a mixture of the form
where the terms ff k sum to one and denote the proportion of
the population coming from the kth component. Also associated
with the kth component in the mixture is a vector of
parameters From a sampling perspec-
tive, one can consider the entire dataset
as being generated in two steps: first, we pick one of the K
groups k according to the probabilities ff k and then use the
associated distribution p(\Deltaj- k ) to generate the vector n i .
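As an illustration, this short sketch draws one count vector by exactly that two-step recipe; the values of K, J, α and λ are made up for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = np.array([0.5, 0.3, 0.2])          # illustrative mixing weights (K = 3)
    lam = rng.uniform(0.1, 2.0, size=(3, 5))   # illustrative Poisson means (J = 5 URLs)

    def sample_session_counts():
        # Step 1: pick a group k with probability alpha_k.
        k = rng.choice(len(alpha), p=alpha)
        # Step 2: draw each URL count independently from Poisson(lam_kj).
        return rng.poisson(lam[k])

    print(sample_session_counts())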
We now consider the specification of each component in
the mixture (1). We assume that in the kth component the
data come from a Poisson distribution with mean λ_kj, where the counts for each different URL u_j are independent. Then, setting λ_k = (λ_k1, ..., λ_kJ), the likelihood of the kth component associated with a vector of counts n_i is given by

    p(n_i \mid \lambda_k) = \prod_{j=1}^{J} \frac{\lambda_{kj}^{n_{ij}} \, e^{-\lambda_{kj}}}{n_{ij}!}.    (2)
To fit a model of this type, we introduce a set of unobserved
(or missing) indicator variables γ_ik, where γ_ik = 1 if n_i comes from group k, and zero otherwise. Then, the so-called complete data likelihood for both the set of counts n_i and the indicator variables γ_ik can be expressed as

    \prod_{i=1}^{I} \prod_{k=1}^{K} \left[ \alpha_k \prod_{j=1}^{J} \frac{\lambda_{kj}^{n_{ij}} \, e^{-\lambda_{kj}}}{n_{ij}!} \right]^{\gamma_{ik}}.    (3)

We refer to the parameters λ_kj as relevance weights, and use the probability that γ_ik = 1 as the kth group weight for query q_i (the w_ik mentioned at the beginning of this section).
3.2 Basic EM algorithm
The Expectation-Maximization (EM) algorithm is a convenient
statistical tool for finding maximum likelihood estimates
of the parameters in a mixture model [8]. The EM algorithm
alternates between two steps; an E-step in which we
compute the expectation of the complete data log-likelihood
conditional on the observed data and the current parameter
estimates, and an M-step in which the parameters maximizing
the expected log-likelihood from the E-step are found.
In our context, the E-step consists of calculating the conditional
expectation of the indicator variables γ_ik, which we write as

    \hat{\gamma}_{ik} = \frac{\hat{\alpha}_k \, p(n_i \mid \hat{\lambda}_k)}{\sum_{l} \hat{\alpha}_l \, p(n_i \mid \hat{\lambda}_l)},

where p(· | λ̂_k) is given in (2). In this expression, the quantities α̂_k and λ̂_k denote our current parameter estimates. Note that γ̂_ik is an estimate of the probability that query q_i belongs to group k, and will be taken to be our query group weights. Then, for the M-step, we substitute γ̂_ik into (3), and maximize with respect to the parameters α_k and λ_k. In this case, a closed form expression is available, giving
science:math:statistics:conferences www.beeri.org.il/srtl/
computers:computer science:conferences www.computer.org/conferen/conf.htm
computers:computer science:conferences www.acm.org/sigmod/
computers:computer science:conferences www.acm.org/sigmm/Events/./sigmm conferences.html
computers:computer science:conferences www.acm.org/sigkdd/events.html
Figure 5: A subset of the fields available in the DMOZ data. The fields are directory label and URL.
us the updates

    \hat{\alpha}_k = \frac{1}{I} \sum_{i=1}^{I} \hat{\gamma}_{ik}, \qquad \hat{\lambda}_{kj} = \frac{\sum_{i=1}^{I} \hat{\gamma}_{ik} \, n_{ij}}{\sum_{i=1}^{I} \hat{\gamma}_{ik}}.
Clearly, these simple updates make the EM algorithm a convenient
tool for determining query group weights and relevance
weights. Unfortunately, the convergence of this algorithm
can be slow and it is guaranteed only to converge to
a local maximum. To obtain a good solution, we start the
EM process from several random initial conditions and take
the best of the converged fits.
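For concreteness, the following NumPy sketch implements this EM loop for the Poisson mixture, assuming the counts n_ij are given as a dense I x J matrix; the convergence test and the multiple random restarts described above are left out, and the log n_ij! term is dropped in the E-step since it is constant in k.

    import numpy as np

    def em_poisson_mixture(n, K, iters=100, seed=0):
        # Fit the Poisson mixture of Section 3.1 to an I x J count matrix
        # `n` (queries x URLs); returns mixing weights alpha (K,), relevance
        # weights lam (K, J) and responsibilities gamma (I, K).
        rng = np.random.default_rng(seed)
        I, J = n.shape
        alpha = np.full(K, 1.0 / K)
        lam = rng.uniform(0.1, 1.0, size=(K, J))    # one random restart point
        for _ in range(iters):
            # E-step: log p(n_i | lam_k) up to terms constant in k.
            logp = n @ np.log(lam).T - lam.sum(axis=1)      # shape (I, K)
            logp += np.log(alpha + 1e-12)
            logp -= logp.max(axis=1, keepdims=True)          # stabilize
            gamma = np.exp(logp)
            gamma /= gamma.sum(axis=1, keepdims=True)
            # M-step: the closed-form updates above.
            alpha = gamma.mean(axis=0)
            lam = (gamma.T @ n + 1e-9) / (gamma.sum(axis=0)[:, None] + 1e-9)
        return alpha, lam, gamma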
In Figure 6 we present an example of three of the groups
found by this standard EM approach. For each group we
display those query terms with the highest weights. It is
arguable that the last group is fairly loose, combining different
countries in Africa. From the standpoint of searches
typically conducted by the Lucent employees served by our
proxy, however, this degree of resolution is not surprising.
With proxy logs from an ISP, we would have sufficient sample
size to further subdivide this topic area.
3.3 Approximate algorithm with labeled data
The query groups formed by the mixture model introduced
in Section 3.2 allow us to borrow strength from search
sessions initiated with different, but semantically related,
query strings. The mixture approach, however, is highly unstructured
in the sense that we only incorporate user data to
learn the groups and relevance weights. In this section, we
consider a simple method for incorporating existing information
about related URLs from, say, a directory like DMOZ
(www.dmoz.com) or Yahoo!. Essentially, we use the directory
labels obtained from these sources to seed our query groups.
In Figure 5 we present a subset of the data available from
the DMOZ hierarchy.
We assume that the data in such a structure can be represented
as a set of pairs (l, u_lj), where l indexes groups of URLs and u_lj is a URL in the lth group (here j runs from 1 to J_l, the number of URLs in group l). The weights associated with these data, λ_lj, are not specified in the directory, so we assume they have a value of α. In Figure 7, we present a
simple algorithm to establish mappings between queries and
nj+transit bridal+dresses kenya
njtransit bridesmaid+dress tanzania
nj+train+warren flower+girl+dresses africa
new+jersey+train
Figure 6: Three query groups found by fitting the
mixture model via the standard EM algorithm. In
each case, only the top ranking queries per group
are displayed.
URLs when either the query or the URL has been seen in either
the labeled data or the sessions that have already been
processed. The remaining sessions are processed in a batch
using the basic EM algorithm in Section 3.2. The algorithm
can be tuned with a threshold value T (0 ≤ T ≤ 1) to force
more of the URLs in the session to exist in a previously
created group.
The approximate algorithm has the advantage of incorporating
labeled data, but has the disadvantage of slowly
adding to the set of clusters when a new topic is found in
the data. At the end of the paper, we describe a more formal
statistical approach to using content hierarchies that avoids
this problem.
4. EXPERIMENTAL RESULTS
To evaluate our methods, we used Lucent proxy data and
computed search session lengths with and without query
clustering. We began by selecting a "desired URL" from
each search session. Since the users did not provide relevance
feedback on the pages they viewed, we developed a
simple heuristic for making this choice. In our first set of
experiments, we took the desired URL to be the last URL
that the user visited before moving on to a new task. Since
users may continue to browse after they have found a relevant
page, this simple choice is not correct for all of the
search sessions. In subsequent experiments, we defined the
second-to-last URL to be the desired URL, and so on. We
found that the results of these experiments were all very sim-
ilar, each suggesting that clustering can considerably reduce
session length. Therefore, in this paper, we present only the
experiments with the last URL being the desired URL.
We consider two metrics to judge the effectiveness of our
algorithms: the percent of queries where the desired URL is
in a cluster; and the position of the desired URL in a group
(i.e., the session length). Our claim is that the location of
the desired URL in our system's output can be compared
against the number of URLs that a user visited during their
B := empty batch of deferred sessions
assign each URL in the labeled data to a URL group
    based on its directory label, with weight 0
for each session s
    let g be the group whose URLs have the largest overlap with the URLs in s
    if (the query in s was NOT seen before) and
       (# URLs in s existing in g / # URLs in s < T)
        add s to B
    else
        for each URL in s
            if the URL is not in g, add it
            increment the weight of the URL by α
        if the query is not in the query groups associated with g, add it
output mappings
cluster the sessions in B using the BasicEM algorithm

Figure 7: Pseudo-code for the approximate algorithm.
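A direct Python transcription of this pseudo-code might look as follows; the tie-breaking in the group selection and the data layout are assumptions of the sketch, and each session is assumed to contain at least one URL.

    def approximate_cluster(labeled, sessions, T, alpha):
        # `labeled` maps a directory label to a list of URLs; `sessions`
        # is a list of (query, urls) pairs.
        groups = {l: {u: 0.0 for u in urls} for l, urls in labeled.items()}
        queries = {l: set() for l in labeled}        # query groups
        seen_queries, batch = set(), []
        for query, urls in sessions:
            # Group with the largest URL overlap (first one wins ties).
            g = max(groups, key=lambda l: len(set(urls) & groups[l].keys()))
            overlap = len(set(urls) & groups[g].keys()) / len(urls)
            if query not in seen_queries and overlap < T:
                batch.append((query, urls))          # defer to basic EM
            else:
                for u in urls:
                    # Add the URL if absent, then increment its weight.
                    groups[g][u] = groups[g].get(u, 0.0) + alpha
                queries[g].add(query)
            seen_queries.add(query)
        return groups, queries, batch   # cluster `batch` with basic EM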
Figure 8: Experiment results: the number of URLs visited without and with clustering on 46 days of testing data.
searching task to measure the improvement of our system.
Averaging these values across search sessions provides us
with a measure of expected search length.
To study our algorithms, we used a portion of the search
sessions extracted from the access logs of the Lucent Murray
Hill proxies to train with, and then tested with another
portion, distinct in time. Given our performance metrics,
we conducted a number of experiments involving different
time-based divisions into training and test sets and varying
the parameters required for the basic and approximate EM
algorithms. We then used our experience with these different
trials to choose a reasonable set of parameters for the
experiments (on new data) reported here. We used a fixed number K of groups when training with the basic EM algorithm. We decided on a threshold of T = 0.1, and used the labeled data from the DMOZ image from May 19, 2001 when training with our approximate algorithm.
The results presented below are representative of our many
experiments. They correspond with training on 95 days
worth of data, and testing on 46 days of data. The percent
of queries where the desired URL is in a cluster is 93%.
Thus, 93% of the time, when both the query and desired
URL were seen in the training data, our algorithms display
the desired URL. Figure 8 presents our position results; we
see that both algorithms reduce the number of URLs visited
by 43% for the basic EM algorithm (38% for the approximate
algorithm). The means are slightly misleading due
to the presence of a small number of outliers (queries with
very large search sessions). Thus, we also computed median
positions; without our clustering algorithms, the median
number of URLs visited is 2, and with either clustering
algorithm, the median position of the desired URL is 1.
Perhaps most importantly, for 43% of the cases (29% for the
approximate algorithm) we provide a strictly shorter search
session length.
Figure 9 presents our results when testing with six months
of data. (The difference in the height of the "without" bars
is due to different ranges of training data being used.) We
found the value of the percentage of queries where the desired
URL is in a cluster is dependent on the length of the
Figure 9: Experiment results: the number of URLs visited without and with clustering on 6 months of testing data.
testing data. For example, when 6 months of testing data
was used, the percent matched is 60-69%, and when only 46 days were used, the percent matched is 93%. In some sense, this effect is an obvious byproduct of our experimental setup: without refitting the model periodically, the set of URLs becomes out-of-date. In a real system, we would incorporate
frequent model updates. At the end of this paper, we
consider an online version of the EM algorithm that would
provide incremental updates with each search session.
While the results with the basic EM algorithm are en-
couraging, it does not scale well at all. Even the modest
data sizes entertained here can take many hours to con-
verge. For the approximate algorithm, roughly 60% of the
search sessions were passed into the basic EM. This allowed
the approximate algorithm to run in much less time (1 hour
compared to half a day). Since a smaller set of data is clustered
in the approximate setup, a smaller value of K can be
used. We note that the percentage of queries on the list for
the approximate algorithm is only a few points less than the
basic EM algorithm.
Varying the value of K affects both the time needed to
form the clusters and the quality of results. For example,
when clustering 142 days of data, larger values of K took twice as long, or more, to form the clusters as smaller values did.
5. RELATED WORK
There are many systems which involve users ranking or
judging Web sites as they visit them; [9, 2] are examples.
We feel that systems which require the users to explicitly
comment on the Web sites place too great a burden on the user.
Therefore, while the data can be considerably cleaner than
our search sessions, it is necessarily of limited coverage. The
database in [9], for example, contains very few of the queries
seen in the Lucent logs.
Many techniques exist for automatically determining the
category of a document based on its content (e.g., [18] and
its references, [10] and its references, [1]) and the in- and
out-links of the document (e.g., [7, 12]). We are currently
investigating techniques to include content in our clustering
algorithms, with the advantage that by working with the
proxy cache we do not require extra spidering.
Another approach to document categorization is "content
ignorant." [3], for example, uses click-through data to discover
disjoint sets of similar queries and disjoint sets of similar
URLs. Their algorithm represents each query and URL
as a node in a graph and creates edges representing the user
action of selecting a specified URL in response to a given
query. Nodes are then merged in an iterative fashion until
some termination condition is reached. While similar in
spirit to our algorithms, this algorithm forces a hard clustering
of queries, limiting its ability to easily incorporate prior
information. In addition, our system relies on much richer
data, namely proxy logs.
Finally, the literature contains a large number of distance-based
methods for clustering; [4, 19] present two well-known
algorithms for handling large amounts of data. The approach
taken by these types of algorithms might not work
well on our problem where there should be tens of thousands
of clusters.
6. CONCLUSIONS AND FUTURE WORK
We have developed a method to extract search-related
navigation information from proxy logs, and a mixture model
which uses these data to perform clustering of queries. We
have shown that this kind of clustering can improve the display
of search engine results by placing the target URL high
in the displayed list.
The basic mixture approach using the basic EM algorithm
is highly unstructured in the sense that we only incorporate
user data to learn the groups and relevance weights. Having
taken a probabilistic approach to grouping, we can easily
incorporate prior information about related URLs from
DMOZ in a more formal way. A natural prior for our coefficients
λ_lj (the relevance weights) is a Gamma distribution. Suppose that for each URL contained in category l, we assign λ_lj a prior Gamma distribution with parameters α and β, while those URLs not listed under the directory in category l receive a Gamma distribution with parameters α_0 and β, where α_0 < α. The ingredients necessary for the EM
algorithm in Section 3 can be carried out under this simple
model as well. In this way, we can force a stronger tendency
toward maintaining the existing hierarchy topics, while still
allowing new URLs to be added. The approximate algorithm
presented in the paper can be viewed as a very rough
approximation to this approach.
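The paper does not spell out the resulting update. Under a conjugate Gamma(α, β) prior (shape-rate form) on a Poisson mean, a standard computation gives a MAP analogue of the earlier M-step estimate of roughly the following form; this is a sketch for concreteness, not the authors' derivation:

    % MAP variant of the M-step update for a relevance weight under a
    % Gamma(\alpha, \beta) prior (shape--rate form); a sketch, not taken
    % from the paper.
    \hat{\lambda}_{lj} = \frac{\alpha - 1 + \sum_{i} \hat{\gamma}_{il} \, n_{ij}}{\beta + \sum_{i} \hat{\gamma}_{il}}

URLs listed under category l would use the larger shape parameter α, while unlisted URLs would use α_0, pulling their weights down unless the navigation data argue otherwise.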
Next, while the standard EM algorithm is sufficient for
the relatively small data sets presented in this paper, it is
known not to scale very well as either the number of queries
or the number of query clusters increases. To combat this,
we have started working with on-line versions of the EM algorithm
[13] that can process individual search sessions as
they arrive. This requires a reformulation of the model as
presented here, but we believe it will give reasonable perfor-
mance. We are also extending our model to treat the query
string as a collection of query terms and not simply a sorted
list as we have done here. The same kind of Poisson structure
used for the collection of URLs in a search session is
applied to the query terms. This allows us to be much more
flexible in how we form query clusters and how we treat
new queries. Finally, by extending our probability models
to include content of the pages beyond the link structures,
we hope to generate even greater improvements in cluster-
ing. In a proxy-based implementation, processing the pages
to extract content represents very little overhead as we are
already examining pages for links.
Another direction that we are exploring avoids the aggregation
across users that we described in Section 3. In this
case, a model for user's searching habits will replace the
simplifying assumption that URLs associated with a query
group are sampled independently. Directly describing how
a user moves within and between sites will improve our calculation
of relevance weights. In addition, user-level parameters
can be introduced that capture the time scale over
which a user is likely to be refining a query and not changing
topics.
Acknowledgments
Tom Limoncelli and Tommy Reingold
were invaluable in helping us access the proxy server
logs that we used for our experiments.
7. REFERENCES
--R
Theseus: categorization by context.
Agglomerative clustering of a search engine query log.
Scaling clustering algorithms to large databases.
The anatomy of a large-scale hypertextual Web search engine.
User popularity ranked search engines.
Finding related pages in the World Wide Web.
Maximum likelihood from incomplete data via the EM algorithm (with discussion).
Capturing human intelligence in the Net.
Information retrieval on the Web.
The stochastic approach for link-structure analysis (SALSA) and the TKC effect.
Clustering hypertext with applications to Web searching.
Mining Web proxy logs: a user model of searching.
Nielsen//NetRatings search engine ratings.
Learning collection fusion strategies.
Multiple search engines in database merging.
BIRCH: an efficient data clustering method for very large databases.
| web searching;proxy access logs;model-based clustering;query clustering;expectation-maximization algorithm
502661 | Scaling replica maintenance in intermittently synchronized mobile databases. | To avoid the high cost of continuous connectivity, a class of mobile applications employs replicas of shared data that are periodically updated. Updates to these replicas are typically performed on a client-by-client basis--that is, the server individually computes and transmits updates to each client--limiting scalability. By basing updates on replica groups (instead of clients), however, update generation complexity is no longer bound by client population size. Clients then download updates of pertinent groups. Proper group design reduces redundancies in server processing, disk usage and bandwidth usage, and diminishes the tie between the complexity of updating replicas and the size of the client population. In this paper, we expand on previous work done on group design, include a detailed I/O cost model for update generation, and propose a heuristic-based greedy algorithm for group computation. Experimental results with an adapted commercial replication system demonstrate a significant increase in overall scalability over the client-centric approach. | INTRODUCTION
Intermittently Synchronized Database (ISDB) systems allow
mobile data sharing applications to reduce cost by forgoing
continuous connectivity. To allow sharing, a dedicated
update server maintains the primary copy (global database)
of all data held by mobile clients while each client maintains
its own replica of some subset of the global database
schema. The update server maintains the primary copy of
the global database by receiving updates from the clients
and distributing them on demand to the clients based on
knowledge of their subscriptions. (See Figure 1.) Typically,
the server generates update files for clients on an individual
basis: the server scans through a set of updates, and for
each client and update, decides if the client should receive
the update. In this way, each client receives a customized
update file containing only the relevant data. This technique
is simple and straightforward but requires the server to do
work in direct proportion to the number of clients, limiting
the scalability of the system and resulting in greater time to
maintain local replicas.
In this paper, we build on previous work that proposed organizing
updates into groups shared by clients [5]. Using this
approach, the server manages update processing for a limited
and controllable number of groups, irrespective of the
number of clients. Instead of receiving updates customized
for its specific needs, each client accesses the updates for
the groups containing data it needs. The group-based approach
results in better server scalability because the work
of the server is decoupled from the actual number of clients
maintained [5, 11]. For example, a salesperson may enter
her office in the morning before visiting customers and load
sales data from the server onto her laptop. Throughout the
day, the salesperson updates her local copy of the sales data
regardless of the connectivity with the server. When conditions
permit it, a reliable connection can be established with
the server to synchronize local data using updates generated
for the proper groups. Connectivity options include conventional
wireless modems, high speed wireless LANs, or docking the mobile device at a wired base station.
Furthermore, her coworkers may concurrently update their
replicated data.
In short, we develop a detailed cost model for group design
and offer a remedy to a scalability problem natural to ISDBs that share data. We then present extensive experimental results demonstrating the improved efficiency using our design techniques. We begin in Section 2 by placing our work in the context of related work. We define the architecture and an I/O-based cost model for evaluating a specific data grouping in Section 3. In Section 4, we propose a heuristic set of operators for modifying data groupings and a greedy algorithm to apply these operators. We also outline how to handle changes in the system configuration. In Section 5, we compare our approach to intuitive grouping alternatives, and demonstrate that our grouping algorithm provides significantly greater scalability in client population
size. Finally, we summarize our observations and describe
future work in Section 6. Note that for the sake of brevity,
we do not include the proofs of theorems in this paper, and
present only a subset of our experimental results. For more
detail, the reader may refer to [12].
2. RELATED WORK
An ISDB is an instance of a distributed computing sys-
tem. Multiple databases independently operate on shared
data. The assumptions that clients are mobile and commonly
suffer long periods of disconnection from the server,
however, make traditional concurrency control protocols in-
applicable. Traditional distributed database systems use the
two-phase-commit protocol to help ensure the ACID properties
of transactions [7]. This protocol is communication
intensive and is therefore impractical when clients can be
unreachable for long periods of time (e.g., if a mobile client powers down in order to save energy).
In response, researchers have proposed replicating data
among multiple clients and allowing them to operate on
the replicas independently [1]. This allows quicker response
time, reduces the possibility of deadlock, reduces the need
for energy-consuming communication with the server, and
allows mobility resistant to network outages, with the downside
of relaxing the ACID properties [3].
To aid such functionality, the architecture must include
a dedicated centralized server that collects updates and resolves conflicts as described in [2]. This server increases
the availability and reliability of shared data, but may suffer
performance problems because the amount of work the
server must do increases with the number of clients served.
The architecture and goals of the CODA intermittently connected file system are similar to those of ISDBs. Re-
of CODA predicted (but did not experience) the
possibility of a reintegration storm, which occurs when multiple
clients simultaneously try to synchronize their local
caches with the server [9]. This results in unmanageably
long reintegration times for each client. A similar project
called DBMate supports intermittently connected database
systems. Experiments with DBMate show that server design
can improve synchronization performance [8]. The work in
this paper generalizes these results.
Group design is similar to materialized view design for
large database systems. Both try to reorganize monolithic
sets of data in order to speed up response time for clients.
However, view design has slightly different goals and as-
sumptions. The utility of view design is measured by how
closely the resultant views cover the expected queries; the
main cost is the disk space consumed in storing the views
[4]. In group design for ISDBs, the utility of groups is measured
by how quickly they are generated, and the cost is
roughly how much extra data must be transmitted to the
clients. Furthermore, since views are assumed to contain
subsets of relational data and groups contain updates, they
can be manipulated in different ways. For these reasons,
algorithms for view design are inapplicable to group design.
3. MODEL
Standard ISDB architecture includes the database server,
update server, file server, network, and clients. The database
server stores the global database. The update server generates
sets of updates for the replicas stored on the clients.
These updates are made available to clients through a set of
file servers. Clients intermittently connect to the file server
via a network. The client is composed of a client update
agent, and a local DBMS. When the client downloads up-
dates, the client update agent processes them by applying
them to the replicas contained on the local DBMS. Data flow from client to server is not discussed in this paper, but
the interested reader may refer to [10] for more details.
The global database is divided into publications or frag-
ments, which are horizontal partitions of the database. Each
client subscribes to a subset of fragments based on its data
needs. The server maintains multiple, possibly overlapping
datagroups of fragments, designed based on client subscrip-
tions. The DBMS records updates from clients in a log,
which is modified so that it also stores the scope [3] (the data which the update affects) of each update.
An update session is defined as the process during which
the update server scans the log for outstanding updates,
and for each datagroup, generates an update file (called the
update log) containing the updates that are in that data-
group's scope. The resultant update logs are then placed
on a file server for use by the clients. The frequency of update sessions is application dependent and controlled by the
system administrator. A client downloads the update log(s)
that correspond to its local subscription. The client's update agent then processes and applies the contents of the
update log(s) to its local fragment replicas. In the basic approach
to update processing practiced by industry, which we
call client-centric, one datagroup is defined for each client
to exactly match its local subscription. In the proposed
data-centric approach [5], datagroups are created according
to how data are shared and the number of datagroups is
generally independent of the number of clients.
We consider the following two steps critical in synchronizing
a client: generating update files at the server and transmitting
them to the clients. We do not emphasize client-side
processing because the availability of powerful mobile computing
platforms (e.g., Pentium-class laptops) means that
client-side processing is not a performance bottleneck, especially
when the client is able to "install" updates while
being disconnected.
These two critical steps are tightly coupled. If an update
server can generate update logs more quickly, then they
are available for download sooner, and if update logs take
more time to generate, a client must wait longer to download
them. It is therefore reasonable to increase the cost of
one of these steps if the decrease in the other is greater.
Problem Statement: We are given a global database, divided into a set of fragments, F = {F_1, ..., F_n}. The proportion of updates applied to F_i during an update session is estimated by a weight, W_i. Fragment weights can be determined either by using database statistics or as a function of the volume of data a fragment definition spans (i.e., the fragment's size) and the number of clients that subscribe to it (i.e., the fragment's subscription level):

    W_i = \frac{(\text{size} \times \text{subscription level})\ \text{of fragment}\ i}{\sum (\text{size} \times \text{subscription level})\ \text{over all fragments}}

The server stores a set of datagroups G = {G_1, ..., G_M} and generates an update log for each, containing the updates to the group's fragments. The size of each update log is a function of the sum of the weights of its corresponding datagroup.

Each client, i, has a set of interests, C_i, where C_i ⊆ F. Each client subscribes to a set of datagroups such that the groups to which the client subscribes contain a superset of the fragments of interest to the client. Stated formally, if C is the set of all C_i, given a mapping ψ from clients to datagroups, ψ: C → 2^G (the power set of G), for each client i, ∪_{g ∈ ψ(i)} g ⊇ C_i. This is the covering constraint.

Our goal is to generate a set of datagroups G and a mapping ψ that minimizes the total cost function (see Equation 2 below). With client-centric grouping, G = C, ψ(i) = {C_i}, and M = |C|. The data-centric grouping
approach determines its grouping based on the aggregated
interests of the entire client population and the capabilities
of the system architecture. Figure 2 gives a simple example
of data-centric redesign in the case where the interests of
one client are a subset of those of another. Intuitively, if the
number of fragments in the database is fixed, as the number
of clients increases, the absolute amount of overlap in
interests increases. This suggests that data-centric redesign
increases in usefulness with a growing client population.
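As a concrete rendering of this notation, the toy Python sketch below computes fragment weights and checks the covering constraint; the fragment ids, sizes and subscription levels are invented for illustration.

    from itertools import chain

    def weights(size, subs):
        # W_i = size_i * subscription_level_i, normalized over fragments.
        raw = {f: size[f] * subs[f] for f in size}
        total = sum(raw.values())
        return {f: w / total for f, w in raw.items()}

    def covers(client_interest, assigned_groups, groups):
        # Covering constraint: the union of a client's datagroups must
        # contain every fragment the client is interested in.
        covered = set(chain.from_iterable(groups[g] for g in assigned_groups))
        return client_interest <= covered

    groups = {"g1": {1, 2}, "g2": {2, 3}}
    print(covers({1, 3}, ["g1", "g2"], groups))          # True
    print(weights({1: 10, 2: 5, 3: 5}, {1: 3, 2: 1, 3: 1}))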
3.1 Cost Model
Our cost model assumes that I/O time is the dominant
cost factor and is therefore a good approximation of the overall update processing cost of a particular grouping scheme. Three server and network activities make up the total cost: update mapping (mapping of updates to their respective datagroups 1), update log storage (storage of all update logs onto disk), and update log propagation (loading and transmission of update logs). Total cost is therefore:

    Total Cost = Update Mapping Cost + Update Storage Cost + Update Propagation Cost    (2)

1 An update operation is one of three data manipulating commands, namely INSERT, UPDATE and DELETE. Combinations of these operations constitute the transactions contained in update logs.
The components of Equation 2 are explicitly modeled below.
The variables we use are shown in the following table
Variable   Description (units, if appropriate)
C_S        server disk seek time (secs)
C_L        server disk latency time (secs)
C_D        C_S + C_L
C_T        server disk transmission rate (secs/byte)
B_k        transmission rate between client k and the server (secs/byte)
V_B        buffer size for each update log file (bytes)
V_D        average update operation record size (bytes)
V_P        average fragment definition record size (bytes)
V_T        average temporary table record size (bytes)
V_G        average datagroup definition record size (bytes)
V_S        number of operations in the update file
V_F        total size of the update file (bytes)
M          number of datagroups
N          number of clients

2 The term "record" in the table refers to the data structure (e.g., row) containing the respective information.
Update (to datagroup) mapping cost is the time required
to map updates to datagroups. We assume that the log of updates to be distributed and an update-to-fragment mapping table (the publication information) are sequentially read into memory (2C_D + C_T (V_F + n V_P)). The results of the join are saved into a temporary file (C_D + C_T V_S V_T). The temporary file and the fragment-to-datagroup mapping table (the subscription information) are then read (2C_D + C_T (V_S V_T + M V_G)) and joined in memory to produce the final result. Hence,

    Update Mapping Cost = 5C_D + C_T (V_F + n V_P + 2 V_S V_T + M V_G)
Update storage (to disk I/O) cost measures the time required to store all the update logs onto disk at the server. We assume that a main-memory buffer is maintained for each update log, and whenever a buffer is filled, its contents are written to disk. This happens until all update logs are completely stored. The left term in the parentheses below indicates the time spent on disk latencies that are experienced for each update log when the buffer is filled, whereas the right term indicates how much time is required to store the actual data on disk. Recall that the weight of fragment F_i estimates the proportion of operations that are applied to that fragment:

    Update Storage Cost = \sum_{k=1}^{M} \left( C_D \left\lceil \frac{W(G_k) V_F}{V_B} \right\rceil + C_T \, W(G_k) V_F \right), \quad \text{where } W(G_k) = \sum_{F_i \in G_k} W_i
Figure 2: Example of client-centric to data-centric redesign using aggregated interests.
(Server to client) update propagation cost measures the time required to load into memory and then transmit the appropriate update logs to all clients, assuming unicast communication between the server and each client. Each log the client downloads is sequentially read, then joined in memory, then transmitted over the network at the client's bandwidth:

    Update Propagation Cost = \sum_{k=1}^{N} \left( C_D \, |\psi(k)| + (C_T + B_k) \, V_F \sum_{G_j \in \psi(k)} W(G_j) \right)

In the equation above, disk latencies are experienced while reading each client k's update logs (the C_D |ψ(k)| term). For each client k, the volume of data read and transmitted are the same (V_F \sum_{G_j \in \psi(k)} W(G_j)); although the server's disk rate (C_T) is fixed, the clients' transmission rates (B_k) are independent.
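A small Python evaluator of the total cost makes the interplay of the three components concrete. It mirrors the equations above, which are themselves reconstructions, so the exact forms should be read as approximate; all identifiers are local to this sketch.

    import math

    def total_cost(groups, psi, W, B, C_D, C_T, V_F, V_B, V_P, V_G, V_T, V_S, n):
        # groups: group id -> set of fragment ids
        # psi:    client id -> list of subscribed group ids
        # W:      fragment id -> weight; B: client id -> secs/byte to that client
        M = len(groups)
        gw = {g: sum(W[f] for f in frags) for g, frags in groups.items()}

        # Update mapping: read log + publication table, write the temporary
        # file, then read the temporary file + subscription table.
        mapping = (2 * C_D + C_T * (V_F + n * V_P)) \
                + (C_D + C_T * V_S * V_T) \
                + (2 * C_D + C_T * (V_S * V_T + M * V_G))

        # Update storage: per-group buffer flushes plus raw write time.
        storage = sum(C_D * math.ceil(gw[g] * V_F / V_B) + C_T * gw[g] * V_F
                      for g in groups)

        # Update propagation: per-client latency, then read + transmit.
        propagation = sum(C_D * len(psi[k])
                          + (C_T + B[k]) * V_F * sum(gw[g] for g in psi[k])
                          for k in psi)
        return mapping + storage + propagation

Evaluating total_cost for the extreme designs discussed next (a single group, one group per fragment, one group per client) makes their trade-offs directly comparable.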
3.2 Some Potential Cost-Reducing Alternatives
For illustrative purposes, we now discuss certain extreme
datagroup design strategies. A single large datagroup (M = 1) minimizes the amount of disk resources
used. However, all clients must then receive all updates
regardless of their particular interests. If transmission costs
are high, this is a very costly solution. At the other ex-
treme, one datagroup can be generated for each fragment (M = n). This is similar to the server-side organization
proposed in [8]. Note that if the number of fragments
is high, then managing all the datagroups becomes
costly. Another solution is to generate one datagroup per
client (G = C). This is the client-centric solution, the standard used by industry, which, as shown in [5], results in
high levels of redundant work as the client population grows.
The alternative we propose lies somewhere in between these
solutions and is based on how the clients share data.
Another option is to ameliorate the system architecture
itself with multiple update servers, in which each can work
in parallel to generate the update logs for a subset of clients,
as described in [10]. Instead of precluding it, such architectural
solutions complement data-centric design. For exam-
ple, clients can be allocated to each update server based
on the potential effectiveness of the resultant data-centric
grouping. One way to do this is to cluster clients based on
interest affinities as in [6]. However, because we assume a
single update server, such work is outside the scope of this
paper.
4. A HEURISTIC APPROACH TO GROUPING
Because of the complexity of the grouping problem (it can be modeled as an NP-complete mathematical programming problem), we offer a heuristic algorithm. We start by introducing three operators that perform grouping operations on fragments. Each has a different cost profile, and thus is applicable in different situations. At the end of this section,
we introduce a means of greedily applying these operators.
For brevity, we do not include a formal cost analysis but one
can be found in [12].
Based on the cost equations introduced in Section 3.1, certain conclusions can be drawn about ways of manipulating the grouping in order to reduce update processing costs. Namely, we can manipulate the number of datagroups, the composition of datagroups, and the subscription of clients to datagroups in order to change update processing costs. Common ways of redesigning datagroups, although with different side effects, include merging, splitting, and subtracting overlapping datagroups [7]. Although similar operators are used in materialized view design, those operators are based on algorithms that, as we explained in Section 2, are generally inapplicable here. We therefore define our own operators and give their definitions, side effects, and applicability below. See Figure 3.
4.1 Operators for Redesigning Datagroups
Merging two datagroups involves replacing them with their union. Clients subscribing to at least one of the merged datagroups instead subscribe to their union, preserving the covering constraint (see Section 3). If there is overlap between the merged datagroups, then storage cost is reduced in proportion to the size of the overlap. If a client originally subscribed to only one of the merged datagroups, the client must receive superfluous updates for fragments contained in the "other" merged datagroup, resulting in increased update log transmission costs.
Applicability of Merge - Because this operator typically increases the amount of data that must be transmitted to a client, we do not expect this operator to be used much, unless the network bandwidth is very high, or the degree of overlap between two datagroups is very high.
Splitting involves finding two non-totally overlapping, but partially intersecting datagroups, and splitting off their intersection to form a third datagroup. Subscribers to either datagroup must also subscribe to the third datagroup. Splitting reduces overlap in datagroups, but does not increase the amount of data transmitted to the respective subscribers. There is, however, overhead in terms of disk seeks and latencies for each additional datagroup generated.
Applicability of Split - Splitting should increase in relevance as the volume of updates increases or the degree of overlap between two datagroups increases, because the time saved in not rewriting large sets of updates offsets the increase in disk seek and scan times.
Subtracting two datagroups applies only if one is a subset (either proper or not) of the other. The smaller of the two is subtracted from the larger one, eliminating their overlap. If the datagroup subtracted from becomes empty (in the case where the subset relationship is not proper), then it is discarded. Subscribers to the larger datagroup must also subscribe to the smaller one (if a smaller one exists); overhead in terms of disk latencies is incurred by clients having to subscribe to additional datagroups. The number of datagroups is not increased by this operation, and savings are proportional to the degree of overlap between the subtracted datagroups. Subscribers to these datagroups do not need to receive extra data.
Applicability of Subtract - This operator differs from the other two because of the subset restriction on the operands. Nonetheless, this operator typically has fewer side effects: it neither increases the amount of data sent to clients the way merge does, nor the number of datagroups the way split does.
Example: Consider two datagroups, A and B, that each generate update logs of 10KB. If, due to the definitions of A and B, these update logs always have a single byte difference, then it may make sense to merge A and B. This saves the server some work by maintaining one fewer datagroup and not storing its corresponding update log. The cost, however, is that clients subscribing to one of A or B must download an additional byte. But at 19Kbps, the additional byte adds less than a millisecond in transfer time. Alternatively, splitting A and B forces the server to maintain the definition of an additional datagroup and generate an additional byte-sized update log, which, depending on the server load, may not be cost effective.
On the other hand, if A and B both generate update logs of 10KB which only have 5KB in common, it may make sense to split the datagroups. Although splitting in the above case would force the server to maintain the definition of an additional datagroup and generate an additional byte-sized update log, here it would save the server some storage time (because of the magnitude of the overlap) without increasing the volume of data sent to the client. The drawback is that splitting forces the server to maintain an additional datagroup. Savings in storage, however, increase in importance as datagroups grow. If these two datagroups were merged instead, then clients subscribing to one of A or B would have to spend 2 seconds downloading superfluous data in addition to the 4 seconds spent downloading pertinent data at 19Kbps.
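A quick back-of-the-envelope check of these numbers, as a minimal Python sketch (the helper below is illustrative, not part of the system):

    def transfer_seconds(volume_bytes, bandwidth_bps):
        # Time to ship `volume_bytes` over a `bandwidth_bps` bits-per-second link.
        return volume_bytes * 8 / bandwidth_bps

    BW = 19_200                              # the 19.2 Kbps link of the example
    print(transfer_seconds(10 * 1024, BW))   # ~4.3 s for a 10KB update log
    print(transfer_seconds(5 * 1024, BW))    # ~2.1 s for the 5KB overlap
    print(transfer_seconds(1, BW))           # well under 1 ms for one extra byte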
4.2 Greedy Heuristic
The application strategy proposed here pursues local cost minima until no cost reduction can be made by applying the above operators. The greedy algorithm starts with the client-centric solution and applies all possible subtraction operations on the set of datagroups until no cost reduction is possible with this operator. We then search for the most cost-effective merge or split. If one exists, we perform it, and repeat the cycle:
GREEDY HEURISTIC
while TRUE do
    Perform all possible cost-reducing subtraction operations.
    Let m be the most beneficial merge and let s be the most beneficial split.
    If either m or s results in a cost reduction, then perform the one that reduces cost the most.
    If neither m nor s is performed, then quit.
od
Subtraction is given precedence because it typically has the fewest side effects in terms of cost penalties and reduces the search space for the other two operators. Furthermore, merge and split can increase the applicability of subtract.
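A minimal sketch of this loop in Python, with hypothetical callbacks: cost evaluates a grouping under the cost model of Section 3.1, while subtractions, best_merge and best_split enumerate candidate groupings (returning None when no candidate exists).

    def greedy_redesign(groups, cost, subtractions, best_merge, best_split):
        """Greedy datagroup redesign: exhaust cost-reducing subtractions,
        then apply the single most beneficial merge or split, and repeat."""
        while True:
            changed = True
            while changed:                       # all cost-reducing subtractions
                changed = False
                for candidate in subtractions(groups):
                    if cost(candidate) < cost(groups):
                        groups, changed = candidate, True
                        break                    # re-enumerate on the new grouping
            m, s = best_merge(groups), best_split(groups)
            options = [g for g in (m, s)
                       if g is not None and cost(g) < cost(groups)]
            if not options:
                return groups                    # neither m nor s helps: quit
            groups = min(options, key=cost)      # the one that reduces cost most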
4.3 Redesign
As time passes, an ISDB changes its configuration. New clients may be periodically added to the ISDB, each with a given subscription. The problem is deciding the proper datagroups to assign to them. This problem is similar to the NP-hard set-covering problem. Moreover, the existing clients may change their subscriptions, or their type of connection with the server may change. A client may move from one location to another and change its subscription to match its locale, or another client may acquire a faster, higher bandwidth connection.
The problem of redesign therefore has two levels: redesign only the subscriptions (groups remain fixed), or redesign both the groups as well as the subscriptions. We address these problems heuristically. Such solutions are necessary because the greedy heuristic described in this paper has a high worst-case complexity. We therefore offer techniques that reduce the need of running it. These techniques and relevant proofs are fully described in [12].
4.3.1 Addition of Clients
In this section, we roughly describe how to map datagroups to the subscriptions of new clients. This problem is similar to the NP-complete weighted set-covering problem,
[Figure: schematic boxes illustrating the three operators on datagroups A and A': merge (two overlapping datagroups replaced by their union), split (the intersection extracted into a third datagroup), and subtract (the subset removed from the superset).]
Figure 3: The Merge, Split, and Subtract Operators. The boxes represent datagroups. The dashed lines indicate overlap.
and we solve the client-addition problem in a similar way. To each subscription, we greedily map the datagroup that has the best (lowest) cost-effectiveness. Cost-effectiveness is the ratio of the size of the datagroup (size is defined in Section 3) to the total size of the fragments covered for the first time by the datagroup. This process is repeated until the entire subscription is covered. We have a ratio bound for the results of this algorithm:
Theorem 4.1. The solution achieved by the greedy algorithm is within a factor of HN of the minimal cost cover, where HN = 1 + 1/2 + ... + 1/N is the N-th harmonic number and N is the size of the subscription.
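The client-addition step is essentially the standard greedy weighted set cover, sketched below with hypothetical inputs (datagroups and the subscription as sets of fragment ids, with group and fragment sizes supplied as maps); this is the algorithm to which the HN bound of Theorem 4.1 applies.

    def cover_subscription(subscription, datagroups, group_size, frag_size):
        """Greedy weighted set cover: repeatedly pick the datagroup with the
        lowest ratio of its size to the total size of the fragments it covers
        for the first time, until the whole subscription is covered."""
        uncovered = set(subscription)
        chosen = []
        while uncovered:
            def ratio(g):
                gain = sum(frag_size[f] for f in datagroups[g] & uncovered)
                return group_size[g] / gain if gain else float("inf")
            best = min(datagroups, key=ratio)
            if not datagroups[best] & uncovered:
                raise ValueError("no datagroup covers the remaining fragments")
            chosen.append(best)
            uncovered -= datagroups[best]
        return chosen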
4.3.2 Rerunning the Redesign Algorithm
As time passes, the configuration of the ISDB changes, making a data-centric grouping less cost-effective. Examples of changes include changes in client connectivity, client subscriptions, or server speed. Since we can keep track of these changes, recomputing the current client-centric cost (Ccc) is straightforward using the cost model from Section 3. By comparing this cost with the actual current cost of the current data-centric grouping (Cdc), an administrator can decide to redesign the datagroups if the cost improvement offsets the estimated redesign time (Cr). This is a conservative rule of thumb for deciding the benefit of redesign and varies depending on the application.
5. EXPERIMENTS
5.1 Goals
The goal of our experiments is to show that data-centric grouping of updates using our greedy heuristic (denoted by dc) results in faster refresh times for the average client than other, more intuitive methods. Refresh time includes the time required for the server to generate an update log (or logs), including computation and storage, and transmit it to the proper client. These costs correspond to the ones described in Section 3. The other grouping methods we consider are described in Table 1.
In our experimental design, the database is composed of 100 fragments. Each fragment i is assigned a value pi (0 ≤ pi ≤ 1). Each client subscribes to fragment i with probability pi. To assign values to each pi, we consider two probability distributions: one highly skewed (Zipfian) and one uniform. This results in the average client subscribing to 1% of the database. The total volume of updates is a linear function of the number of clients. Updates are allocated to each fragment in proportion to that fragment's pi value. Each client is assumed to have the same network bandwidth (Ci). The parameters varied are the client population and the client bandwidth. Varying these parameters over these values reflects a wide range of applications, such as the OLTP-class mobile office application described in the beginning of this paper. We omit experiments varying update volume for the sake of brevity. See Table 2.
We show that as any of the experimental parameters increases, the advantage of using dc increases over the other techniques. For example, the difference between dc and cc increases with the number of clients. Performance gains generally come from detecting and removing redundant update processing (i.e., overlapping subscriptions), but as we show, under some circumstances, can also come from offloading work onto other components in the architecture.
5.2 Experimental Set-Up
We ran our experiment on an Ethernet LAN consisting of Pentium II computers running Windows NT. Update services are provided by Synchrologic iMobile on a 266MHz PC with 64MB RAM. (Performance trends using alternative ISDB middleware, such as those by Oracle, IBM or Sybase, should be similar to those reported here.) The database server is Sybase SQL Anywhere v.6, running on a 200MHz PC with 128MB RAM. The database stores a universal relation to which we apply updates that are distributed to the clients.
For the data-centric experiments, extensions based on those described in [5] have been incorporated into iMobile. One of the consequences of these extensions is that an extra set of meta-data must be sent to each client. The size of the meta-data file has been empirically estimated as (24 |G|), where |G| is the number of datagroups generated. The meta-data grows with the number of groups because increasing the number of groups results in the need to store more mapping information for them. This extra metadata increases the refresh times of dc, og and op, generally by adding transmission time. This extra cost becomes insignificant, however, as workloads increase, and does not affect the trends of the results.
5.3 Experiment 1: Varying Client Population
The results for both data distributions are nearly identical. Method op is bad with low populations because it generates update logs regardless of whether they are subscribed to or not. But, as the population increases, the probability that a datagroup is not subscribed to becomes low. Method og makes sense when there are few clients, because it saves
Grouping Method (notation)    Comments
data-centric (dc)             groups generated with the greedy heuristic
client-centric (cc)           employed in industry; creates a unique group for each client
one-giant-group (og)          minimizes costs at the server by storing updates for all clients in a single group
one-per-fragment (op)         minimizes network costs and eliminates some storage redundancy by generating a single group for each fragment
Table 1: Grouping Methods
Parameter                                  Values (control values in {})
Number of clients                          1, 5, 50, {100}, 200, 500, 1000
Workload (updates/client)                  {50}
Client bandwidth (bps)                     {19,200}, 57,600, 512,000, 1,024,000, 10,240,000
Distribution of updates over fragments     Zipf, uniform
Table 2: Parameter values for experiments.
[Figure: two plots of per-client refresh time versus number of clients for methods og, op, cc and dc. A. Uniform Distribution; B. Zipf Distribution.]
Figure 4: Per-Client Refresh Time (seconds) With Varying Client Population.
[Figure: two plots of per-client refresh time versus client bandwidth (bps) for methods og, op, cc and dc. A. Uniform Distribution; B. Zipf Distribution.]
Figure 5: Per-Client Refresh Time (seconds) With Varying Client Bandwidth.
the server some work. The amount of superfluous data sent to each client grows, however, with the population, making this grouping infeasible. Method cc breaks down with a high population because of the increasing amount of redundant work it must do with a growing population.
In these tests, dc, which uses the proposed greedy heuristic, has consistently good performance, resulting in the lowest or nearly the lowest refresh times over all populations. Method dc outperforms op because dc groups together fragments that are often subscribed to together. Method dc avoids the problems associated with og and cc by generating many datagroups with little overlap, but not so many as to compromise server performance. See Figure 4.
Please note that the total volume of data distributed is not scaled up per client as the client population increases. We keep the total volume of data constant in order to isolate client-population effects. However, experimental results given a greater total volume of data make the dc results even more favorable with respect to the others.
5.4 Experiment 2: Varying Client Bandwidth
In practice, clients may choose among many connectivity options, including wireless modem, conventional modem, high-speed wireless LAN, or simply docking the portable device at the office LAN. We therefore study how these changing bandwidths affect the effectiveness of the various grouping methods.
Although og performs poorly when there is little bandwidth, it gains the most as bandwidth increases. With og, work at the server is already minimized, so more bandwidth reduces the most harmful effects of shipping superfluous data.
Method op has good performance as well, but fails to take
advantage of the increased bandwidth by generating fewer
groups in order to save the server some work. It therefore
ultimately performs worse than og.
With high bandwidth, cc has the worst performance of all, because of the redundant work that must be done at the server, regardless of network performance. This is an important result, and indicates that, regardless of the capability of the network, per-client ISDB refresh processing using cc has a performance floor.
Method dc does the best, by generating multiple disjoint
groups which conserve network resources when they
are poor, and generating fewer groups which conserve server
resources when the network is fast. See Figure 5.
6. CONCLUSION
In this paper, we define a detailed model of ISDBs, describe how this model inherently leads to performance problems with client refresh, and offer a grouping solution. We propose redesigning updates into groups based on the aggregated interests of the entire client population (data-centric). This allows clients to share groups intended for many clients. To formulate a redesign technique we define a detailed cost model, define operators that manipulate costs based on the system configuration, and devise a way of applying them, based on greedy heuristics.
We tested our greedy heuristic against the client-centric approach, and two other intuitive solutions: a monolithic group containing all updates, and individual groups for each fragment. Overall, our heuristic outperforms all the others in terms of refresh time because it greedily allocates resources where they are needed, either to the server or the network, depending on the configuration of the ISDB. Furthermore, the relative benefit of data-centric grouping increases with the client population, improving scalability.
Our work on ISDBs is ongoing. For example, we limited
client connectivity to unicast because this is currently the
dominant form of communication in practice. We are currently
exploring the use of multicast as a means of improving
network scalability.
7.
ACKNOWLEDGMENTS
The authors would like to acknowledge Ms. Mireille Jacobson
for her valuable editorial assistance.
8.
--R
Replication and consistency: Being lazy helps sometimes.
Replicating and allocating data in a distributed database system for workstations.
The dangers of replication and a solution.
Implementing data cubes efficiently.
Grouping techniques for update propagation in intermittently connected databases.
Vertical partitioning algorithms for database design.
Principles of Distributed Database Systems.
Data partitioning for disconnected client server databases.
Experience with disconnected operation in a mobile computing environment.
A framework for server data fragment grouping to improve scalability in intermittently synchronized databases.
Minimizing redundant work in lazily updated replicated databases.
--TR
Vertical partitioning algorithms for database design
The dangers of replication and a solution
Implementing data cubes efficiently
Replication and consistency
Principles of distributed database systems (2nd ed.)
Data partitioning for disconnected client server databases
Replicating and allocating data in a distributed database system for workstations
A framework for designing update objects to improve server scalability in intermittently synchronized databases
Grouping Techniques for Update Propagation in Intermittently Connected Databases
--CTR
Efficient synchronization for mobile XML data, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA | distributed databases;mobile databases;intermittent synchronization |
502784 | Controllable morphing of compatible planar triangulations. | Two planar triangulations with a correspondence between the pair of vertex sets are compatible (isomorphic) if they are topologically equivalent. This work describes methods for morphing compatible planar triangulations with identical convex boundaries in a manner that guarantees compatibility throughout the morph. These methods are based on a fundamental representation of a planar triangulation as a matrix that unambiguously describes the triangulation. Morphing the triangulations corresponds to interpolations between these matrices. We show that this basic approach can be extended to obtain better control over the morph, resulting in valid morphs with various natural properties. Two schemes, which generate the linear trajectory morph if it is valid, or a morph with trajectories close to linear otherwise, are presented. An efficient method for verification of validity of the linear trajectory morph between two triangulations is proposed. We also demonstrate how to obtain a morph with a natural evolution of triangle areas and how to find a smooth morph through a given intermediate triangulation. | Figure
7 shows this on a concrete example. Conjecture 3.5 allows one to choose the maximum power m, namely M, that guarantees a valid morph. A simple algorithm to find M sequentially checks morphs for every m > 1, incrementing m by 2. The number of morphs checked may be significantly reduced to
Fig. 7. Morphs generated by raising neighborhood matrices to various powers. (a), (b) The source and the target triangulations; correspondence is color coded. (c) The convex combination morph at t = 0.5. (d) An invalid morph generated by raising the neighborhood matrices to too large a power. (e) A valid morph generated by raising the neighborhood matrices to a smaller power. (f), (g) Zoom in on the triangulations in (d) and (e). (h) Trajectories of the convex combination morph. (i), (j) Trajectories of morphs generated by raising the neighborhood matrices to a power. Note the positions of the trajectories relative to the straight lines (the linear morph).
O(log M). First, we find an upper bound mmax by doubling m until the morph is invalid. Then, the resulting M is found by binary search in the interval [mmax/2, mmax].
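A sketch of this O(log M) search, assuming a hypothetical predicate is_valid(m) that tests the morph obtained with the neighborhood matrices raised to the integer power m, and assuming (by Conjecture 3.5) that validity is monotone in m:

    def max_valid_power(is_valid):
        """Largest power M for which the morph is still valid; requires
        is_valid(1) to hold (the convex combination morph)."""
        m = 1
        while is_valid(2 * m):        # exponential search for an invalid bound
            m *= 2
        lo, hi = m, 2 * m             # is_valid(lo) holds, is_valid(hi) fails
        while hi - lo > 1:            # binary search on the boundary
            mid = (lo + hi) // 2
            if is_valid(mid):
                lo = mid
            else:
                hi = mid
        return lo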
Furthermore, a morph that is even closer to the linear morph than the morph defined by M may be obtained. Consider the following definition of A(t), which averages the neighborhood matrices of the valid morph defined by M with the neighborhood matrices of the invalid morph defined by M + 2. Using power m for the neighborhood matrices and a parameter d, equation (14) may be viewed as a morph with a non-integer power for the neighborhood matrices between m and m + 2. In order to obtain a morph which is the closest possible to the linear morph, the maximal parameter d may be chosen by binary search in the interval [0, 1], verifying the morph validity at every step. See Figure 11(b) for a morph generated by this scheme.
3.4 Morphing with an Intermediate Triangulation
This section demonstrates how to find a morph between two triangulations T0 and T1 such that at a given time t = tm the morph interpolates a given triangulation Tm. The triangulations T0, T1 and Tm are compatible and with identical boundaries. A naive solution is to find two convex combination morphs independently: the first, between T0 and Tm, and the second, between Tm and T1. The problem with this is that while the two independent morphs are continuous and smooth, the combined morph will usually have a C1 discontinuity at the intermediate vertices.
In order to find a smooth morph, it is necessary to smoothly interpolate A0, Am and A1 in A(T0). Consequently, the corresponding elements of the three matrices should be smoothly interpolated. Given three points (0, λ_ij(0)), (tm, λ_ij(tm)) and (1, λ_ij(1)) in R^2, it is necessary to find an interpolation λ_ij(t) for all t in [0, 1], see Figure 8. Since the entries of the matrices are barycentric coordinates, the interpolation must satisfy 0 ≤ λ_ij(t) ≤ 1. An interpolation within the bounded region [0,1] x [0,1] may be found as a piecewise Bezier curve, since any Bezier curve is located in the convex hull of its control points.
An important point is that the interpolations for the matrix entries are performed independently. But every row i, 1 ≤ i ≤ n, of A(t), being the barycentric coordinates of the interior vertex i, should sum to unity. Due to the independent interpolations, this might not be the case. Normalizing the elements of each row can solve this problem. The normalized entry λ*_ij(t) is defined as follows:
    λ*_ij(t) = λ_ij(t) / Σ_k λ_ik(t)     (15)
Fig. 8. An interpolation of three points (0, λ_ij(0)), (tm, λ_ij(tm)) and (1, λ_ij(1)), where 0 ≤ tm ≤ 1, inside the bounded region [0,1] x [0,1].
Fig. 9. (top) A smooth morph that interpolates an intermediate triangulation given at an intermediate time tm. (bottom) The morph trajectories. The dashed lines are the edges of the source, target and intermediate triangulations.
Since each λ_ij(t) is smooth, the sum of the λ_ij(t)'s is also smooth. Therefore the normalized λ*_ij(t) is a smooth interpolation. See an example demonstrating a smooth morph in Figure 9.
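A simplified sketch of this construction, assuming numpy, that replaces the piecewise Bezier by a single clamped quadratic Bezier per entry (and treats the morph parameter directly as the Bezier parameter), followed by the row normalization of (15); where clamping takes effect, the intermediate matrix Am is only approximately interpolated:

    import numpy as np

    def interpolate_matrices(A0, Am, A1, tm, t):
        """Entry-wise quadratic Bezier through the three keyframe matrices,
        with control values solved so the curve hits Am at parameter tm."""
        C = (Am - (1 - tm) ** 2 * A0 - tm ** 2 * A1) / (2 * tm * (1 - tm))
        C = np.clip(C, 0.0, 1.0)                 # stay inside [0,1] x [0,1]
        A = (1 - t) ** 2 * A0 + 2 * t * (1 - t) * C + t ** 2 * A1
        return A / A.sum(axis=1, keepdims=True)  # equation (15): rows sum to one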
4. MORPHING WITH LOCAL CONTROL
A well-behaved morphing scheme should have properties like those described in Section 3.2. Trajectories traveled by the interior vertices should be smooth and even (not jerky and not bumpy). It would be useful if the scheme were linear-reducible. When the linear morph is invalid, the natural requirement is to generate a morph as close as possible to the linear one. It would also be useful to be able to control triangle areas in such a way that they transform naturally (uniformly) during the morph. This may help to prevent shrinking/swelling of triangles, which results in an unnatural-looking morph. The schemes presented in this section allow the control of trajectories of the interior vertices, triangle areas, etc. in a local manner.
To find a morph between two triangulations T0 and T1 means to find a curve A(t) for 0 ≤ t ≤ 1 with endpoints in A(T0) and A(T1). We will do this by constructing each row of A(t) (corresponding to each interior vertex) separately.
We define T'(G', P') to be a subtriangulation of a triangulation T(G, P) if T' is a valid triangulation, G' is a subgraph of G, and the coordinates of the corresponding vertices of T' and T are identical. The triangulations T0 and T1 may be decomposed into n subtriangulations in the following manner: each interior vertex corresponds to a subtriangulation that consists of the interior vertex, its neighbors and the edges connecting these vertices, see Figure 10. A subtriangulation defined as above is said to be a star, denoted by Zi. Namely, every star Zi corresponds to the interior vertex i, and T = Z1 ∪ ... ∪ Zn. Let Zi(0), 1 ≤ i ≤ n, be the stars of the triangulation T0; stars Zi(1) are defined analogously for T1. Clearly, Zi(0) and Zi(1) are two isomorphic triangulations, since they are the same subgraph of two isomorphic triangulations T0 and T1. Barycentric coordinates of the interior vertex in a star Zi with respect to the boundary vertices of that star are also barycentric coordinates of the interior vertex i
Fig. 10. A triangulation is decomposed into stars; each interior vertex defines a separate star.
in a triangulation T with respect to its neighbors. Thus all the Zi(0)'s for 1 ≤ i ≤ n together define a neighborhood matrix A0 in A(T0); the Zi(1)'s for 1 ≤ i ≤ n define A1 respectively. In the same manner, we would like to define A(t) for a specific t using stars Zi(t), 1 ≤ i ≤ n. The question is how to find Zi(t) for 0 < t < 1 such that it will define barycentric coordinates with some intermediate values between the barycentric coordinates of Zi(0) and Zi(1). Obviously, a smooth morph of the two stars Zi(0) and Zi(1) should suffice to obtain this. But only a smooth A(t) will define a smooth morph between the triangulations. For that reason it is important to use a method to generate barycentric coordinates of the interior vertices that is at least C1-continuous, such as that described in Section 2.1. A morphing scheme that generates A(t) by morphing separately the stars of two triangulations is said to be a local scheme.
There is a simple way to morph two stars Z(0) and Z(1). First, translate the two stars in such a way that the interior vertices of both stars are at the origin. Then a morph may be defined by linear interpolation of the polar coordinates of the corresponding boundary vertices. One can find a proof for the correctness of this morph in [Shapira and Rappoport 1995; Floater and Gotsman 1999; Surazhsky 1999], where there are also recommendations on how to choose the polar coordinates in order to obtain a valid morph. Note that the validity of star morphs based on the translation of interior vertices to the origin depends only on how the angle components of the polar coordinates of the boundary vertices vary during the morphs. Arbitrary variations in the radial direction of the boundary vertices do not affect the validity of the morph.
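A minimal sketch of this star morph (numpy assumed), with both stars already translated so the interior vertex is at the origin; the angle branch is chosen nearest, so that the angle change of each boundary vertex is less than pi:

    import numpy as np

    def star_morph(boundary0, boundary1, t):
        """Morph a star by linearly interpolating the polar coordinates of
        its boundary vertices; boundary0/boundary1 are (d, 2) arrays."""
        r0 = np.hypot(boundary0[:, 0], boundary0[:, 1])
        r1 = np.hypot(boundary1[:, 0], boundary1[:, 1])
        a0 = np.arctan2(boundary0[:, 1], boundary0[:, 0])
        a1 = np.arctan2(boundary1[:, 1], boundary1[:, 0])
        a1 = a0 + (a1 - a0 + np.pi) % (2 * np.pi) - np.pi  # nearest branch
        r = (1 - t) * r0 + t * r1
        a = (1 - t) * a0 + t * a1
        return np.stack([r * np.cos(a), r * np.sin(a)], axis=1)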
4.1 The Local Linear-Reducible Scheme
The local scheme morphs separately the corresponding stars of T0 and T1 and is based on the translation of the source and target stars to the origin. Two translated stars Zi(0) and Zi(1) are morphed in the following manner. If the linear morph of the two stars is valid, we adopt it. Otherwise, an arbitrary valid morph is taken. It can be the morph that averages the polar coordinates of the boundary vertices, as described in Section 4, or the translated trajectories of the boundary vertices during the convex combination morph. In the latter case the row corresponding to the star Zi in A(t) is equal to the i'th row of A(t) generated by the convex combination morph.
The following theorem shows that this scheme is linear-reducible.
Theorem 4.1. The linear morph of two triangulations T0 and T1 is valid iff the linear morphs of all component stars are valid.
Proof. Validity: If the linear morph between T0 and T1 is valid, then any triangulation T(t) for 0 ≤ t ≤ 1 is compatible with T0 (and T1) and thus may be decomposed into valid stars. Each star morph Zi(t) is then a valid morph between Zi(0) and Zi(1). Conversely, if all the morphs of the stars Zi(t) are valid, then we have a legal neighborhood matrix function A(t) for 0 ≤ t ≤ 1, and thus T(t) is valid.
Linearity: Let p_ij be the coordinates of the vertex i in the star Zj. First, we prove that if the morph between T0 and T1 is linear then the morphs of all stars are linear. We have
    p_i(t) = (1 - t) p_i(0) + t p_i(1)
for 0 ≤ t ≤ 1. Every star Zj(t) is translated in such a way that the interior vertex is at the origin. Thus,
    p_ij(t) = p_i(t) - p_j(t).
Combining both equations:
    p_ij(t) = (1 - t) p_ij(0) + t p_ij(1),
so the morph of every star is linear.
For the opposite direction, we prove that if the morphs of all stars are linear then the morph between T0 and T1 is linear. The linear morphs of the stars imply that:
    p_ij(t) = (1 - t) p_ij(0) + t p_ij(1).     (16)
Let T(t) be the linear morph between T0 and T1, namely,
    p_i(t) = (1 - t) p_i(0) + t p_i(1).
It remains to show that A(t), defined by the stars Zj(t) for 1 ≤ j ≤ n, satisfies A(t) ∈ A(T(t)). We will show that every star Zj(t) defines the same barycentric coordinates of the interior vertex i as the corresponding star j of T(t), and thus A(t) ∈ A(T(t)). Clearly, Zj(t) is isomorphic with the corresponding star j of T(t). The following states that the vertex coordinates of Zj(t) are translated coordinates of the corresponding star j of T(t). Due to the initial translations of the interior vertices to the origin:
    p_ij(0) = p_i(0) - p_j(0),   p_ij(1) = p_i(1) - p_j(1).
We can now express (16) as:
    p_ij(t) = (1 - t)(p_i(0) - p_j(0)) + t (p_i(1) - p_j(1)) = p_i(t) - p_j(t).
Hence, after the translation by p_j(t), the coordinates of every vertex in the star Zj(t) are equal to the coordinates of the corresponding vertex in T(t). Since barycentric coordinates are invariant to a translation (as a special case of affine transformations), the stars Zj(t) for 1 ≤ j ≤ n define A(t) ∈ A(T(t)).
This work presents two linear-reducible schemes: the scheme described in Section 3.3 and the scheme introduced in this section. It is important to emphasize the principal difference between these two schemes. The first scheme approaches the linear morph using neighborhood matrices raised to a power. This approach significantly affects all trajectories of the interior vertices and is a global approach to the linear morph. It allows one to choose a degree of approximation to the linear morph by specifying the power of the neighborhood matrices. However, even the morph closest to the linear morph does not allow each individual trajectory to be as 'linear' as possible. The global convergence may be blocked by a single problematic trajectory (which invalidates the morph), preventing the others from being straightened further, see Figures 11(a) and 11(b). On the other hand, the local linear-reducible scheme, morphing the component stars separately, may affect a group of trajectories of adjacent vertices almost independently of the other trajectories of the morph. However, the local scheme does not attempt to approximate the linear morph for stars for which the linear morph is invalid. Thus, vertices of triangulation regions that cannot be morphed linearly have trajectories similar to those generated by the convex combination morph, and vertices of regions that may be morphed linearly have trajectories very close to straight lines, see Figure 11(c). Knowing the properties of both linear-reducible schemes, it is possible to choose the most suitable for specific triangulations and specific applications.
These two schemes may also be combined to obtain a morph that is closer to the linear morph than a morph generated separately by each of the schemes. First, the scheme of Section 3.3 is used to generate a valid morph TP(t) with a maximal power for the neighborhood matrices. Then the scheme of this section is applied, morphing each of the stars separately. For stars for which the linear morph is invalid, the corresponding trajectories from TP(t) are used. For the rest of the stars, the linear morph is used. The resulting morph is valid, since the morphs of all component stars are valid. The morph in Figure 11(d) was generated using this combined scheme.
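A sketch of how such a combined morph might be assembled row by row; the callbacks (the geometry providers for the linear and power morphs, the barycentric-coordinate routine of Section 2.1, and the per-star linear-validity test of Appendix A) are hypothetical placeholders:

    def combined_A(t, interior, linear_positions, power_positions,
                   barycentric_row, star_linear_is_valid):
        """Row i of A(t) comes from the linearly morphed geometry when the
        linear morph of star i is valid, and from the valid power morph
        TP(t) otherwise."""
        rows = {}
        for i in interior:
            geometry = (linear_positions(t) if star_linear_is_valid(i)
                        else power_positions(t))
            rows[i] = barycentric_row(geometry, i)
        return rows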
Fig. 11. Trajectories of various morphs approaching the linear morph. The dashed lines are the edges of the source and target triangulations. (a) Trajectories of the convex combination morph. (b) Trajectories of the valid morph generated by raising neighborhood matrices to a power; the trajectories are closer to straight lines than the trajectories of the convex combination morph. However, the two lower trajectories could still potentially be straight lines without affecting the validity of the morph. (c) Trajectories of the valid morph generated using the local linear-reducible scheme. The two lower trajectories are straight lines, but the rest are identical to the corresponding trajectories of the convex combination morph. (d) Trajectories of the valid morph generated by a combination of the two linear-reducible schemes. The two lower trajectories are linear. The rest approach straight lines, similar to the corresponding trajectories of the morph with power 1.3.
4.2 Testing Validity of the Linear Morph
The linear-reducible scheme described in the previous section morphs an individual star linearly only if the linear morph is valid. A natural question is how to determine whether the linear morph between Z(0) and Z(1) will be valid or not. Clearly, the naive test which verifies whether Z(1/2) is a valid triangulation is not enough. To verify whether Z(t) is a valid triangulation for all 0 ≤ t ≤ 1 is impossible in practice, since [0, 1] is a continuum. Appendix A presents a robust and fast (linear time complexity) method to perform the test. This method can also be applied to check the validity of linear morphs for general triangulations. According to Theorem 4.1 it is sufficient to check the validity of the linear morphs for all corresponding stars of the two triangulations. The complexity of this test is O(V(T)), namely, linear in the size of the triangulations.
4.3 Improving Triangle Area Behavior
This section describes a method for improving the behavior of the triangle areas during the morph. The triangle areas do not always evolve uniformly during the morph when using the methods described in the previous sections. In fact, the triangle areas may evolve linearly only when the triangulations have a single interior vertex. For a specific triangle i, we would like its area, denoted by Si, to evolve linearly for 0 ≤ t ≤ 1. This cannot be satisfied for all triangles of the triangulation for all 0 < t < 1. Consider the following equation for the area of a triangle with vertices i, j and k:
    2 S = (x_j - x_i)(y_k - y_i) - (x_k - x_i)(y_j - y_i)     (21)
Areas of triangles with two boundary vertices are transformed uniformly only when the third vertex travels linearly with a constant velocity. Areas of triangles with a single boundary vertex are quadratically (not uniformly) transformed, since the two non-boundary vertices travel linearly with constant velocities, by (21).
The problem of the triangle area improvement may be formulated as follows. Denote by Si(t) the desired area of a triangle i that evolves linearly:
    Si(t) = (1 - t) Si(0) + t Si(1)     (22)
Thus, a morph between two triangulations should minimize a cost function such as the total squared deviation of the actual areas from the desired areas over the course of the morph:
    Σ_i ∫ from 0 to 1 of ( S*_i(t) - Si(t) )^2 dt     (23)
where S*_i(t) is the actual area of triangle i at time t.
We now show how to improve the triangle area evolution using the local scheme, such that the resulting morph is at least closer to (23) than the convex combination morph.
Since it is difficult to improve the triangle areas for the entire triangulation, the improvement may be done separately for the stars of the triangulation. This is performed after a morph of a specific star is defined. To preserve the validity of the morph, the φ-components (angles) of the boundary vertices are preserved. The improvement is done by a variation of the boundary vertices in the radial direction relative to the origin.
First, we consider an improvement such that all triangles within the star have exactly the desired areas, namely, the area of each triangle is Si(t). This approach, however, has a serious drawback. While every triangle has its desired area within the star, its shape significantly differs from the shape it assumes in the entire triangulation. Furthermore, since a triangle belongs to a number of stars, its shapes in the different stars might contradict each other considerably. Therefore the resulting morph is very unstable. The trajectories that the interior vertices travel are tortuous. The triangle areas are far from uniform and hardly better than those generated by the convex combination morph.
All this means that the evolution of the triangle areas within the stars must also take into account the triangle shapes. For a specific triangle i, one of its vertices is the interior vertex of the star, which is placed at the origin for 0 ≤ t ≤ 1. The angle adjacent to the interior vertex cannot be changed, because doing so may affect the validity of the morph. Therefore an improvement of the triangle area is achieved by a variation of the lengths of its two edges adjacent to the interior vertex. Every edge adjacent to the interior vertex belongs to exactly two triangles. Consequently, the length of the edge after an improvement for one triangle does not always coincide with the length of the edge within the second triangle. We improve the triangle areas separately for each triangle, and the resulting length of the edge is the average of the two lengths.
We propose a simple method that improves the area of a single triangle and also preserves the triangle shape. This method changes the positions of the triangle vertices in the radial direction such that the triangle area evolves linearly and the lengths of the radial edges maintain the proportions they would have had, had the edge lengths evolved linearly. Let a and b be the lengths of the edges, and θ be the angle between them. The area of the triangle is:
    S = (1/2) a b sin(θ)     (24)
Denote by a(t) and b(t) the lengths of the edges as they evolve linearly:
    a(t) = (1 - t) a(0) + t a(1),   b(t) = (1 - t) b(0) + t b(1)     (25)
The resulting a*(t) and b*(t) are the lengths of the edges such that the triangle area is S(t) defined by (22). In order to find a*(t) and b*(t) it is necessary to solve the following system of equations, which has a unique positive solution:
    (1/2) a*(t) b*(t) sin(θ(t)) = S(t)
to satisfy (22), while preserving the relation between the edges:
    a*(t) / b*(t) = a(t) / b(t)
See an example of a morph generated using this method in Figure 12.
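Solving this system gives a*(t) = sqrt(2 S(t) a(t) / (b(t) sin θ(t))) and b*(t) = sqrt(2 S(t) b(t) / (a(t) sin θ(t))), as in the following sketch (the angle θ at time t is assumed to be supplied by the underlying star morph):

    import math

    def adjusted_edge_lengths(a0, a1, b0, b1, S0, S1, theta_t, t):
        # Linearly evolving quantities: equations (25) and (22).
        a_t = (1 - t) * a0 + t * a1
        b_t = (1 - t) * b0 + t * b1
        S_t = (1 - t) * S0 + t * S1
        # Unique positive solution: area S_t with edge ratio a_t / b_t.
        a_bar = math.sqrt(2 * S_t * a_t / (b_t * math.sin(theta_t)))
        b_bar = math.sqrt(2 * S_t * b_t / (a_t * math.sin(theta_t)))
        return a_bar, b_bar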
5. EXPERIMENTAL RESULTS: MORPHING POLYGONS
In practice, morphing is performed more frequently on planar figures than on planar triangulations. Luckily, many types of planar figures may be embedded in triangulations as a subset of the triangulation edges. Thus the problem of morphing planar figures may be reduced to that of morphing triangulations, and the edges not part of the figure are ignored in the resulting morph. Two popular cases of planar figures are planar polygons and planar stick figures. The former is a cycle of edges, and the latter a
Fig. 12. A morph with good area behavior, generated using the local scheme with the method for area improvement. Compare with Figure 7, showing the convex combination morph between these triangulations.
connected straight line graph. Embedding these types of figures in a triangulation in an efficient manner is a difficult problem in itself, and has been treated separately by us in [Gotsman and Surazhsky 2001] and [Surazhsky and Gotsman 2001]. Here we will assume that these embeddings have been done, and investigate the effect of the various triangulation morphing techniques described in the previous sections on the results.
Figure 13 shows morphs between two polygons, the shapes of the two letters U and S. Figure 14 shows morphs between two stick figures, the shapes of a scorpion and a dragonfly. These examples have been embedded in planar convex tilings [Floater and Gotsman 1999] (the faces are not necessarily triangles), for which all the theory developed in this paper holds too. Being convex, these tilings may be easily triangulated if needed. In both examples the linear morph self-intersects (Figure 13(a) and Figure 14(a)). The convex combination morph is valid, but has an unpleasant behavior (Figure 13(b) and Figure 14(b)). The local scheme, which averages the polar coordinates of star boundary vertices, provides good results when parts of the figures should be rotated during the morph. Figure 13(c) and Figure 14(c) demonstrate that this results in a rather natural morph. Unfortunately, the morph of Figure 13(c) undergoes some exaggerated shrinking. This may be avoided by using the local scheme with area improvement, as in Figure 13(d).
Figure 14(d) shows how to approach the linear morph while still preserving the morph validity, by using the scheme that raises the neighborhood matrices to power 17. Note that the tail travels a path similar to that of the linear morph, but by shrinking avoids self-intersection. Also note that some parts of the rest of the triangulations self-intersect, since the trajectories of all interior vertices approach the linear ones. But since we are interested only in the validity of the stick figure itself, we can ignore the behavior of the other edges.
6. CONCLUSION
We have described a robust approach for morphing planar triangulations. This approach always yields a valid morph, free of self-intersections, based on the only known analytical method for generating morphs guaranteed to be valid [Floater and Gotsman 1999]. The approach, having many degrees of freedom, may be used to produce a variety of morphs and, thus, can be tuned to obtain morphs with many desirable characteristics.
6.1 Discussion
Morphing through an intermediate triangulation poses the following interesting problem. Find a morph through an intermediate triangulation at a given time tm, in which only a subset of the interior vertices have prescribed positions. This contrasts with the scenario treated in Section 3.4, where all vertices of the intermediate triangulation have prescribed positions. While constraining only a subset of the vertices might seem easier than constraining all the vertices, it is actually more difficult, since if all vertices are constrained, the user supplies a complete geometry compatible with the triangulation. Supplying only part of the vertex geometry leaves the algorithm the task of finding compatible geometries for the other vertices, which is difficult, especially since they might not exist.
This (static) problem is interesting in its own right, and has applications in the generation of texture coordinates. It is only recently that Eckstein et al. [2001] have shown how to solve this problem by the introduction of (extraneous) Steiner vertices. The solution with a minimal number of Steiner vertices,
Fig. 13. Morphing simple polygons, the shapes of the two letters S and U: (a) The linear morph is invalid; the polygon self-intersects. (b) The convex combination morph is valid, but unnatural. (c) Morph generated by the local scheme that averages polar coordinates. It behaves naturally, accounting for the rotation of the lower part of the S, but shrinks in an exaggerated manner. (d) Morph generated by the local scheme with area improvement. It is similar to the morph in (c), but with much less shrinking of the shape.
and in particular, with none when it is possible, is still open. In general, the main difficulty stems from the fact that our morphing techniques use neighborhood matrices, which always result in a global solution to the morphing problem, making it virtually impossible to precisely control an individual vertex location (or trajectory).
In Section 3.3, two conjectures are used to generate a morph that approaches the linear morph. Numerous examples support these conjectures, but a proof still eludes us. Since the matrices used to generate morphs by that method are not legal neighborhood matrices, the proof requires a more profound comprehension of the method.
Section 4.3 presents a heuristic for improving the evolution of triangle areas. Further analysis of the correlation between vertex trajectories as well as triangle area behavior within the stars and the behavior of these elements in the triangulation may provide insight into more successful heuristics, perhaps even some optimal approximation to the desired triangle areas.
It is important to make the techniques presented in this work applicable to real-world scenarios. As mentioned in Section 5, the techniques have already been applied to morph simple planar polygons and stick figures by embedding them in triangulations. In practice, the triangulations are built around them. For example, the triangulations in which simple polygons are embedded are constructed by compatibly triangulating the interior of the polygons and an annular region in the exterior of the polygon between the polygon boundary and a fixed convex enclosure. See [Gotsman and Surazhsky 2001; Surazhsky and Gotsman 2001] for more details. These works have yet to be extended to morph planar figures with arbitrary (e.g., disconnected) topologies.
Fig. 14. Morphing between figures of a scorpion and a dragonfly: (a) The linear morph is invalid; the figure self-intersects. (b) The convex combination morph is valid, but unnatural. (c) Morph generated by the local scheme that averages polar coordinates. It behaves naturally, accounting for the rotation of the tail. (d) Morph generated by raising the neighborhood matrices to power 17. Note that the tail travels a path similar to that of the linear morph (a), but it shrinks in order to avoid self-intersection.
Morphing triangulations is usually useful as a means to morph planar figures, and in that case the fixed convex boundary is not restrictive. However, if the objective is to actually morph two triangulations (e.g., for image warping), then a fixed common convex boundary might be restrictive. Fortunately, using the methods of [Gotsman and Surazhsky 2001; Surazhsky and Gotsman 2001], it is possible to overcome this by embedding the source and target triangulations, with different boundaries, in two larger triangulations with a common fixed boundary. In practice this is done by compatibly triangulating the annulus between the original and new boundary, possibly introducing Steiner vertices. See Figure 15.
6.2 Future Work
A challenging research subject would be to extend the techniques of this work to three dimensions,
certainly, starting from an extension of [Tutte 1963]. Furthermore, it would be interesting to address the
problems in [Aronov et al. 1993; Babikov et al. 1997; Souvaine and Wenger 1994; Etzion and Rappoport
1997] for 3D.
A. TESTING VALIDITY OF THE LINEAR MORPH OF A STAR
Let v0 be the interior vertex of the stars, with degree d. The corresponding boundary vertices are indexed without loss of generality as v1, ..., vd in counterclockwise order with respect to the interior vertex. Let
    v_i(t) = (1 - t) v_i(0) + t v_i(1),   1 ≤ i ≤ d,
be the boundary vertex coordinates during the linear morph. We denote by ρ_i(t), φ_i(t) the polar coordinates of the vertex v_i. Let θ_i(t) = φ_{i+1}(t) - φ_i(t) be the angle of the triangle i adjacent to the interior vertex. Note that all calculations with indices are performed modulo d. It is
Fig. 15. Embedding two compatible triangulations (shaded regions) with different boundaries into larger triangulations with a common fixed boundary.
assumed that the polar coordinates of the vertices for t = 0 and t = 1 are chosen in such a way that 0 < θ_i(0) < π, 0 < θ_i(1) < π and |φ_i(1) - φ_i(0)| < π. It is necessary to check that the linear morph preserves the triangle orientations, namely, it should be verified that 0 < θ_i(t) < π for 0 ≤ t ≤ 1. To verify this, it is sufficient to check the extrema of θ_i(t) on [0, 1]. The extremum points may be found by solving the equation θ'_i(t) = 0.
For notational simplicity, we denote a = i and b = i + 1. Due to the linear traversals of the vertices, we have:
    x_a(t) = (1 - t) x_a(0) + t x_a(1),   y_a(t) = (1 - t) y_a(0) + t y_a(1)     (27)
The φ-component of the polar coordinates is expressed as:
    φ_a(t) = arctan( y_a(t) / x_a(t) ) + sign(y_a(t)) (1 - sign(x_a(t))) π / 2     (29)
The next step is to derive φ'_b(t) - φ'_a(t). But the sign(z) function is not convenient for the derivation. To overcome this problem we perform some substitutions for φ(t). Since |φ_a(1) - φ_a(0)| < π, we can rotate both vertices v_a(0) and v_a(1) by the same angle ω_a around the origin such that the vertices are placed in the upper half plane, namely, the y-components of the vertices are positive. We denote the rotated coordinates by (x~, y~), with polar φ-component φ~, so that φ~_a(t) = φ_a(t) + ω_a. Thus, we have y~_a(0) > 0 and y~_a(1) > 0. Since the rotation is an affine transformation, it is easy to see that for the line segment defined by (27):
    x~_a(t) = (1 - t) x~_a(0) + t x~_a(1),   y~_a(t) = (1 - t) y~_a(0) + t y~_a(1).
Clearly, y~_a(t) > 0 for all 0 ≤ t ≤ 1, due to y~_a(0) > 0, y~_a(1) > 0 and (27). Consequently, φ~_a(t), defined as in (29), may now be expressed as:
    φ~_a(t) = arctan( y~_a(t) / x~_a(t) ) + (1 - sign(x~_a(t))) π / 2.
Its derivative may easily be derived, and after simplification we get:
    φ~'_a(t) = ( x~_a(t) y~'_a(t) - y~_a(t) x~'_a(t) ) / ( x~_a(t)^2 + y~_a(t)^2 ).
Due to the rotational invariance of the numerator and the denominator, we can return to the original (not rotated) coordinates:
    φ'_a(t) = ( x_a(t) y'_a(t) - y_a(t) x'_a(t) ) / ( x_a(t)^2 + y_a(t)^2 ).
Controllable Morphing of Compatible Planar Triangulations 21
A similar rotation procedure may be performed for the vertex b, since for b it also holds that |φ_b(1) - φ_b(0)| < π. Therefore we can write:
    θ'(t) = φ'_b(t) - φ'_a(t) = ( x_b y'_b - y_b x'_b ) / ( x_b^2 + y_b^2 ) - ( x_a y'_a - y_a x'_a ) / ( x_a^2 + y_a^2 )     (35)
The denominators, being the squared ρ-components of the polar coordinates, are strictly positive. Hence, θ'(t) = 0 in (35) is equivalent to:
    ( x_b y'_b - y_b x'_b )( x_a^2 + y_a^2 ) - ( x_a y'_a - y_a x'_a )( x_b^2 + y_b^2 ) = 0     (36)
Since x(t) and y(t) are linear in t, each numerator x(t) y'(t) - y(t) x'(t) is a constant, so (36) is a quadratic equation in t and may be solved analytically.
--R
On compatible triangulations of simple polygons.
Constructing piecewise linear homeomorphisms of polygons with holes.
Texture mapping with hard constraints.
On compatible star decompositions of simple polygons.
Parameterization and smooth approximation of surface triangulation.
How to morph tilings injectively.
Polygon morphing using a multiresolution representation.
Guaranteed intersection-free polygon morphing
Introduction to linear and nonlinear programming.
Joint triangulations and triangulation maps.
2D shape blending: an intrinsic solution to the vertex path problem.
A physically based approach to 2D shape blending.
Shape blending using the star-skeleton representation
Constructing piecewise linear homeomorphisms.
Surface interpolation based on new local coordinates.
Morphing planar triangulations.
Morphing stick figures using optimized compatible triangulations.
Image morphing with feature preserving texture.
How to draw a graph.
--TR
Joint triangulations and triangulation maps
A physically based approach to 2-D shape blending
Feature-based image metamorphosis
On compatible triangulations of simple polygons
2-D shape blending
Parametrization and smooth approximation of surface triangulations
Three-dimensional distance field metamorphosis
Foldover-free image warping
How to morph tilings injectively
As-rigid-as-possible shape interpolation
On Compatible Star Decompositions of Simple Polygons
Morphing Stick Figures Using Optimized Compatible Triangulations
--CTR
David Vronay , Shuo Wang, Designing a compelling user interface for morphing, Proceedings of the SIGCHI conference on Human factors in computing systems, p.143-149, April 24-29, 2004, Vienna, Austria
Jeff Danciger , Satyan L. Devadoss , Don Sheehy, Compatible triangulations and point partitions by series-triangular graphs, Computational Geometry: Theory and Applications, v.34 n.3, p.195-202, July 2006
Anna Lubiw , Mark Petrick , Michael Spriggs, Morphing orthogonal planar graph drawings, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.222-230, January 22-26, 2006, Miami, Florida
Hayley N. Iben , James F. O'Brien , Erik D. Demaine, Refolding planar polygons, Proceedings of the twenty-second annual symposium on Computational geometry, June 05-07, 2006, Sedona, Arizona, USA
Vitaly Surazhsky , Joseph (Yossi) Gil, Type-safe covariance in C++, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus | linear morph;morphing;compatible triangulations;self-intersection elimination;controllable morphing;isomorphic triangulations;local control |
502796 | Application of aboutness to functional benchmarking in information retrieval. | Experimental approaches are widely employed to benchmark the performance of an information retrieval (IR) system. Measurements in terms of recall and precision are computed as performance indicators. Although they are good at assessing the retrieval effectiveness of an IR system, they fail to explore deeper aspects such as its underlying functionality and explain why the system shows such performance. Recently, inductive (i.e., theoretical) evaluation of IR systems has been proposed to circumvent the controversies of the experimental methods. Several studies have adopted the inductive approach, but they mostly focus on theoretical modeling of IR properties by using some metalogic. In this article, we propose to use inductive evaluation for functional benchmarking of IR models as a complement of the traditional experiment-based performance benchmarking. We define a functional benchmark suite in two stages: the evaluation criteria based on the notion of "aboutness," and the formal evaluation methodology using the criteria. The proposed benchmark has been successfully applied to evaluate various well-known classical and logic-based IR models. The functional benchmarking results allow us to compare and analyze the functionality of the different IR models. | Summary
In summary, the probabilistic model has the highest degree of potential precision, followed by the threshold vector space model, then the Boolean model and the naive vector space model. This conclusion is consistent with the experimental results. The motivation for this judgment lies in the varying degrees to which they respectively support (or don't support) conservative monotonicity.
In the past decade, a number of logic based IR models have been proposed (see
[Bruza and Lalmas 1996; Lalmas 1998; Lalmas and Bruza 1998] for detailed
surveys). These models can be generally classified into three types: Situation Theory
based, Possible World based, and other types. In what follows, we investigate three
well-known logic IR models.
In the following analyses, the fact of a document D consisting of information
~
carrier i is represented by Dfi i. For example, Guarded Left Compositional
Monotonicity (i.e., postulate 7) means that if a document consisting of i is about k (i.e.
under the guarded condition that i doesn't preclude j (i ^/ j), we can conclude
that a document consisting of i j is about k (i j |= k). In the following
benchmarking exercise, we adopt this interpretation for logical IR models for reasons
of simplicity. For the classical models, we treated the document and the query as
information carriers directly, for there are no term semantic relationships involved in
classical models.
4.1 Situation theory based model
4.1.1 Background
Rijsbergen and Lalmas developed a situation theory based model [Lalmas 1996; Rijsbergen and Lalmas 1996]. In their model, a document and the information it contains are modeled as a situation and types. A situation s supports the type φ, denoted by s |= φ, meaning that φ is a part of the information content of the situation. The flow of information is modeled by constraints (→). Here, we assume φ → φ. A query is one type (a single-type query) or a set of types (a complex query) f.
For a situation s and a set of types f, there are two methods to determine whether s supports f. The first is that s supports f if and only if s supports φ for all types φ ∈ f [Barwise 1989]. Later Lalmas relaxed the condition to represent partial relevance: any situation supports f if it supports at least one type in f [Lalmas 1996].
The task of an IR system is to determine to which extent a document d supports the query f, denoted by d |= f. If d |= f, then the document is relevant to the query with certainty. Otherwise, constraints from the knowledge set will be used to find the flow that leads to the information f. The uncertainty attached to this flow is used to compute the degree of relevance.
A channel links situations. The flow of information circulates in the channel, where the combination of constraints in sequence (c1; c2) and in parallel (c1 || c2) can be represented. Given two situations s1 and s2, s1 |→c s2 means that s1 contains the information about s2 due to the existence of the channel c. A channel c supports the constraint φ → ψ, denoted c |= φ → ψ, if and only if for all situations s1 and s2, if s1 |= φ, s1 |→c s2, and φ → ψ, then s2 |= ψ. The notation s1 |= φ |→c s2 |= ψ stands for c |= φ → ψ and s1 |→c s2, which means that s1 |= φ carries the information that s2 |= ψ, due to channel c. If s1 |= φ |→c s2 |= ψ and s1 = s2, then c is replaced by a special channel 1, and φ logically entails ψ.
4.1.2 Situation Theory Based Aboutness (|=ST)
Let U be the set of documents, S the set of situations, T the set of types, and C
the set of channels. Furthermore, let D ∈ U be a document, and Q a query. Then:
D is modeled as a situation.
Q is modeled as a set of types.
Given two sets of types φ1 and φ2:
D ⇝ φ1 iff (∀ϕ ∈ φ1)(D |= ϕ).
φ1 |=ST φ2 iff (∃c ∈ C)(∀D | D ⇝ φ1)(∃ϕ ∈ φ1)(∃ψ ∈ φ2)(D |= ϕ ↦c D'
|= ψ). Note that D' could be D itself, i.e., c = 1. A more special case is D |= ψ
↦1 D |= ψ. (Aboutness)
φ1 |≠ST φ2 iff there is no c ∈ C such that (∀D | D ⇝ φ1)(∃ϕ ∈ φ1)(∃ψ ∈ φ2)(D |= ϕ ↦c D'
|= ψ). (Non-aboutness)
φ1 →s φ2 iff φ2 ⊆ φ1. (Surface containment)
φ1 →d φ2 iff (∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(ψ1 → ψ2). (Deep containment)
φ1 ⊕ φ2 = φ1 ∪ φ2. (Composition)
A type precludes its negation; e.g., there is no situation s with both s |= <<hit, john, x; 1>>
and s |= <<hit, john, x; 0>>. (Preclusion)
Suppose the negation ¬Q of a set of types Q is the set of the negations of every
component type; then Q ⊥ ¬Q.
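As a rough illustration of this definition, the core of situation theory based aboutness can be sketched in Haskell, assuming types are represented as strings and all channels are collapsed into a single constraint relation; the names reachable and aboutST below are hypothetical helpers of this sketch, not part of the published model:

  import Data.List (nub)

  type Ty = String                 -- an information-carrying type
  type Constraint = (Ty, Ty)       -- a constraint t1 -> t2

  -- All types reachable from t via zero or more constraints;
  -- the zero-step case plays the role of the special channel 1.
  reachable :: [Constraint] -> Ty -> [Ty]
  reachable cs t = go [t]
    where
      go seen =
        let new = nub [t2 | (t1, t2) <- cs, t1 `elem` seen, t2 `notElem` seen]
        in if null new then seen else go (seen ++ new)

  -- phi1 |=ST phi2: some type of phi1 flows to some type of phi2.
  aboutST :: [Constraint] -> [Ty] -> [Ty] -> Bool
  aboutST cs phi1 phi2 =
    or [t2 `elem` reachable cs t1 | t1 <- phi1, t2 <- phi2]

Under this reading, LM and RM are immediate: enlarging φ1 or φ2 only adds candidate pairs, which matches the proofs below.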
4.1.3 Inductive Evaluation
The situation theory based IR model supports R, C, LM, RM, M, C-FA, GLM, GRM,
QLM and QRM. The proofs are provided as follows:
1. R: Reflexivity is supported.
Given D ⇝ φ and Q = φ:
(∃c ∈ C with c = 1)(∀D | D ⇝ φ)(∃ϕ ∈ φ)(D |= ϕ ↦1 D |= ϕ).
∴ φ |=ST φ.
2. C: Containment is supported.
Surface containment is supported.
Given φ1 →s φ2, i.e., φ2 ⊆ φ1:
any ϕ ∈ φ2 also lies in φ1, and (∃c ∈ C with c = 1)(∀D | D ⇝ φ1)(D |= ϕ ↦1 D |= ϕ).
∴ φ1 |=ST φ2.
Deep containment is supported.
Given φ1 →d φ2:
(∃c ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c D' |= ψ2).
∴ φ1 |=ST φ2.
3. RCM: Right Containment Monotonicity is not supported.⁶
Surface containment:
Given φ1 |=ST φ2 and φ2 →s φ3:
(∃c1 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2), and φ3 ⊆ φ2.
But it is not necessary that ψ2 ∈ φ3.
∴ It is not necessary that φ1 |=ST φ3.
Deep containment:
Given φ1 |=ST φ2 and φ2 →d φ3:
(∃c1 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2), and
(∃ϕ1 ∈ φ2)(∃ϕ2 ∈ φ3)(ϕ1 → ϕ2).
But it is not necessary that ϕ1 = ψ2.
∴ It is not necessary that (∃c2 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ϕ2 ∈ φ3)(D |= ψ1 ↦c2 D' |= ϕ2),
i.e., it is not necessary that φ1 |=ST φ3.
⁶ If Q1 and Q2 are single types, RCM would be supported. Here, however, we consider Q as a set of
types, which is the more general case.
4. LM: Left Compositional Monotonicity is supported.
Given φ1 |=ST φ2:
(∃c1 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2). Since
φ1 ⊕ φ3 = φ1 ∪ φ3 and {D | D ⇝ φ1 ⊕ φ3} ⊆ {D | D ⇝ φ1}:
(∀D | D ⇝ φ1 ⊕ φ3)(∃ψ1 ∈ φ1 ⊕ φ3)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2).
∴ φ1 ⊕ φ3 |=ST φ2.
5. RM: Right Compositional Monotonicity is supported.
Given φ1 |=ST φ2:
(∃c1 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2). Since
φ2 ⊕ φ3 = φ2 ∪ φ3, any such ψ2 also lies in φ2 ⊕ φ3:
(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2 ⊕ φ3)(D |= ψ1 ↦c1 D' |= ψ2).
∴ φ1 |=ST φ2 ⊕ φ3.
6. M: Mix is supported.
Given φ1 |=ST φ2 and φ3 |=ST φ2:
(∃c1 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2) and
(∃c2 ∈ C)(∀D | D ⇝ φ3)(∃ϕ1 ∈ φ3)(∃ϕ2 ∈ φ2)(D |= ϕ1 ↦c2 D' |= ϕ2).
Recall that φ1 ⊕ φ3 = φ1 ∪ φ3 and {D | D ⇝ φ1 ⊕ φ3} ⊆ {D | D ⇝ φ1}:
(∀D | D ⇝ φ1 ⊕ φ3)(∃ψ1 ∈ φ1 ⊕ φ3)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2).
∴ φ1 ⊕ φ3 |=ST φ2.
7. C-FA: Context Free And is supported.
Given φ1 |=ST φ2 and φ1 |=ST φ3:
(∃c1 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2) and
(∃c2 ∈ C)(∀D | D ⇝ φ1)(∃ϕ1 ∈ φ1)(∃ϕ2 ∈ φ3)(D |= ϕ1 ↦c2 D' |= ϕ2).
Recalling that φ2 ⊕ φ3 = φ2 ∪ φ3, so φ2 ⊆ φ2 ⊕ φ3 and φ3 ⊆ φ2 ⊕ φ3:
(∃c1 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2 ⊕ φ3)(D |= ψ1 ↦c1 D' |= ψ2)
and
(∃c2 ∈ C)(∀D | D ⇝ φ1)(∃ϕ1 ∈ φ1)(∃ϕ2 ∈ φ2 ⊕ φ3)(D |= ϕ1 ↦c2 D' |= ϕ2).
∴ φ1 |=ST φ2 ⊕ φ3.
8. GLM: Guarded Left Compositional Monotonicity is trivially supported, as LM is
supported.
9. GRM: Guarded Right Compositional Monotonicity is trivially supported, as RM is
supported.
10. QLM: Qualified Left Monotonicity is trivially supported, as LM is supported.
11. QRM: Qualified Right Monotonicity is trivially supported, as RM is supported.
12. NR: Negation Rational is not supported.
Given φ1 |≠ST φ2:
there is no c ∈ C with (∀D | D ⇝ φ1)(∃ϕ ∈ φ1)(∃ψ ∈ φ2)(D |= ϕ ↦c D' |= ψ).
This does not imply that
there is no c ∈ C with (∀D | D ⇝ φ1)(∃ϕ ∈ φ1)(∃ψ ∈ φ2 ⊕ φ3)(D |= ϕ ↦c D' |= ψ).
∴ φ1 |≠ST φ2 ⊕ φ3 cannot be guaranteed.
13. CWA: Closed World Assumption is not supported.
Given φ1 |≠ST φ2, with φ2 ⊥ ¬φ2:
there is no c ∈ C with (∀D | D ⇝ φ1)(∃ϕ ∈ φ1)(∃ψ ∈ φ2)(D |= ϕ ↦c D' |= ψ).
But this does not mean that necessarily (∃c ∈ C)(∀D | D ⇝ φ1)(∃ϕ ∈ φ1)(∃ψ ∈ ¬φ2)
(D |= ϕ ↦c D' |= ψ).
∴ We cannot conclude φ1 |=ST ¬φ2.
4.2 Terminological Logic based model
4.2.1 Background
Meghini et al. proposed an IR model based on Terminological Logic (TL) [Meghini
et al. 1993]. An object-oriented approach is used to represent documents, queries, and
lexical and thesaural knowledge. The representations are not confined to describing the
content by a set of keywords. Instead, the contextual attributes, the layout
characterizations, the structural organization and the information contents of the
documents are also taken into account. TL uses terms as the primary syntactic
expressions, which are concepts (monadic relations), roles (dyadic relations), and
individuals (see [Meghini et al. 1993] for formal definitions). Documents are
modeled as individual constants and a query is a concept denoting a class of
individuals, while a document can be an instance of a set of concepts describing the
properties of the document. The assertion C(i) means that an individual i is an
instance of a concept C. Concepts can be partially ordered by subsumption, which
is specified by terminological postulates comprising the thesaural knowledge base W.
A terminological postulate is an expression of the form of a connotation (<) or a definition
(=). The semantics of TL is defined by the interpretation I over the nonempty set of
individuals U, i.e., the domain of discourse. I is a function that maps individual
constants into elements of U (such that I(i1) ≠ I(i2) whenever i1 ≠ i2), concepts into
subsets of U, and roles into subsets of U × U (see [Meghini et al. 1993] for details). An
algorithm called constraint propagation is also proposed for deriving the complete
constraint set on the knowledge base by using a set of completion rules. This
inference is performed at KB construction time rather than at query time. When a
query C is formulated, it is added to the constraint set and the completion rules are
applied; then, for every individual constant i occurring in the set, C(i) is checked by
simple table-lookup techniques.
4.2.2 Terminological Logic Based Aboutness (|=TL)
Let U be the set of all documents and C an alphabet of concepts. Furthermore, let
D ∈ U be a document, and Q ∈ C a query. The aboutness in Terminological
Logic is then defined as follows:
D is modeled as an individual constant.
Q is modeled as a concept.
For C1, C2 ∈ C:
D ⇝ C1 implies that C1(D) is satisfied, i.e., D ∈ I(C1).
C1 → C2 implies that C1 is subsumed by C2, i.e., I(C1) ⊆ I(C2). (Containment)
Note that if no terminological postulates are involved, this is surface
containment; otherwise it is deep containment. As surface containment
and deep containment have the same semantics under the interpretation I, we
need not distinguish them in the following proofs.
C1 |=TL C2 iff, given any D ⇝ C1 and Q = C2, Q(D) is satisfied, i.e., D ∈ I(Q).
(Aboutness)
C1 |≠TL C2 iff, for every D such that D ⇝ C1 and Q = C2, Q(D) is unsatisfied, i.e.,
D ∉ I(Q). (Non-aboutness)
C1 ⊕ C2 = (and C1 C2).⁷ (Composition)
C1 ⊥ ¬C1, where ¬C1 = (a-not C1) and I(a-not C1) = U \ I(C1).⁸ (Preclusion)
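Since aboutness here reduces to subsumption under the interpretation I, it can be sketched directly with finite extensions; the Haskell fragment below, with hypothetical names, treats a concept simply as the set of individuals it denotes:

  import qualified Data.Set as Set

  type Individual = String
  type Extension  = Set.Set Individual   -- I(C): the individuals denoted by C

  -- Composition and negation, mirroring footnotes 7 and 8:
  -- I(and C1 C2) = I(C1) intersected with I(C2), I(a-not C) = U \ I(C).
  conj :: Extension -> Extension -> Extension
  conj = Set.intersection

  aNot :: Extension -> Extension -> Extension
  aNot universe c = universe `Set.difference` c

  -- Containment C1 -> C2 is subsumption, and aboutness C1 |=TL C2 holds
  -- when every document falling under C1 also falls under C2.
  aboutTL :: Extension -> Extension -> Bool
  aboutTL c1 c2 = c1 `Set.isSubsetOf` c2

This reading makes the proofs below immediate: shrinking the right-hand extension (RM) can break subsumption, while shrinking the left-hand extension (LM) cannot.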
4.2.3 Inductive Evaluation
The TL based model supports R, C, RCM, C-FA, LM, M, GLM, QLM, NR and CWA.
The proofs are shown as follows:
⁷ (and C1 C2 … Cn) denotes the set of those individuals that are denoted by C1 and C2 and … and Cn:
I(and C1 C2 … Cn) = I(C1) ∩ I(C2) ∩ … ∩ I(Cn).
⁸ (a-not C) denotes the set of all individuals that are not denoted by C: I(a-not C) = U \ I(C).
1. R: Reflexivity is supported.
Given D ⇝ C1 and Q = C1:
D ∈ I(C1)
⟹ D ∈ I(Q)
∴ C1 |=TL C1.
2. C: Containment is supported.
Given C1 → C2, D ⇝ C1 and Q = C2:
D ∈ I(C1) ⊆ I(C2)
⟹ D ∈ I(C2)
∴ C1 |=TL C2.
3. RCM: Right Containment Monotonicity is supported.
Given C1 |=TL C2 and C2 → C3:
C1 |=TL C2 means that for every D such that D ⇝ C1, D ∈ I(C2) ⊆ I(C3)
⟹ D ∈ I(C3)
∴ C1 |=TL C3.
4. LM: Left Compositional Monotonicity is supported.
Given C1 |=TL C2:
C1 |=TL C2 means that for every D such that D ⇝ C1, D ∈ I(C2).
∴ For every D such that D ⇝ C1 ⊕ C3, D ∈ I(C2), since I(C1 ⊕ C3) ⊆ I(C1).
∴ C1 ⊕ C3 |=TL C2.
5. RM: Right Compositional Monotonicity is not supported.
Given C1 |=TL C2:
C1 |=TL C2 means that for every D such that D ⇝ C1, D ∈ I(C2). But it is not
necessary that D ∈ I(C2 ⊕ C3), as I(C2 ⊕ C3) is a subset of I(C2).
∴ It is not necessary that C1 |=TL C2 ⊕ C3.
6. M: Mix is supported.
Given C1 |=TL C2 and C3 |=TL C2:
for every D such that D ⇝ C1, D ∈ I(C2), and for every D such that D ⇝ C3, D ∈ I(C2).
∴ For every D such that D ⇝ C1 ⊕ C3, D ∈ I(C2).
∴ C1 ⊕ C3 |=TL C2.
7. C-FA: Context Free And is supported.
Given C1 |=TL C2 and C1 |=TL C3:
for every D such that D ⇝ C1, D ∈ I(C2) and D ∈ I(C3), i.e.,
D ∈ I(C2) ∩ I(C3) = I(C2 ⊕ C3).
∴ C1 |=TL C2 ⊕ C3.
8. GLM: Guarded Left Compositional Monotonicity is trivially supported, as LM is
supported.
9. GRM: Guarded Right Compositional Monotonicity is not supported, for a similar
reason as RM.
10. QLM: Qualified Left Monotonicity is trivially supported as LM is supported.
11. QRM: Qualified Right Monotonicity is not supported, for a similar reason as
RM.
12. NR: Negation Rational is supported.
Given C1 |≠TL C2:
for every D such that D ⇝ C1, D ∉ I(C2); since I(C2 ⊕ C3) ⊆ I(C2), also D ∉ I(C2 ⊕ C3).
∴ C1 |≠TL C2 ⊕ C3.
13. CWA: Closed World Assumption is supported.
Given C1 |≠TL C2, with C2 ⊥ ¬C2:
for every D such that D ⇝ C1, D ∉ I(C2); since I(a-not C2) = U \ I(C2), D ∈ I(¬C2).
∴ C1 |=TL ¬C2.
4.3 Possible world based model
4.3.1 Background
A number of possible world based logical IR models have been proposed. As stated in
[Lalmas and Bruza 1998], these systems are founded on a structure <W, R>, where W
is the set of worlds and R ⊆ W × W is the accessibility relation. They can be classified
according to the choices made for the worlds w ∈ W and the accessibility relation R. For
example, w can be a document (or a variation thereof) and R the similarity between two
documents w1 and w2 [Nie 1989; Nie 1992]; or w is a term and R the similarity
between two terms w1 and w2 [Crestani and van Rijsbergen 1995a; Crestani and
van Rijsbergen 1995b; Crestani and van Rijsbergen 1998]; or w is the retrieval
situation and R the similarity between two situations w1 and w2 [Nie et al. 1995];
etc.
Most of these systems use a technique called imaging. To obtain P(d → q), where
the connective → represents a conditional, we can move the probability from each non-d-
world to a d-world by a shift from the original probability distribution P over the worlds w
to a new probability distribution P_d over their closest worlds w_d where d is true. This
process is called deriving P_d from P by imaging on d. The truth of d → q at w will
then be measured by the truth of q at w_d. To simplify the analysis, let us suppose that
the truth of q in a world is binary⁹ and that the closest world of a world w is unique.¹⁰
P(d → q) can then be computed as follows:

P(d → q) = Σ_{w ∈ W} P(w) · I(w_d, q)

where I(w, q) = 1 if q is true in w and 0 otherwise, and
w_d is the closest world of w where d is true.
Now, we study in detail Crestani and van Rijsbergen's model, which models the terms
as possible worlds, to see some properties of the possible world based approach. In
this model, a term is considered as a vector of documents, while the document and the query
are vectors of terms. The accessibility relations between terms are estimated by the
co-occurrence of terms. P(d → q) can be computed as:

P(d → q) = Σ_{t ∈ T} P(t) · I(t_d, q)     (12)

where I(t, q) = 1 if t occurs in q and 0 otherwise, and
t_d is the closest term of t where d is true (t_d occurs in d).
Generally, d is deemed relevant to q when P(d → q) is greater than a threshold value,
e.g., a positive real number σ. Similar to the vector space model (see Section 3.3.2),
the simplest case is that at least one term occurs in both d and q, or is the
closest d-term of some other term and occurs in q. This case is referred to as the naïve
possible world based model, and the general case as the threshold possible world based
model.¹¹
⁹ Actually, it can be multi-valued in an interval.
¹⁰ There is also an approach, called General Logical Imaging, that does not rely on this assumption.
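The imaging computation of equation (12) is mechanical enough to sketch in Haskell; the fragment below uses hypothetical names and assumes, as above, binary truth and a unique closest term. It transfers each term's prior probability to its closest document term and sums the probability that lands inside the query:

  type Term = String

  -- P(d -> q) by imaging on d: each term's probability moves to t_d,
  -- the closest term of t in which d is true, and counts when t_d occurs in q.
  pImaging :: [(Term, Double)]          -- prior P(t) over the index terms
           -> (Term -> [Term] -> Term)  -- closest t d = t_d (assumed unique)
           -> [Term] -> [Term]          -- document d and query q as term sets
           -> Double
  pImaging prior closest d q =
    sum [p | (t, p) <- prior, closest t d `elem` q]

  -- The naive model accepts any positive transfer of probability into q.
  aboutNaivePW :: [(Term, Double)] -> (Term -> [Term] -> Term)
               -> [Term] -> [Term] -> Bool
  aboutNaivePW prior cl d q = pImaging prior cl d q > 0

  -- The threshold model compares against a cut-off sigma in (0, 1].
  aboutThresholdPW :: Double -> [(Term, Double)] -> (Term -> [Term] -> Term)
                   -> [Term] -> [Term] -> Bool
  aboutThresholdPW sigma prior cl d q = pImaging prior cl d q >= sigma

The two aboutness relations analysed in Sections 4.3.2-4.3.5 differ only in this final comparison.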
4.3.2 Naïve Possible World Aboutness Based on Crestani and van Rijsbergen's
Model (|=NAIVE-PW-CV)
Let U be the set of all documents and T the set of all index terms. Furthermore,
let D ∈ U be a document, Q be a query, and t be a term. The aboutness in the naïve
possible world based model is defined as follows:
D and Q are sets of terms.
D |=NAIVE-PW-CV Q iff P(D → Q) > 0. (Aboutness)
Q1 →s Q2 iff Q2 ⊆ Q1. (Surface containment)
t1 →d t2 iff t1 is the closest term of t2. (Deep containment)
Q1 ⊕ Q2 = Q1 ∪ Q2. (Composition)
Preclusion is foreign to this model.
4.3.3 Inductive Evaluation
This model supports R, C (surface containment), LM, RM, M and C-FA. Proofs are
given as follows:
1. R: Reflexivity is supported.
Given Q = D: every term of Q occurs in D, so P(D → Q) > 0.
∴ D |=NAIVE-PW-CV D.
2. C: Containment:
Surface containment is supported.
Given Q ⊆ D: at least one term occurs in both D and Q, so P(D → Q) > 0.
∴ D |=NAIVE-PW-CV Q.
Deep containment is not supported.
Given t1 ∈ D, t2 ∈ Q, and t1 → t2:
t2 can be imaged to its closest D-world t1.
But this does not imply P(D → Q) > 0, as t1 may not occur in Q.
∴ D |=NAIVE-PW-CV Q cannot be guaranteed.
3. RCM: Right Containment Monotonicity is not supported.
For surface containment:
Given D |=NAIVE-PW-CV Q1 and Q1 →s Q2:
P(D → Q1) > 0, i.e., some term occurring in Q1 receives probability under imaging on D.
But it does not follow that some term of Q2 does as well.
∴ It is not necessary that P(D → Q2) > 0, i.e., it is not necessary that D |=NAIVE-PW-CV Q2.
For deep containment: given P(D → Q1) > 0, t1 occurring in Q1, t1 → t2, and t2
occurring in Q2, this does not mean that there must exist a term which is the
closest term of some term where D is true and which occurs in Q2. Thus, it is not
necessary that D |=NAIVE-PW-CV Q2.
4. LM: Left Compositional Monotonicity is supported.
Given D1 |=NAIVE-PW-CV Q and D = D1 ⊕ D2:
at least one term t is the closest term of some term where D1 is true, with t ∈ Q.
Since D1 ⊕ D2 = D1 ∪ D2, t is also true in D1 ⊕ D2, and t ∈ Q.
∴ D1 ⊕ D2 |=NAIVE-PW-CV Q.
5. RM: Right Compositional Monotonicity is supported.
Given D |=NAIVE-PW-CV Q1 and Q = Q1 ⊕ Q2:
P(D → Q1) > 0, i.e., some term of Q1 receives probability under imaging on D;
since Q1 ⊆ Q1 ⊕ Q2, that term also belongs to Q1 ⊕ Q2.
⟹ P(D → Q1 ⊕ Q2) > 0
∴ D |=NAIVE-PW-CV Q1 ⊕ Q2.
6. M: Mix is trivially supported, as LM is supported.
7. C-FA: Context Free And is trivially supported, as RM is supported.
8. GLM: Guarded Left Compositional Monotonicity is inapplicable, as preclusion is
foreign to this model.
9. GRM: Guarded Right Compositional Monotonicity is inapplicable, as preclusion
is foreign to this model.
10. QLM: Qualified Left Monotonicity is inapplicable, as preclusion is foreign to this
model.
11. QRM: Qualified Right Monotonicity is inapplicable, as preclusion is foreign to
this model.
12. NR: Negation Rational is not supported.
Given D |≠NAIVE-PW-CV Q1:
P(D → Q1) = 0, i.e., no term of Q1 receives probability under imaging on D.
But it is possible that some term of Q2 does, where Q = Q1 ⊕ Q2.
⟹ It is possible that P(D → Q1 ⊕ Q2) > 0.
∴ It is possible that D |=NAIVE-PW-CV Q1 ⊕ Q2.
13. CWA: Close World Assumption is inapplicable, as preclusion is foreign to this
model.
4.3.4 Threshold Possible World Aboutness Based on Crestani and van
Rijsbergen's Model (|=T-PW-CV)
Let U be the set of all documents and T the set of all index terms. Furthermore,
let D ∈ U be a document, Q be a query, and t be a term. The aboutness in this model is
then defined as follows:
D and Q are sets of terms.
D |=T-PW-CV Q iff P(D → Q) ≥ σ, where σ is a positive real number in the interval (0,
1]. (Aboutness)
The mappings of containment, composition and preclusion are the same as those in
Section 4.3.2.
4.3.5 Inductive Evaluation
This model supports R, LM, RM, M, C-FA, and conditionally supports C, RCM and
NR. Proofs are given as follows:
1. R: Reflexivity is supported. The proof is the same as that of R for |=NAIVE-PW-CV.
2. C: Containment:
Surface containment is conditionally supported.
Given Q ⊆ D:
this does not imply P(D → Q) ≥ σ. It depends on the sum of the
probability of the index terms shared by D and Q and of the index terms which
can be imaged to those shared terms. Only under the condition that the
threshold is not greater than that sum is P(D → Q) ≥ σ (i.e.,
D |=T-PW-CV Q) guaranteed.
Deep containment is conditionally supported.
Given t1 ∈ D, t2 ∈ Q, and t1 → t2:
t2 can be imaged to its closest D-world t1.
But this alone does not imply P(D → Q) ≥ σ. Only under the condition
that the threshold is not greater than the sum of the probability of the index terms
shared by D and Q and of the index terms which can be imaged to those shared
terms is D |=T-PW-CV Q guaranteed.
¹¹ The comparison between the naïve and threshold PW based models is similar to that between the naïve
and threshold vector space models.
3. RCM: Right Containment Monotonicity is conditionally supported.
For surface containment:
Given D |=T-PW-CV Q1 and Q1 →s Q2:
P(D → Q1) ≥ σ.
But this does not imply that enough index terms are shared by D and Q2, or can
be imaged to those shared terms, to make P(D → Q2) ≥ σ.
Only under the condition that the threshold is set to be not greater than the
sum of the probability of the index terms shared by D and Q2 and of the index terms
which can be imaged to those shared terms is P(D → Q2) ≥ σ
(i.e., D |=T-PW-CV Q2) guaranteed.
For deep containment: the proof and the condition are similar to those of
surface containment.
4. LM: Left Compositional Monotonicity is supported.
Given D1 |=T-PW-CV Q and D = D1 ⊕ D2:
the number of index terms which are the closest terms of certain terms where
D1 ⊕ D2 is true must be not less than the number of index terms which are the closest
terms of certain terms where D1 is true. This implies that P(D1 ⊕ D2 → Q) ≥ P(D1 → Q) ≥ σ.
∴ D1 ⊕ D2 |=T-PW-CV Q.
5. RM: Right Compositional Monotonicity is supported. The proof is similar to that
of C-FA.
6. M: Mix is supported.
Given D1 |=T-PW-CV Q and D2 |=T-PW-CV Q, the argument for LM applies to D1 ⊕ D2.
7. C-FA: Context Free And is supported.
Given D |=T-PW-CV Q1 and D |=T-PW-CV Q2:
P(D → Q1) ≥ σ and P(D → Q2) ≥ σ.
With Q = Q1 ⊕ Q2 = Q1 ∪ Q2 (so Q1 ⊆ Q and Q2 ⊆ Q),
P(D → Q1 ⊕ Q2) ≥ P(D → Q1) ≥ σ.
∴ D |=T-PW-CV Q1 ⊕ Q2.
8. GLM: Guarded Left Compositional Monotonicity is inapplicable, as preclusion is
foreign to this model.
9. GRM: Guarded right Compositional Monotonicity is inapplicable, as preclusion is
foreign to this model.
10. QLM: Qualified Left Monotonicity is inapplicable, as preclusion is foreign to this
model.
11. QRM: Qualified Right Monotonicity is inapplicable, as preclusion is foreign to
this model.
12. NR: Negation Rational is conditionally supported.
Given D |≠T-PW-CV Q1:
P(D → Q1) < σ.
But it is possible that terms of Q2 occur in D, or can be imaged to terms of D, and
that the number of such terms is large.
⟹ It is possible that P(D → Q1 ⊕ Q2) ≥ σ.
∴ It is possible that D |=T-PW-CV Q1 ⊕ Q2.
Only under the condition that the threshold is set greater than the sum of the
probability of the index terms shared by D and Q and of the index terms which can
be imaged to those shared terms is P(D → Q) < σ (i.e.,
D |≠T-PW-CV Q) guaranteed.
13. CWA: Close World Assumption is inapplicable, as preclusion is foreign to this
model.
4.4 Discussion
Deep containment is not relevant to classical models, unless they are augmented
by thesauri from which deep containment relationships (e.g., that a penguin is a bird) can
be extracted. Logical models, by their very nature, directly handle deep
containment relationships. This means logical models are able to capture
information transformation, e.g., logical imaging in the possible world models.
This is a major advantage of logical models. Moreover, they provide stronger
expressive power; e.g., the TL based model provides a structured representation of
information, while concepts such as situation, type and channel in the situation
theory based model make it more flexible.
The properties of an IR model are largely determined by the matching function it
supports. Two classes of matching function are widely used: containment and
overlapping (naïve and non-zero threshold). The Boolean and TL based models
have similar properties (except that some properties inapplicable to the Boolean
model are supported by the TL based model), due to their common retrieval
mechanism, namely containment, which requires that all the information of the
query must be contained in, or can be transformed to, the information of the
document. The naïve vector space model and the naïve possible world based model
have similar properties (except that deep containment is applicable to the possible
world based model only) due to their simple overlapping retrieval mechanism
(i.e., a document is judged to be relevant if it shares at least one term with the
query). Compared with the Boolean and TL based models, the naïve vector space and
the naïve possible world based models support Left and Right Compositional
Monotonicity, which causes imprecision. The Boolean and TL based models
support Right Containment Monotonicity, which promotes recall, and support the
Negation Rational, which can improve precision. In the naïve vector space and
possible world based models, Right Containment Monotonicity and Negation
Rational are not supported. In summary, there is evidence to support the
assumption that the Boolean and TL based models are more effective models than
the naïve vector space and the naïve possible world based models.¹²
¹² The discussion of the Boolean model is based on the assumption that information composition is
modeled by logical AND, as we adopted in Section 3.1.
The naïve possible world model uses imaging (i.e., imaging from non-D-worlds to
D-worlds) besides simple overlapping. Even though there may exist a containment
relation between a term t1 in the document and another term t2 in the query, if t1 is
not shared by the document and the query, then this transformation from t2 to t1 is
ineffective for establishing relevance. This explains why the naïve possible world
model does not support Containment (deep). The mechanics of imaging is
dependent on a notion of similarity between worlds. Experimental evidence shows
a relation between retrieval performance and the way in which the relationship
between worlds is defined [Crestani and Van Rijsbergen 1998]. As the underlying
framework for inductive evaluation presented in this paper does not explicitly
support a concept of similarity, it can be argued that the mapping of the possible
worlds based model into the inductive framework is incomplete. More will be said
about this point in the conclusions.
The threshold possible world model is (surprisingly) both left and right
monotonic. As a consequence there are some grounds to conclude that this model
would be imprecise in practice, and also be insensitive to document length. As
mentioned in the previous point, retrieval performance depends on how the
similarity between worlds is defined. As both LM and RM are supported, it can be
hypothesized that the baseline performance for the threshold possible world model
would be similar to the naïve overlap model. More sophisticated similarity metrics
between worlds would improve performance above this baseline. Crestani and
van Rijsbergen allude to this point as follows: "… it is possible to obtain higher
levels of retrieval effectiveness by taking into consideration the similarity between
the objects involved in the transfer of probability. However, the similarity
information should not be used too drastically since similarity is often based on
cooccurrence and such a source of similarity information is itself uncertain"
[Crestani and Van Rijsbergen 1998]. When the threshold possible world model
judges a document D relevant to the query Q, this implies that D shares a number
of terms with Q, or that a number of terms can be transformed to the shared terms, so
that P(D → Q) is not less than the threshold σ. The expansion of D or Q can only
increase P(D → Q). This judgment is not true for the threshold vector space model, for
after the expansion of D (or Q), the increase of the space of D (or Q), i.e., the number
of terms in D and Q, may be much larger than the increase of the shared terms.
Thus the degree of overlap may decrease.
The threshold possible worlds model and situation theory (using Lalmas' relaxed
condition) support LM and RM, and the TL based model supports LM. This implies
that these models turn out to be less precise than the probabilistic and threshold vector
space models. This in turn reflects the fact that logical models have not yet shown
the performance hoped for since their inception.
5. Results Summary and Conclusions
5.1 Results Summary

Table 1: Summary of the results of the evaluation.

Postulate      | Boolean | Naïve VS | Threshold VS | Probabilistic | Situation Theory | Terminological Logic | Naïve PW | Threshold PW
R              |    +    |    +     |      +       |      CS       |        +         |          +           |    +     |      +
C (Deep)       |   NA    |    NA    |      NA      |      NA       |        +         |          +           |    -     |      CS
RCM (Surface)  |    +    |    -     |      CS      |      CS       |        -         |          +           |    -     |      CS
RCM (Deep)     |   NA    |    NA    |      NA      |      NA       |        -         |          +           |    -     |      CS
RM             |    -    |    +     |      CS      |      CS       |        +         |          -           |    +     |      +
C-FA           |    +    |    +     |      CS      |      CS       |        +         |          +           |    +     |      +
GLM            |   NA    |    NA    |      NA      |      NA       |        +         |          +           |    NA    |      NA
GRM            |    -    |    NA    |      NA      |      NA       |        +         |          -           |    NA    |      NA
QLM            |   NA    |    NA    |      NA      |      NA       |        +         |          +           |    NA    |      NA
QRM            |    -    |    NA    |      NA      |      NA       |        +         |          -           |    NA    |      NA
NR             |    +    |    -     |      CS      |      CS       |        -         |          +           |    -     |      CS
CWA            |    +    |    NA    |      NA      |      NA       |        -         |          +           |    NA    |      NA

Note: NA means not applicable, CS means conditionally supported, + means supported,
and - means not supported.
5.2 Conclusion
The functional benchmarking exercise presented in this paper indicates that functional
benchmarking is both feasible and useful. It has been used to analyze and compare the
functionality of various classical and logical IR models. Through functional
benchmarking, phenomena occurring in experimental IR research can be explained
from a theoretical point of view. The theoretical analysis could in turn help us better
understand IR and provide guidelines for investigating more effective IR models.
A major point to be drawn here is that IR is conservatively monotonic in nature. It
is important that conservatively monotonic models be studied and developed, as this
would help achieve an optimal tradeoff between precision and recall. The postulates GLM,
GRM, QLM, QRM, etc. guarantee the conservatively monotonic properties, but they
are foreign to some models. Even in those models which support some of the
conservatively monotonic properties, preclusion is only based on the assumption that
an information carrier precludes its negation. Moreover, GLM, QLM and M are
special cases of LM, and GRM, QRM and C-FA are special cases of RM. As such,
if a model supports LM, and GLM and QLM are applicable, then it must also
support GLM and QLM. In this case, these conservative monotonicity properties
have no effect. Therefore, a model supporting conservative monotonicity should
embody the conservatively monotonic properties without also supporting LM and
RM. The probabilistic model and the threshold vector space model show good
performance in practice because they mimic conservative monotonicity.
However, they are dependent on factors set extraneously, e.g., the threshold value. This
is undesirable from a theoretical point of view.
Current logical IR models have the advantage of modeling information transformation
and of their expressive power; however, they are still insufficient to model conservative
monotonicity. A primary reason is that the important concepts upon which conservative
monotonicity is based, such as (deep and surface) containment and information
preclusion, are not sufficiently modeled. For example, the semantics of
information preclusion is not explicitly defined in current logical models. We
simply assumed during the benchmarking that an information carrier precludes its
negation. It is interesting to note that if we add some kind of semantics of
preclusion to the logical IR models, conservative monotonicity can be partially
realized. For example, we could add the following definition to the situation theory based model:
Preclusion:
Given two types ϕ1 and ϕ2 with ϕ1 ⊥ ϕ2, if s1 |= ϕ1 and s2 |= ϕ2, then there does not exist any
channel between s1 and s2.
Left Compositional Monotonicity (LM) is then no longer supported:
Given φ1 |=ST φ2:
(∃c1 ∈ C)(∀D | D ⇝ φ1)(∃ψ1 ∈ φ1)(∃ψ2 ∈ φ2)(D |= ψ1 ↦c1 D' |= ψ2).
Assume LM is supported, i.e., (∀D | D ⇝ φ1 ⊕ φ3)(∃ψ1 ∈ φ1 ⊕ φ3)(∃ψ2 ∈ φ2)(D
|= ψ1 ↦c1 D' |= ψ2).
Consider the case φ2 ⊥ φ3. This implies that for any D |= φ3 and D' |= φ2 there does
not exist a channel between D and D'. This contradicts the above assumption,
because {D | D ⇝ φ1 ⊕ φ3} ⊆ {D | D |= φ3}.
∴ It is not necessary that φ1 ⊕ φ3 |=ST φ2.
On the other hand, RM is not supported, for a similar reason as LM. However,
by applying the conservative forms of monotonicity, QLM and QRM, with their
qualifying non-preclusion conditions, such counterexamples no longer arise.
The above definition of preclusion is simple and just for the purpose of illustration. It
is true that current IR systems are not defined in terms of these concepts, mainly
because they do not view retrieval as an aboutness reasoning process. However,
informational concepts are in the background. Preclusion relationships can be derived
via relevance feedback [Amati and Georgatos 1996; Bruza et al. 1998]. For restricted
domains, information containment relationships can be derived from ontologies and
the like. For example, we have been investigating automatic knowledge discovery
from text corpora based on Barwise and Seligman's theory of information flow
[Barwise and Seligman 1997; Bruza and Song 2001; Song and Bruza 2001]. When
language processing tools have advanced further, the concepts underlying the aboutness
theory could be applied to IR more easily and more directly. More sensitive IR
systems would then result, in particular those which are conservatively monotonic
with respect to composition. Therefore, more investigation of how to achieve
conservative monotonicity in current logical IR models is necessary.
Finally, we reflect on the strengths and weaknesses of the inductive theory of
information retrieval evaluation. The strengths are summarized below:
Enhanced perspective: Matching functions can be characterized qualitatively in
terms of the aboutness properties that are, or are not, implied by the matching
function in question. It may not be obvious what the implications of a given
numeric formulation of a matching function are. The inductive analysis allows some
of these implications to be teased out. By way of illustration, models based on
overlap may imply monotonicity (left or right), which is precision degrading. In
addition, inductive analysis allows one to compute under what conditions a
particular aboutness property is supported. It has been argued that a conservatively
monotonic aboutness relationship promotes effective retrieval. The analysis in
this paper revealed that although both the probabilistic and the thresholded vector
space model support conservative monotonicity, the fundaments of this support
are very different: the thresholded vector space model's support for conservative
monotonicity depends on the overlap between document and query terms relative
to the size of the document, whereas support for conservative monotonicity in the
probabilistic model depends on whether the terms being added have a high enough
probability of occurring in relevant documents. From an intuitive point of view,
the latter condition would seem a sounder basis for support because it is directly
tied to relevance.
Transparency: One may disagree with a given functional benchmark (as
represented by a set of aboutness properties), or with how a given matching
function has been mapped into the inductive framework; however, the
assumptions made have been explicitly stated. This differs from some
experimental studies where the underlying assumptions (e.g., the import of certain
constants) are not, or insufficiently, motivated.
New insights: The use of an abstract framework allows new insights to be
gleaned. Inductive evaluation has highlighted the import of monotonicity in
retrieval functions, and its effect on retrieval performance. Designers of new
matching functions should provide functions that are conservatively monotonic
with respect to the composition of information. More sensitive IR systems would
then result. The current lack of such systems can be attributed in part to the
inability to effectively "operationalize" information preclusion. Most common IR
models are either monotonic or non-monotonic; another class of IR models,
namely those that are conservatively monotonic, is missing. Such models are
interesting for the purpose of producing a symbolic inference foundation for query
expansion and perhaps even relevance feedback.
The weaknesses of an inductive theory for evaluation are:
Difficulty in dealing with weights: Much of the subtlety of IR models remains
buried in the different weighting schemes. Due to its symbolic nature, the inductive
approach can abstract too much, thereby losing sensitivity in the final analysis.
For example, the nuances of document length normalization, term independence
assumptions, and probabilistic weighting schemes are difficult, if not impossible, to
map faithfully into a symbolic, inductive framework.
Difficulties with mapping: For an arbitrary model, it may not be obvious how to
map the model into an inductive framework. This is particularly true for heavily
numeric models such as probabilistic models. It is often the case that such models
do not support many symbolic properties; they are like black holes defying
analysis [Bruza, Song & Wong 2000]. However, analysing the conditions
under which given properties are supported allows us to peek at the edges of the
black hole.
Incompleteness of framework: In order to pursue functional benchmarking, a
sufficiently expressive framework is necessary to represent the salient aspects
of the model in question. This is an issue of completeness. In the inductive
analysis of the possible worlds based models presented in this paper, we have seen
that the notion of similarity inherent to these models cannot be directly translated
into the underlying inductive framework. This suggests that the framework
presented in this paper should be extended. One could also argue that not all
salient aspects of aboutness have been captured by the properties used for the
benchmark. These are criticisms not of inductive evaluation, but of the tools being
used.
It is noteworthy that conventional experimental IR evaluation approaches are good
performance indicators but fail to reflect the functionality of an IR system, i.e., which
types of IR operation the system supports. From an application point of view, the
experimental approaches can serve as performance benchmarks (e.g., TREC). In
practice, they are complementary to the functional benchmark proposed in this paper.
Acknowledgement
This project is partially supported by the Chinese University of Hong Kong's strategic
grant (project ID: 44M5007), and by the Cooperative Research Centres Program
through the Department of the Prime Minister and Cabinet of Australia.
--R
Information Retrieval and Hypertext.
Relevance as deduction: A logical view of information retrieval.
Modern Information Retrieval.
The Situation in Logic.
Investigating aboutness axioms using information fields.
Logic based information retrieval: Is it really worth it?
Preferential models of query by navigation.
Informational Inference Via Information Flow.
Commonsense aboutness for information retrieval.
Fundamental properties of aboutness.
Information Retrieval
An Axiomatic Theory for Information Retrieval.
Information retrieval and situation theory.
Using default logic in information retrieval.
Intelligent text handling using default logic
On the problem of 'aboutness' in document analysis.
Theories of Information and Uncertainty for the Modeling of Information Retrieval: An Application of Situation Theory and Dempster-Shafer's Theory of Evidence
Information retrieval and Dempster-Shafer's theory of evidence
Logical models in information retrieval: Introduction and overview.
The use of logic in information retrieval modeling.
Towards a theory of information.
Comparing boolean and probabilistic information retrieval systems across queries and disciplines.
Text Retrieval and Filtering: Analytic Models of Performance.
On indexing
A model of information retrieval based on terminological logic.
A relevance terminological logic for information retrieval.
An information retrieval model based on modal logic.
Towards a probabilistic modal logic for semantic-based information retrieval
Information retrieval as counterfactual.
What is information discovery about?
A new theoretical framework for information retrieval.
The state of information retrieval: logic and information.
An information calculus for information retrieval.
Retrieval of complex objects using a four-valued logic
A probabilistic terminological logic for modeling information retrieval.
On the role of logic in information retrieval.
Discovering Information Flow using a High Dimensional Conceptual Space.
Fundamental properties of the core matching functions for information retrieval.
Towards a commonsense aboutness theory for information retrieval modeling.
A comparison of text retrieval models.
--TR
Automatic text processing
Towards an information logic
Nonmonotonic reasoning, preferential models and cumulative logics
Information retrieval
Towards a probabilistic modal logic for semantic-based information retrieval
A comparison of text retrieval models
Investigating aboutness axioms using information fields
Probability kinematics in information retrieval
Information calculus for information retrieval
Query expansion using local and global document analysis
Pivoted document length normalization
Retrieval of complex objects using a four-valued logic
A study of aboutness in information retrieval
Comparing Boolean and probabilistic information retrieval systems across queries and disciplines
(invited paper) A new theoretical framework for information retrieval
Information flow
On the role of logic in information retrieval
Logical models in information retrieval
A study of probability kinematics in information retrieval
Text retrieval and filtering
What is information discovery about?
Fundamental properties of aboutness (poster abstract)
Aboutness from a commonsense perspective
Information retrieval and situation theory
Discovering information flow using high dimensional conceptual space
Information Retrieval
Modern Information Retrieval
Informational Inference via Information Flow
Information retrieval and Dempster-Shafer's theory of evidence
Using Default Logic in Information Retrieval
Towards Functional Benchmarking of Information Retrieval Models
Fundamental Properties of the Core Matching Functions for Information Retrieval
Intelligent Text Handling Using Default Logic
A commonsense aboutness theory for information retrieval modeling
--CTR
Tobias Blanke , Mounia Lalmas, Theoretical benchmarks of XML retrieval, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Dawei Song , Jian-Yun Nie, Introduction to special issue on reasoning in natural language information processing, ACM Transactions on Asian Language Information Processing (TALIP), v.5 n.4, p.291-295, December 2006
D. Song, P. D. Bruza, Towards context sensitive information inference, Journal of the American Society for Information Science and Technology, v.54 n.4, p.321-334, February 15, 2003
Hui Fang, ChengXiang Zhai, An exploration of axiomatic approaches to information retrieval, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Raymond Y.K. Lau , Peter D. Bruza , Dawei Song, Belief revision for adaptive information retrieval, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Jian-Yun Nie , Guihong Cao , Jing Bai, Inferential language models for information retrieval, ACM Transactions on Asian Language Information Processing (TALIP), v.5 n.4, p.296-322, December 2006 | functional benchmarking;inductive evaluation;aboutness;logic-based information retrieval |
503036 | From checking to inference via driving and dag grammars. | Abramov and Glück have recently introduced a technique called URA for inverting first order functional programs. Given some desired output value, URA computes a potentially infinite sequence of substitutions/restrictions corresponding to the relevant input values. In some cases this process does not terminate. In the present paper, we propose a new program analysis for inverting programs. The technique works by computing a finite grammar describing the set of all input that relate to a given output. During the production of the grammar, the original program is implicitly transformed using so-called driving steps. Whereas URA is sound and complete, but sometimes fails to terminate, our technique always terminates and is complete, but not sound. As an example, we demonstrate how to derive type inference from type checking. The idea of approximating functional programs by grammars is not new. For instance, the second author has developed a technique using tree grammars to approximate termination behaviour of deforestation. However, for the present purposes it has been necessary to invent a new type of grammar that extends tree grammars by permitting a notion of sharing in the productions. These dag grammars seem to be of independent interest. | INTRODUCTION
The program-transformation techniques collectively called
supercompilation have been shown to effectively handle problems
that partial evaluation and deforestation cannot handle.
The apparent strength of supercompilation stems from
driving the object programs, that is, speculatively unfolding
expressions containing variables, based on the possible executions
described by the program. As an example of driving,
consider the Haskell-like program

main v vs = append (Cons v vs) Nil
append Nil ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

where, by convention, the main definition serves as the interface
to the program. It is not possible to execute this
program because of the variables v and vs in main v vs, but
we can drive the program, resulting in a process tree describing
all possible computations of the program. For the
above program, one possible process tree is
[Process tree (1): the root append (Cons v vs) Nil unfolds to Cons v (append vs Nil), which is generalised to let u = append vs Nil in Cons v u, with branches append vs Nil and Cons v u; the branch append vs Nil is instantiated with vs=Nil, yielding append Nil Nil, and with vs=Cons x xs, yielding append (Cons x xs) Nil, the latter being a renaming of the root and therefore not unfolded further.]
In general, the process tree is constructed as follows. The
root of the process tree is labelled with the right-hand side of the
main definition. New leaves are added to the tree repeatedly
by inspecting the label of some leaf, and either
1. unfolding an outermost call,
2. instantiating a variable that hinders unfolding, or
3. generalising by creating two disjoint labels.
Whenever the label of a leaf is identical to the label of an
ancestor (up to renaming of variables), that leaf is not unfolded
further.
A new (slightly more efficient) program can easily be extracted
from a process tree, namely

main v vs = Cons v (f vs)
f Nil = Nil
f (Cons x xs) = Cons x (f xs)
Our main interest in this paper is to transform a checker
into a description of the input that will satisfy the checker.
That is, given a program that answers true or false, we will
transform the program into a description of the input for
which the checker answers true.
The above activity is generally known as program inversion
when the description of the satisfying input is yet another
program. It is, however, a non-trivial task to perform program
inversion, as the following example shows.
Example 1. Consider a program that checks whether two
lists are identical.

main xs ys = same xs ys
same Nil ys = isnil ys
same (Cons x xs) ys = iscons ys x xs
isnil Nil = true
isnil (Cons y ys) = false
iscons Nil x xs = false
iscons (Cons y ys) x xs = (x = y) & same xs ys

The auxiliary functions isnil and iscons are needed because
we only allow pattern matching on one argument at a time.
The reason for having this restriction is that it associates
every pattern match with a particular function definition.
The result of inverting the above program w.r.t. true should
be another program that produces all pairs of identical lists.
However, it is unreasonable to assume that we can produce
such a program: even though it is easy to imagine a program
that produces an infinite stream of pairs of lists with identical
spines, where should the elements come from? Based on
their type, these elements could be enumerated, but such an
enumeration clearly leads to non-termination in the general
case. What is worse still, the imagined program will not
per se give us a good description of the input set; we can
merely get an indication of the input set by running it and
observing its ever-increasing output.
Instead of inverting a program, one might perform its computation
backwards. A general method to perform inverse
computation has been proposed by Abramov & Glück [1],
namely the Universal Resolving Algorithm (URA). URA
constructs a process tree for the object program, and produces
from the process tree a potentially infinite set of constraints
on the uninstantiated input (variables xs and ys in
the above example). Each constraint describes a set of input
values by means of a substitution/restriction pair. The
produced constraints are pairwise disjoint, in the sense that
the sets they describe are pairwise disjoint. Variables can
appear several times in each constraint, indicating identical
values. For the above example, URA would produce
something like

([xs ↦ Nil, ys ↦ Nil], [])
([xs ↦ Cons x1 Nil, ys ↦ Cons x1 Nil], [])
([xs ↦ Cons x1 (Cons x2 Nil), ys ↦ Cons x1 (Cons x2 Nil)], [])
…                                                          (2)

Here URA would never terminate. The merit of
URA is that it is sound and complete, so if it terminates,
the result precisely captures the input set.
In this paper we will sacrifice soundness to develop an approach
that always produces a finite description of the satisfying
input.
1.2 Overview
In Section 2, we present a formalisation of a certain kind of
context-free grammars, namely dag grammars. For instance,
the above checker can be described by the grammar

S → ⟨Nil, Nil⟩        S → ⟨Cons x S, Cons x S⟩        (3)

where, in the second production, the two Cons roots share
one node labelled x and one node labelled S.
The grammar consists of two productions, each formed as
an acyclic directed graph (also known as a dag). The first
says that an S can be rewritten into a dag consisting of two
single nodes labelled Nil. The second says that an S can
be rewritten into a more complex dag with two roots. The
two productions can be viewed as a finite representation of
(2). Such dag grammars can precisely express the data and
control flow of a program, something which is not possible
with standard tree grammars.
In Section 4, we present an automatic method to extract a
dag grammar from a program. Conceptually, the extraction
works in two steps: first we drive the object program to
produce a process tree; second, we extract a grammar from
this process tree. A precursor for driving the program is
a precise formulation of the semantics of our programming
language, which we present in Section 3. The extracted dag
grammars are approximations of the input set, and in Section
6, we consider various ways to improve the precision of
our method.
As an application, we will in Section 5 show that, given a
type checker and a λ-term, it is possible to derive a type
scheme for the λ-term.
2. DAG GRAMMARS
We denote by {s0, …, sn} the set containing s0, …, sn. For a binary relation ⇀ ⊆ S × T,
we denote by D⇀ the domain S of ⇀. The set of deterministic
binary relations (i.e., partial functions) from S to
T is denoted by S ⇀ T, and such partial functions can be
written as {s0 ↦ t0, …, sn ↦ tn}. By S ↔ T we denote the
set of bijections between S and T.
Definition 1. Given a set S, a graph G over S consists
of a label function lab ∈ N ⇀ S and a connected-by relation
⊸ ⊆ N × N × N × N, where i k⊸ℓ j means that node i's
kth successor is node j and j's ℓth predecessor is i. The
relation should satisfy the properties that there is a label
for each node, and that the successors and predecessors are
numbered consecutively from 0. Formally, lab must be defined
on every node occurring in ⊸, and for each node the successor
indices k and predecessor indices ℓ form initial segments of N.
When the order of successors and predecessors is immaterial,
we simply use ⊸ as a binary relation and write i ⊸ j
whenever ∃k, ℓ ∈ N [i k⊸ℓ j].
In the following, we will use subscripts like labG or ⊸G
when it is otherwise not obvious which graph we refer to.
By ⊸⁺ we denote the transitive closure of ⊸. In general,
we will superscript any binary relation with + to denote its
transitive closure, and we will superscript with * to denote
its reflexive closure.
Definition 2. A graph G is a dag when it contains no
cycles (i.e., ∄i ∈ N [i ⊸⁺ i]). For dags, it is natural to talk
of roots and leaves:
roots G = {i ∈ Dlab | ∄j [j ⊸G i]}
leaves G = {i ∈ Dlab | ∄j [i ⊸G j]}
Two dags D and E are equivalent, denoted D ≡ E, when D
and E can be transformed into one another by renumbering
the nodes, that is, when there is a bijection between their node
sets that preserves both the labelling and the connected-by relation.
The set of dags over S is denoted DS.
Example 2. Each of the two structures (3) depicts an equivalence
class of dags over {S, Nil, Cons, x} (in the sense that
the structure describes a family of equivalent dags) because
the node set (⊆ N) is left unspecified; the order of
successors and predecessors, however, is specified by adopting
the convention that heads and tails of arcs are ordered
from left to right. A concretisation of the right dag is, for
example, a dag on the node set {0, …, 4},
and for it we have that leaves = {3, 4} and roots = {0}.
Definition 3. Given a set Σ, a dag grammar over Σ is
a set of finite dags over Σ. The set of dag grammars over
Σ is denoted GΣ.
Example 3. The two dags (3) comprise a dag grammar
over {S, x, Nil, Cons}.
A dag grammar describes a dag language, in which every dag
has been generated by a number of rewrites on an initial
dag. Before we give the formal definition of rewrites and
languages, an example is in order.
Example 4. The dag consisting of the single node S can be rewritten by the dag
grammar (3) as, e.g.,

S ⟹ Nil ⊗ Nil    or    S ⟹ (Cons x S) ⊗ (Cons x S) ⟹ (Cons x Nil) ⊗ (Cons x Nil) ⟹ …

where the x and S nodes are shared between the two roots of each intermediate dag.
The symbol ⊗ is here used to maintain an order between an
otherwise unordered set of roots.
Below you will find the formal definition of graph grammar
rewrites. The example above hopefully illustrates how such
rewrites work. Informally, a dag can be rewritten if it has a
leaf node i that matches a root node j in a dag in the graph
grammar. By matches, we mean that i and j have the same
label, and that the number of predecessors of i matches the
number of successors of j. The result of the rewrite is that
the leaf i and the root j dissolve, as it were, and the predecessors
of i become the predecessors of the successors of j, in the
right order. The following definitions generalise the above notion of rewriting
to several leaves; we will
need this generality in Section 4.
Definition 4. We denote by S* the set of all finite sequences
over a set S. Both sequences and tuples are denoted
by angles ⟨ ⟩.
Definition 5. Given dags D and E, we define
match D E = {⟨i, j⟩ | i ∈ leaves D, j ∈ roots E, labD i = labE j, and the
number of predecessors of i equals the number of successors of j},
and maxmatch D E as the set of pairs of sequences ⟨I, J⟩ ∈ N* × N* that
enumerate, without repetition, a maximal subset of match D E.
Definition 6. Given sets S and T, if S and T are disjoint,
we write S ⊥ T, and in that case the disjoint union S ⊎
T is defined (and undefined, otherwise). Given a relation
⇀ ⊆ S × T and a set S', we define the removal of S' from
⇀ as ⇀ \ S' = {⟨s, t⟩ ∈ ⇀ | s ∉ S'}. The disjoint
union of two binary relations is defined if their domains are
disjoint.
In the following, we carefully pay attention to the exact set
of nodes (⊆ N) that each particular dag comprises. To avoid
node clashes when we rewrite graphs, we use the fact that,
given a finite dag G', there are infinitely many equivalent
dags G.
Definition 7. Given a dag grammar Γ ∈ GΣ, the one-
step-rewrite relation →Γ ⊆ DΣ × DΣ is defined by: D →Γ D'
iff there are a dag G, equivalent to some dag in Γ and with nodes
disjoint from those of D, and a maximal match ⟨I, J⟩ ∈ maxmatch D G,
such that D' is obtained from D ⊎ G by dissolving the leaves I and the
roots J and connecting the predecessors of each leaf in I to the
successors of the corresponding root in J.
Definition 8. Given a dag grammar Γ and an initial dag
I, the dag language LΓ I is the set of normal forms:
LΓ I = {D | I →Γ* D ∧ ∄E [D →Γ E]}.
We now have a grammar framework that can readily express
sets of input. The next step is to show how a dag grammar
can be extracted from a program. We start by defining
the semantics of the object language, and, on top of the
semantics, we then build the machinery that calculates the
desired grammars.
3. OBJECT LANGUAGE
Our object language is a first-order functional language with
pattern matching and operators for term equality and boolean
and.
Definition 9. Given a set of variables X, function names
F, pattern-function names G, and constructors C (including
the constants true and false), a program p is a set of definitions
d ∈ D of the forms f x1 … xn = t and g (c x1 … xm) y1 … yn = t,
where the terms t ∈ T are built from variables, constructor
applications, function calls, t1 = t2, and t1 & t2. As
usual, we require that no variable occurs more than once in
the left-hand side of a function definition (left-linear), and
that all variables in the right-hand side of a function definition
form a subset of those in the left-hand side (closed).
Finally, we require that the patterns defined for each g-function
are pairwise distinct modulo variable names (non-
overlapping); in particular, a g-function can have at most
one "catch-all" pattern.
In concrete programs we use a Haskell-like syntax, including
data-type definitions. The above syntax can be viewed
as the intermediate language obtained after type checking,
which rules out terms like Nil & Nil and Nil = true.
Definition 10. Given a term t, we denote by V t
the variables of t, collected from left to right. Term t is
linear when no variable occurs more than once in it.
Example 5. V (Triple x y x) = ⟨x, y, x⟩; this term is not linear.
Definition 11. A function θ ∈ X ⇀ T can be regarded
as a substitution T ⇀ T in the usual way, and, for t ∈
T, we will write tθ for the result of such a substitution.
Given a program p, writing, say, g (c x1 … xm) y1 … yn = t
means that in p there is a function definition identical to
g (c x1 … xm) y1 … yn = t, up to variable naming.
We will now give the normal-order semantics of the object
language by defining a small-step relation that takes redexes
into their contracta. We can separate redexes from their
contexts by the following syntactic classes.
[Figure 1: Normal-order semantics — the small-step rules for
function unfolding, pattern matching, term equality, and boolean and.]
Definition 12. Evaluation contexts d (terms with a single
hole ([ ])) and redexes r are defined so that every ground term
that is not a value decomposes as d([r]).
By d([t]) we denote the result of replacing ([ ]) in d with the term
t.
Any ground term t is either a value or can be uniquely decomposed
into a context and a redex (i.e., t = d([r])). This unique-
decomposition property allows us to define the semantics as
below. The semantics imposes left-to-right evaluation, except
for the equality operator, which evaluates both its arguments
until two outermost constructors can be compared.
Definition 13. Given a program p, the small-step semantics
of p is defined by the smallest relation →
on closed terms satisfying the rules of Fig. 1. In the following we will
use subscripts like →p when it is otherwise not obvious which
program we refer to.
To get the full semantics of the language, we simply need
to close the →-relation under all contexts. The semantics is
deterministic:
Lemma 1. If t → t1 and t → t2, then t1 = t2.
Proof sketch. By induction and case analysis of the
syntactic structure of terms: each term either cannot be
decomposed into a context and a redex, or it can be uniquely
decomposed, in which case the redex has at most one small-step
derivation.
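As an illustration of Definitions 9-13, the object language and one of its small-step rules can be transcribed into Haskell. The datatype and the equality rule below are a sketch under the assumptions stated in the text (comparison of outermost constructors, decomposition into pairwise equalities); the constructor names are of our choosing:

  -- Terms of the object language (cf. Definition 9); true and false
  -- are the nullary constructors Con "true" [] and Con "false" [].
  data Term
    = Var String
    | Con String [Term]     -- constructor application
    | FApp String [Term]    -- call to an f-function
    | GApp String [Term]    -- call to a g-function, matching on the first argument
    | Eq Term Term          -- term equality
    | And Term Term         -- boolean and
    deriving (Eq, Show)

  -- The equality redex: once two outermost constructors are available,
  -- compare them and decompose into pairwise equalities joined by And.
  stepEq :: Term -> Maybe Term
  stepEq (Eq (Con c ts) (Con c' ts'))
    | c /= c' || length ts /= length ts' = Just (Con "false" [])
    | null ts                            = Just (Con "true" [])
    | otherwise = Just (foldr1 And (zipWith Eq ts ts'))
  stepEq _ = Nothing

Because the rule only fires when both arguments expose a constructor, it is consistent with the determinism argued in Lemma 1.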
4. EXPLICITATION
The previous section allows us to deal with the execution of
ground terms. To be able to drive a program, however, we
need to handle terms containing variables. We use the following
syntactic class, in combination with the previously
defined ones, to describe all terms that cannot be decomposed
into contexts and redexes.
Definition 14. The syntactic class s of speculatives comprises
the terms whose reduction is blocked by a variable (g-calls and
equality tests whose scrutinised argument is a variable), together
with let-terms let x = t1 in t2.
As for the variables of a let-term, V (let x = t1 in t2) collects
the variables of t1 followed by the variables of t2 other than x.
The use of let-terms will be described below. As in the previous
section, we obtain a unique-decomposition property:
any term t is either a value, can be uniquely decomposed
into a context and a redex, or can be uniquely decomposed
into a context and a speculative (i.e., d([s])). The extra
syntactic class s enables us to identify terms that need to
be driven (i.e., instantiated).
In supercompilation, driving is used to obtain a process tree
from the object program. The process tree serves two orthogonal
purposes: it keeps track of data and control flow
(essentially variable instantiations and recursive calls), and
it provides a convenient way to monitor the driving process
and perform the generalisations needed to ensure a finite
process tree. When a generalisation is needed for a term
t, t is replaced by a term let x = t1 in t2 such that
t = t2{x ↦ t1}. The point of making such a generalisation
is to be able to treat t1 and t2 independently. For an example,
you might want to revisit (1) in the introduction. For a
thorough introduction to these techniques, see Sørensen &
Glück [13].
In our approach to program inversion, called explicitation,
we will assume that the generalisations necessary to ensure
termination have been computed in advance by an off-line
generalisation analysis. To be more specific, we assume that
some terms have been replaced by terms of the form let x =
t1 in t2 in the program of interest. The data
and control flow of the program can then be expressed
by dag grammars, which we will elaborate on later in this
section.
Since we have thus eliminated the need for a process tree, we
will, to keep things simple, drive the object program without
constructing the actual process tree. The construction of
a process tree, although important in practice, is not the
essence of our approach.
Remark 1. We should note here that the existence of an
off-line generalisation analysis is not essential: the explicitation
process described in the following could incorporate
well-known non-termination detection and perform the necessary
generalisations.¹ But because such an incorporation
would induce unnecessary clutter in our presentation, we
will concentrate on the description of how to extract a dag
grammar by driving.
From a bird's-eye view, the explicitation process of a single
branch of speculative execution works as follows. Starting
with the main term of the object program, a dag grammar
is gradually built up by driving the term: each time a
speculative (cf. Def. 14) hinders normal reduction, we perform
an instantiation on both the term and the dag grammar,
such that reduction of the term can proceed and the
grammar reflects the structure of the input variables. In
fact, each driving step of a term results in a new production
in the grammar, such that every term we meet while driving
the program has its own production. When we meet a
term which we have seen before, a loop is introduced into
the grammar, and driving of the term stops. A term t that
cannot be driven any further is compared to the output we
desire, namely non-false output: if t is false, then the result
is an empty grammar; otherwise it is the accumulated
grammar Γ. In general, we parameterise the explicitation
process over a discrimination function h. For our purpose,

h t Γ = ∅ if t = false, and h t Γ = Γ otherwise.     (4)

In the above fashion, we can produce a grammar for every
speculative execution of the program: each possible instantiation
of a term gives rise to a slightly different term and
grammar. The final grammar is then the union of the grammars
produced for all executions.
As an example of an instantiation on a dag grammar, consider the dags D and E below.
[The dag diagrams for D, E, and the rewritten dag D' are garbled in the extraction and omitted.]
D represents a call to same where the arguments are unknown. E represents the body of same: a pattern match on the variable xs and a call to iscons with three arguments (cf. Example 1). The order of the arrows is important, since it defines the data flow. If we view {E} as a dag grammar, D can be rewritten into D' by means of {E}, as shown above. Formally,
Definition 15. Given dags D and E, the dag substitution {E}D is defined as (cf. Def. 5) [...]; D, otherwise.
Substitutions are extended to grammars in the obvious way. To construct dags from terms, we use the following shorthands.
Definition 16. Given a term t, [...] and ?t [...]
The full explicitation process will be explained in detail below. Formally, it is summed up in the following definition.
Definition 17. Given a program p and a function h, the explicitation of p is a dag grammar Eh[[p]], as described in Fig. 2.
The following explanation of the explicitation process carries a number of references that hopefully will guide the understanding of Fig. 2.
Explicitation starts with the main term t of the program, an empty dag grammar, and an empty set of previously seen terms. In each step, we inspect the structure of the current term t, and either stop, or add a production for t to the grammar and make a new step with t added to the seen-before set.
- If t has been seen before, a production for t is already present in the grammar, so we return the accumulated grammar unchanged.
- If a renaming of t has been seen before (captured by a bijection), we introduce recursion in the grammar by adding a production that connects t to the (previously seen) t, respecting the number of variables.
- If a redex can be unfolded using the standard semantics (cf. Defs. 12 & 13), a production linking t to its unfolding is added to the grammar, and the process continues with the unfolding.
- If a generalised term hinders unfolding, that is, t = d([let x = t1 in t2]), then d([t2]) and t1 are processed independently. Therefore, a production is added to the grammar such that t is linked to both d([t2]) and t1. This production will have some dangling roots² (namely x and V(t1) \ V(t2)) which reflect that the data flow is approximated. Because the traces of t1 will tell us nothing about the output of t, the function h (cf. (4)) that is supposed to discriminate between various outputs is replaced by the function λx y. y, which does not discriminate: It always returns the accumulated grammar.
² Unmatched roots are allowed in dag rewrites, cf. Def. 7.
[The formal rules of Fig. 2 are garbled in the extraction and omitted.]
Figure 2: Explicitation by driving
- For a pattern-matching function, the process is continued for all defined patterns. For each pattern q, we substitute the arguments into the matching body and put it back into the context, which in turn receives the instantiation {x ↦ q}, and we add to the grammar a production reflecting this instantiation.
For comparisons, there are three cases.
- The first simply makes sure that variables are on the left side of the comparison.
- That settled, if the right-hand side is another variable, two possibilities are explored: Either the comparison will fail, and hence we replace the speculative with false; or the comparison will succeed, and we replace the speculative with true and update the grammar and the context to reflect that both sides must be identical. In the grammar, the equal variables are coalesced by means of a special symbol r, which is needed to maintain the invariant that the in/out degree of terms corresponds to the number of distinct variables.
- If the right-hand side is an n-ary constructor, either the comparison will fail (as above), or it will succeed, in which case we will propagate that the variable must have a particular outermost constructor, whose children must be tested for equality with the children of the constructor.
- If a boolean expression depends on a variable, then the variable will evaluate to either true or false, and this information is propagated to each branch.
- Terms that cannot be driven any further (i.e., t ∈ V) are fed to the function h, which in turn decides what to do with the accumulated grammar.
Example 6. Let p be the program from Example 1 and h defined as in (4). The explicitation Eh[[p]] is depicted in Fig. 3. The grammar produced is {A, B, D, F, I, J}.
The derived grammars are sub-optimal: Most of the productions are intermediate, in the sense that they can be directly unfolded, or in-lined as it were. We say that a grammar is normalised if it contains no intermediate productions, and we can easily normalise a grammar.
Definition 18. A dag grammar can be normalised, denoted with a hat, as follows: [the formal definition, stated in terms of roots and leaves, is garbled in the extraction].
Example 7. If we normalise the grammar from Example 6, we almost get the grammar we promised in the introduction:
[The normalised productions are garbled in the extraction; they involve same xs ys, Cons, and the bookkeeping symbol r.]
In this particular grammar, the bookkeeping symbol r can be eliminated by yet another normalisation process, if so desired.
Given that the object program contains the necessary generalisations, the lemma below is not surprising. However, if we incorporated a standard on-line termination strategy into the explicitation process, as explained in Remark 1, the following lemma would still hold.
Lemma 2. Any explicitation produces a finite grammar.
Proof. The process tree of the program is finite (since the program contains the necessary generalisations), and thus a finite number of dags will be produced, assuming that the function h is total and computable.
More interestingly, the explicitation returns a dag grammar that contains all solutions. To express such a completeness theorem, we need a precise formulation of the set of terms generated by a dag grammar, as given below. Informally, a term is extracted from a dag simply by following the edges from left to right, collecting the labels, except when the label is a variable or the bookkeeping symbol r: Every variable is given a distinct name, and r is treated as an indirection and left out of the term.
Definition 19. Given a dag grammar, a label S with arity n, and a set of variables g, the term language T^n_S is the set of tuples of terms that can be extracted from the underlying dag language: [...]
We can now relate all possible executions of the program to
the set of terms generated by its explicitation.
[Figure 3 is garbled in the extraction; it showed the derivation steps of Eh[[same xs ys]] and the dag productions built up during driving, including A, B, D, F, I, and J of Example 6.]
Figure 3: Explicitation of the same-program
Theorem 1. Given a program p with main term t, it holds that [...]
5. APPLICATION: TYPE INFERENCE
We now show that a type checking algorithm can be transformed into a type inference algorithm by explicitation. Specifically, we check that a λ-calculus expression has a given type in a given environment, using the standard typing relation. As an example, consider the question "what is the type of λx.λy.λz.(xz)(yz) in the empty environment?". We would expect the answer to be something like
(x → (y → z)) → ((x → y) → (x → z))    (5)
We will now perform explicitation of the above expression. The type checker below takes an encoding of a proof tree P, an expression M, and an environment, and returns true if P is indeed a valid proof of the type of M in that environment. In this encoding,³ λx.λy.λz.(xz)(yz) becomes [...] and the empty environment becomes Empty. If we want to explicitate the above expression, we can add the definition main = [...] to the implementation of the type checker (which we then refer to as the specialised typechecker):
(Only fragments of the listing are recoverable from the extraction; the source line numbers have been stripped.)
data Term = ... | Abs Int Term
data Type = ...
data Proof = Infer Premise Type
data Premise = ... | Intro Proof
expchk (Var x) y z = ... match z x y ...
elimchk (Elim x y) z v w = ...
arrowcheck v x y z w = ... match (Bind x y z) v ...
  ... if (v == x) (w == y) (match z v w) ...
match x y = ...
arrowcheck (Ar x y) z v w = ... y == conclusion z ...
conclusion (Infer x ...) = ...
... if false x ...
³ The implementation assumes a primitive integer type Int.
Explicitation of the specialised typechecker gives a term language
that consists of a single pair
<(Ar (Ar x (Ar y z)) (Ar (Ar x y) (Ar x z)))
(.)>
The first element is indeed the encoding of type (5), and the second is the proof tree, which we have left out.
6. IMPROVING SOUNDNESS
Any automatic method for inversion of Turing-complete programs will have to make a compromise with respect to completeness, soundness and termination. We have made a compromise that will result in possibly unsound results: The explicitation of a program can produce a grammar that generates too many terms. From a practical point of view, we feel that this is the right compromise: It is better to obtain an approximation than to obtain no answer at all. Moreover, explicitation can produce grammars that precisely identify the input set, as seen from the two examples in previous sections, which indicates that explicitation is tight enough for at least some practical problems.
However, it still remains to identify exactly what causes loss of soundness in the general case. Our method is inherently imprecise for three reasons.
- Generalisations cause terms to be split up into separate parts (by means of let-terms). This prevents instantiations in the left branch of the process tree from affecting the right branch, and vice versa.
- Negative information is not taken into account while driving. For example, driving the speculative term x == y will not propagate the fact that x ≠ y to the right branch of the process tree (although the fact that x = y is propagated to the left branch, by means of a substitution). Moreover, the dag grammars cannot represent negative information.
- Occur check is not performed on speculative terms like x == c(t1, ..., tn); that is, a situation where x occurs in some ti is not discovered. Obviously, such an equality could never hold. Similarly, the symmetric property, that x == x never implies x ≠ x, is not used either. (A small sketch of such an occur check follows this list.)
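As promised above, here is a minimal, self-contained sketch (ours, in Python) of the missing occur check, representing a term as either a variable (a string) or a tuple (constructor, child1, ..., childn):

    def occurs(x, t):
        # A variable occurs in a term if it is the term itself or
        # appears in any child of a constructor term.
        if isinstance(t, str):
            return t == x
        return any(occurs(x, child) for child in t[1:])

    # x == Cons(y, x) can never succeed, since x occurs on the right:
    print(occurs("x", ("Cons", "y", "x")))   # True, so the comparison must fail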
The occur check and its counterpart can easily be incorporated into the corresponding rules in Fig. 2. Interestingly, explicitation of the type checker (specialised to a λ-term) would be sound, had these checks been incorporated.
As for negative information, we have described how to handle and propagate such information in another paper [11]. Incorporating negative information as proposed in that paper would be a simple task. Incorporating negative information into the dag grammars, however, would destroy their simplicity and thus severely cripple their usability.
Hence, generalisation and the inability of dag grammars to represent negative information are the true culprits. One could therefore imagine a variant of the explicitation algorithm, let us call it EXP, where one has incorporated the extensions we have suggested above. To improve soundness of EXP, one should target the way generalisations are carried out.
We are now in a position to conjecture the following roundabout relationship between EXP and the URA [1] described in the introduction: Given a program p without generalisations, EXP terminates on p whenever URA terminates on p. Moreover, if the result of URA contains no restrictions (negative information), the resulting grammar of EXP is sound.
7. RELATED WORK
Program inversion
In the literature, most program inversion is carried out by hand. One exception is Romanenko [9], who describes a pseudo-algorithm for inverting a small class of programs written in Refal. In a later paper, Romanenko [10] extends the Refal language with non-deterministic constructs, akin to those seen in logic programming, to facilitate exhaustive search. He also considers inverting a program with respect to a subset of its parameters, so-called partial inversion. We would like to extend our method to handle partial inversion, but as of this writing it is unclear how this should be done.
The only fully automated program inversion method we know of is the one presented by Abramov & Glück [1]. Their method constructs a process tree from the object program, and solutions are extracted from the leaves of the tree in the form of substitution/restriction pairs. The process trees are perfect (Glück & Klimov [3]) in the sense that no information is lost in any branch (completeness), and every branching node divides the information into disjoint sets (soundness). Unfortunately, soundness together with completeness implies non-termination for all but a small class of programs. The method we have presented here sacrifices soundness for termination.
What is common to both our method and the above methods is that they all build upon the groundbreaking work of Turchin and co-workers. The first English paper that contains examples of program inversion by driving seems to be [15]. For more references, see Sørensen & Glück [4].
Grammars
The idea of approximating functional programs by grammars is not new. For instance, Jones [5] presents a flow analysis for non-strict languages by means of tree grammars. Based on this work, the second author has developed a technique using tree grammars to approximate the termination behaviour of deforestation [12]. Tree grammars, however, cannot capture the precise data and control flow: By definition, branches in a tree grammar are developed independently, which renders impossible the kind of synchronisation we need between variables and function calls. The dag grammars we have presented precisely capture the data and control flow for any single speculative computation trace; synchronisation can only be lost when several alternative recursive productions exist in the grammar.
It seems possible to devise a more precise flow analysis based on dag grammars, which could be used along the lines of [12]. Indeed, the way we use dag grammars to trace data and control flow turns out to have striking similarities with the use of size-change graphs, presented by Lee, Jones & Ben-Amram [7]. Size-change graphs approximate the data flow from one function to another by capturing size-changes of parameters.
The dag-rewrite mechanism we have presented turns out to have a lot in common with the fan-in/fan-out rewrites presented by Lamping [6] in the quest for optimal reduction in the λ-calculus. The fan-in/fan-outs represent a complex way of synchronising different parts of a λ-graph, whereas our dag rewrites only perform a simple use-once synchronisation. The rewriting mechanism is also akin to graph substitution in hyper-graph grammars (see Bauderon & Courcelle [2] for a non-category-theory formulation), except that we allow any number of leaves to be rewritten and do not allow inner nodes to be rewritten. Strength-wise, hyper-graph grammars are apparently equivalent to attribute grammars. At present, we are not sure of the generative power of dag grammars.
8. CONCLUSION AND FUTURE WORK
We have developed a method for inverting a given program with respect to a given output. The result of the inversion is a finite dag grammar that gives a complete description of the input: "Running" the dag grammar produces a (possibly infinite) set of terms that will contain all tuples of terms that result in the given output.
The method seems to be particularly useful when the object program is a checker, that is, one that returns either true or false. We have exemplified this by deriving a type scheme from a type checker and a given λ-term, thus effectively synthesising a type inference algorithm. Following this line, one could imagine a program that checks whether a document is valid XML. Inverting this program would result in a dag grammar, which could then be compared to a specification for valid XML, as a means of verifying the program. Inverting the program when specialised to a particular document would result in a Document Type Definition. One could even imagine inverting a proof-carrying-code verifier [8] with respect to a particular program, thus obtaining a proof skeleton for the correctness of the code.
Further experiments with the above kinds of applications should be carried out to establish the strength and usability of our method.
9. REFERENCES
--R
Graph expressions and graph rewritings.
Flow analysis of lazy higher-order functional programs.
An algorithm for optimal lambda calculus reduction.
The size-change principle for program termination.
Compiling with Proofs.
The generation of inverse functions in Refal.
Inversion and metacomputation.
--TR
An algorithm for optimal lambda calculus reduction.
Inversion and metacomputation.
Convergence of program transformers in the metric space of trees.
The size-change principle for program termination.
Introduction to Supercompilation.
On Perfect Supercompilation.
Occam's Razor in Metacomputation.
A Roadmap to Metacomputation by Supercompilation.
Semantic definitions in REFAL and the automatic production of compilers.
The Universal Resolving Algorithm.
Grammar-Based Data-Flow Analysis to Stop Deforestation.
Compiling with proofs.
| supercompilation;program inversion;inference |
503042 | Mixed-initiative mixed computation. | We show that partial evaluation can be usefully viewed as a programming model for realizing mixed-initiative functionality in interactive applications. Mixed-initiative interaction between two participants is one where the parties can take turns at any time to change and steer the flow of interaction. We concentrate on the facet of mixed-initiative referred to as 'unsolicited reporting' and demonstrate how out-of-turn interactions by users can be modeled by 'jumping ahead' to nested dialogs (via partial evaluation). Our approach permits the view of dialog management systems in terms of their support for staging and simplifying interactions; we characterize three different voice-based interaction technologies using this viewpoint. In particular, we show that the built-in form interpretation algorithm (FIA) in the VoiceXML dialog management architecture is actually a (well disguised) combination of an interpreter and a partial evaluator. | INTRODUCTION
Mixed-initiative interaction [8] has been studied for the past several years in the areas of artificial intelligence (AI) planning
[17], human-computer interaction [5], and discourse
analysis [6]. As Novick and Sutton point out [13], it is
'one of those things that people think that they can recognize
when they see it even if they can't define it.' It can
be broadly viewed as a flexible interaction strategy between
participants where the parties can take turns at any time to
change and steer the flow of interaction. Human conversations
are typically mixed-initiative and, interestingly, so are
interactions with some modern computer systems.
Consider the two dialogs in Figs. 1 and 2 with a telephone
pizza delivery service that has voice-recognition capability
(the line numbers are provided for ease of refer-
ence). Both these conversations involve the specification of
a {size,topping,crust} tuple to complete the pizza ordering
procedure but differ in important ways. In Fig. 1, the caller
responds to the questions in the order they are posed by the
system. The system has the initiative at all times (other
than, perhaps, on line 0) and such an interaction is thus
referred to as system-initiated. In Fig. 2, when the system
prompts the caller about pizza size, he responds with information
about his choice of topping instead (sausage; see line
3 of Fig. 2). Nevertheless, the conversation is not stalled and
the system continues with the other aspects of the information
gathering activity. In particular, the system registers
that the caller has specified a topping, skips its default question
on this topic, and repeats its question about the size
(see line 5 of Fig. 2). The caller thus gained the initiative
for a brief period during the conversation, before returning
it to the system. Such a conversation that mixes system-initiated
and user-initiated modes of interaction is said to
be mixed-initiative.
1.1 Tiers of Mixed-Initiative Interaction
It is well accepted that mixed-initiative provides a more
natural and personalized mode of interaction. A matter of
debate, however, are the qualities that an interaction must
possess to merit its classification as mixed-initiative [13]. In fact, determining who has the initiative at a given point in an interaction can itself be a contentious issue! The role of
intention in an interaction and the underlying task goals also
affect the characterization of initiative. We will not attempt
to settle this debate here but a few preliminary observations
will be useful.
One of the basic levels of mixed-initiative is referred to as
unsolicited reporting in [3] and is illustrated in Fig. 2. In this
facet, a participant provides information out-of-turn (in our
case the caller, about his choice of topping). Furthermore,
the out-of-turn interaction is not agreed upon in advance by
the two participants. Novick and Sutton [13] stress that the
unanticipated nature of out-of-turn interactions is important
and that mere turn-taking (perhaps in a hardwired order)
does not constitute mixed-initiative. Finally, notice that in
Fig. 2 there is a resumption of the original questioning task
0 Caller: <calls Joe's Pizza on the phone>
1 System: Thank you for calling Joe's pizza ordering system.
2 System: What size pizza would you like?
3 Caller: I'd like a medium, please.
4 System: What topping would you like on your pizza?
5 Caller: Pepperoni.
6 System: What type of crust do you want?
7 Caller: Uh, deep-dish.
8 System: So that is a medium pepperoni pizza with deep-dish crust. Is this correct?
9 Caller: Yes.
(conversation continues to get delivery and payment information)
Figure 1: Example of a system-directed conversation.
0 Caller: <calls Joe's Pizza on the phone>
1 System: Thank you for calling Joe's pizza ordering system.
2 System: What size pizza would you like?
3 Caller: I'd like a sausage pizza, please.
4 System: Okay, sausage.
5 System: What size pizza would you like?
6 Caller: Medium.
7 System: What type of crust do you want?
8 Caller: Deep-dish.
9 System: So that is a medium sausage pizza with deep-dish crust. Is this correct?
(conversation continues to get delivery and payment information)
Figure 2: Example of a mixed-initiative conversation.
once processing of the unsolicited response is completed. In
other applications, an unsolicited response might shift the
control to a new interaction sequence and/or abort the current
interaction.
Another level of mixed-initiative involves subdialog invo-
cation; for instance, the computer system might not have
understood the user's response and could ask for clarifications
(which amounts to it having the initiative). A final,
sophisticated, form of mixed-initiative is one where participants
negotiate with each other to determine initiative (as
opposed to merely 'taking the initiative') [3]. An example
is given in Fig. 3.
In addition to models that characterize initiative, there
are models for designing dialog-based interaction systems.
Allen et al. [2] provide a taxonomy of such software models
- finite-state machines, slot-and-filler structures, frame-based
methods, and more sophisticated models involving
planning, agent-based programming, and exploiting contextual
information. While mixed-initiative interaction can be
studied in any of these models, it is beyond the scope of this
paper to address all or even a majority of them.
Instead, we concentrate on the view of (i) a dialog as a
task-oriented information assessment activity requiring the
filling of a set of slots, (ii) where one of the participants in
the dialog is a computer system and the other is a human,
and (iii) where mixed-initiative arises from unsolicited reporting
(by the human), involving out-of-turn interactions.
This characterization includes many voice-based interfaces
to information (our pizza ordering dialog is an example)
and web sites modeling interaction by hyperlinks [15]. In
Section 2, we show that partial evaluation can be usefully
viewed as a programming model for such applications. Section
3 presents three different voice-based interaction technologies
and analyzes them in terms of their native support
for mixing initiative. Finally, Section 4 discusses other facets
of mixed-initiative and mentions other software models to
which our approach can be extended.
2. PROGRAMMING A MIXED-INITIATIVE APPLICATION
Before we outline the design of a system such as Joe's Pizza (ref. Figs. 1 and 2), we introduce a notation [7, 11] that captures basic elements of initiative and response in an interaction sequence. The notation expresses the local organization of a dialog [14] as adjacency pairs; for instance, the dialog in Fig. 1 is represented as:
(Ic Rs) (Is Rc) (Is Rc) (Is Rc) (Is Rc)
 0  1    2  3    4  5    6  7    8  9
The line numbers given below the interaction sequence refer to the utterance numbers in Fig. 1. The letter I denotes who has the initiative, the caller (c) or the system (s), and the letter R denotes who provides the response. It is easy to see from this notation that the dialog in Fig. 1 consists of five turns and that the system has the initiative for the last four turns. The initial turn is modeled as the caller having the initiative because he or she chose to place the phone call in the first place. The system quickly takes the initiative after playing a greeting to the caller (which is modeled here as the response to the caller's call). The subsequent four interactions then address three questions and a confirmation, all involving the system retaining the initiative (Is) and the caller in the responding mode (Rc).
(with apologies to O. Henry)
Husband: Della, something interesting happened today that I want to tell you.
Wife: I too have something exciting to tell you, Jim.
Husband: Do you want to go first or shall I tell you my story?
Figure
3: Example of a mixed-initiative conversation where initiative is negotiated.
0 Caller: <calls Joe's Pizza on the phone>
1 System: Thank you for calling Joe's pizza ordering system.
2 System: What size pizza would you like?
3 Caller: I'd like a sausage pizza, medium, and deep-dish.
4 System: So that is a medium sausage pizza with deep-dish crust. Is this correct?
5 Caller: Yes.
(conversation continues to get delivery and payment information)
Figure 4: Example of a mixed-initiative conversation with a frequent customer.
Likewise, the mixed-initiative interaction in Fig. 2 is represented as:
(Ic Rs) (Is (Ic Rs) Is Rc) (Is Rc) (Is Rc)
 0  1    2   3  4   5  6    7  8    9
In this case, the system takes the initiative in utterance 2, but instead of responding to the question of size in utterance 3, the caller takes the initiative, causing an 'insertion' to occur in the interaction sequence (dialog) [11]. The system responds with an acknowledgement ('Okay, sausage.') in utterance 4. This is represented as the nested pair (Ic Rs) above. The system then re-focusses the dialog on the question of pizza size in utterance 5 (thus retaking the initiative). In utterance 6 the caller responds with the desired size (medium) and the interaction proceeds as before from this point.
There are various other possibilities for mixing initiative.
For instance, if a user is a frequent customer of Joe's Pizza,
he might take the initiative and specify all three pizza attributes
on the first available prompt, as shown in Fig. 4.
Such an interaction would be represented as:
(Ic Rs) (Is Rc) (Is Rc)
 0  1    2  3    4  5
Notice that even though this dialog consists of only three
turns it constitutes a complete instance of interaction with
the pizza ordering service.
The notation aids in understanding the structure and staging
of interaction in a dialog. By a post-analysis of all interaction
sequences described in this manner, we find that
utterances 0 and 1 have to proceed in order. Utterances
dealing with selection of {size,topping,crust} can then be
nested in any order and provide interesting opportunities
for mixing initiative. Finally, the utterances dealing with
confirmation of the user's request can proceed only after
choices of all three pizza attributes have been made.
While the notation doesn't directly reflect the computational processing necessary to achieve the indicated structure, it can be used to express a set of requirements for a dialog system design. There are 13 possible interaction sequences (discounting permutations of attributes specified in a given utterance): 1 possibility of specifying everything in one utterance, 6 possibilities of specification in two utterances, and 6 possibilities of specification in three utterances. If we include permutations, there are 24 possibilities (our calculations do not consider situations where, for instance, the system doesn't recognize the user's input and reprompts for information). Of these possibilities, all but one are mixed-initiative sequences.
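These counts can be checked mechanically. The following sketch (ours, in Python) enumerates the ordered ways of staging the three attributes over utterances:

    from itertools import combinations
    from math import factorial, prod

    def ordered_partitions(items):
        # Every way to split a set into an ordered sequence of
        # non-empty blocks (one block per utterance).
        if not items:
            yield ()
            return
        pool = frozenset(items)
        for r in range(1, len(pool) + 1):
            for first in combinations(sorted(pool), r):
                for rest in ordered_partitions(pool - frozenset(first)):
                    yield (frozenset(first),) + rest

    stagings = list(ordered_partitions({"size", "topping", "crust"}))
    print(len(stagings))   # 13, i.e., 1 + 6 + 6 as counted above
    # Counting permutations of attributes within an utterance as distinct:
    print(sum(prod(factorial(len(b)) for b in s) for s in stagings))   # 24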
Many programming models view mixed initiative sequences
as requiring some special attention to be accommodated. In
particular, they rely on recognizing when a user has provided
unsolicited input 1 and qualify a shift-in-initiative as a
'transfer of control.' This implies that the mechanisms that
handle out-of-turn interactions are often different from those
that realize purely system-directed interactions. Fig. 5 (left)
describes a typical software design. A dialog manager is in
charge of prompting the user for input, queueing messages
onto an output medium, event processing, and managing
the overall flow of interaction. One of its inputs is a dialog
script that contains a specification of interaction and a set
of slots that are to be filled. In our pizza example, slots
correspond to placeholders for values of size, topping, and
crust. An interpreter determines the first unfilled slot to
be visited and presents any prompts for soliciting user in-
put. A responsive input requires mere slot filling whereas
unsolicited inputs would require out-of-turn processing (in-
volving a combination of slot filling and simplification). In
turn, this causes a revision of the dialog script. The interpreter
terminates when there is nothing left to process in
the script. While typical dialog managers perform miscellaneous
functions such as error control, transferring to other
scripts, and accessing scripts from a database, the architecture
in Fig. 5 (left) focusses on the aspects most relevant to
our presentation.
Our approach, on the other hand, is to think of a mixed-initiative
dialog as a program, all of whose arguments are
passed by reference and which correspond to the slots comprising
information assessment. As usual, an interpreter in
the dialog manager queues up the first applicable prompt
to an output medium. Both responsive and unsolicited inputs
by a user now correspond (uniformly) to values for
arguments; they are processed by partially evaluating the
program with respect to these inputs. The result of partial
evaluation is another dialog (simplified as a result of user
input) which is handed back to the interpreter. This novel
¹ We use the term 'unsolicited input' here to refer to expected but out-of-turn inputs as opposed to completely unexpected (or out-of-vocabulary) inputs.
[The block diagrams are garbled in the extraction; only the caption is retained.]
Figure 5: Designs of software systems for mixed-initiative interaction. (left) Traditional system architecture, distinguishing between responsive and unsolicited inputs. (right) Using partial evaluation to handle inputs uniformly.
pizza(size, topping, crust)
{
    if (unfilled(size)) {
        /* prompt for size */
    }
    if (unfilled(topping)) {
        /* prompt for topping */
    }
    if (unfilled(crust)) {
        /* prompt for crust */
    }
}
Figure 6: Modeling a dialog script as a program parameterized by slot variables that are passed by reference.
design is depicted in Fig. 5 (right) and a dialog script represented
in a programmatic notation is given in Fig. 6. Partial
evaluation of Fig. 6 with respect to user input will remove
the conditionals for all slots that are filled by the utterance
(global variables are assumed to be under the purview of
the interpreter). The reader can verify that a sequence of
such partial evaluations will indeed mimic the interaction
sequence depicted in Fig. 2 (and any of the other mixed-initiative
sequences).
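To make this concrete, here is a small sketch (ours, in Python; the names SCRIPT, specialize and interpret are hypothetical, not part of any real dialog manager) in which the script of Fig. 6 is modeled as a list of slot-prompt pairs and 'partial evaluation' residualizes away every conditional whose slot the utterance fills:

    SCRIPT = [("size", "What size pizza would you like?"),
              ("topping", "What topping would you like on your pizza?"),
              ("crust", "What type of crust do you want?")]

    def specialize(script, utterance):
        # Residualize the script w.r.t. the slots filled by the utterance.
        return [(slot, prompt) for slot, prompt in script
                if slot not in utterance]

    def interpret(script):
        # Queue the first applicable prompt of the residual script.
        return script[0][1] if script else None

    # Mimicking Fig. 2: the caller answers "sausage" (a topping) out of turn.
    residual = specialize(SCRIPT, {"topping": "sausage"})
    print(interpret(residual))   # What size pizza would you like?
    residual = specialize(residual, {"size": "medium"})
    print(interpret(residual))   # What type of crust do you want?
    residual = specialize(residual, {"crust": "deep-dish"})
    print(interpret(residual))   # None: all slots are filled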
Partial evaluation serves two critical uses in our design.
The first is obvious, namely the processing of out-of-turn
interactions (and any appropriate simplifications to the dialog
script). The more subtle advantage of partial evaluation
is its support for staging mixed-initiative interactions. The
mix-equation [9] holds for every possible way of splitting
inputs into two categories, without enumerating and 'trap-
ping' the ways in which the computations can be staged. For
instance, the nested pair in Fig. 2 is supported as a natural
consequence of our design, not by anticipating and reacting
to an out-of-turn input.
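For reference, the defining property of a partial evaluator mix, written here (in the standard notation of the partial evaluation literature, e.g. [9]) for our two-stage split of user inputs:

    [[p]] [in1, in2]  =  [[ [[mix]] [p, in1] ]] [in2]

Since the equation holds for any split of the slot-filling utterances into in1 and in2, every staging of the interaction is supported without being enumerated in advance.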
Another way to characterize the system designs in Fig. 5
is to say that Fig. 5 (left) makes a distinction of responsive
versus unsolicited inputs, whereas Fig. 5 (right) makes
a more fundamental distinction of fixed-initiative (interpre-
tation) versus mixed-initiative (partial evaluation). In other
words, Fig. 5 (right) carves up an interaction sequence into
(i) turns that are to be handled in the order they are modeled
(by an interpreter), and (ii) turns that can involve mixing
of initiative (handled by a partial evaluator). In the latter
case, the computations are actually used as a representation
of interactions. Since only mixed-initiative interactions involve
multiple staging options and since these are handled
by the partial evaluator, our design requires the least amount
of specification (to support all interaction sequences). For
instance, the script in Fig. 6 models the parts that involve
mixing of initiative and helps realize all of the 13 possible
interaction sequences. At the same time it does not model
the confirmation sequence of Fig. 2 because the caller cannot
confirm his order before selecting the three pizza attributes!
This turn must be handled by straightforward interpretation.
To the best of our knowledge, such a model of partial evaluation for mixed-initiative interaction has not been proposed before. While computational models for mixed-initiative interaction remain an active area of research [8], such work is characterized by keywords such as 'controlling mixed-initiative interaction,' 'knowledge representation and reasoning strategies,' and 'multi-agent co-ordination.' There are even projects that talk about 'integrating' mixed-initiative interaction and partial evaluation to realize an architecture for planning and learning [17]. We are optimistic that our work has the same historical significance as the relation between explanation-based generalization (a learning technique in AI) and partial evaluation established by van Harmelen and Bundy in 1988 [16].
2.1 Preliminary Observations
It is instructive to examine the situations under which a
concept studied in a completely different domain is likened
to partial evaluation. Establishing a resemblance to partial
evaluation is usually done by mapping from an underlying
idea of specialization or customization in the original do-
main. This is the basis in [16] where specialization of domain
theories is equated to partial evaluation of programs.
The motivating theme is one of re-expressing the given program
(domain theory) in an efficient but less general form, by recognizing that parameters have different rates of variation
[9]. This theme has also been the primary demonstrator
in the partial evaluation literature, where inner loops
[The graph is garbled in the extraction; its nodes are call-in, d1, d2, size, topping, crust, d3, and confirm.]
Figure 7: Modeling requirements for unsolicited reporting as traversals of a graph.
[The diagram is garbled in the extraction; an interpretation layer (call-in, d1, pizza, confirm) sits above a partial evaluation layer (d2, size, topping, crust, d3).]
Figure 8: Layering interaction sequences for unsolicited reporting.
in heavily parameterized programs are optimized by partial
evaluation.
Our model is grounded on the (more basic) view of parameters
as involving different times of arrival. By capturing the essence of unsolicited reporting as merely differences
in arrival time (specification time) of aspects, we are able to
use partial evaluation for mixed-initiative interaction. The
property of partial evaluation that is pertinent here is not
just that it is a specialization technique, but also that a
sequence of such specializations corresponds to a valid instance
of interaction with the dialog script. Moreover, the
set of valid sequences of specializations is exactly the set of
interactions to be supported. The program of Fig. 6 can thus
be thought of as a compaction of all interaction sequences
that involve mixing initiative.
2.1.1 Decomposing Interaction Sequences
An important issue to be addressed is the decomposition
of interaction sequences into parts that should be handled
by interpretation and parts that can benefit from partial
evaluation. We state only a few general guidelines here.
Fig. 7 describes the set of valid interaction sequences for
the pizza example as traversals of a graph. Nodes in the
graph correspond to stages of specification of aspects. Thus,
taking the outgoing edge from the call-in node implies that
this stage of the interaction has been completed (the need
for the dummy nodes such as d2 and d3 will become clear
in a moment). The rules of the traversal are to find paths
such that all nodes are visited and every node is visited
exactly once. It is easy to verify that with these rules, Fig. 7
models all possibilities of mixing initiative (turns where the
user specifies multiple pizza aspects can be modeled as a
sequence of graph moves).
Expressing our requirements in a graph such as Fig. 7 reveals
that the signature bushy nature of mixing initiative
is restricted to only a subgraph of the original graph. We
can demarcate the starting (d2) and ending points (d3) of
[The graph is garbled in the extraction; its nodes are d2, eggs, coffee, bakery item, and d3.]
Figure 9: Modeling subdialog invocation.
[The diagram is garbled in the extraction; a first interpretation layer (call-in, d1, breakfast, confirm) sits above a partial evaluation layer (d2, eggs, coffee, bakery item, d3), which sits above a second interpretation layer for the clarification subdialogs.]
Figure 10: Layering interaction sequences for subdialog invocation.
the subgraph and layer it in the context of a larger interaction sequence as shown in Fig. 8. The nodes in the top layer dictate a strict sequential structure, thus they should be modeled by interpretation. The nodes in the bottom layer encapsulate and exhibit the bushy characteristic; they are hence candidates for partial evaluation. The key lesson to be drawn from Fig. 8 is that partial evaluation effectively supports the mixed-initiative subgraph without maintaining any additional state (for instance, after a node has been visited, this information is not stored anywhere to ensure that it is not visited again). In contrast, the interpretation layer has an implicit notion of state (namely, the part of the interaction sequence that is currently being interpreted). The layered design can be implemented as two alternating pieces of code (for interpretation and partial evaluation, respectively) where the interpreter passes control to the partial evaluator for the segments involving pizza attribute specification and resumes after these segments have been evaluated.
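A small sketch (ours, in Python; say and get_utterance are assumed callbacks, not part of any real platform) of this alternation for the pizza example:

    def run_dialog(say, get_utterance):
        # Interpretation layer: the strictly sequential opening turn.
        say("Thank you for calling Joe's pizza ordering system.")
        # Partial-evaluation layer: the bushy {size, topping, crust} subgraph.
        prompts = {"size": "What size pizza would you like?",
                   "topping": "What topping would you like on your pizza?",
                   "crust": "What type of crust do you want?"}
        order = {}
        while prompts:
            say(next(iter(prompts.values())))   # first unfilled slot's prompt
            filled = get_utterance()            # may fill several slots at once
            order.update(filled)
            for slot in filled:
                prompts.pop(slot, None)         # specialize away filled slots
        # Interpretation layer resumes: the confirmation turn.
        say("So that is a %(size)s %(topping)s pizza with %(crust)s crust. "
            "Is this correct?" % order)

Note that the middle loop keeps no record of visited nodes beyond the residual prompts themselves, matching the statelessness of the partial-evaluation layer discussed above.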
For some applications, the alternating layer concept might
need to be extended to more than two layers. Consider a
hotel telephone service for ordering breakfast. Assume that
ordering breakfast involves specifications of {eggs, coffee,
bakery item} tuples. The user can specify these items in any
order, but each item involves a second clarification aspect.
After the user has specified his choice of eggs, a clarification
of 'how do you like your eggs?' might be needed. Similarly,
when the user is talking about coffee, a clarification of 'do you take cream and sugar?' might be required, and so on.
This form of mixed-initiative was introduced in Section 1.1
as subdialog invocation. The set of interaction sequences
that address this requirement can be represented as shown
in Fig. 9 (only the mixed-initiative parts are shown).
[The graph is garbled in the extraction; it is the graph of Fig. 7 with some edges removed.]
Figure 11: A requirement for mixing initiative that cannot be captured by partial evaluation.
In this case, it is not possible to achieve a clean separation of
subgraphs into interpretation and partial evaluation in just
two layers.
One solution is to use three layers as shown in Fig. 10.
If we implement both interpretation layers of this design
by the same code, some form of scope maintenance (e.g., a
stack) will be necessary when moving across the bottom two
layers. Pushing onto the stack preserves the context of the
original interaction sequence and is effected for each step
of partial evaluation. Popping restores this context when
control returns to the partial evaluator. The semantics of
graph traversal remain the same. Once the nodes in the
second and third layers are traversed, interpretation is resumed
at the top layer to confirm the breakfast order. It is
important to note that once again, the semantics of transfer
of control between the interpreter and partial evaluator are
unambiguous and occur at well defined branch points.
The above examples highlight the all-or-nothing role played
by partial evaluation. For a dialog script parameterized in
terms of slot variables, partial evaluation can be used to support
all valid possibilities for mixing initiative, but it cannot
restrict the scope of mixing initiative in any way. In particular
this means that, unlike interpretation, partial evaluation
cannot enforce any individual interaction sequence! Even a
slight deviation in requirements that removes some of the
walks in the bushy subgraph will render partial evaluation
inapplicable.
For instance, consider the graph in Fig. 11 that is the
same as Fig. 7 with some edges removed. Mixing initiative
is restricted to visiting the size, topping, and crust nodes in
a strict forward or reverse order. Partial evaluation cannot
be used to restrict the scope of mixing initiative to just these
two possibilities of specifying the pizza attributes. We can
model this by interpretation but this requires anticipation,
akin to the design of Fig. 5 (left).
This is the essence of partial evaluation as a programming
model; it makes some tasks extremely easy but, like every
other programming methodology, it is not a silver bullet. In
a realistic implementation for mixed-initiative interaction,
partial evaluation will need to be used in conjunction with
other paradigms (e.g., interpretation) to realize the desired
objectives.
In this paper, we concentrate only on the unsolicited reporting facet of mixed initiative, for which the decomposition illustrated by Fig. 8 is adequate. In other words, these are the applications where the all-or-nothing role of partial evaluation is sufficient to realize mixed-initiative interaction.
Our modeling methodology is concerned only with the interaction
staging aspect of dialog management, namely determining
what the next step(s) in the dialog can or should
be. We have not focused on aspects such as response gener-
ation. In the pizza example, responses to successful partial
evaluation (or interpretation) can be easily modeled as side-effects.
[The sketch is garbled in the extraction; it shows a traditional browser window above a partial input specification window.]
Figure 12: Sketch of an interface for mixed-initiative interaction with web sites.
In other cases, we would need to take into account
the context of the particular interaction sequence. The same
argument holds for what is known as tapered prompting [4].
If the prompt for size needs to be replayed (because the user
took the initiative and specified a topping), we might want
to choose a different prompt for a more natural dialog (i.e.,
instead of repeating 'What size pizza would you like?,' we
might prompt as 'You still haven't specified a size. Please
choose among small, medium, or large.'). We do not discuss
these aspects further except to say that they are an important
part of a production implementation of dialog based
systems.
2.1.2 Implementation Technologies
We now turn our attention to implementing our partial
evaluation model for existing information access and delivery
technologies. As stated earlier, our model is applicable
to voice-based interaction technologies as well as web access
via hyperlinks. In [15], we study the design and implementation
of web site personalization systems that allow the
mixing of initiative. In contrast to a voice-based delivery
mechanism, (most) interactions with web sites proceed by
clicking on hyperlinks. For instance, a pizza ordering web
service might provide hyperlinks for choices of size so that
clicking on a link implies a selection of size. This might
refresh the browser to a new page that presents choices for
topping, and so on. Since clicking on links implies a response
to the initiative taken by the web page author, a different
communication mechanism needs to be provided to enable
the user to take the initiative.
Our suggested interface design is shown in Fig. 12. An
extra window is provided for the user to type in his specification
aspects. This is not fundamentally different from a
location toolbar in current browsers that supports the specification
of a URL. Consider that a web page presents hyperlinks
for choices of pizza size. Using the interface in Fig. 12,
the user can either click on her choice of size attribute in
the top window (effectively, responding to the initiative), or
can use the bottom window to specify, say, a topping out-
of-turn. To force an interpretation mode for segments of
an interaction sequence, the bottom window can be made
inactive. Modeling decisions, implementation details, and
experimental results for two web applications developed in
this manner are described in [15]; we refer the reader to this
reference for details.
Our original goal for this paper was to study these concepts
for voice-based interaction technologies and to see if
our model can be used for the implementation of a voice-
[The pipeline diagram is garbled in the extraction; its components are a microphone, an analog-to-digital converter, feature extraction, a speech recognizer (with HMM acoustic models and a language model), natural language processing, and dialog management (with a dialog model), connected by analog signal, digital signal, feature vector, result(s), and response.]
Figure 13: Basic components of a spoken language processing system.
[The two diagrams are garbled in the extraction: (left) a PC with a web browser fetching .html pages from an Internet server over the HTTP protocol; (right) a telephone connected through the telephone network to a voice browser platform fetching .vxml pages from an Internet server over the HTTP protocol.]
Figure 14: (left) Accessing HTML documents via a HTTP web server. (right) Accessing VoiceXML documents via a HTTP web server.
based mixed-initiative application. A variety of commercial
technologies are available for developing voice-based applications; as a first choice of a representational formalism, we
proceeded to use the specification language of the VoiceXML
dialog management architecture [4]. The idea was to
describe dialogs in VoiceXML notation and use partial evaluation
to realize mixed-initiative interaction. After some
initial experimentation we realized that VoiceXML's form
interpretation algorithm (FIA), which processes the dialogs,
provides mixed-initiative interaction using a script very similar
to the one we presented for use with a partial evaluator
(see Fig. 6)! In other words, there is no real advantage to
partially evaluating a VoiceXML specification! This pointed
us to the possibility that perhaps we can identify an instantiation
of our model in VoiceXML's dialog management architecture
and especially, the FIA. The rest of the paper
takes this approach and shows that this is indeed true.
We also identify other implementation technologies where
we can usefully implement a voice-based mixed-initiative
system using our model. We merely identify the opportunities
here and hope to elaborate on a complete implementation
of our model in a future paper.
3. SOFTWARE TECHNOLOGIES FOR
VOICE-BASED MIXED-INITIATIVE
APPLICATIONS
Before we can study the programming of mixed-initiative
in a voice-based application, it will be helpful to understand
the basic architecture (see Fig. 13) of a spoken language
processing system. As a user speaks into the sys-
tem, the sounds produced are captured by a microphone
and converted into a digital signal by an analog-to-digital
converter. In telephone-based systems (the VoiceXML architecture
covered later in the paper is geared toward this
mode), the microphone is part of the telephone handset and
the analog-to-digital conversion is typically done by equipment
in the telephone network (in some cellular telephony
models, the conversion would be performed in the handset
itself).
The next stage (feature extraction) prepares the digital
speech signal to be processed by the speech recognizer. Features
of the signal important for speech recognition are extracted
from the original signal, organized as an attribute
vector, and passed to the recognizer.
Most modern speech recognizers use Hidden Markov Models
(HMMs) and associated algorithms to represent, train,
and recognize speech. HMMs are probabilistic models that
must be trained on an input set of data. A common technique
is to create sets of acoustic HMMs that model phonetic
units of speech in context. These models are created from a
training set of speech data that is (hopefully) representative
of the population of users who will use the system. A language
model is also created prior to performing recognition.
The language model is typically used to specify valid combinations
of the HMMs at a word- or sentence-level. In this
way, the language model specifies the words, phrases, and
sentences that the recognizer can attempt to recognize. The
process of recognizing a new input speech signal is then accomplished
using efficient search algorithms (such as Viterbi
decoding) to find the best matching HMMs, given the constraints
of the language model. The output of the speech
recognizer can take several different forms, but the basic
result is a text string that is the recognizer's best guess of
what the user said. Many recognizers can provide additional
information such as a lattice of results, or an N-best ranked
list of results (in case the later stages of processing wish to
reject the recognizer's top choice). A good introduction to
speech recognition is available in [10].
The stages after speech recognition vary depending on the
application and the types of processing required. Fig. 13
presents two additional phases that are commonly included
in spoken language processing systems in one form or an-
other. We will broadly refer to the first post-recognition
stage as natural language processing (NLP). NLP is a large
field in its own right and includes many sub-areas such as
parsing, semantic interpretation, knowledge representation,
and speech acts; an excellent introduction is available in
Allen's classic [1]. Our presentation in this paper has assumed
NLP support for slot-filling (i.e., determining values
for slot variables from user input).
Slot-filling is commonly achieved by defining parts of a
language model and associating them with slots. The language
model could be specified as a context-free grammar
or as a statistically-based model such as n-grams. Here we
focus on the former: in this approach, slots can be specified
within the productions of a context-free grammar (akin to
an attribute grammar) or they can be associated with the
non-terminals in the grammar.
We will refer to the next phase of processing as simply
'dialog management' (see Fig. 13). In this phase, augmented
results from the NLP stage are incorporated into the dialog
and any associated logic of the application is executed. The
job of the dialog manager is to track the proceedings of the
dialog and to generate appropriate responses. This is often
done within some logical processing framework and a dialog
model (in our case, a dialog script) is supplied as input that
is specific to the particular application being designed. The
execution of the logic on the dialog model (script) results in a
response that can be presented back to the user. Sometimes
response generation is separated out into a subsequent stage.
3.1 The VoiceXML Dialog Management
Architecture
There are many technologies and delivery mechanisms
available for implementing Fig. 13's basic components. A
popular implementation can be seen in the VoiceXML dialog
management architecture. VoiceXML is a markup language
designed to simplify the construction of voice-response applications
[4]. Initiated by a committee comprising AT&T,
IBM, Lucent Technologies, and Motorola, it has emerged as
a standard in telephone-based voice user interfaces and in
delivering web content by voice. We will hence cover this
architecture in detail.
The basic idea is to describe interaction sequences using
a markup notation in a VoiceXML document. As the VoiceXML
specification [4] indicates, a VoiceXML document constitutes
a conversational finite state machine and describes
a sequence of interactions (both fixed- and mixed-initiative
are supported). A web server can serve VoiceXML documents
using the HTTP protocol (Fig. 14, right), just as
easily as HTML documents are currently served over the
Internet (Fig. 14, left). In addition, voice-based applications
require a suitable delivery platform, illustrated by a
telephone in Fig. 14 (right). The voice-browser platform
in Fig. 14 (right) includes the VoiceXML interpreter which
processes the documents, monitors user inputs, streams mes-
sages, and performs other functions expected of a dialog
management system. Besides the VoiceXML interpreter,
the voice-browser platform typically includes a speech rec-
ognizer, a speech synthesizer, and telephony interfaces to
help realize these aspects of voice-based interaction.
Dialog specification in a VoiceXML document involves organizing
a sequence of forms and menus. Forms specify a
set of slots (called field item variables) that are to be filled
by user input. Menus are syntactic shorthands (much like
a case construct); since they involve only one field item
variable (argument), there are no opportunities for mixing
initiative. We do not discuss menus further in this paper.
An example VoiceXML document for our pizza application
is given in Fig. 15.
As shown in Fig. 15, the pizza dialog consists of two forms.
The first form (welcome) merely welcomes the user and transitions
to the second. The place_order form involves four
fields (slot variables) - the first three cover the pizza attributes
and the fourth models the confirmation variable (re-
call the dialogs in Section 1). In particular, prompts for
soliciting user input in each of the fields are specified in
Fig. 15.
Interactions in a VoiceXML application proceed just like
a web application except that instead of clicking on a hyper-link
(to goto a new state), the user talks into a microphone.
The VoiceXML interpreter then determines the next state
to move to. Any appropriate responses (to user input) and
prompts are delivered over a speaker. The core of the interpreter
is a so-called form interpretation algorithm
that drives the interaction. In Fig. 15, the fields provide
for a fixed-initiative, system-directed interaction. The FIA
simply visits all fields in the order they are presented in the
document. Once all fields are filled, a check is made to ensure
that the confirmation was successful; if not, the fields
are cleared (notice the clear namelist tag) and the FIA
will proceed to prompt for the inputs again, starting from
the first unfilled field - size.
The form in Fig. 15 is referred to as a directed one since
the computer has the initiative at all times and the fields
are filled in a strictly sequential order. To make the interaction
mixed-initiative (with respect to size, crust, and
topping), the programmer merely has to specify a form-level
grammar that describes possibilities for slot-filling from a
user utterance. An example form-level grammar file (size
toppingcrust.gram) is given in Fig. 16. The productions
for sizetoppingcrust cover all possibilities of filling slot
variables from user input, including multiple slots filled by
a given utterance, and various permutations of specifying
pizza attributes. The grammar is associated with the dialog
script by including the line:
just before the definition of the first field (size) in Fig. 15.
The form-level grammar contains productions for the various
choices available for size, topping, and crust and also
qualifies all possible parses for a given utterance (modeled
by the non-terminal sizetoppingcrust). Any valid combination
of the three pizza aspects uttered by the user (in any
order) is recognized and the appropriate slot variables are
instantiated. To see why this also achieves mixed-initiative,
let us consider the FIA in more detail.
Fig. 17 reproduces the salient aspects of the FIA relevant
for our discussion. Compare the basic elements of the FIA
to the stages in Fig. 5 (right). The Select phase corresponds
to the interpreter, the Collect phase gathers the user input,
and actions taken in the Process phase mimic the partial
evaluator. Recall that 'programs' (scripts) in VoiceXML
can be modeled by finite-state machines, hence the mechanics
of partial evaluation are considerably simplified and just
amount to filling the slot and tagging it as filled. Since the
<?xml version="1.0"?>
<vxml version="1.0">
<!- pizza.vxml
A simple pizza ordering demo to illustrate some basic elements
of VoiceXML. Several details have been omitted from this demo
to help make the basic ideas stand out. ->
<form id="welcome">
<block name="block1">
Thank you for calling Joe's pizza ordering system.
<goto next="#place_order" />
<form id="place_order">
<field name="size">
What size pizza would you like?
<field name="topping">
What topping would you like on your pizza?
<field name="crust">
What type of crust do you want?
<field name="verify">
So that is a <value expr="size"/> <value expr="topping"/> pizza
with <value expr="crust"/> crust.
Is this correct?
yes | no
<if cond="verify=='no'">
<clear namelist="size topping verify crust"/>
Sorry. Your order has been canceled.
<else/>
Thank you for ordering from Joe's pizza.
Figure
15: Modeling the pizza ordering dialog in a VoiceXML document.
public
{this.size=$} {this.crust=$} {this.topping=$} |
{this.topping=$} {this.size=$} {this.crust=$} |
{this.crust=$} {this.topping=$} {this.size=$};
small | medium | large;
sausage | pepperoni | onions | green peppers;
regular | deep dish | thin;
Figure
form-level grammar to be used in conjunction with the script in Fig. 15 to realize mixed-initiative
interaction.
While (true)
{
Select the first form item with an unsatisfied guard condition
(e.g., unfilled)
If no such form item, exit
// COLLECT PHASE
Queue up any prompts for the form item
Get an utterance from the user
// PROCESS PHASE
foreach (slot in user's utterance)
{
if (slot corresponds to a field item) {
copy slot values into field item variables
set field item's `just_filled' flag
some code for executing any 'filled' actions triggered
Figure
17: Outline of the form interpretation algorithm (FIA) in the VoiceXML dialog management archi-
tecture. Adapted from [4].
public
{this.crust=$} |
{this.topping=$};
small | medium | large;
sausage | pepperoni | onions | green peppers;
regular | deep dish | thin;
Figure
alternative form-level grammar to realize mixed-initiative interaction with the script in Fig. 15.
FIA repeatedly executes while there are unfilled form items
remaining, the processing phase (Process) is e#ectively parameterized
by the form-level grammar file. In other words,
the form-level grammar file not only enables slot filling, it
also implicitly directs the staging of interactions for mixed-
initiative. When the user specifies 'peperroni medium' in an
utterance, not only does the grammar file enable the recognition
of the slots they correspond to (topping and size), it
also enables the FIA to simplify these slots (and mark them
as 'filled' for subsequent interactions).
The form-level grammar file shown in Fig. 16 (which is
also a specification of interaction staging) may make Voice-
XML's design appear overly complex. In reality, however,
we could have used the vanilla form-level grammar file in
Fig. 18. While helping to realize mixed-initiative, the new
form-level file (as does our model) also allows the possibility
of utterances such as 'pepperoni pepperoni,' or even,
'pepperoni sausage!' Suitable semantics for such situations
(including the role of side-e#ects) can be defined and accommodated
in both the VoiceXML model and ours. It
should thus be obvious to the reader that VoiceXML's dialog
management architecture is actually implementing a mixed
evaluation model (for conversational finite state machines),
comprising interpretation and partial evaluation.
The VoiceXML specification [4] refers to the form-level
file as a 'grammar file,' when it is actually also a specification
of staging. Even though the grammar file serves the
role of a language model in a voice application, we believe
that recognizing its two functionalities is important in understanding
mixed-initiative system design. A case in point
is our study of personalizing interaction with web sites [15]
(see also Fig. 12). There is no requirement for a 'grammar
file,' as there is usually no ambiguity about user clicks and
typed-in keywords. Specifications in this application thus
serve to associate values with program variables and do not
explicitly capture the staging of interactions. The advantageous
of partial evaluation for interaction staging are thus
obvious.
3.2 Other Implementation Technologies
VoiceXML's FIA thus includes native support for slot fill-
ing, slot simplification, and interaction staging. All of these
are functions enabled by partial evaluation in our model.
Table
contrasts two other implementation approaches in
terms of these aspects. In a purely slot-filling system, native
support is provided for simplifying slots from user utterances
but extra code needs to be written to model the
control logic (for instance, 'the user still didn't specify his
choice of size, so the question for size should be repeated.
Several commercial speech recognition vendors provide APIs
that operate at this level. In addition, many vendors support
low-level APIs that provide basic access to recognition
results (i.e., text strings) but do not perform any additional
processing. We refer to these as recognizer-only APIs. They
serve more as raw speech recognition engines and require significant
programming to first implement a slot-filling engine
and, later, control logic to mimic all possible opportunities
for staging. Examples of the two latter technologies can be
seen in the commercial telephone-based speech recognition
market (from companies such as Nuance, SpeechWorks, and
IBM). The study presented in this paper suggests a systematic
way by which their capabilities for mixed-initiative interaction
can be assessed. Table 1 also shows that in the latter
two software technologies, our partial evaluation model
can be implemented to achieve mixed-initiative interaction.
4. DISCUSSION
Our work makes contributions to both partial evaluation
and mixed-initiative interaction. For the partial evaluation
community, we have identified a novel application where
the motivation is the staging of interaction (rather than
speedup). Since programs (dialogs) are used as specifications
of interaction, they are written to be partially eval-
uated; partial evaluation is hence not an 'afterthought' or
an optimization. An interesting research issue is: Given
(i) a set of interaction sequences, and (ii) addressable information
(such as arguments and slot variables), determine
(iii) the smallest program so that every interaction sequence
can be staged in a model such as Fig. 5 (right). As stated
earlier, this requires algorithms to automatically decompose
and 'layer' interaction sequences into those that are best addressed
in the interpreter and those that can benefit from
representation and specialization by the partial evaluator.
For mixed-initiative interaction, we have presented a programming
model that accommodates all possibilities of stag-
ing, without explicit enumeration. The model makes a distinction
between fixed-initiative (which has to be explicitly
programmed) and mixed-initiative (specifications of which
can be compacted for subsequent partial evaluation). We
have identified instantiations of this model in VoiceXML
and slot-filling APIs. We hope this observation will help
system designers gain additional insight into voice application
design strategies.
It should be recalled that there are various facets of mixed-initiative
that are not addressed in this paper. Besides sub-dialog
invocations, VoiceXML's design can support dialogs
such as shown in Fig. 19. Caller 1's request, while demonstrating
initiative, implies a dialog with an optional stage
(which cannot be modeled by partial evaluation). Such a
situation has to be trapped by the interpreter, not by partial
evaluation. Caller 2 does specify a staging, but his staging
poses constraints on the computer's initiative, not his own.
Such a 'meta-dialog' facet [5] requires the ability to jump
out of the current dialog; VoiceXML provides many elements
for describing such transitions. Extending our programming
model to cover these facets is an immediate direction of future
research.
VoiceXML also provides certain 'impure' features and side-
e#ects in its programming model. For instance, after selecting
a size (say, medium), the caller could retake the initiative
in a di#erent part of the dialog and select a size again (this
time, large). This will cause the new value to override any
existing value in the size slot. In our model, this implies
the dynamic substitution of an earlier, 'evaluated out,' stage
with a functional equivalent. Obviously, the dialog manager
has to maintain some state (across partial evaluations) to
accomplish this feature or support a notion of despecializa-
tion. This suggests new directions for research in program
transformation.
It is equally possible to present the above feature of VoiceXML
as a shortcoming of its implementation of mixed ini-
tiative. Consider that after selection of a size, the scope of
any future mixing of initiative should be restricted to the remaining
slots (topping and crust). The semantics of graph
traversal presented earlier capture this requirement. Such
an e#ect is cumbersome to achieve in VoiceXML and would
Software Support for Support for
Technology Slot Simplification Interaction Staging
Slot Filling Systems # -
Recognizer-Only APIs -
Table
1: Comparison of software technologies for voice-based mixed-initiative applications.
calling Joe's pizza ordering system.
What size pizza would you like?
Caller 1: What sizes do you have?
3 Caller 2: Err. Why don't you ask me the questions in topping-crust-size order?
Figure
19: Other mixed-initiative conversations that are supported by VoiceXML.
probably require transitioning to progressively smaller forms
(with correspondingly restrictive form-level grammars). Our
model provides this feature naturally; after size has been
partially evaluated 'out,' the scope of future partial evaluations
is automatically restricted to involve only topping and
crust.
Our long-term goal is to characterize mixed initiative facets,
not in terms of initiative, interaction, or task models but
in terms of the opportunities for staging and the program
transformation techniques that can support such staging.
This means that we can establish a taxonomy of mixed-initiative
facets based on the transformation techniques (e.g.,
partial evaluation, slicing) needed to realize them. Such a
taxonomy would also help connect the facets to design models
for interactive software systems. We also plan to extend
our software model beyond slot-and-filler structures, to include
reasoning and exploiting context.
5. NOTES
The work presented in this paper is supported in part
by US National Science Foundation grants DGE-9553458
and IIS-9876167. After this paper was submitted, a new
version (version 2.00) of the VoiceXML specification was
released [12]. Our observations about the instantiation of
our model in the VoiceXML dialog management architecture
also apply to the new specification.
6.
--R
Natural Language Understanding.
Towards Conversational Human-Computer Interaction
Voice eXtensible Markup Language: VoiceXML.
An Assessment of Written/Interaction Dialogue for Information Retrieval Applications.
An Introduction to Discourse Analysis.
Computational Models for Mixed Initiative Interaction (Papers from the
Partial Evaluation and Automatic Program Generation.
An Introduction to Natural Language Processing
Cambridge University Press
Voice eXtensible Markup Language: VoiceXML.
What is Mixed-Initiative Interaction? In
The Partial Evaluation Approach to Information Personalization.
Integrating Planning and Learning: The PRODIGY Architecture.
--TR
Explanation-based generalisation = partial evaluation
Partial evaluation and automatic program generation
Natural language understanding (2nd ed.)
A collaborative model of feedback in human-computer interaction
Speech and Language Processing
Toward conversational human-computer interaction
Mixed-Initiative Interaction
--CTR
J. Wilkie , M. A. Jack , P. J. Littlewood, System-initiated digressive proposals in automated human-computer telephone dialogues: the use of contrasting politeness strategies, International Journal of Human-Computer Studies, v.62 n.1, p.41-71, January 2005 | mixed-initiative interaction;VoiceXML;interaction sequences;dialog management;partial evaluation |
503068 | Timing verification of dynamically reconfigurable logic for the xilinx virtex FPGA series. | This paper reports on a method for extending existing VHDL design and verification software available for the Xilinx Virtex series of FPGAs. It allows the designer to apply standard hardware design and verification tools to the design of dynamically reconfigurable logic (DRL). The technique involves the conversion of a dynamic design into multiple static designs, suitable for input to standard synthesis and APR tools. For timing and functional verification after APR, the sections of the design can then be recombined into a single dynamic system. The technique has been automated by extending an existing DRL design tool named DCSTech, which is part of the Dynamic Circuit Switching (DCS) CAD framework. The principles behind the tools are generic and should be readily extensible to other architectures and CAD toolsets. Implementation of the dynamic system involves the production of partial configuration bitstreams to load sections of circuitry. The process of creating such bitstreams, the final stage of our design flow, is summarized. | INTRODUCTION
In dynamically reconfigurable logic (DRL), a circuit or system is
adapted over time. This presents additional design and
verification problems to those of conventional hardware design
[1] that standard tools cannot cope with directly. For this reason,
DRL design methods typically involve the use of a mixture of
industry standard tools, along with custom tools and some
handcrafting to cover the conventional tools inadequacies.
This paper introduces extensions to a previously reported CAD
tool named DCSTech [2] which was created to automate the
process of translating dynamic designs from VHDL into placed
and routed circuits. The original version of the tool supported
the Xilinx XC6200 family of FPGAs, and concentrated on the
timing verification aspects of the problem. This paper reports on
the extensions made to DCSTech to target the Xilinx Virtex
family, and to enhance its capabilities. As a mainstream
commercial FPGA, the design tool capabilities available with
this family exceed those of the XC6200, allowing the designer to
work more productively at a higher level of abstraction. By
combining the Virtex platform's capabilities with those of the
extended DCSTech, the designer has the ability to specify
designs in RTL/behavioural VHDL, place and route them and
verify their timing. DCSTech's back-annotation support has
been extended to produce VITAL VHDL models suitable for
DRL in addition to processing SDF timing information. This
enables back-annotated timing analysis regardless of the level of
abstraction at which the original design was produced.
The original DCSTech tool was written to be extensible to other
architectures. This work verifies the validity of its extensibility
hooks. The extensibility of DCSTech to other architectures
relies on the architecture's CAD tools supporting a select set of
capabilities. Most modern CAD tools meet the majority of these
requirements (with the exception of configuration bitstream
access), although some weaknesses, particularly in the control of
routing, are apparent. Therefore, the design techniques presented
here should be readily extensible to other dynamically
reconfigurable FPGAs.
The paper begins by reviewing existing work in section 2 before
presenting the challenges of DRL design in section 3. In section
4 we provide an overview of the principles behind DCSTech
while section 5 describes how they are applied to the Virtex.
Section 6 discusses the enhanced back annotation capabilities
necessary for design at the RTL and behavioral abstraction
levels. The tools are designed to be as architecture independent
as possible and section 7 describes how the tool may be extended
*Now at Xilinx Inc.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies
are not made or distributed for profit or commercial advantage and
that copies bear this notice and the full citation on the first page. To
copy otherwise, or republish, to post on servers or to redistribute to
lists, requires prior specific permission and/or a fee.
FPGA'02, February 24-26, 2002, Monterey, California, USA.
to support other dynamically reconfigurable FPGAs. In section
8, we describe how partial configuration bitstreams may be
obtained from the placed-and-routed subsections of the dynamic
design, an area of current research. The design flow is illustrated
with an example in section 9 before the paper concludes with
remarks on future research into the use of other modern CAD
techniques such as Static Timing Analysis (STA) within the
DRL design flow.
2. EXISTING WORK
Over the last six years, researchers have developed a number of
tools and techniques, supporting different target DRL systems.
The target systems can be characterized by their component set,
the set of resources that make up the system. Custom
Computing Machines (CCMs), for example, include processors,
FPGAs and memory in their component set. Tools ranging from
high-level language compilers to structural-level
hardware/software co-design environments have been designed
for such target systems. CCM compilers include tools such as
Nimble [3], and compilers for the GARP chip [4], which
compile ANSI-C. In addition to standard C compilation, CCM
compilers partition the application into a software executable and
a set of hardware modules that can be loaded onto the
reconfigurable datapath or FPGA. As these tools are aimed at
achieving a rapid design flow, similar to conventional computer
programming, they do not usually achieve optimum results.
Tools such as JHDL [5][6], a structural/RT level
hardware/software codesign environment based on Java, allow
the designer to customize his circuitry and specify its placement.
This allows designers to use their own expertise to optimize the
layout and composition of their circuits to achieve better results
(e.g. faster circuits and smaller reconfiguration bitstreams if
partial reconfiguration is used) as well as designing the
associated software in one environment.
Another design challenge is found when the component set is a
single FPGA device or when dynamic reconfiguration is applied
within individual devices. This sort of design throws up many
situations that most industry standard tools cannot handle at all,
such as verification, partial bitstream generation and automatic
configuration controller production. Many of the solutions
developed for this type of design also apply to CCM design. In
[7], Luk et al described a CAD framework for DRL design
targeted at the Xilinx XC6200 FPGA. A library based design
approach was used to encourage design reuse and control circuit
placement. This increases the similarity between successive
configurations and reduces the size of the partial configuration
files required. However, such a structural design approach limits
the portability of the tools, since new libraries targeted to each
device are required. Vasilko's DYNASTY [8] CAD framework
uses a designer driven temporal floorplanning approach, in
which the designer can visualise the layout of tasks on the FPGA
over time. It acts as a DRL-aware replacement to a place and
route (PAR) tool and operates on synthesised gate-level designs.
This has a number of advantages, such as ease of area estimation
and the ability to control routing and component placement
exactly. The designer therefore has the ability to generate
exactly the required layouts. However, as the tools are closely
associated with the XC6200 architecture considerable effort
would be required to port them to operate with other devices.
Research has also taken place into the use of alternative
languages that have useful properties in expressing aspects of a
DRL design. Ruby [9], Pebble [10] and Lava [11] allow the
designer to specify component placement using more convenient
methods than the usual use of attributes associated with standard
HDL designs. Pebble also includes a reconfigure-if statement,
which builds in support for DRL. Recent work with Lava has
seen it used with the Xilinx Virtex FPGA.
The DCS CAD framework provides simulation (DCSim) [1],
technology mapping and back annotation (DCSTech) [2] and
configuration controller synthesis (DCSConfig) [12]. Although
DYNASTY uses the same ideas as DCSTech, DCSTech
partitions the design at a higher level of abstraction. This gives
two advantages in the form of portability and circuit
specialisation by the synthesis tool. Since the design is
partitioned at an abstract level, DCSTech requires only a little
device specific knowledge. The majority of the partitioning
process is platform independent, as is the resulting circuit
description. The tool is therefore easily ported to support
different architectures. As the designs are synthesised after
partitioning any optimisations such as constant propagation can
be performed by the synthesis tools. If the design is partitioned
after synthesis, a further optimization stage may be required to
obtain the best results. At this level of abstraction the area
requirements of the circuit are more difficult to estimate, so
some iteration may be required to obtain the optimal layout.
Other researchers have concentrated on design at lower levels of
abstraction, allowing the designer absolute control over
component placement and routing. Such tools include CHASTE
[13], which provides access to the XC6200 configuration file and
the JBits SDK [14][15], which provides a variety of tools to
access, modify and verify Virtex configurations. In addition, it
allows the designer to produce new whole or partial
configurations. This approach could also be valuable as a method
of performing final optimizations at the end of a higher-level
design flow.
3. IMPLEMENTATION CHALLENGES
DRL is based on a many-to-one temporal logic mapping. This
means that different logic functions occupy the same area of the
logic array at different points in time. Tasks that share physical
resources cannot be active at the same time; they are mutually
exclusive. Tasks can also be mutually exclusive for algorithmic
reasons. A set of mutually exclusive tasks is called a mutex set
and the swappable tasks are termed dynamic tasks. Tasks that
are not altered in any way over time are described as static tasks.
In designing a dynamic system, the various tasks must be placed
in such a way as to ensure that no task is accidentally overwritten
while it is active. The consequences of such an error range from
subtle errors in operation to damage to the FPGA itself.
Dynamic tasks are added to and removed from the array by
loading partial configuration files to alter logic and routing. The
designer has to guarantee that all necessary connections between
the dynamic task and the surrounding environment will be made.
The routing paths configured onto the array with the dynamic
task must meet the routing coming from the surrounding array to
which they are intended to connect. The bitstreams must not
cause contention, for example by configuring a second driver
onto a bidirectional routing resource.
The final problem the designer faces is that standard CAD tools,
which are intended for the design of static circuits, will not
accept the mapping of more than one function to a particular
logic resource. Similarly, multiple drivers for a particular signal
would be treated as an error, since no mechanism exists to
indicate that the drivers are scheduled to operate at different
times.
4. AUTOMATING DYNAMIC DESIGN
PROCESSING WITH DCSTech
4.1
Overview
DCSTech was designed as a tool to help the designer to
overcome these problems. It can be thought of as a domain
converter between the static and dynamic domains, fig. 1. The
input dynamic system is split into a series of static designs on
which conventional design tools (synthesis and APR) can be
used. After the required implementation steps have been
performed on these static sub-designs, a number of files are
produced. VITAL compliant VHDL files that describe the
systems functionality are created, along with SDF files
specifying the circuit's timing and configuration bitstreams.
These files all require further processing before they are useful.
To verify the designs functionality and timing, the SDF and
VHDL files must be converted back to the dynamic domain, in
order to simulate them in the context of the overall system. To
implement the dynamic system, the configuration bitstreams
must also be converted into valid partial reconfigurations. The
original version of DCSTech supported the domain conversion
of timing information. This was because designs were specified
at the netlist level, and therefore a VITAL compliant simulation
model could be produced from the original design. The use of
higher design abstractions such as behavioural code combined
with synthesis means that no such netlist exists before synthesis.
The current version has therefore added netlist conversion to the
original SDF conversion, leaving bitstream conversion as a
manual process. The progress so far is illustrated in fig. 2.
4.2 Design and File Flow
The dynamic design input to DCSTech consists of VHDL files
describing the systems functionality. Each dynamic task is
represented as a component instantiation. Hence, the top-level
of the design is structural. Within each component, any
synthesisable level of abstraction can be used. The designer
assigns each dynamic task to a mutex set. This mutex set is
assigned a zone on the logic array and all dynamic tasks within
that set must reside within the corresponding zone. Thus, tasks
within a mutex set can overwrite each other, but static logic and
tasks in other mutex sets are unaffected. The correct system
operation is then assured so long as an appropriate
reconfiguration schedule is used (it is possible that the
configuration control mechanism used to activate and deactivate
tasks could cause problems if it is incorrectly designed).
Clearly, the zone of each mutex set must be large enough to
accommodate its largest task.
DCSTech
Bitstream
Original Version This Version Future work
DCSTech
Bitstream
Original Version This Version Future work
The dynamic intent of the system is captured in a
Reconfiguration Information Format (RIF) file. This file
describes the conditions under which tasks activate (are
configured onto the FPGA) and deactivate (are removed or
stopped), the mutex set to which they belong and their
placement. Information on the RIF file was published in [2].
In the static domain, one of the sub-designs deals with all the
static tasks in the design while each dynamic task is placed into a
Figure
1: between static and dynamic domains
Figure
2: Domain transforms performed by different
DCSTech versions
Reconfiguration
Information
Conventional
CAD Tools
DCSTech
Conventional
CAD Tools
Timing
Model
Timing
Model
Timing
Model
DCSTech
Dynamic Dynamic
static designs static results dynamic
timing results
Reconfiguration
Information
Reconfiguration
Information
Conventional
CAD Tools
DCSTechDCSTech
Conventional
CAD Tools
Timing
Model
Timing
Model
Timing
Model
Timing
Model
Timing
Model
Timing
Model
DCSTechDCSTech
Dynamic Dynamic
static designs static results dynamic
timing results
sub-design of its own. The concept of terminals is used to
ensure the correct routing connectivity with the dynamic tasks
surrounding environment. These are special components used to
lock the end of hanging signals to a particular location on the
logic array. By locating both hanging ends of the signal at the
same place, the connection can be easily produced. One
reserved area is added to the static sub-design for each mutex set
in the original design. Similarly, the dynamic task components
are surrounded by a bounding-box that ensures that they will be
placed within the reserved area for their mutex set, fig. 3.
After the sub-designs have been placed and routed by standard
back-end tools, accurate estimates of their timing can be made.
These estimates are typically written out into an SDF file. To
allow evaluation of the performance of the system, this
information must be applied to the overall dynamic system.
DCSTech is capable of mapping the SDF information into the
dynamic design simulation model that DCSim creates, allowing
timing simulation.
To apply the SDF file to the dynamic domain, the cells must
each be changed to match the hierarchy of the dynamic system
simulation to which it is applied. In addition, the timing entries
for the terminals are removed and their relevant timing
information mapped to isolation switches (simulation artefacts
added by DCSim to mimic the design's dynamic behavior in a
conventional simulator). Although the system hierarchy is
altered during this domain conversion process, the actual timing
information is unaltered, providing an accurate timing model.
Further details of the process can be found in [2].
5. CHANGES MADE TO DCSTech TO
A number of changes were required in order to retarget the static
design representations to Virtex synthesis and APR tools, as
summarized in table 1. These changes allow us to replicate the
capabilities DCSTech made available for the XC6200 on the
Virtex.
Table
1. Methods of implementing DCSTech requirements
on XC6200 and Virtex
Problem XC6200 Solution Virtex Solution
Reserving areas of
the array Reserve constraint Prohibit constraint
Locating dynamic
tasks within a
zone
bbox attribute
assigns a bounding
box
loc constraint allows
ranges to be
assigned
Preventing partial
circuits from
being removed
Use register as
terminal
component on
hanging signals
Changes to design
representation and
software settings
Lock hanging
signals to fixed
array locations
Terminal
components with
rloc constraints
Terminal
components with loc
constraints
Reserving areas on the logic array is a simple change of attribute
from a RESERVE constraint which prevents XACT6000 from
placing logic in the specified zone to specifying a PROHIBIT
constraint which does the same task in the Xilinx CAD tools.
This is added to the User Constraints Format (UCF) file.
Dynamic task locations can be set using a combination of an rloc
and a bbox attribute in XACT6000. The Virtex tools allow
location ranges to be specified with the loc attribute.
Because registers could be read from and written to through the
configuration interface, any line connected to and from
registers was considered a valid connection, even when the
A
A and D are static tasks
and C are mutually
exclusive dynamic tasks
floorplan of dynamic design
floorplan of static-task design
floorplan of a dynamic task B
floorplan of a dynamic task C
A
A
DCSTech
Special terminals used to lock the ends
of cut signals to a fixed position in the
static-task design and in relevant
dynamic task designs
Dynamic Static
Reserved
A
A and D are static tasks
and C are mutually
exclusive dynamic tasks
floorplan of dynamic design
floorplan of static-task design
floorplan of a dynamic task B
floorplan of a dynamic task C
A
A
DCSTech
Special terminals used to lock the ends
of cut signals to a fixed position in the
static-task design and in relevant
dynamic task designs
Dynamic Static
Reserved
Figure
3: Floorplan of a DRL circuit containing two dynamic tasks before and after
processing by DCSTech
register had incomplete connectivity, such as no output
connection. Using registers to terminate hanging nets therefore
prevented partial circuits from being removed. This technique
does not work with the Virtex synthesis tools, making two
changes necessary in the way that dynamic designs were
represented in the static domain. Firstly, the VHDL entity of
each dynamic task must have ports in it to describe its
connectivity, whereas before terminal components were all that
was required. In addition, to prevent large areas of the static
design being optimised away, the connectivity between the
inputs and outputs of the reserved area should be indicated.
Instantiating a black-box "mutex set" component, encapsulating
the inputs and outputs of all the dynamic tasks in the mutex set
solves this problem. The Xilinx Foundation tools support an
option not to remove unconnected logic, which suffices for the
placement and routing stage.
The terminal component used to terminate hanging nets has been
changed to a wire or buffer mapped to a look-up-table. This
component replaces the RPFDs and FDCs used on the XC6200
and has an advantage in that it does not contribute any
functionality, while accepting location constraints. This
simplifies the changes required in the final bitstream generation
stage and the netlist conversion process.
The changes described above allow most of the basic
requirements outlined in section 3 to be met by the standard
Virtex tools. However, one area of weakness is constraining the
placement of routing. The constraints described above only
apply to logic placement, and therefore the routing from circuits
can exceed their bounding boxes and invade reserved zones,
although the Xilinx modular design tools [16] can help alleviate
this problem. These are factors that the designer must take
account of when configuration bitstreams are being produced,
either by re-routing the offending lines, or by including the
routes in the appropriate configurations. In effect, the dynamic
task bounding-box should be increased in size to accommodate
any wayward routing.
6. ENHANCED BACKANNOTATED
The original static-to-dynamic domain conversion support for
SDF files has been enhanced in the new revision of DCSTech.
SDF information can only be applied to gate-level VITAL
compliant designs. If a design is produced at an abstract level,
then SDF information cannot be applied to it.
As with most modern APR tools, the Virtex tools are capable of
writing out a VITAL VHDL netlist that matches their SDF files.
The netlists are typically flat "seas of gates" with no hierarchy
(although many tools allow control over hierarchy flattening).
These files must be included in the domain conversion process in
order to allow timing analysis to be performed when design
abstractions above the structural level are used. DCSTech
handles this domain conversion process by instantiating the
dynamic tasks into the VHDL netlist for the static design. The
resulting dynamic circuit is, in effect a gate-level version of the
original RTL design, such as a DRL aware synthesis tool might
produce. DCSim is used to simulate the circuit. Since the
hierarchy of the system often changes if synthesis and APR tools
flatten the design, it may not match the hierarchy entries in the
original RIF file. Therefore, a new RIF file is written as part of
the domain conversion process. The domain conversion
therefore produces a complete new dynamic design
representation that DCSim can use to build a simulation model.
As reported in section 4, the relevant timing information
associated with the terminal components is usually applied to
DCSim's isolation switches while the references to the terminal
components are removed from the design. However, the Virtex
terminal components have no functionality and therefore do not
interfere with the simulation of the system. As a result, those
components that contribute timing data do not need to be
removed during the static to dynamic domain conversion; hence,
there is no need to retarget the timing data to the isolation
switches (although the isolation switches are still introduced as
they are needed to simulate the circuit). This simplifies the
conversion process thereby reducing the runtime of the
DCSTech tool.
7. THE EXTENDED DCSTech TOOL
DCSTech now provides multi-architecture support and interfaces
with several third party CAD tools. It was originally designed to
be extensible with as much of the technique as generic and
device independent as possible. Obviously, changes in the CAD
environment and device architectures will mean that parts of the
technique will need to be changed, either to take advantage of
device features or to coexist with its supporting CAD
DCSTech
dependent
functions
RIF VHDL
Dynamic design domain
Virtex dependent
functions
Other device
dependent functions
Static design domain
Log file
CRF file
Options
DCSim's
CRF file
DCSTech
dependent
functions
RIF VHDL
Dynamic design domain
Virtex dependent
functions
Other device
dependent functions
Static design domain
Log file
CRF file
Options
DCSim's
CRF file
Figure
4: File flow for the extended DCSTech tool
framework. The major changes came on the back annotation
side, where support for VHDL domain conversion was added.
However, this is not something that is specific to the Virtex
device, but a necessary step to enable designers to work at higher
levels of abstraction. Therefore, the concepts behind the tool
remain generic and architecture independent and the design
methodology, outlined in section 4, remains unchanged in this
revision. To facilitate this extensibility, the device dependent
functions are stored in dynamic link libraries. New devices can
therefore be supported with the addition of a DLL. The file flow
for DCSTech is shown in fig. 4.
The shaded files represent non-design files used as part of
DCSTech's operation. The CRF files are cross-reference files
used to store information such as terminal component
connectivity and isolation switch locations (DCSim). The log
file contains reports for the user. The options file can be used as
an alternative to typing in command line switches.
The design philosophy described in this paper will be able to
provide DRL design support for any FPGA and CAD tool set
provided it complies with the following requirements:
. The FPGA is dynamically reconfigurable
. Synthesis or design translation from VHDL is
available
. A suitable component can be found to lock the ends of
hanging nets to a particular location on the logic array
. A method is available to prevent unconnected circuits
being removed from the design
. Components can be assigned a bounding-box
constraining them to a location on the array
. Areas of the array can be reserved, prohibiting other
logic from being placed within that area
. The APR tools produce back annotated VITAL VHDL
and SDF files
. The names of elements instantiated into the design in a
structural manner are predictable within the SDF and
VITAL VHDL models. Components generated by
synthesis tools generally have unpredictable names,
but structural components are usually named after their
instantiation label and the hierarchy above them in the
original design. DCSTech has to be able to find the
terminal components that are added to the design in the
dynamic-to-static conversion as part of the static-to-
dynamic conversion after APR
. The configuration file is open to modification, via an
open file format or APIs such as JBits. This is not
necessary for DCSTech itself, but would be necessary
to modify the bitstreams in order to actually implement
the system
Since most modern CAD packages fulfil these requirements,
with the exception of bitstream access, support for the majority
of modern dynamically reconfigurable FPGAs should be
possible with only minor alterations in addition to those
described in sections 5 and 6.
8. BITSTREAM GENERATION
Conventional CAD tools can provide a configuration bitstream
for each of the partial circuits produced by DCSTech's dynamic-
to-static conversion process. As shown in fig. 3, the partial
circuits consist of one configuration representing all the static
circuits and a configuration for each dynamic circuit. The static
circuits are connected to terminal components that lock the ends
of floating connections to dynamic circuits in place. Similarly,
floating connections to the static circuits within each dynamic
task are locked in place by identically located terminals. These
overlying terminal components must be converted to a
connection between the two routes, by altering the configuration
bitstream.
Unless the tools are capable of producing partial configuration
files, their output files represent a configuration of each partial
circuit on an otherwise unconfigured FPGA. If these files were
applied to the FPGA, they would blank out all the existing
circuitry. For the system to operate correctly, however, only
circuitry that shares resources with the partial circuit to be
loaded should be disrupted when it is activated. The partial
circuit configurations need to be converted to partial
configurations, which reconfigure only the area occupied by a
dynamic task within its mutex set zone.
A further complication is caused by the lack of control over
routing placement noted in section 5. It is possible that routing
in a dynamic task will use the same line as routing in a static
task. If the dynamic task is then configured onto the array, the
routing conflict will cause errors in operation and possibly
device damage. The designer must ensure that the routing
resources used by each dynamic task are not shared by static
tasks or dynamic tasks in other mutex sets.
The target device configuration mechanism is another factor in
the strategy used to produce partial configurations. The XC6200
allows individual parts of the logic array to be altered; therefore,
only parts of the array in the dynamic task bounding-box need be
considered. In the Virtex, however, reconfiguration takes place
in columns. The smallest unit of configuration data that can be
applied is a frame, which configures a subset of the resources in
a column. Forty-eight frames are required to completely
configure a column [17]. As a result, all the logic and routing in
any column which makes up part of a dynamic task bounding-box
must be included in the partial reconfiguration bitstreams.
Therefore, any static logic or routing that overlaps these
columns, must be included in the partial configuration bitstream
of that dynamic task otherwise it could be overwritten.
For devices that contain bidirectional routing resources, care
must be taken not to configure a second driver onto a line during
the course of a partial reconfiguration otherwise device damage
may occur. One possible solution to this problem is to apply a
deactivate configuration, which blanks out existing circuitry on
part of the array, prior to loading a new dynamic task, but this
would increase the reconfiguration interval. To prevent static
circuit disruption, the deactivate configuration needs to contain
any static logic within the reconfiguration zone.
The generation of partial bitstreams for the Virtex device
therefore consists of several steps. Firstly, all the routing
resources used by each partial circuit must be evaluated. JRoute
[18], part of the JBits SDK includes functions that perform this
step. The routing should then be checked for conflicts between
circuits that can reside on the array concurrently. The physical
bounding-box for each dynamic task (which includes both logic
and routing) should then be determined and, from this, the area
occupied by each mutex set. The circuitry to be reconfigured for
each dynamic task therefore includes all logic and routing within
all the columns occupied by the mutex set area. In the Virtex
FPGA, the terminal components can be converted to
connections, simply by connecting the routes to both sides of the
LUT (i.e. merging the routing to and from the two overlapping
terminals). This is because the LUT is configured to behave like
a wire. Once these processes have been completed, partial
bitstreams for the affected FPGA areas can be generated
(possibly including deactivate configurations). JBits includes
support for this process via JRTR [15].
9. EXAMPLE COMPLEX NUMBER
MULTIPLIER
As a simple example to demonstrate the operation of DCSTech,
a dynamically reconfigurable constant complex number
multiplier is presented. Complex numbers consist of two parts:
the real part and the imaginary part, which is a coefficient of j
(the square root of -1). The product of two complex numbers is
calculated as follows:
imag
imag
a
real
real
a
real
_
_
_
_
real
imag
a
imag
real
a
imag
_
_
_
_
where p_real and p_imag are the real and imaginary parts of the
product, p, of complex numbers a and b. The operation therefore
requires four multipliers, an adder and a subtractor.
In the example, the complex product is formed by multiplying
the input complex number, x, by a constant complex coefficient.
The constant coefficient values can be hardwired into constant
coefficient multipliers potentially saving area and improving
performance. A diagram of the system, with a coefficient of 10
j12, is presented in fig. 5. The constant complex coefficient is
dependent on the multiplication factors of the four multiplier
circuits. Therefore, to support a different coefficient, the four
constant coefficient multipliers need to be changed.
p_real
p_imag
x_real
x_imag
p_real
p_imag
x_real
x_imag
Figure
5. Circuit to multiply by 10+j12
The multipliers can be reconfigured to alter their multiplication
factor and thus allow the system to support other coefficients.
The remaining circuitry does not require alteration in any way.
The set of four multipliers therefore forms a dynamic task. One
dynamic task is required for each coefficient supported. As the
different coefficients are mutually exclusive, the dynamic tasks
are all members of the same mutex set and can be assigned the
same area of the logic array. Since the registers and adders
surrounding the dynamic multipliers are not altered during the
reconfigurations, they constitute its static circuitry. Based on
these assignments, DCSTech can partition the dynamic design
into multiple static designs that can be placed and routed as
shown in fig. 1.
Figure
7. Layout of the complex multiplier's static circuitry.
This consists of the registers, adder and subtractor in fig. 5,
with terminal components locking the ends of connections to
and from the multipliers in place.
A complex multiplier with two dynamic tasks allowing
multiplication by the coefficients (10
created. The layout of the (10 task and the static
circuitry after APR on a XCV50 is shown in figs. 6 and 7. Fig. 6
Figure
6. Post-APR layout of the 10+j12 dynamic
task. This comprises the four multipliers shown in
fig. 5, surrounded by terminal components. The
areas highlighted in gray indicate terminal
components, while the area highlighted in white
indicates the dynamic task bounding-box
shows evidence of routing exceeding the dynamic task's
bounding-box. Similarly, fig. 7 shows that some of the static
circuit's routing has been placed within the bounding-box. When
implemented, the partial configuration bitstreams should include
such stray routing as discussed in section 8.
After APR, the circuits timing can be verified. DCSTech is used
to reassemble the static parts of the system into a VITAL
compliant gate-level model of the dynamic system and create a
matching SDF file. A new RIF file is written as part of this
process, to match any design hierarchy changes which occurred
during synthesis and APR. This model is then further processed
by DCSim to produce a dynamic simulation model, making use
of the new RIF file. A waveform for the timing simulation of
the system is shown in fig. 8.
The input number is represented by x_real and x_imag and is set
to ns, the n_Rst (reset) input is de-asserted,
allowing the multiplier to begin operation. The two status
signals at the bottom of fig. 8 indicate the configuration status of
the two dynamic tasks. Initially, task activates. The
first multiplication is therefore:
which matching the result displayed on the
outputs p_real and p_imag after 200 ns. At 240 ns task
activates. For simplicity, a time of 50 ns is assumed for the
reconfiguration. Two clock edges occur during the
reconfiguration interval. The exact configuration of the mutex
set zone is uncertain during this time. The simulation model
therefore puts 'X' on all the dynamic task outputs during this
period. These can be seen emerging from the pipeline between
290 and 355 ns. Thereafter, the result of multiplication by (10
j12), which is (-42 + j340), is displayed on the p output.
10. FUTURE WORK
For large systems, the use of timing simulation to verify timing
is a slow process. Not only does the simulator require long run-
times, but also a lot of effort is required to generate a testbench
with sufficient test-vectors and coverage. Static Timing
Analysis (STA) is a timing verification approach that evaluates
the timing of each path through the circuit without test-vectors.
These tools can read a variety of file formats including VHDL
and SDF. Since the new version of DCSTech produces both
these files, it therefore may enable the application of STA to the
dynamic design. While this would not take into account the time
consumed by reconfigurations, it would allow the verification of
all the timing issues that affect circuit performance, such as
maximum clock speed, critical path and set-up and hold times.
In the DRL design flow presented in this paper, the designer is
faced with the problem of partitioning the design at the RT level,
rather than a lower level of abstraction. At this level, the exact
area occupied by each block is unknown, although it can be
estimated approximately. Therefore, some iteration and
refinement may be required to obtain a suitable partitioning. A
design management tool could simplify this process, by
estimating area requirements for each task in the application and
presenting the information graphically. Temporal floorplanners
for netlists have already been developed. This would be a
similar idea but at a higher level of abstraction.
Most of the bitstream generation steps outlined in section 8 are
currently carried out manually. As the APIs in JBits carry out
many of the more complex functions associated with Virtex
partial bitstream generation, it is possible to automate the
process and this is the focus of future work.
11. CONCLUSIONS
This paper shows how the major similarities between the
standard CAD tools available for different FPGA architectures
can be exploited to implement an easily portable CAD
framework for DRL design. The technique relies on a select set
of capabilities, supported by most CAD toolsets, within the
underlying FPGA platform's supporting tools. From this,
automated support for the main stages of the DRL design flow
can be provided, including design specification, simulation,
synthesis, APR and timing extraction.
The final stage of the design flow is partial bitstream generation.
The ideas behind partial bitstream generation, which are
common across different FPGA families, were outlined.
However, the exact method used to produce these bitstreams
depends on both the capabilities of the standard CAD tools and
the FPGA's configuration interface. The broad similarities
Figure
8. A backannotated timing simulation waveform for the dynamically reconfigurable complex multiplier
n_Rst
x_real
x_imag
p_imag
p_real
10+j12Status
15+j14Status
n_Rst
x_real
x_imag
p_imag
p_real
10+j12Status
15+j14Status
evident in the standard CAD tool support for most platforms are
not replicated at this level. Indeed, most vendors provide no
mechanism for accessing configuration bitstreams at all, since
this compromises design security. As a result, bitstream
generation techniques will not port well between families. For
the Virtex, however, the availability of the JBits SDK provides
convenient access to its bitstream along with a number of
functions useful in bitstream generation.
12.
--R
"Verification of Dynamically Reconfigurable Logic"
"Methods of Exploiting Simulation Technology for Simulating the Timing of Dynamically Reconfigurable Logic"
"Hardware-Software CoDesign of Embedded Reconfigurable Architectures"
"The GARP Architecture and C Compiler"
"JHDL-An HDL for Reconfigurable Systems"
"Synthesizing RTL Hardware from Java Byte Codes"
"Compilation Tools for Run-Time Reconfigurable Designs"
"DYNASTY: A Temporal Floorplanning Based CAD Framework for Dynamically Reconfigurable Logic Systems"
"New HDL Research Challenges posed by Dynamically Reprogrammable Hardware"
"Pebble: A Language for Parameterised and Reconfigurable Hardware Design"
"Lava and JBits: From HDL to Bitstreams in Seconds"
"Modelling and Synthesis of Configuration Controllers for Dynamically Reconfigurable Logic Systems Using the DCS CAD Framework"
"CHASTE: a Hardware/Software Co-design Testbed for the Xilinx XC6200"
"JBits: Java based interface for reconfigurable computing"
"Partial Run-Time Reconfiguration Using JRTR"
"Xilinx Alliance 3.1i Modular Design"
"Virtex Series Configuration Architecture User Guide"
"JRoute: A Run-Time Routing API for FPGA Hardware"
--TR
Hardware-software co-design of embedded reconfigurable architectures
The Garp Architecture and C Compiler
JRoute
Pebble
Modelling and Synthesis of Configuration Controllers for Dynamically Reconfigurable Logic Systems Using the DCS CAD Framework
Partial Run-Time Reconfiguration Using JRTR
Verification of Dynamically Reconfigurable Logic
Synthesizing RTL Hardware from Java Byte Codes
Compilation tools for run-time reconfigurable designs
JHDL - An HDL for Reconfigurable Systems
--CTR
Mahmoud Meribout , Masato Motomura, New design methodology with efficient prediction of quality metrics for logic level design towards dynamic reconfigurable logic, Journal of Systems Architecture: the EUROMICRO Journal, v.48 n.8-10, p.285-310, March
Mahmoud Meribout , Masato Motomura, Efficient metrics and high-level synthesis for dynamically reconfigurable logic, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.12 n.6, p.603-621, June 2004
Ian Robertson , James Irvine, A design flow for partially reconfigurable hardware, ACM Transactions on Embedded Computing Systems (TECS), v.3 n.2, p.257-283, May 2004 | run-time reconfiguration;verification;dynamic reconfiguration;FPGA |
503224 | The structure and value of modularity in software design. | The concept of information hiding modularity is a cornerstone of modern software design thought, but its formulation remains casual and its emphasis on changeability is imperfectly related to the goal of creating added value in a given context. We need better explanatory and prescriptive models of the nature and value of information hiding. We evaluate the potential of a new theory---developed to account for the influence of modularity on the evolution of the computer industry---to inform software design. The theory uses design structure matrices to model designs and real options techniques to value them. To test the potential utility of the theory for software we apply it to Parnas's KWIC designs. We contribute an extension to design structure matrices, and we show that the options results are consistent with Parnas's conclusions. Our results suggest that such a theory does have potential to help inform software design. |
Figure 1: An elementary DSM with three design parameters.
Groups of interdependent design parameters are clustered into a
proto-module to show that the decisions are managed collectively
as a single design task (See Figure 2; the dark lines denote the
desired proto-module clusters). In essence, such a proto-module
is a composite design parameter. To be a true module, in the
lexicon of Baldwin and Clark, there can be no marks in the rows
or columns outside the bounding box of its cluster connecting it
to other modules or proto-modules in the system.
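To make this check concrete, the following minimal sketch (our illustration, not code from Baldwin and Clark) represents a DSM as a boolean dependency matrix and tests whether a proposed clustering is modular in the above sense, i.e., whether any marks fall outside the bounding boxes of the clusters:

```python
# A minimal sketch (ours): a DSM as a boolean matrix plus a check
# that a proposed clustering is modular, i.e., no dependency marks
# fall outside the bounding boxes of the clusters.

def is_modular(dsm, clusters):
    """dsm[i][j] is True iff parameter i depends on parameter j;
    clusters is a list of lists of parameter indices."""
    owner = {i: c for c, members in enumerate(clusters) for i in members}
    n = len(dsm)
    return all(not dsm[i][j] or owner[i] == owner[j]
               for i in range(n) for j in range(n) if i != j)

# The elementary DSM of Figure 1: A stands alone, while B and C are
# mutually dependent and so must be clustered together (Figure 2).
A, B, C = 0, 1, 2
dsm = [[True, False, False],   # A depends only on itself
       [False, True, True],    # B depends on C
       [False, True, True]]    # C depends on B

print(is_modular(dsm, [[A], [B, C]]))  # True: the B-C proto-module
print(is_modular(dsm, [[A, B], [C]]))  # False: the B-C marks cross boxes
```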
Figure 2: A clustered (proto-modular) design.
Merely clustering cannot convert a monolithic design comprising
one large, tightly coupled proto-module into a modular design.
Interdependent parameter cycles must be broken to define
modules of reasonable size and complexity. Breaking a cycle
between two interdependent parameters like B and C requires an
additional step called splitting.
The first step in splitting identifies the cause of the cycle-say, a
shared data structure definition-and splits it out as its own
design parameter (D). B and C no longer cyclically depend on
each other, instead taking on a simple hierarchical dependence
on D. However, B and C must still wait for the completion of
D's design process in order to undertake their own.
To counter this, a design as represented by a DSM can be further
modularized during the process of splitting by the introduction of
design rules. Design rules are additional design parameters that
decouple otherwise linked parameters by asserting "global" rules
that the rest of the design parameters must follow. Thus, design
rules are de facto hierarchical parameters with respect to the
other parameters in a DSM. The most prevalent kind of design
rule in software is a module interface. For example, for the
DSM in Figure 2, an "A interface" rule could be added that
asserts that the implementation of B can only access the
implementation of A through an interface defined for A. Thus, A
could change details of its implementation freely without
affecting B, as long as A's interface did not have to be changed as
well. The effects of splitting B and C and adding design rules to
break the non-modular dependencies is shown in Figure 3.
Figure 3: A modular DSM resulting from splitting, adding
design rules, and clustering.
In Baldwin and Clark's terminology, a design rule is a (or part of
a) visible module, and any module that depends only on design
rules is a hidden module. A hidden module can be adapted or
improved without affecting other modules by the application of a
second operator called substitution.
The splitting and substitution operations are examples of six
atomic modular operators that Baldwin and Clark introduced to
parsimoniously and intuitively describe the operations by which
modular designs evolve. The others are augmentation, which
adds a module to a system, exclusion, which removes a module,
inversion, which standardizes a common design element, and
porting, which transports a module for use in another system.
We do not address these other operators any further in this paper.
3.3 Net Options Value of a Modular Design
Not all modularizations are equally good. Thus, in evolving a
design, it is useful to be able to evaluate alternative paths based
on quantitative models of value. Such models need not be
perfect. What is essential is that they capture the most important
terms and that their assumptions and operation be known and
understood so that analysts can evaluate their predictions.
3.3.1 Introduction to Real Options Concepts
Baldwin and Clark's theory is based on the idea that modularity
in design multiplies and decentralizes real options that increase
the value of a design. A monolithic system can be replaced only
as a whole. There is only one option to replace, and exercising it
requires that both the good and the bad parts of the new system
be accepted. In a sense, the designer has one option on a
portfolio of assets. A system that has two modules, by contrast,
can be kept as is, or either or both of the new modules can be
accepted, for a total of four options. The designer can accept only
the good new modules. By contrast, this designer has a portfolio
of options on the modules of the system. A key result in modern
finance shows that all else remaining equal, a portfolio of options
is worth more than an option on a portfolio.
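This result is easy to check numerically. The sketch below (our illustration, under the simplifying assumption that each module's replacement adds standard normally distributed value) estimates both quantities by Monte Carlo simulation:

```python
# A sketch (ours): E[max(X,0) + max(Y,0)] >= E[max(X+Y, 0)], i.e., a
# portfolio of options is worth more than an option on a portfolio,
# checked by Monte Carlo with standard normal value changes.
import random

random.seed(1)
trials = 200_000
portfolio_of_options = option_on_portfolio = 0.0
for _ in range(trials):
    x = random.gauss(0, 1)  # value added by replacing module 1
    y = random.gauss(0, 1)  # value added by replacing module 2
    portfolio_of_options += max(x, 0) + max(y, 0)  # keep each good module
    option_on_portfolio += max(x + y, 0)           # all-or-nothing swap

print(portfolio_of_options / trials)  # ~0.80 = 2/sqrt(2*pi)
print(option_on_portfolio / trials)   # ~0.56 = sqrt(2)/sqrt(2*pi)
```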
Baldwin and Clark's theory defines a model for reasoning about
the value added to a base system by modularity. They formalize
the options value of each modular operator: How much is it
worth to be able to substitute modules, augment, etc.
3.3.2 The Net Options Value of a Modular Design
In this paper, we address only substitution options. Splitting a
design into n modules increases its base value S0 by a fraction
that is obtained by summing the net option values (NOVi) of the
resulting options:

  V = S0 (1 + NOV1 + NOV2 + ... + NOVn)
NOV is the benefit gained by exercising an option optimally
accounting for both payoffs and exercise costs.
Baldwin and Clark present a model for calculating NOV. A
module creates an opportunity to invest in k experiments to (a)
create candidate replacements, (b) each at a cost related to the
complexity of the module, and, (c) if any of the results are better
than the existing choice, to substitute in the best of them, (d) at a
cost related to the visibility of the module to other modules
in the system:

  NOVi = max over ki of { σi ni^1/2 Q(ki) − Ci(ni) ki − Zi }

First, for module i, σi ni^1/2 Q(ki) is the expected benefit of the best
of ki independently developed candidate replacements under
certain assumptions about the distribution of such values.
Ci(ni) ki is the cost to run ki experiments as a function Ci of the
module complexity ni. Zi is the cost to replace the
module given the number of other modules in the system that
directly depend on it, the complexity of each, and the cost to
redesign each of its parameters. The max picks the number of
experiments ki that maximizes the gain from module i.
Figure 4 presents a typical scenario: module value-added
increases in the number of experiments (better candidates found)
until experiment costs meet diminishing returns. The max is the
peak. In this case, six experiments maximizes the net gain and is
expected to add about 41% value over the existing module.

Figure 4: The value added by k experiments (value added vs.
number of experiments).
The NOVi formula assumes that the value added by a candidate
replacement is a random variable normally distributed about the
value of the existing module choice (normalized to zero), with a
variance σi^2 ni that reflects the technical potential σi of the module
(the standard deviation on its returns) and the complexity ni of
the module. The assumption of a normal distribution is
consistent with the empirical observation that high and low
outcomes are rare, with middle outcomes more common. The
Q(k) represents the expected value of the best of k independent
draws from a standard normal distribution, assuming they are
positive; it is the expected value of the maximum order statistic of a sample of size k.
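The NOV curve of Figure 4 can be computed directly from these definitions. In the sketch below (our illustration; the parameter values sigma, n, c, and z are arbitrary assumptions, not those of the case study), Q(k) is estimated by Monte Carlo and the NOV formula is evaluated over a range of k:

```python
# A sketch (ours) of the NOV computation; sigma, n, c, and z below
# are illustrative assumptions. Q(k) is the expected value of the
# best of k standard normal draws, counted only when positive.
import random

random.seed(0)

def Q(k, trials=50_000):
    return sum(max(max(random.gauss(0, 1) for _ in range(k)), 0.0)
               for _ in range(trials)) / trials

def nov(sigma, n, c, z, max_k=10):
    """max over k >= 0 of sigma * n**0.5 * Q(k) - c*n*k - z."""
    best = 0.0  # running zero experiments always yields zero gain
    for k in range(1, max_k + 1):
        best = max(best, sigma * n ** 0.5 * Q(k) - c * n * k - z)
    return best

# One hypothetical hidden module: complexity n = 0.2, no visibility cost.
print(round(nov(sigma=2.5, n=0.2, c=1.0, z=0.0), 2))
```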
4. OVERVIEW OF ANALYSIS APPROACH
As an initial test of the potential for DSM's and NOV to improve
software design, we apply the ideas to a reformulation of
Parnas's comparative analysis of modularizations of KWIC (a
program to compute permuted indices) [1]. The use of KWIC as a
benchmark for assessing concepts in software design is well
established [7][9].
Parnas presents two modularizations: a traditional strawman
based on the sequence of abstract steps in converting the input to
the output, and a new one based on information hiding. The new
design used abstract data type interfaces to decouple key design
decisions involving data structure and algorithm choices so that
they could be changed without unduly expensive ripple effects.
Parnas then presents a comparative analysis of the changeability
of the two designs. He postulates changes and assesses how well
each modularization can accommodate them, measured by the
number of modules that would have to be redesigned for each
change. He finds that the information-hiding modularization is
better. He concludes that designers should prefer to use an
information hiding design process: begin the design by
identifying decisions that are likely to change; then define a
module to hide each such decision.
Our reformulation of Parnas's example is given in two basic
steps. First, we develop DSM's for his two modularizations in
order to answer several questions. Do DSM's, as presented by
Baldwin and Clark (as well as in the works of Eppinger and
Steward [5][10], who invented DSMs), have the expressive
capacity to capture the relevant information in the Parnas
examples? Do the DSM's reveal key aspects of the designs? Do
we learn anything about how to use DSM's to model software?
Second, we apply Baldwin and Clark's substitution NOV model
to compute quantitative values of the two modularizations, using
parameter values derived from information in the DSM's
combined with the judgments of a designer. The results are
back-of-the-envelope predictions, not precise market valuations;
yet they are useful and revealing. We answer two questions. Do
the DSM's contain all of the information that we need to justify
estimates of values of the NOV parameters? Do the results
comport with the accepted conclusions of Parnas?
Our evaluation revealed one shortcoming in the DSM framework
relative to our needs. DSM's as used by Baldwin and Clark and
in earlier work do not appear to model the environment in which
a design is embedded. Consequently, we were unable to model
the forces that drove the design changes that Parnas hypothesized
for KWIC. Thus, DSM's, as defined, did not permit sufficiently
rich reasoning about change and did not provide enough
information to justify estimates of the environment-dependent
technical potential parameters of the NOV model.
We thus extended the DSM modeling framework to model what
we call environment parameters (EP). We call such models
environment and design structure matrices (EDSM). DP's are
under the control of the designer. Even design rules can be
changed, albeit possibly at great cost. However, the designer
does not control EP's. Our extension to the EDSM framework
appears to be both novel and useful. In particular, it captures a
number of important issues in software design and, at least in the
case of the Parnas modularization, it allows us to infer some of
Parnas's tacit assumptions about change drivers.
The next section presents our DSM's for Parnas's KWIC. Next
we present our NOV results. Finally, we close with a discussion.
5. DSM-BASED ANALYSIS OF KWIC
For the first modularization, Parnas describes five modules:
Input, Circular Shift, Alphabetizing, Output, and Master Control.
He concludes, "The defining documents would include a number
of pictures showing core format, pointer conventions, calling
conventions, etc. All of the interfaces between the four modules
must be specified before work could begin. ... This is a
modularization in the sense meant by all proponents of modular
programming. The system is divided into a number of modules
with well-defined interfaces; each one is small enough and
simple enough to be thoroughly understood and well
programmed" [8].
5.1 A DSM Model of the Strawman Design
We surmise Parnas viewed each module interface as comprising
two parts: an exported data structure and a procedure invoked by
Master Control. We thus took the choice of data structures,
procedure declarations, and algorithms as the DP's of this
design. The resulting DSM is presented in Figure 5. DP's A, D,
G, and J model the procedure interfaces, as design rules, for
running the input, shift, sort and output algorithms. B, E, H, and
K model the data structure choices as design rules. Parnas states
that agreement on them has to occur before independent module
implementation can begin. C, F, I, L, and M model the
remaining unbound parameters: the choices of algorithms to
manipulate the fixed data structures. The DP dependencies are
derived directly from Parnas's definitions.
Figure 5: DSM for strawman modularization.
The DSM immediately reveals key properties of the design.
First, the design is a modularization, as Parnas claims: designers
develop their parts independently as revealed by the absence of
unboxed marks in the lower right quadrant of the DSM. Second,
only a small part-the algorithms-is hidden and independently
changeable. Third, the algorithms are tightly constrained by the
data structure design rules. Moreover, the data structures are an
interdependent knot (in the upper left quadrant). The shift data
structure points into the line data structure; the alphabetized
structure is identical to the shifted structure; etc. Change is thus
doubly constrained: Not only are the algorithms constrained by
the data structure rules, but these rules themselves would be hard
to change because of their tight interdependence.
5.2 A DSM Model of a Pre-Modular Design
By declaring the data structures to be design rules, the designer
asserts that there is little to gain by letting them change.
Parnas's analysis reflects the costly problems that arise when the
designer makes a mistake in prematurely accepting such a
conclusion and basing a modularization on it. Furthermore, we
can see that the design is also flawed because most of the design
parameters are off limits to valuable innovation. The designer
has cut off potentially valuable parts of the design space.
One insight emerging from this work is that there can be value in
declining to modularize until the topography of the value
landscape is understood. This conclusion is consistent with
Baldwin and Clark's view: "designers must know about
parameter interdependencies to formulate sensible design rules.
If the requisite knowledge isn't there, and designers attempt to
modularize anyway, the resulting systems will miss the 'high
peaks of value,' and, in the end, may not work at all" [p. 260].
Letting the design rules revert to normal design parameters and
clustering the data structures with their respective algorithms
(because they are interdependent) produces the DSM of Figure
6. This DSM displays the typical diagonal symmetry of outlying
marks indicating a non-modular design. We have not necessarily
changed any code, but the design (and the design process) is
fundamentally different. Rather than a design overconstrained by
Draconian design rules, the sense of a potentially complex design
process with meetings among many designers is apparent.
Innovative or adaptive changes to the circular shifter might have
upstream impacts on the Line Store, for example-a kind of
change that Parnas did not consider in his analysis.
Figure 6: DSM for the pre-modular design.

5.3 DSM for the Information-Hiding Design
The Line Store that is implicitly bundled with the Input Data is a
proto-module that is a prime target for modularization: many
other parameters depend on it and vice versa. Splitting the Line
Store from the Input and giving each its own interface as a design
rule is a typical design step for resolving such a problem. An
alternative might be to merely put an interface on the pair and
keep them as a single module. However, this DSM does not
show that the Line Store is doing double-duty as a buffer for the
Input Algorithm as well as serving downstream clients. Thus, it
is more appropriate to split the two. The other proto-modules are
modularized by establishing interface design rules for them. The
resulting DSM is shown in Figure 7. It is notable that this
design has more hidden information (parameters O down to L in
the figure) than the earlier designs. We will see that under our
model, this permits more complex innovation on each of the
major system components, increasing the net options value of the
design.
Figure 7: DSM for information hiding modularization.
5.4 Introducing Environment Parameters
We can now evaluate the adequacy of DSM's to represent the
information needed to reason about modular design in the style
of Parnas. We find the DSM to be in part incomplete.
In particular, to make informed decisions about the choice of
design rules and clustering of design parameters, we found we
needed to know how changes in the environment would affect
them. For example, we can perceive the value of splitting apart
the Line Store and the Input design parameters by perceiving
how they are independently affected by different parameters in
the environment. For instance, Input is affected by the operating
system, but the line store is affected by the size of the corpus.
Indeed, the fitness functions found in evolutionary theories of
complex adaptive systems, of which Baldwin and Clark's theory
is an instance, are parameterized by the environment.
Not surprisingly perhaps, we were also finding it difficult to
estimate Baldwin and Clark's technical potential term in the
NOV formula, which models the likelihood that changing a
module will generate value. This, too, is dependent on
environmental conditions (e.g., might a change be required).
In this paper we address this lack with an extension to the DSM
framework. We introduce environment parameters (EP) to
model environments. The key property of an EP as distinct from
a DP is that the designer does not control the EP. (Designers
might be able to influence EP's, however.) We call our extended
models environment and design structure matrices (EDSM's).
Figure 8 presents an EDSM for the strawman KWIC design.
Figure 8: EDSM for strawman modularization.
The rows and columns of an EDSM are indexed by both EP's
and DP's, with the EP's first by convention. The upper left
block of an EDSM thus models interactions among EP's; the
middle left block, the impact of EP's on the design rules; the
lower left block, their impact on the hidden DPs. The lower right
block is the basic DSM, partitioned as before to highlight DR's;
and the upper right block models the feedback influence of
design decisions (DP's) on the environment (EP's).
Applying the EDSM concept to Parnas's example reveals that the
EDSM provides a clear visual representation of genuine
information hiding. In particular, the sub-block of an EDSM
where the EP's intersect with the DR's should be blank,
indicating that the design rules are invariant with respect to
changes in the environment: only the decisions hidden within
modules have to change when EP's change, not the design
rules, the "load-bearing walls" of the system. We can now
make these ideas more concrete in the context of the KWIC case
study.
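The visual criterion just described, that the sub-block where the EP's intersect the DR's must be blank, can also be checked mechanically. The following sketch (our illustration, with hypothetical parameter names) verifies that no design rule depends on an environment parameter:

```python
# A sketch (ours) of the EDSM information-hiding check: the design
# rules (DRs) must be invariant under the environment parameters
# (EPs), i.e., the EP-DR sub-block of the matrix must be empty.

def hides_information(depends, eps, drs):
    """depends: set of (row, col) pairs meaning row depends on col."""
    return not any((dr, ep) in depends for dr in drs for ep in eps)

# Hypothetical KWIC-like fragment: "corpus" is an EP, "line_iface"
# a design rule, and "line_store" a hidden design parameter.
eps = {"corpus"}
drs = {"line_iface"}
strawman = {("line_iface", "corpus"), ("line_store", "line_iface")}
info_hiding = {("line_store", "corpus"), ("line_store", "line_iface")}

print(hides_information(strawman, eps, drs))     # False: a DR moves with an EP
print(hides_information(info_hiding, eps, drs))  # True: only a hidden DP moves
```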
Parnas implicitly valued his KWIC designs in an environment
that made it likely that certain design changes would be needed.
He noted that "several decisions are questionable and likely to change
under many circumstances" [p. 305], such as input format,
character representation, whether the circular shifter should
precompute shifts or compute them on the fly, and similar
considerations for alphabetization. Most of these changes are
said to depend on a dramatic change in the input size or a
dramatic change in the amount of memory. What remains
unclear in Parnas's analysis is what forces would lead to such
changes in use or the computing infrastructure. We also do not
know what other possible changes were ruled out as likely or
why. At the time, these programs were written in assembler.
Should Parnas have been concerned that a new computer with a
new instruction set would render his program inoperable? A
dramatic change in input size or memory size could certainly be
accompanied by such a change.
Figure 9: EDSM for inferred proto-modular design.
Figure 10: EDSM for information hiding modularization.
By focusing on whether internal design decisions are
questionable rather than on the external forces that would bring
them into question, the scope of considerations is kept artificially
narrow. Not long ago, using ASCII for text would have been
unquestionable. Today internationalization makes that not so. By
turning from design decisions to explicit EP's, such issues can
perhaps be discovered and accounted for to produce more
effective information-hiding designs.
To make this idea concrete, we illustrate it by extending our
DSM's for KWIC. We begin by hypothesizing three EP's that
Parnas might have selected, and which appear to be implied in
his analysis: computer configuration (e.g., device capacity,
memory size); corpus properties (input size, language, e.g., Japanese);
and user profile (e.g., computer savvy or not, interactive or
offline). Figures 8, 9, and 10 are EDSM's for the strawman, pre-
modular, and information hiding designs, respectively.
The key characteristic of the strawman EDSM is that the DR's
are not invariant under the EP's. We now make a key
observation: The strawman is an information-hiding
modularization in the sense of Baldwin and Clark: designers can
change non-DR DP's (algorithms) independently; but it is not an
information-hiding design in the sense of Parnas. Basic DSM's
alone are insufficient to represent Parnas's idea. We could have
annotated the DP's with change probabilities, but we would still
miss the essence: the load-bearing walls of an information hiding
design (DR's) should be invariant with respect to changes in the
environment. Our EDSM notation expresses this idea clearly.
Figure 9 is the EDSM for the pre-modular design in which the
data structures are not locked down as DR's. The remaining
DR's (the procedure type signatures) are invariant with the EP's,
but the extensive dependencies between proto-module DR's
suggest that changes in EP's will have costly ripple effects. The
design evolution challenge that this EDSM presents is to split the
proto-modules in a way that does not create new EP-dependent
DR's.
Figure 10 models the result: Parnas's information hiding
design. The EDSM highlights the invariance of the DRs under
the EPs in the sector where the EPs meet the DRs.
6. NOV-BASED ANALYSIS OF KWIC
We can now apply the NOV model to estimate how much the
flexibility is worth in both of Parnas's designs as a fraction of the
value of the base system, taking Parnas's notion of information
hiding into account. This analysis is illustrative, of course, and
the outputs are a function of the inputs. We justify our estimates
of the model parameters using the EDSM's and reasonable back-
of-the-envelope assumptions. A benefit of the mathematical
model is that it supports rigorous sensitivity analysis. Such an
analysis is beyond the scope of this paper; but we will pursue
this issue in the future. We make the following assumptions and
use the following notations in our analysis:
N is the number of design parameters in a given design.
For the proto-modular and strawman modularizations,
N = 13. In the information hiding design, N = 15
(parameters A through O).
Given a module of p parameters, its complexity is
n = p/N.
The value of one experiment on an unmodularized
design, σ n^1/2 Q(1) = 1 with n = 1, is the value of the
original system.
The design cost c = 1/N of each design parameter is the
same, and the cost to redesign the whole system is cN = 1.
The visibility cost Zi of a module i reflects the cost c·n of
redesigning the modules that see it; a hidden module has
Zi = 0, and a design rule visible to the whole system has
Zi = cN = 1.
One experiment on an unmodularized system breaks
even: σ n^1/2 Q(1) − cN = 1 − 1 = 0.
Baldwin and Clark make the break-even assumption for an
example in their book [1]. For a given system size, it implies a
choice of technical potential for an unmodularized design: in our
case, σ = 2.5. We take this as the maximum technical potential
of any module in a modularized version. This assumption for the
unmodularized KWIC is a modeling assumption, not a precisely
justified estimate. In practice, a designer would have to justify
the choices of parameter values.
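The break-even value σ = 2.5 itself can be reproduced: with the system's complexity normalized to n = 1 and redesign cost cN = 1, break-even for one experiment means σ Q(1) = 1, and Q(1) = E[max(Z, 0)] = 1/sqrt(2π) for a standard normal Z. A quick check (ours):

```python
# A sketch (ours): the break-even technical potential of the whole
# (unmodularized) system. With n = 1 and cN = 1, break-even means
# sigma * Q(1) = 1, where Q(1) = E[max(Z, 0)] = 1/sqrt(2*pi).
import math

Q1 = 1.0 / math.sqrt(2.0 * math.pi)  # ~0.3989
print(round(1.0 / Q1, 2))            # 2.51, the 2.5 used in the analysis
```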
The model of Baldwin and Clark is quite sensitive to technical
potential, but they give little guidance on how to estimate it. We
have observed that the environment is what determines whether
variants on a design are likely to have added value. If there is
little added value to be gained by replacing a module in a given
environment, no matter how complex it is, that means the
module has low technical potential.
We chose to estimate the technical potential of each module as
the system technical potential scaled by the fraction of the EP's
relevant to the module. We further scaled the technical potential
of the modules in the strawman design by 0.5, for two reasons.
First, about half of the interactions of the EPs with the strawman
design are with the design rules (but as we will see, their
visibility makes the cost to change them prohibitive). Second-
and more of a judgment call-the hidden modules in this design
(algorithms) are tightly constrained by the design rules (data
structures that are assumed not to change). There would appear
to be little to be gained by varying the algorithm
implementations, alone. Figure 11 shows our assumptions about
the technical potential of the modules in the strawman and
information-hiding designs.
Module Name       Strawman        Info Hiding
                  sigma    Z      sigma    Z
Design Rules       2.5     1
Line Storage       NA      NA      1.6     0
Input              1.25    0       2.5     0
Circular Shift     1.25    0       2.5     0
Alphabetizing      1.25    0       2.5     0
Output

Figure 11: Assumed Technical Potential and Visibility
Figure 12 presents the NOV data per module and Figure 13 the
corresponding plots for the information hiding design. Figure 14
presents the plots for the strawman. The option value of each
module is the value at the peak. We omit this disaggregated data
for the strawman design. What matters is the bottom line:
Summing the module NOV's shows that the system NOV is 0.26
for the strawman design but 1.56 for the information-hiding
design. These numbers are percentages of the value of the non-
modularized system, which has base value 1.
Thus the value of the system with the information-hiding design
is 2.6 times that of the system with the unmodularized design,
and the strawman's is worth only 1.26 times as much. Thus, the
information-hiding version of the system is twice as valuable as
the strawman. Ignoring the base value and focusing just on
modularity, we observe that the information-hiding design
provides 6 times more value in the form of modularity than the
strawman's design.
Baldwin and Clark acknowledge that designing modularizations
is not free; but, once done, the costs are amortized over future
evolution; so the NOV model ignores those costs. Accounting for
them is important, but not included in our model. It is doubtful
they are anywhere near 150% of the system value. On the other
hand, they would come much closer to 26%, which would tend to
further reduce the value added by the strawman modularization.
Figure 12: Option values per module for the information hiding design.

Figure 13: Option values (value added vs. number of experiments)
for the information hiding design: Line Store, Input, CirShift,
Alpha, Output, and Master Control modules.

Figure 14: Option values (value added vs. number of experiments)
for the strawman design: Input, CirShift, Alpha, Output, and
MsControl modules.
7. DISCUSSION AND CONCLUSION
Parnas's information-hiding criterion for modularity has been
enormously influential in computer science. Because it is a
qualitative method lacking an independent evaluation criterion, it
is not possible to perform a precise comparison of differing
designs deriving from the same desiderata.
This paper is a novel application of Baldwin and Clark's options-
theoretic method of modular design and valuation to the subject
of information-hiding modularity. Our goal was to lend insight
into both information-hiding modularity and the ability of options
theory to capture Parnas's intent of designing for change. We
have provided an early validation of the application of their
method to software design by reformulating Parnas's KWIC
modularizations in the Baldwin and Clark framework.
Baldwin and Clark's method has two main components, the
Design Structure Matrix (DSM) and the Net Option Value
formula (NOV). DSM's provide an intuitive, qualitative
framework for design. NOV quantifies the consequences of a
particular design, thus permitting a precise comparison of
differing designs of the same system.
We have shown that these tools provide significant insight into
the modularity in the design of software. Yet, precisely modeling
Parnas's information-hiding criterion requires explicitly
modeling the environment-the context in which the software is
intended to be used-in order to capture the notion of design
stability in the face of change. We model the environment by
extending DSM's to include environment parameters alongside
the traditional design parameters. The environment parameters
then inform the estimation of the technical potential in the NOV
computation. In the process, we learned that Parnas had largely
conceived of change in terms of intrinsic properties of the design,
rather than in terms of the properties of the environment in
which the software is embedded.
With these extensions to the Baldwin and Clark model, we were
able to both model the Parnas designs and quantitatively show-
under a set of assumptions-that the information-hiding design is
indeed superior, consistent with the accepted results of Parnas.
This result has value in at least three dimensions. First, it
provides a quantitative account of the benefits of good design.
Second, it provides limited but significant evidence that such
models have the potential to aid technical decision-making in
design with value added as an explicit objective function. This
paper is thus an early result in the emerging area of strategic
software design [4], which aims for a descriptive and prescriptive
theoretical account of software design as a value-maximizing
investment activity. Third, the result supports further
investigation of implications that follow from acceptance of such
a model. For example, because the value of an option increases
with technical potential (risk), modularity creates seemingly
paradoxical incentives to seek risks in software design, provided
they can be managed by active creation and exploitation of
options. The paradox is resolved in large part by the options
model, which clarifies that one has the right, but not a
requirement, to exercise an option, thus the downside risk (cost)
is largely limited to the purchase of the option itself.
In the introduction, we also raised the question: when is the
right time to modularize or commit to a software architecture?
Parnas's method says to write down the design decisions that are
likely to change and then design modules to hide them. This
implicitly encourages programmers to modularize at the early
stages of design. The NOV calculations of the two KWIC
modularizations make the possible consequences clear: without
knowledge of the environment parameters, a designer might rush
in to implement the strawman design, effectively sacrificing the
opportunity to profit from the superior modularization. Yet,
designers often do not have the luxury to wait until there is
sufficient information to choose the optimal modularization. It
may be difficult to precisely estimate how the environment is
going to change-innovation and competitive marketplaces are
hard to predict. Moreover, many of the best ideas come from the
users of the software, so uncertainty is almost certain until the
product is released. New design techniques that create options to
delay modularizing until sufficient information is available might
be explored as a possible solution to this conundrum.
The inclusion of environment parameters in the design process
has additional implications. For example, making the most of
these parameters requires being able to sense when they are
changing and to influence them (slow their change) when
possible. Careful design of the coupling between the
development process and the environment is critical in strategic
software design. For example, for parameters whose values are
subject to change, sensor technologies-perhaps as simple as
being on the mailing list of a standards-setting committee-can
help to detect changes and report them to the designers in a
timely fashion. Conversely, lobbying a standards-setting
organization to, say, deprecate interfaces rather than change them
outright can slow environmental change. Thus, accommodating
environmental change is not limited to just anticipating change,
as originally stated by Parnas, but includes more generally both
responsiveness to change and manipulation of change.
This paper represents a first step in the validation of Baldwin
and Clark's option-theoretic approach for quantifying the value
of modularity in software. Additional studies are required to
adequately validate the theory and provide insight into its
practical application. Also, in our paper study, we found it
difficult to estimate the technical potentials of the modules,
despite the added resolution provided by the environment
parameters. Validation in an industrial project would not only
provide realistic scale, but it would also have considerable
historical data to draw upon for the computation of NOV. Such
studies would help move the field of software design further
down the path to having powerful quantitative models for design.
ACKNOWLEDGMENTS
This work was supported in part by the National Science
Foundation under grants CCR-9804078, CCR-9970985, and ITR-
0086003. Our discussions with graduate students at the
University of Virginia in CS 851, Spring 2001, have been
very helpful.
--R
Design Rules: The Power of Modularity
Using Tools to Compose Systems.
"On the Criteria to be Used in Decomposing Systems into Modules"
Candidate Model Problems in Software Architecture.
--TR
Using Tool Abstraction to Compose Systems
Software design
Extreme programming explained
Software economics
On the criteria to be used in decomposing systems into modules
Design Rules
Value based software reuse investment
Software Design Decisions As Real Options
--CTR
Verifying design modularity, hierarchy, and interaction locality using data clustering techniques, Proceedings of the 45th annual southeast regional conference, March 23-24, 2007, Winston-Salem, North Carolina
Yuanyuan Song, Adaptation Hiding Modularity for Self-Adaptive Systems, Companion to the proceedings of the 29th International Conference on Software Engineering, p.87-88, May 20-26, 2007
Neeraj Sangal , Ev Jordan , Vineet Sinha , Daniel Jackson, Using dependency models to manage complex software architecture, ACM SIGPLAN Notices, v.40 n.10, October 2005
Mikio Aoyama , Sanjiva Weerawarana , Hiroshi Maruyama , Clemens Szyperski , Kevin Sullivan , Doug Lea, Web services engineering: promises and challenges, Proceedings of the 24th International Conference on Software Engineering, May 19-25, 2002, Orlando, Florida
Barry Boehm , Li Guo Huang, Value-Based Software Engineering: A Case Study, Computer, v.36 n.3, p.33-41, March
Sushil Krishna Bajracharya , Trung Chi Ngo , Cristina Videira Lopes, On using Net Options Value as a value based design framework, ACM SIGSOFT Software Engineering Notes, v.30 n.4, July 2005
John M. Hunt , John D. McGregor, A series of choices variability in the development process, Proceedings of the 44th annual southeast regional conference, March 10-12, 2006, Melbourne, Florida
Rami Bahsoon , Wolfgang Emmerich, Economics-Driven Software Mining, Proceedings of the First International Workshop on The Economics of Software and Computation, p.3, May 20-26, 2007
Yuanfang Cai , Kevin J. Sullivan, A value-oriented theory of modularity in design, ACM SIGSOFT Software Engineering Notes, v.30 n.4, July 2005
Sunny Huynh , Yuanfang Cai, An Evolutionary Approach to Software Modularity Analysis, Proceedings of the First International Workshop on Assessment of Contemporary Modularization Techniques, p.6, May 20-26, 2007
Yuanfang Cai , Kevin J. Sullivan, Simon: modeling and analysis of design space structures, Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, November 07-11, 2005, Long Beach, CA, USA
Yuangfang Cai , Sunny Huynh, An Evolution Model for Software Modularity Assessment, Proceedings of the 5th International Workshop on Software Quality, p.3, May 20-26, 2007
Kevin Sullivan , William G. Griswold , Yuanyuan Song , Yuanfang Cai , Macneil Shonle , Nishit Tewari , Hridesh Rajan, Information hiding interfaces for aspect-oriented design, ACM SIGSOFT Software Engineering Notes, v.30 n.5, September 2005
Barry Boehm, Value-based software engineering, ACM SIGSOFT Software Engineering Notes, v.28 n.2, March
Cristina Videira Lopes , Sushil Krishna Bajracharya, An analysis of modularity in aspect oriented design, Proceedings of the 4th international conference on Aspect-oriented software development, p.15-26, March 14-18, 2005, Chicago, Illinois
Barry Boehm, Value-based software engineering: reinventing, ACM SIGSOFT Software Engineering Notes, v.28 n.2, March | modularity;design structure matrix;real options;software |
503244 | Coverage criteria for GUI testing. | A widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. GUIs have characteristics different from traditional software, and conventional testing techniques do not directly apply to GUIs. This paper's focus is on coverage critieria for GUIs, important rules that provide an objective measure of test quality. We present new coverage criteria to help determine whether a GUI has been adequately tested. These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any non-trivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. A GUI is decomposed into GUI components, each of which is used as a basic unit of testing. A representation of a GUI component, called an event-flow graph, identifies the interaction of events within a component and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree, and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components. Algorithms are given to construct event-flow graphs and an integration tree for a given GUI, and to evaluate the coverage of a given test suite with respect to the new coverage criteria. A case study illustrates the usefulness of the coverage report to guide further testing and an important correlation between event-based coverage of a GUI and statement coverage of its software's underlying code. | INTRODUCTION
The importance of graphical user interfaces (GUIs) as critical
components of today's software is increasing with the
recognition of their usefulness. The widespread use of GUIs
has led to the construction of more and more complex GUIs.
Although the use of GUIs continues to grow, GUI testing
has, until recently, remained a neglected research area. Because
GUIs have characteristics different from conventional
software, techniques developed to test conventional software
cannot be directly applied to GUI testing. Recent advances
in GUI testing have focused on the development of test case
generators [8, 11, 12, 14, 18, 6] and test oracles [9] for GUIs.
However, development of coverage criteria for GUIs has not
been addressed.
Coverage criteria are sets of rules used to help determine
whether a test suite has adequately tested a program and
to guide the testing process. The most well-known coverage
criteria are statement coverage, branch coverage, and path
coverage, which require that every statement, branch and
path in the program's code be executed by the test suite
respectively. However such criteria do not address the adequacy
of GUI test cases for a number of reasons. First, GUIs
are typically developed using instances of precompiled elements
stored in a library. The source code of these elements
may not always be available to be used for coverage evalu-
ation. Second, the input to a GUI consists of a sequence of
events. The number of possible permutations of the events
may lead to a large number of GUI states and for adequate
testing, a GUI event may need to be tested in a large number
of these states. Moreover, the event sequences that the
GUI must be tested for are conceptually at a much higher
level of abstraction than the code and hence cannot be obtained
from the code. For the same reason, the code cannot
be used to determine whether an adequate number of these
sequences have been tested on the GUI.
The above challenges suggest the need to develop coverage
criteria based on events in a GUI. The development of such
coverage criteria has certain requirements. First, since there
are a large number of possible permutations of GUI events,
the GUI must be decomposed into manageable parts. GUIs,
by their very nature, are hierarchical and this hierarchy may
be exploited to identify groups of GUI events that can be
tested in isolation. Hence, each group forms a unit of test-
ing. Such a decomposition also allows coverage criteria to
be developed for events within a unit. Intuitively, a unit of
testing has a well-defined interface to the other parts of the
software. It may be invoked by other units when needed and
then terminated. For example, when performing code-based
testing, a unit of testing may be a basic block, procedure,
an object, or a class, consisting of statements, branches, etc.
Next, interactions among units must be identified and coverage
developed to determine the adequacy of tested interac-
tions. Second, it should be possible to satisfy the coverage
criterion by a finite-sized test suite. The finite applicability
[20] requirement holds if a coverage criterion can always
be satisfied by a finite-sized test suite. Finally, the test designer
should recognize whether a coverage criterion can ever
be fully satisfied [16, 17]. For example, it may not always
be possible to satisfy path coverage because of the presence
of infeasible paths, which are not executable because of the
context of some instructions. Detecting infeasible paths in
general is an NP-complete problem. No test case can execute
along an infeasible path, perhaps resulting in loss of
coverage. Infeasibility can also occur in GUIs. Similar to
infeasible paths in code, static analysis of the GUI may not
reveal infeasible sequences of events. For example, by performing
static analysis of the menu structure of MS Word-
pad, one may construct a test case with Paste as the first
event. However, experience of using the software shows that
such a test case will not execute since Paste is highlighted
only after a Cut or Copy. (Note that Paste will be available
if the ClipBoard is not empty, perhaps because of external
software; external software is ignored in this simplified
example.)
In this paper, we define a new class of coverage criteria called
event-based coverage criteria to determine the adequacy of
tested event sequences, focusing on GUIs. The key idea is
to define the coverage of a test suite in terms of GUI events
and their interactions. Since the total number of permutations
of event sequences in any non-trivial GUI is extremely
large, the GUI's hierarchical structure is exploited to identify
the important event sequences to be tested. The GUI
is decomposed into GUI components (these should not be
confused with the GUI elements used as building blocks
during GUI development; we provide a formal definition of a
GUI component later), each of which is a
unit of testing. Events within a component do not interleave
with events in other components without explicit invocation
or termination events. Because of this well-defined
behavior, a component may be tested in isolation. Two
kinds of coverage criteria are developed from the decomposition
intra-component coverage criteria for events within a
component and inter-component coverage criteria for events
among components. Intra-component criteria include event,
event-selection, and length-n event-sequence coverage. Inter-component
criteria include invocation, invocation-termination
and length-n event-sequence coverage. A GUI component is
represented by a new structure called an event-flow graph
that identifies events within a component. The interactions
among GUI components are captured by a representation
called the integration tree. We present algorithms to
automatically construct event-flow graphs and the integra-
tion tree for a given GUI and to evaluate intra- and inter-component
coverage for a given test suite. We present a
case study to demonstrate the correlation between event-based
coverage of our version of WordPad's GUI and the
statement coverage of its underlying code for a test suite.
The important contributions of the coverage method presented
in this paper include:
1. a class of coverage criteria for GUI testing in terms of
GUI events.
2. the identification of a GUI component, useful for GUI
testing.
3. a representation of a GUI component called an event-
flow graph that captures the flow of events within a
component and a representation called the integration
tree to identify interactions between components.
4. an automated technique to decompose the GUI into
interacting components and coverage criteria for intra-component
and inter-component testing.
5. a technique to compute the coverage of a given test
suite.
6. a case study demonstrating the correlation between
coverage in terms of events and code.
In the next section we present a classification of GUI events
and use the classification to identify GUI components. In
Section 3 we present coverage criteria for event interactions
within a component and between components. Section 4
presents algorithms to construct event-flow graphs and an
integration tree for a given GUI and then evaluate intra- and
inter-component coverage of the GUI for a given test suite.
In Section 5, we present details of a case study conducted
on our version of the WordPad software. Lastly, Section 6
presents related work and in Section 7 we conclude with a
discussion of ongoing and future work.
2. STRUCTURE OF GUIS
A GUI uses one or more metaphors for objects familiar in
real life, such as buttons, menus, a desktop, and the view
through a window. The software user performs events to interact
with the GUI, manipulating GUI objects as one would
real objects. These events may cause deterministic changes
to the state of the software that may be reflected by a change
in the appearance of one or more GUI objects. Moreover,
GUIs, by their very nature, are hierarchical. This hierarchy
is reflected in the grouping of events in windows, dialogs,
and hierarchical menus. For example, all the "options" in
MS Internet Explorer can be set by interacting with events
in one window of the software's GUI.
The important characteristics of GUIs include their graphical
orientation, event-driven input, hierarchical structure,
the objects they contain, and the properties (attributes) of
those objects. Formally, a GUI may be defined as follows:
Definition: A Graphical User Interface (GUI) is a hierar-
chical, graphical front-end to a software that accepts
user-generated and system-generated events, from a
fixed set of events, as input and produces deterministic
graphical output. A GUI contains graphical objects;
each object has a fixed set of properties. At any time
during the execution of the GUI, these properties have
discrete values, the set of which constitutes the state
of the GUI. 2
The above definition specifies a class of GUIs that have a
fixed set of events with deterministic outcome that can be
performed on objects with discrete valued properties. This
paper develops coverage criteria for the class of GUIs defined
above.
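As a small illustration of this definition (ours, with hypothetical objects and events), a GUI state can be modeled as a mapping from (object, property) pairs to discrete values, with each event a deterministic transformation of that state:

```python
# A sketch (ours, with hypothetical objects and events): the state of
# a GUI as a mapping from (object, property) pairs to discrete values,
# with each event a deterministic transformation of that state.
state = {
    ("button_ok", "enabled"): True,
    ("button_ok", "caption"): "OK",
    ("window_main", "visible"): True,
}

def perform(state, event):
    new_state = dict(state)
    if event == "disable_ok":  # hypothetical event
        new_state[("button_ok", "enabled")] = False
    return new_state

print(perform(state, "disable_ok")[("button_ok", "enabled")])  # False
```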
In this section, a new unit of testing called a GUI component
is defined. It consists of a number of events, selections,
invocations, and terminations that restrict the focus of a
GUI user. The user interacts with a component by explicitly
invoking it, performing events, and then terminating
the component. Note that since events within a component
cannot be interleaved with events in other components, the
interaction among events within a component may be tested
independently of other components. A classification of GUI
events is used to identify GUI components. GUI events may
be classified as:
Menu-open events open menus, i.e., they expand the set
of GUI events available to the user. By definition,
menu-open events do not interact with the underlying
software. The most common example of menu-open
events are generated by buttons that open pull-down
menus, e.g., File and Edit.
Restricted-focus events open modal windows, i.e., windows
that have the special property that once invoked,
they monopolize the GUI interaction, restricting the
focus of the user to a specific range of events within the
window until the window is explicitly terminated by a
termination event. Preference setting is an example
of restricted-focus events in many GUI systems; the
user clicks on Edit and Preferences, a window opens
and the user then spends time modifying the prefer-
ences, and finally explicitly terminates the interaction
by either clicking OK or Cancel.
Unrestricted-focus events open modeless windows that
do not restrict the user's focus; they merely expand
the set of GUI events available to the user. Note
that the only difference between menu-open events and
unrestricted-focus events is that the latter open windows
that have to be explicitly terminated. For exam-
ple, in the MS PowerPoint software, the Basic Shapes
are displayed in an unrestricted-focus window.
System-interaction events interact with the underlying
software to perform some action; common examples
include cutting and pasting text, and opening object
windows.
Termination events close modal windows; common examples
include Ok and Cancel.
At all times during interaction with the GUI, the user interacts
with events within a limited focus. This limited focus
consists of a restricted-focus window X and a set of
unrestricted-focus windows that have been invoked, either
directly or indirectly by X. The limited focus remains in
place until X is explicitly terminated using a termination
event such as OK, Cancel, or Close. Intuitively, the events
within the limited focus form a GUI component.
Definition: A GUI component C is an ordered pair (RF,
UF), where RF represents a modal window in terms
of its events and UF is a set whose elements represent
modeless windows, also in terms of their events. Each
element of UF is invoked either by an event in UF or
by an event in RF. 2

Figure 1: An Event-flow Graph for a Part of MS WordPad.
A common example of a GUI component is the FileOpen
modal window (and its associated modeless windows) found
in most of today's software. The user interacts with events
within this component, selects a file and terminates the component
by performing the Open event (or sometimes the
Cancel event).
Formally, a GUI component can be represented as a flow
graph.
Definition: An event-flow graph for a GUI component C is
a 4-tuple <V, E, B, I> where:
1. V is a set of vertices representing all the events in
the component. Each v ∈ V represents an event
in C.
2. E ⊆ V × V is a set of directed edges between
vertices. We say that event ei follows ej iff ei
may be performed immediately after ej. An edge
(vx, vy) ∈ E iff the event represented by vy follows
the event represented by vx.
3. B ⊆ V is a set of vertices representing those
events of C that are available to the user when
the component is first invoked.
4. I ⊆ V is the set of restricted-focus events of the
component. 2

An example of an event-flow graph for a part of the Main
component of MS WordPad (we assume that all GUIs have
a Main component, i.e., the component that is presented to
the user when the GUI is first invoked) is shown in Figure 1.
At the
top are three vertices (File, Edit, and Help) that represent
part of the pull-down menu of MS WordPad. They are
menu-open events that are available when the Main component
is first invoked. Hence they form the set B. Once
File has been performed in WordPad, any of Edit, Help,
Open, and Save may be performed. Hence there are edges
in the event-flow graph from File to each of these events.
Note that Open is shown with a dashed oval. We use this
representation for restricted-focus events, i.e., events that
invoke components. Similarly, About and Contents are also
restricted-focus events, i.e., for this component, I = {Open,
About, Contents}. Other events (i.e., Save, Cut, Copy, and
Paste) are all system-interaction events. After any of these
is performed in MS WordPad, the user may perform File,
Edit, or Help, shown as edges in the event-flow graph.

Figure 2: An Integration Tree for a Part of MS WordPad
(components: Main, FileNew, FileOpen, FileSave, Print,
Properties, PageSetup, ViewOptions, FormatFont).
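The event-flow graph of Figure 1 is small enough to encode directly as the 4-tuple <V, E, B, I> of the definition above. The sketch below is our illustration; edges not spelled out in the text (e.g., between Edit or Help and the other menus) are filled in by analogy with the File edges:

```python
# A sketch (ours) of the Figure 1 event-flow graph as <V, E, B, I>.
# Edges not spelled out in the text are filled in by analogy.
menus = {"File", "Edit", "Help"}
V = menus | {"Open", "Save", "Cut", "Copy", "Paste", "About", "Contents"}
B = set(menus)                     # available when Main is invoked
I = {"Open", "About", "Contents"}  # restricted-focus events

E = {(m1, m2) for m1 in menus for m2 in menus if m1 != m2}
E |= {("File", e) for e in ("Open", "Save")}
E |= {("Edit", e) for e in ("Cut", "Copy", "Paste")}
E |= {("Help", e) for e in ("About", "Contents")}
for e in ("Save", "Cut", "Copy", "Paste"):  # system-interaction events
    E |= {(e, m) for m in menus}            # the menus are available again

print(len(V), len(E))  # 10 vertices; the derived "follows" edges
```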
Once all the components of the GUI have been represented
as event-flow graphs, the remaining step is to identify their
interactions. Testing interactions among components is also
an area of research in object-oriented software testing [5] and
inter-procedural data-flow testing [4]. The identification of
interactions among objects and procedures is aided by structures
such as function-decomposition trees and call-graphs
[4]. Similarly, we develop a structure to identify interactions
among components. We call this structure an integration
tree because it shows how the GUI components are
integrated to form the GUI. Formally, an integration tree is
defined as:
Definition: An integration tree is a 3-tuple <N, R, B>,
where N is the set of components in the GUI and R ∈ N
is a designated component called the Main component.
We say that a component Cx invokes component Cy
if Cx contains a restricted-focus event ex that invokes
Cy. B is the set of directed edges showing the invokes
relation between components, i.e., (Cx, Cy) ∈ B iff Cx
invokes Cy. 2
Figure 2 shows an example of an integration tree representing
a part of the MS WordPad's GUI. The nodes represent
the components of the MS WordPad GUI and the edges
represent the invokes relationship between the components.
Main is the top-level component that is available when Word-
Pad is invoked. Other components' names indicate their
functionality. For example, FileOpen is the component of
WordPad used to open files. The tree in Figure 2 has an
edge from Main to FileOpen showing that Main contains an
event, namely Open (see Figure 1) that invokes FileOpen.
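The integration tree of Figure 2 can likewise be encoded as the 3-tuple <N, R, B> and queried for the invokes relation. In this sketch (ours), the parent of Properties is assumed to be Print, which is not stated explicitly in the text:

```python
# A sketch (ours) of the Figure 2 integration tree as <N, R, B>.
R = "Main"
B = {("Main", c) for c in ("FileNew", "FileOpen", "FileSave", "Print",
                           "PageSetup", "ViewOptions", "FormatFont")}
B |= {("Print", "Properties")}  # assumption: Print invokes Properties
N = {R} | {c for _, c in B}

def invokes(cx, cy):
    return (cx, cy) in B

print(invokes("Main", "FileOpen"))  # True: Main's Open event invokes it
```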
3. COVERAGE CRITERIA
Having created representations for GUI components and
events among components, we are ready to define the coverage
criteria. We will first define coverage criteria for events
within a component, i.e., intra-component coverage criteria
and then for events among components, i.e., inter-component
criteria.
3.1 Intra-component Coverage
In this section, we define several coverage criteria for events
and their interactions within a component. We first formally
define an event sequence.
Definition: An event-sequence is <e1; e2; ...; en> where
(ei, ei+1) ∈ E for 1 ≤ i ≤ n − 1. 2
All the new coverage criteria that we define next are based
on event-sequences.
3.1.1 Event Coverage
Intuitively, event coverage requires each event in the component
to be performed at least once. Such a requirement is
necessary to check whether each event executes as expected.
Definition: A set P of event-sequences satisfies the event
coverage criterion if and only if for all events v ∈ V,
there is at least one event-sequence p ∈ P such that
event v is in p. 2
3.1.2 Event-selection Coverage
Another important aspect of GUI testing is to check the
interactions among all possible pairs of events in the com-
ponent. However, we want to restrict the checks to pairs of
events that may be performed in a sequence. We focus on
the possible implicit selection of events that the user may
encounter during interaction with the GUI.
Definition: The event-selections for an event e is the set
{(e, e') : (e, e') ∈ E}, i.e., the set of all edges leaving e
in the event-flow graph. 2
In this criterion, we require that after an event e has been
performed, all event-selections of e should be executed at
least once. Note that this requirement is equivalent to requiring
that each element in E be covered by at least one
test case.
Definition: A set P of event-sequences satisfies the event-selection coverage criterion if and only if for each event e ∈ V and each element s ∈ E(e), there is at least one event-sequence p ∈ P such that p contains s.
3.1.3 Length-n Event-sequence Coverage
In certain cases, the behavior of events may change when
performed in different contexts. In such cases event coverage
and event-selection coverage on their own are weak
requirements for sufficient testing. We now define a criterion
that captures the contextual impact. We first formally
define a context.
Definition: The context of an event en in the event-sequence <e1, e2, ..., en> is the event-sequence <e1, e2, ..., en-1>.
events performed before e. An event may be performed in
an infinite number of contexts. For finite applicability, we
define a limit on the length of the event-sequence. Hence,
we define the length-n event-sequence criterion.
Definition: A set P of event-sequences satisfies the length-n event-sequence coverage criterion if and only if P contains all event-sequences of length equal to n.
Note the similarity of this criterion to the length-n path coverage
criterion defined by Gourlay for conventional software
[2], which requires coverage of all subpaths in the program's
flow-graph of length less than or equal to n. As the length
of the event-sequence increases, the number of possible contexts
also increases.
3.2 Subsumption
A coverage criterion C1 subsumes criterion C2 if every
test suite that satisfies C1 also satisfies C2 [13]. Since event
coverage and event-selection coverage are special cases of
length-n event-sequence coverage, i.e., length 1 event-sequence
and length 2 event-sequence coverage respectively, it follows
that length-n event-sequence coverage subsumes event and
event-selection coverage. Moreover, if a test suite satisfies
event-selection coverage, it must also satisfy event coverage.
Hence, event-selection subsumes event coverage.
3.3 Inter-component Criteria
The goal of inter-component coverage criteria is to ensure
that all interactions among components are tested. In GUIs,
the interactions take the form of invocation of components,
termination of components, and event-sequences that start
with an event in one component and end with an event in
another component.
3.3.1 Invocation Coverage
Intuitively, invocation coverage requires that each restricted-
focus event in the GUI be performed at least once. Such a
requirement is necessary to check whether each component
can be invoked.
Definition: A set P of event-sequences satisfies the invocation coverage criterion if and only if for all restricted-focus events i ∈ I, where I is the set of all restricted-focus events in the GUI, there is at least one event-sequence p ∈ P such that event i is in p.
Note that event coverage subsumes invocation coverage since
it requires that all events be performed at least once, including
restricted-focus events.
3.3.2 Invocation-termination Coverage
It is important to check whether a component can be invoked
and terminated.
Definition: The invocation-termination set IT of a GUI is the set of all possible length 2 event sequences <ei, ej> such that ei invokes component Cx and ej terminates component Cx, for all components Cx ∈ N.
Intuitively, the invocation-termination coverage requires that all length 2 event sequences consisting of a restricted-focus event followed by the invoked component's termination events be tested.
Definition: A set P of event-sequences satisfies the invocation-termination coverage criterion if and only if for all i ∈ IT, there is at least one event-sequence p ∈ P such that i is in p.
Satisfying the invocation-termination coverage criterion assures
that each component is invoked at least once and then
terminated immediately, if allowed by the GUI's specifica-
tions. For example, in WordPad, the component FileOpen
is invoked by the event Open and terminated by either Open
or Cancel. Note that WordPad's specifications do not allow
Open to terminate the component unless a file has been se-
lected. On the other hand, Cancel can always be used to
terminate the component.
Figure 3: Computing follow_set(v) for a Vertex v. [The figure's pseudocode was garbled in extraction; the algorithm is described in Section 4.1 below.]
3.3.3 Inter-component Length-n Event-sequence Cov-
erage
Finally, the inter-component length-n event-sequence coverage
criterion requires testing all event-sequences that start
with an event in one component and end with an event in
another component. Note that such an event-sequence may
use events from a number of components. A criterion is
defined to cover all such interactions.
Definition: A set P of event-sequences satisfies the inter-component length-n event-sequence coverage criterion for components C1 and C2 if and only if P contains all length-n event-sequences <e1, e2, ..., en> such that e1 is an event in C1 and en is an event in C2; the intermediate events may belong to C1, C2, or any other component Ci.
Note that the inter-component length-n event-sequence coverage
subsumes invocation-termination coverage since length-n
event sequences also include length 2 sequences.
4. EVALUATING COVERAGE
Having formally presented intra- and inter-component coverage criteria, we now present algorithms to evaluate the coverage of a given test suite using these criteria. We show how to construct an event-flow graph and use it to evaluate intra-component coverage. Then we show how to construct an integration tree and use it to evaluate inter-component coverage.
4.1 Construction of Event-flow Graphs
The construction of event-flow graphs is based on the structure
of the GUI. The classification of events in the previous
section aids the automated construction of the event-flow
graphs, which we describe next.
For each v ∈ V, we define follow_set(v) as the set of all events vx such that vx may be performed immediately after v. Note that follow_set(v) determines the outgoing edges of v in the event-flow graph. We determine follow_set(v) using the algorithm in Figure 3
for each vertex v. The recursive algorithm contains a switch
structure that assigns follow set(v) according to the type
of each event. If the type of the event v is a menu-open event and v ∈ B (recall that B represents events that are available when a component is invoked), then the user may either perform v again, its sub-menu choices, or any event in B (line 4). However, if v ∉ B, then the user may either perform all sub-menu choices of v, v itself, or all events in follow_set(parent(v)) (line 6). We define parent(v) as
any event that makes v available. If v is a system-interaction
event, then after performing v, the GUI reverts back to the
events in B (line 8). If v is an exit event, i.e., an event that
terminates a component, then follow_set(v) consists of all
the top-level events of the invoking component (line 10).
If the event type of v is an unrestricted-focus event then
the available events are all top-level events of the invoked
component available as well as all events of the invoking
component (line 12). Lastly, if v is a restricted-focus event,
then only the events of the invoked component are available.
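As an illustration, the following Python sketch mirrors the case analysis just described. It is not the paper's pseudocode; the event and component attributes (etype, submenu, parent, B, invoking_component, invoked_component) are assumed names.

def follow_set(v, component):
    # v is an event of the given component
    if v.etype == "menu-open":
        if v in component.B:
            # v again, its sub-menu choices, or any event in B (line 4)
            return {v} | set(v.submenu) | set(component.B)
        # v, its sub-menu choices, or whatever may follow v's parent (line 6)
        return {v} | set(v.submenu) | follow_set(v.parent, component)
    if v.etype == "system-interaction":
        # the GUI reverts back to the events in B (line 8)
        return set(component.B)
    if v.etype == "exit":
        # all top-level events of the invoking component (line 10)
        return set(v.invoking_component.B)
    if v.etype == "unrestricted-focus":
        # top-level events of the invoked component plus the invoker's events (line 12)
        return set(v.invoked_component.B) | set(component.B)
    if v.etype == "restricted-focus":
        # only the events of the invoked component are available (line 14)
        return set(v.invoked_component.B)
    raise ValueError("unknown event type: %s" % v.etype)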
4.2 Evaluating Intra-component Coverage
Having constructed an event-flow graph, we are now ready
to evaluate the intra-component coverage of any given test
suite using the elements of this graph. Figure 4 shows a
dynamic programming algorithm to compute the percentage
of length-n event-sequences tested. The final result of the
algorithm is Matrix, where Matrix[i, j] is the percentage of length-j event-sequences tested on component i.
The main algorithm is ComputePercentageTested. In this algorithm, two matrices are computed (lines 6, 7). count[i, j] is the number of length-j event-sequences in component i that have been covered by the test suite T (line 6). total[i, j] is the total number of all possible length-j event-sequences in component i (line 7). The subroutine ComputeCounts calculates the elements of the count matrix. For each test case in T, ComputeCounts finds all possible event-sequences of different lengths (lines 19-21). The number of event-sequences of each length is then counted (lines 22, 23). Intuitively, the ComputeTotals subroutine starts with single-length event-sequences, i.e., individual events in the GUI (lines 31-33). Using follow_set (line 38), the event-sequences are lengthened one event at each step. A counter keeps track of the number of event-sequences created (line 39). Note that since ComputeCounts takes a union of the event sequences, there is no danger of counting the same event sequence twice.
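To make the bookkeeping concrete, the following Python sketch performs the same computation under assumed data structures (each test case is a list of events tagged with their component, and follow_set is as in Section 4.1). Counting covered sequences as contiguous windows of a test case is a simplification of the paper's ComputeCounts.

def compute_percentage_tested(components, test_suite, max_len, follow_set):
    matrix = {}
    for comp in components:
        # covered[j]: distinct length-j event-sequences of comp seen in the suite
        covered = {j: set() for j in range(1, max_len + 1)}
        for case in test_suite:
            events = [e for e in case if e.component is comp]
            for j in covered:
                for k in range(len(events) - j + 1):
                    covered[j].add(tuple(events[k:k + j]))
        # total: all possible length-j sequences, obtained by lengthening
        # each sequence one event at a time via follow_set
        frontier = [(e,) for e in comp.events]
        for j in range(1, max_len + 1):
            total = len(frontier)
            matrix[comp, j] = 100.0 * len(covered[j]) / total if total else 0.0
            frontier = [s + (v,) for s in frontier
                        for v in follow_set(s[-1], comp)]
    return matrix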
The result of the algorithm is Matrix, the entries of which
can be interpreted as follows:
Event Coverage requires that individual events in the GUI
be exercised. These individual events correspond to
length 1 event-sequences in the GUI. Matrix[j, 1], j ∈ S,
represents the percentage of individual events covered
in each component.
Event-selection Coverage requires that all the edges of
the event-flow graph be covered by at least one test
case. Each edge is effectively captured as a length-2
event-sequence. Matrix[j, 2], j ∈ S, represents the percentage
of branches covered in each component j.
Length-n Event-sequence Coverage is available directly from Matrix. Each column i of Matrix gives the percentage of length-i event-sequences tested in each component.
Figure 4: Computing Percentage of Tested Length-n Event-sequences of All Components. [The figure's pseudocode for ComputePercentageTested, ComputeCounts, and ComputeTotals was garbled in extraction. The algorithm takes the set S of components, a test suite T, and a maximum event-sequence length M; its key step is Matrix[i, j] <- (count[i, j] / total[i, j]) × 100.]
4.3 Evaluating Inter-component Coverage
Once all the components in the GUI have been identified,
the integration tree may be constructed by adding, for each
restricted-focus event ex , the element (Cx ; Cy ) to B where
Cx is the component that contains ex and Cy is the component
that it invokes. The integration tree may be used in
several ways to identify interactions among components. For
example, in Figure 2 a subset of all possible pairs of components that interact would be {(Main, FileNew), (Main, FileOpen), (Main, Print), (Main, FormatFont), (Print, Properties)}. To identify sequences such as the ones from
Main to Properties, we traverse the integration tree in a
bottom-up manner, identifying interactions among Print
and Properties. We then merge Print and Properties to
form a super-component called PrintProperties. We then
check interactions among Main and PrintProperties. This
process continues until all components have been merged
into a single super-component.
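The construction and the bottom-up merging can be sketched as follows; the attribute names and the merge function are assumptions, with merge standing in for whatever inter-component analysis is performed before two components are fused.

def build_integration_tree(components):
    # add (Cx, Cy) to B for each restricted-focus event ex of Cx invoking Cy
    edges = set()
    for cx in components:
        for e in cx.events:
            if e.etype == "restricted-focus":
                edges.add((cx, e.invoked_component))
    return edges

def merge_bottom_up(edges, merge):
    # repeatedly fuse a leaf component into its parent, e.g. Print and
    # Properties become the super-component "PrintProperties"
    edges = set(edges)
    root = None
    while edges:
        parents = {p for p, _ in edges}
        p, c = next((p, c) for p, c in edges if c not in parents)  # c is a leaf
        root = merge(p, c)
        rename = lambda x: root if x in (p, c) else x
        edges = {(rename(a), rename(b)) for a, b in edges if (a, b) != (p, c)}
    return root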
Evaluating the inter-component coverage of a given test suite
requires computing the (1) invocation coverage, (2) invocation-
termination coverage, and (3) length-n event sequence cover-
age. The total number of length 1 event sequences required to satisfy the invocation coverage criterion is equal to the number of restricted-focus events available in the GUI. The percentage of restricted-focus events actually covered by the test cases is (x / |I|) × 100, where x is the number of restricted-focus events in the test cases and |I| is the total number of restricted-focus events available in the GUI. Similarly, the total number of length 2 event sequences required to satisfy the invocation-termination criterion is Σi Ri × Ti, where Ri and Ti are the number of restricted-focus and termination events that invoke and terminate component Ci respectively. The percentage of invocation-termination pairs actually covered by the test cases is (x / Σi Ri × Ti) × 100, where x is the number of invocation-termination pairs in the test cases.
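Both percentages reduce to straightforward counting, as in this hedged sketch (attribute names are assumptions):

def invocation_coverage_pct(test_events, all_rf_events):
    # (x / |I|) * 100, where I is the set of restricted-focus events
    covered = set(test_events) & set(all_rf_events)
    return 100.0 * len(covered) / len(all_rf_events)

def invocation_termination_pct(covered_pairs, components):
    # denominator: sum over components of Ri * Ti
    total = sum(len(c.invoking_events) * len(c.terminating_events)
                for c in components)
    return 100.0 * len(set(covered_pairs)) / total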
Computing the percentage of length-n event sequences is
slightly more involved. The algorithm shown in Figure 5
computes the percentage of length-n event sequences tested
among GUI components. Intuitively, the algorithm obtains
the number of event sequences that end at a certain restricted-
focus event. It then counts the number of event sequences
that can be extended from these sequences into the invoked
component. The main algorithm called Integrate is recursive
and performs a bottom-up traversal of the integration
tree T (line 2). Other than the recursive call (line 8),
Integrate makes a call to ComputeTotalInteractions that
takes two components as parameters (lines 13, 14). It initializes the vector Total for all path lengths i (1 ≤ i ≤ M) (lines 16, 17). We assume that a freq matrix has been stored for each component. The freq matrix is similar to the freq vector already computed in the algorithm in Figure 4; freq[i, j] is the number of event-sequences that start with event i and end with event j. After obtaining the frequency matrices for both C1 and C2, for all path lengths (lines 21, 26), the new vector Total is obtained by adding the frequency entries from F1 and F2 (lines 28-30). A new frequency matrix is computed for the super-component "C1C2" (line 31). This new frequency matrix will be utilized by the same algorithm to integrate "C1C2" with other components.
The results of the above algorithm are summarized in Ma-
trix. Matrix[i, j] is the percentage of length-j event-sequences
that have been tested in the super-component represented
by the label i.
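The sketch below shows one plausible way the frequency matrices might be combined, following the reading above that freq[i] is a matrix whose (x, y) entry counts length-i sequences starting at event x and ending at event y; how the two components are joined at the invoking event r is our assumption, not the paper's exact pseudocode.

import numpy as np

def total_interactions(F1, F2, r, max_len):
    # F1, F2: dicts mapping a sequence length to a numpy frequency matrix
    total = [0] * (max_len + 1)
    for i in F1:
        into_r = F1[i][:, r].sum()  # length-i sequences of C1 ending at r
        for j in F2:
            if i + j <= max_len:
                # each such sequence can be extended by any length-j
                # sequence performed inside the invoked component C2
                total[i + j] += into_r * F2[j].sum()
    return total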
Figure 5: Computing Percentage of Tested Length-n Event-sequences of All Components. [The figure's pseudocode for Integrate and ComputeTotalInteractions was garbled in extraction.]

5. CASE STUDY
We performed a case study on our version of WordPad to determine the (1) total number of event sequences required
to test the GUI and hence enable a test designer to compute
the percentage of event sequences tested, (2) correlation between
event-based coverage of the GUI and statement coverage
of the underlying code, and (3) time taken to evaluate
the coverage of a given test suite and usefulness of the coverage
report to guide further testing.
In the case study, we employed our own specifications and
implementation of the WordPad software. The software consists
of 36 modal windows, and 362 events (not counting
short-cuts). Our implementation of WordPad is similar to
Microsoft's WordPad except for the Help menu, which we
did not model.
5.1 Computing Total Number of Event-sequences for WordPad
In this case study, we wanted to determine the total number
of event sequences that our new criteria specify to test parts
of WordPad. We performed the following steps:
Identifying Components and Events: Individual WordPad components and events within each component
were identified. Table 1 shows some of the components
of WordPad that we used in our case study. Each row
represents a component and each column shows the
different types of events available within each component.

Table 1: Types of Events in Some Components of MS WordPad. [Most cell values were garbled in extraction; for each component (Main, FileOpen, FileSave, Properties, ...) the table gave the number of menu-open, system-interaction, restricted-focus, unrestricted-focus, and termination events, together with row and column sums.]
Creating Event-flow Graphs: The next step was to construct
an event-flow graph for each component. In
Figure 1 we showed a part of the event-flow graph of
the most important component, i.e., Main. Recall that
each node in the event-flow graph represents an event.
Computing Event-sequences: Once the event-flow graphs
were available, we computed the total number of possible
event-sequences of different lengths in each component
by using the computeTotals subroutine in Figure
4. Note that these event-sequences may also include
infeasible event-sequences. The total number of
event-sequences is shown in Table 2. The rows represent
the components and the shaded rows represent
the inter-component interactions. The columns represent
different event-sequence lengths. Recall that
an event-sequence of length 1 represents event coverage
whereas an event-sequence of length 2 represents
event-selection coverage. The columns 1' and 2' represent
invocation and invocation-termination coverage
respectively.
The results of this case study show that the total number
of event sequences grows with increasing length. Note that
longer sequences subsume shorter sequences; e.g., if all event
sequences of length 5 are tested, then so are all sequences of length i, where i ≤ 4. It is difficult to determine the maximum
length of event sequences needed to test a GUI. The
large number of event sequences show that it is impractical
to test a GUI for all possible event sequences. Rather, depending
on the resources, a subset of "important" event sequences
should be identified, generated and executed. Identifying
such important sequences requires that they be ordered
by assigning a priority to each event sequence. For
example, event sequences that are performed in the Main
component may be given higher priority since they will be
used more frequently; all the users start interacting with
the GUI using the Main component. The components that
are deepest in the integration tree may be used the least.
This observation leads to a heuristic for ordering the testing
of event sequences within components of the GUI. The
structure of the integration tree may be used to assign priorities
to components; Main will have the highest priority, decreasing
for components at the second level, with the deepest
components having the lowest priority. A large number
of event sequences in the high priority components may be tested first; the number will decrease for low priority components.

Table 2: Total Number of Event-sequences for Selected Components of WordPad. Shaded rows in the original showed the number of interactions among components. [Some rows' values were lost in extraction; surviving entries, by event-sequence length 1-6:]
Main                   56   791   14354   255720   4490626   78385288
Print                  12   108   972     8748     78732     708588
Properties             13   143   1573    17303    190333    2093663
PageSetup              11   88    704     5632     45056     360448
FormatFont             9    63    441     3087     21609     151263
Main+Print+Properties  12   145   1930    28987    466578
5.2 Correlation Between Event-based Coverage
and Statement Coverage
In this case study, we wanted to determine exactly which
percentage of the underlying code is executed when event-
sequences of increasing length are executed on the GUI. We
wanted to see whether testing longer sequences adds to the
coverage of the underlying code. We performed the following
steps:
Code Instrumentation: We instrumented the underlying
code of WordPad to produce a statement trace, i.e., a
sequence of statements in the order in which they are
executed. Examining such a trace allowed us to determine
which statements are executed by a test case.
Event-sequence Generation: We wanted to generate all
event-sequences up to a specific length. We modified
ComputeTotals in Figure 4 resulting in an event-
sequence generation algorithm that constructs event
sequences of increasing length. The dynamic programming
algorithm constructs all event sequences of length
1. It then uses follow_set to extend each event sequence
by one event, hence creating all length 2 event-
sequences. We generated all event-sequences up to
length 3. In all we obtained 21659 event-sequences.
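A sketch of such a generator, assuming a follow_set function over events as in Section 4.1 (here taking a single argument); for this study max_len would be 3.

def generate_event_sequences(events, follow_set, max_len):
    # start with all length-1 sequences, then extend each by one event per round
    frontier = [(e,) for e in events]
    sequences = list(frontier)
    for _ in range(max_len - 1):
        frontier = [s + (v,) for s in frontier for v in follow_set(s[-1])]
        sequences.extend(frontier)
    return sequences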
Controlling GUI's State: Bringing a software system to a state Si in which a test case Ti may be executed on it is traditionally known as the controllability problem [1]. This problem also occurs in GUIs, and for each test case, appropriate events may need to be performed on the GUI to bring it to the state Si. We call this sequence of events the prefix, Pi, of the test case. Although
generating the prefix in general may require the
development of expensive solutions, we used a heuristic
for this study. We executed each test case in a
fixed state S0 in which WordPad contained text, part of the text was highlighted, the clipboard contained a text object, and the file system contained two text files.
We traversed the event-flow graphs and the integration
tree to produce the prefix of each test case. We
do, however, note that using this heuristic may render
some of the event sequences non-executable because of infeasibility. We will later see that such sequences do exist but are of no consequence to the results of this study. We modified WordPad so that no statement trace is produced for Pi.

Figure 6: The Correlation Between Event-based Coverage and Statement Coverage of WordPad. [The plot was lost in extraction; its x-axis is the event-sequence length and its y-axis is the percentage of statements executed.]
Test-case Execution: After all event-sequences up to length
3 were obtained, we executed them on the GUI using
our automated test executor [10] and obtained all
the execution traces. The test case executor executed
without any intervention for hours. We note that 19.3% of the test cases could not be executed because of infeasibility.
Analysis: In analyzing the traces for our study, we determined
the new statements executed by event-sequences
of length 1, i.e., individual events. The graph in Figure
6 shows that almost 92% of the statements were
executed by just these individual events. As the length
of the event sequences increases, very few new statements
are executed (5%). Hence, a high statement
coverage of the underlying code may be obtained by
executing short event sequences.
The results of this case study can be explained in terms of
the design of the WordPad GUI. Since the GUI is an event-driven
software, a method called an event handler is implemented
for each event. Executing an event caused the execution
of its corresponding event handler. Code inspection
of the WordPad implementation revealed that there were
few or no branch statements in the code of the event han-
dler. Consequently, when an event was performed, most of
the statements in the event-handler were executed. Hence
high statement coverage was obtained by just performing
individual events. Whether other GUIs exhibit similar behavior
requires a detailed analysis of a number of GUIs and
their underlying code.
The result shows that statement coverage of the underlying
code can be a misleading coverage criterion for GUI test-
ing. A test designer who relies on statement coverage of
the underlying code for GUI testing may test only short
event sequences. However, testing only short sequences is
not enough. Longer event sequences lead to different states
of the GUI, and testing these sequences may help detect
a larger number of faults than short event sequences.
Table 3: The Number of Event-sequences for Selected Components of WordPad Covered by the Test Cases. [Many cell values were garbled in extraction; surviving entries, by increasing event-sequence length:]
FileOpen               9
FileSave               9   33   132
Print                  11  37   313   787   3085   1314
Properties             12
Main+Print+Properties  6   56   123   189   423
For example, in WordPad, the event Find Next (obtained
by clicking on the Edit menu) can only be executed after at
least 6 events have been performed; the shortest sequence
of events needed to execute Find Next is !Edit, Find,
TypeInText, FindNext2, OK, Edit, Find Next?, which has
7 events. If only short sequences (! are executed on the
GUI, a bug in Find Next may not be detected. Extensive
studies of the fault-detection capabilities of executing short
and long event sequences for GUI testing are needed, and are
targeted for future work. Another possible extension to this
experiment is to determine the correlation between event-based
coverage and other code-based coverage, e.g., branch
coverage.
5.3 Evaluating the Coverage of a Test Suite
We wanted to determine the time taken to evaluate the
coverage of a given test suite and how the resulting coverage
report could guide further testing. We used our earlier
developed planning-based test case generation system
called Planning Assisted Tester for grapHical user interface
Systems(PATHS) to generate test cases [8]. We performed
the following steps:
Identifying Tasks: In PATHS, commonly used tasks were
identified. A task is an activity to be performed by
using the events in the GUI. In PATHS, the test designer
inputs tasks as pairs (I, G), where I is the initial
GUI state before the task is performed and G is
the final GUI state after the task has been performed.
We carefully identified 72 different tasks, making sure
that each task exercised at least one unique feature of
WordPad. For example, in one task we modified the
font of text, in another we printed the document on a different paper size.
Generating Test Cases: Multiple test cases were generated
using a plan generation system to achieve these
tasks. In this manner, we generated 500 test cases
(multiple cases for each task).
Coverage Evaluation: After the test cases were available,
we executed the algorithms of Figures 4 and 5. The algorithms
were implemented using Perl and Mathematica
[19] and were executed on a Sun UltraSPARC workstation (Sparc Ultra) running SunOS 5.5.1. Even with the inefficiencies inherent in the Perl and Mathematica implementation, we could process the 500 test cases in 47 minutes (clock time). The results of applying the algorithms are summarized as coverage reports in Tables 3 and 4.

Table 4: The Percentage of Total Event-sequences for Selected Components of WordPad Covered by the Test Cases. [Many cell values were garbled in extraction; surviving rows, with columns 1' (invocation) and 2' (invocation-termination) where present, followed by increasing event-sequence lengths:]
FileOpen               90   56   17.50   0.72   0.06
FileSave               90   41   20.63   1.27   0.47   0.02
PageSetup              91
Print+Properties       100  0    46      51.15  8.18   3.87
Main+FileSave          100  0    20      13.00  8.64   1.26   0.28
Main+FormatFont        100  0    33      28.40  5.17   0.97   0.10
Main+Print+Properties  50   38.62  6.37  0.65   0.09

Table 3 shows the actual number
of event-sequences that the test cases covered. Table 4
presents the same data, but as a percentage of the total
number of event-sequences. Column 1 in Table 4
shows close to 90% event coverage. The remaining 10%
of the events (such as Cancel) were never used by the
planner since they did not contribute to a goal. Column
2 shows the event-selection coverage and the test
cases achieved 40-55% coverage. Note that since all
the components were invoked at least once, 100% invocation
coverage (column 1') was obtained. However,
none of the components were terminated immediately
after being invoked. Hence, no invocation-termination
coverage (column 2') was obtained.
This result shows that the coverage of a large test suite can
be obtained in a reasonable amount of time. Looking at
columns 4, 5, and 6 of Table 4, we note that only a small
percentage of length 4, 5, and 6 event sequences were tested.
The test designer can evaluate the importance of testing
these longer sequences and perform additional testing. Also,
the two-dimensional structure of Table 4 helps target specific
components and component-interactions. For example,
60% of the length 2 interactions among Main and PageSetup
have been tested whereas only 11% of the interactions among
Main and FileOpen have been tested. Depending on the relative
importance of these components and their interactions,
the test designer can focus on testing these specific parts of
the GUI.
6. RELATED WORK
Very little research has been reported on developing coverage
criteria for GUIs. The only exception is the work by Ostrand
et al. who briefly indicate that a model-based method may
be useful for improving the coverage of a test suite [12].
However, they have deferred a detailed study of the coverage
of the generated test cases using this type of GUI model to
future work.
There is a close relationship between test-case generation
techniques and the underlying coverage criteria used. Much
of the literature on GUI test case generation focuses on describing
the algorithms used to generate the test cases [14,
6]. Little or no discussion about the underlying coverage
criteria is presented. In the next few paragraphs, we
present a discussion of some of the methods used to develop
test cases for GUIs and their underlying coverage criteria.
We also present a discussion of automated test case generation
techniques that offer a unique perspective of GUI
coverage.
The most commonly available tools to aid the test designer
in the GUI testing process include record/playback tools
[15, 3]. These tools record the user events and GUI screens
during an interactive session. The recorded sessions are later
played back whenever it is necessary to generate the same
GUI events. Record/playback tools provide no functionality
to evaluate the coverage of a test suite. The primary reason
for no coverage support is that these tools lack a global
view of the GUI. The test cases are constructed individually
with a local perspective. Several attempts have been made
to provide more sophisticated tools for GUI testing. One
popular technique is programming the test case generator
[7]. The test designer develops programs to generate test
cases for a GUI. The use of loops, conditionals, and data
selection switches in the test case generation program gives
the test designer a broader view of the generated test cases'
coverage.
Several finite-state machine (FSM) models have also been
proposed to generate test cases [14]. Once an FSM is built,
coverage of a test suite is evaluated by the number of states
visited by the test case. This method of evaluating coverage
of a test suite needs to be studied further as an accurate
representation of the GUI's navigation results in an infinite
number of states.
White et al. present a new test case generation technique
for GUIs [18]. The test designer/expert manually identifies
a responsibility, i.e., a GUI activity. For each responsibility,
a machine model called the complete interaction sequence
(CIS) is identified manually. To reduce the size of the test
suite, the CIS is reduced using constructs/patterns in the
CIS. The example presented therein showed that testing
could be performed by using 8 test cases instead of 48. How-
ever, there is no discussion of why no loss of coverage will
occur during this reduction. Moreover, further loss of coverage
may occur in identifying responsibilities and creating
the CIS. The merit of the technique will perhaps be clearer
when interactions between the CIS are investigated.
7. CONCLUSION
In this paper, we presented new coverage criteria for GUI
testing based on GUI events and their interactions. A unit of
testing called a GUI component was defined. We identified
the events within each component and represented them as
an event-flow graph. Three new coverage criteria were de-
fined: event, event-selection, and length-n event-sequence
coverage. We defined an integration tree to identify events
among components and defined three inter-component coverage
criteria: invocation, invocation-termination and inter-component
length-n event-sequence coverage.
In the future we plan to examine the effects of the GUI's
structure on its testability. As GUIs become more struc-
tured, the integration tree becomes more complex and inter-component
testing becomes more important.
We also plan to explore the possibility of using the event-based
coverage criteria for software other than GUIs. We
foresee the use of these criteria for (1) object-oriented soft-
ware, which use messages/events for communication among
objects, (2) networking software, which use messages for
communication, and (3) the broader class of reactive soft-
ware, which responds to events.
8.
--R
A framework for testing database applications.
A mathematical framework for the investigation of testing.
Integrated data capture and analysis tools for research and testing on graphical user interfaces.
Interprocedural data flow testing.
Toward automatic generation of novice user test scripts.
The black art of GUI testing.
Using a goal-driven approach to generate test cases for GUIs
Automated test oracles for GUIs.
A planning-based approach to GUI testing
Hierarchical GUI test case generation using automated planning.
A visual test development environment for GUI systems.
Selecting software test data using data flow information.
A method to automate user interface testing using variable finite state machines.
The applicability of program schema results to programs.
Translatability and decidability questions for restricted classes of program schemas.
Generating test cases for GUI responsibilities using complete interaction sequences.
A System for Doing Mathematics by Computer.
Test data adequacy measurements.
--TR
Selecting software test data using data flow information
Mathematica: a system for doing mathematics by computer
Integrated data capture and analysis tools for research and testing on graphical user interfaces
Test data adequacy measurement
Object-oriented integration testing
Toward automatic generation of novice user test scripts
A visual test development environment for GUI systems
Using a goal-driven approach to generate test cases for GUIs
A framework for testing database applications
Automated test oracles for GUIs
Hierarchical GUI Test Case Generation Using Automated Planning
A Method to Automate User Interface Testing Using Variable Finite State Machines
Generating Test Cases for GUI Responsibilities Using Complete Interaction Sequences
--CTR
Yanhong Sun , Edward L. Jones, Specification-driven automated testing of GUI-based Java programs, Proceedings of the 42nd annual Southeast regional conference, April 02-03, 2004, Huntsville, Alabama
Aine Mitchell , James F. Power, An approach to quantifying the run-time behaviour of Java GUI applications, Proceedings of the winter international synposium on Information and communication technologies, January 05-08, 2004, Cancun, Mexico
Philippe Palanque , Regina Bernhaupt , Ronald Boring , Chris Johnson, Testing Interactive Software: A Challenge for Usability and Reliability, CHI '06 extended abstracts on Human factors in computing systems, April 22-27, 2006, Montréal, Québec, Canada
Ping Li , Toan Huynh , Marek Reformat , James Miller, A practical approach to testing GUI systems, Empirical Software Engineering, v.12 n.4, p.331-357, August 2007
Christopher J. Howell , Gregory M. Kapfhammer , Robert S. Roos, An examination of the run-time performance of GUI creation frameworks, Proceedings of the 2nd international conference on Principles and practice of programming in Java, June 16-18, 2003, Kilkenny City, Ireland
Geoffrey R. Gray , Colin A. Higgins, An introspective approach to marking graphical user interfaces, ACM SIGCSE Bulletin, v.38 n.3, September 2006
Atif M. Memon , Mary Lou Soffa, Regression testing of GUIs, ACM SIGSOFT Software Engineering Notes, v.28 n.5, September
Mikael Lindvall , Ioana Rus , Paolo Donzelli , Atif Memon , Marvin Zelkowitz , Aysu Betin-Can , Tevfik Bultan , Chris Ackermann , Bettina Anders , Sima Asgari , Victor Basili , Lorin Hochstein , Jörg Fellmann , Forrest Shull , Roseanne Tvedt , Daniel Pech , Daniel Hirschbach, Experimenting with software testbeds for evaluating new technologies, Empirical Software Engineering, v.12 n.4, p.417-444, August 2007
Qing Xie , Atif M. Memon, Designing and comparing automated test oracles for GUI-based software applications, ACM Transactions on Software Engineering and Methodology (TOSEM), v.16 n.1, p.4-es, February 2007
Atif Memon , Adithya Nagarajan , Qing Xie, Automating regression testing for evolving GUI software: Research Articles, Journal of Software Maintenance and Evolution: Research and Practice, v.17 n.1, p.27-64, January 2005
Atif Memon , Adithya Nagarajan , Qing Xie, Automating regression testing for evolving GUI software, Journal of Software Maintenance: Research and Practice, v.17 n.1, p.27-64, January 2005 | component testing;integration tree;event-flow graph;event-based coverage;GUI testing;GUI test coverage |
503246 | An empirical study on the utility of formal routines to transfer knowledge and experience. | Most quality and software process improvement frameworks emphasize written (i.e. formal) documentation to convey recommended work practices. However, there is considerable skepticism among developers to learn from and adhere to prescribed process models. The latter are often perceived as overly "structured" or implying too much "control". Further, what is relevant knowledge has often been decided by "others"---often the quality manager. The study was carried out in the context of a national software process improvement program in Norway for small- and medium-sized companies to assess the attitude to formalized knowledge and experience sources. The results show that developers are rather skeptical at using written routines, while quality and technical managers are taking this for granted. This is an explosive combination. The conclusion is that formal routines must be supplemented by collaborative, social processes to promote effective dissemination and organizational learning. Trying to force a (well-intended) quality system down the developers' throats is both futile and demoralizing. The wider implications for quality and improvement work is that we must strike a balance between the "disciplined" or "rational" and the "creative" way of working. | Figure
1. A model of knowledge conversion between tacit and
explicit knowledge [27].
Figure
1 expresses that practitioners first internalize new
knowledge (i.e. individual learning). The new knowledge is then
socialized into revised work processes and changed behavior
(group learning). The new work processes and the changed
behavior are then observed and abstracted, i.e. externalized. This
new knowledge is then combined to refine and extend the existing
knowledge (organizational learning). This process continues in
new cycles etc.
To enable learning is the crucial issue, both at the individual,
group, and organizational level. The latter means creating and
sustaining a learning organization that constantly improves its
work, by letting employees share experience with each other.
Around the underlying experience bases, there may be special
(sub-)organizations to manage and disseminate the stored
experience and knowledge, as exemplified by the Experience
Factory [7]. We also refer to the workshop series of Learning
Software Organizations [4] [9].
Other fields have introduced the term organizational or corporate
memory to characterize an organization's strategic assets,
although not only from a learning point of view [1].
The knowledge engineering community has also worked on
experience bases, often with emphasis on effective knowledge
representations, deduction techniques etc., and towards a wide
range of applications. The subfield of Case-Based Reasoning [3]
has sprung up from this work, enabling reuse of similar, past
information (cases) to better master new situations. We will also
mention the subfield of Data Mining [20].
Social anthropologists and psychologists have studied how
organizations learn, and how their employees make use of
information sources in their daily work. Much R&D effort has
been spent on the "externalizing" flow, looking for valid
experience that can be analyzed, generalized, synthesized,
packaged and disseminated in the form of improved models or
concepts. For instance, to make, calibrate and improve an
estimation model based on the performance of previous software
projects. Explicit knowledge may nevertheless be misunderstood
due to lack of context and nuances, e.g. how to understand the
context of post-mortems?
However, the hard part is the "internalizing" flow. That is, how to
make an impact on current practice, even if updated knowledge
may be convincingly available? See for instance the ethnographic
study on the use of quality systems in [40]. Typical inhibitors are
"not-invented-here", mistrust ("been-burned-before"), lack of extra time/resources ("not-getting-started"), or plain unwillingness
to try something new or different (like adhering to formal
procedures in a quality system). A study of maintenance
technicians for copy machines indicated that such experts were
most likely to ask their colleagues for advice, rather than to look it up in, or even to follow, the book [10]. Indeed, how many times have computer scientists asked their office mates about commands in Word or NT-Windows, instead of directly
consulting relevant documentation - although a query into the
latter can be hard to formulate.
Furthermore, the existence of software quality manuals, either on
paper in thick binders (sometimes 1-2 m in the shelves) or in web
documents on an Intranet, is no guarantee for their use. In fact,
since manuals may dictate people on how to perform their job,
traditional quality departments in many software organizations are
not looked upon with high esteem by developers. For instance,
there are over 250 proposed software standards [32], many of
them recommending standard process models, but how many of
these are in practical use?
So, if we are to succeed with formal routines and explicit
knowledge in a quality system or a SEB to achieve learning, we
must not carry the traditional QA hat of control. This does not
mean that all, formal knowledge in the form of books, reports etc.
(like this article) has to be discarded. The lesson is just that formal
routines must be formulated and introduced with proper
participation from the people involved, in order to have the
intended effect on practice.
Lastly, many of the ideas and techniques on quality improvement
(TQM and similar) come from manufacturing, with rather stable
products, processes and organizations. Information technology, on
the other hand, is characterized by rapid product innovation, not
gradual process refinement [33]. One IT year is like a dog
year (7 years) in other disciplines, and time-to-market seems
sacred (i.e. schedule pressure). The strength of many software
SMEs (Small and Medium-sized Enterprises) lies in their very
ability to turn around fast and to convert next week's technologies
into radically new products and services. Barrett [6] has used the
term improvisation, a jazz metaphor, to characterize performers
that execute evolving activities while employing a large
competence base. With reference to our software development
context, we must carefully adopt a set of quality and improvement
technologies that can function in a very dynamic environment - so
how to manage constant change [17]? Since SPI assumes that
there is something stable that can be improved, we must pick
our learning focus accordingly. For instance, the Norwegian
Computer Society (www.dnd.no) is now offering a course in
chaos and complexity theory as an alternative to manage highly
evolving projects.
However, it is fair to say that especially TQM is aware of the
cultural and social dimensions of quality work. TQM has a strong
emphasis on creating a learning organization, and having all
employees participate and involve themselves in order to satisfy
their customers.
So, how can software organizations best systematize, organize and
exploit previous experience in order to improve their work?
3. CONTEXT, QUESTIONS, AND
3.1 The SPIQ Project
The SPIQ project [12] was run for three years in 1997-99, after a
half-year pre-study in 1996. SPIQ stands for SPI for better
Quality. The project, which was funded in part by the Research
Council of Norway, involved three research institutions and 12 IT
companies, mostly SMEs. More than 20 SPI pilot projects were
run in these companies. A follow-up project called PROFIT is
now carried out in 2000-2002.
The main result of the SPIQ project was a pragmatic method
handbook [16], with the following components:
A dual, top-down/bottom-up approach, using TQM [15] and
Quality Improvement Paradigm [7] ideas.
An adapted process for ESSI-type [19] Process Improvement
Experiments (PIEs).
The Goal-Question-Metric (GQM) method [8], and e.g. GQM
feedback sessions.
The Experience Factory concept [7], to refine and disseminate
project experiences.
An incremental approach, relying on action research [22].
Reported empirical studies from five SPIQ companies.
Typical work in the 12 SPIQ companies included pilot projects to
test out a certain improvement technology, like novel inspection
techniques, incremental development, or use of measurement and
software experience bases.
For further results from comparative studies of SPI success in the
SPIQ companies and in several other Scandinavian PIEs, all
emphasizing organizational and cultural factors see e.g. [13], [14],
[18], and [37].
3.2 How the Study Was Performed
Organization: The actual study was carried out between
NTNU/SINTEF and five companies participating in the SPIQ
project. Data collection was carried out by two NTNU students in
the last year of their M.Sc. study, as part of a pre-thesis project
[11]. The two students were advised by the two authors of this
paper, the former being their teacher and also a SPIQ researcher,
the latter being a researcher and Ph.D. student attached to the
project.
Preparation: First, the students read and learned about the project
and relevant literature. Then we tried an initial formulation of
some important issues in the form of research questions, and
discussed these. At the same time, we contacted potential
companies and checked their willingness to participate and their
availability. Then a more detailed interview guide (see 3.3) was
designed, in dialogue between the students and their advisors. The
companies had been briefed about the questions, and when and
how the interviews were going to be run.
Research questions - important issues to address are:
Q1: What is the knowledge of the routines being used?
Q2: How are these routines being used?
Q3: How are they updated?
Q4: How effective are they as a medium for transfer of
knowledge and experience?
And, furthermore, are there important differences between
developers and managers, and how much cooperation is involved
in making and updating the routines?
Subjects: Initially, we tried to have a dozen companies involved,
but the time frame of the students' availability (three months in
spring of 1999) only allowed five companies. One of these was in
Trondheim, the rest in Oslo. Three of the companies were ISO-
9001 certified. Two of the companies were IT/telecom companies,
the rest were software houses. A convenience sample (i.e.
available volunteers) of 23 persons was interviewed based on their experience with SPI, of whom 13 were developers and 10
managers. The latter group included one quality manager and one
software manager (e.g. division or project manager) from each of
the five companies.
Data collection: After finishing the interview guide, this was sent
by email to the respondents. A few days later, the two students
visited the companies, and spent a full day at each company. At
each place they spent up to one hour with each respondent in
semi-structured interviews. In each interview, the four questions
were treated one after another. One of the students was asking the
questions, and both students made notes (interview records)
during the interview. The other student served as a scribe, and
wrote down a structured summary immediately after the
interview. The first student then checked this against his own
notes.
Data analysis: The ensuing categorization and data analysis was
done by the two students, in cooperation with the authors, and
reported in the students' pre-diploma thesis.
3.3 The Interview Guide
As mentioned, we formulated four main research questions,
with a total of 14 sub-questions:
Q1. Knowledge of the routines.
1.1 Describe the (possible) contents in routines being used for
software development.
1.2 How were these routines introduced in the company?
1.3 What was the purpose of these routines?
1.4 What is you personal impression of the routines?
Q2. Use of routines.
2.1 What status does a given routine have among developers?
2.2 To what degree are the routines actually being used?
2.3 Who are the most active/passive users?
2.4 What is the availability of the routines?
2.5 How is follow-up and control of usage done?
Q3. Updating of routines.
3.1 What procedures are used to update the routines?
3.2 Who participates in the update activities?
Q4. Routines as a medium for transfer of knowledge and
experience.
4.1 Do you regard written routines as an efficient medium for
transfer of knowledge and experience?
4.2 What alternatives to written routines do you think are useful
or in use in the company?
4.3 What barriers against transfer of experiences do you think
are most important?
The interview guide contained advice on how to deal with
structured questions, usually with three answer categories such as
yes-maybe-no or little-some-much. We allowed more
unstructured commentaries in the form of prose answers to solicit
more direct and commentary opinions.
4. RESULTS
In this section, we present the results of our study regarding the
utility of formal routines as a medium for transfer of knowledge
and experience. The focus is on participation and potential
differences in opinion between developers and managers
regarding the utility of the routines.
4.1 Knowledge of the Routines
All respondents had a fairly good knowledge of the routines that
were in place in their respective companies. In fact, two thirds of
the respondents showed good knowledge about the content of the
routines. Table 1 illustrates this, and shows how well developers
and managers were able to describe the specific contents of the
routines in their company.
Table 1. Knowledge of company routines.

          Software developers       Managers
          Frequency   Percent       Frequency   Percent
Little    -           -             -           -
Some      6           46            2           20
Much      7           54            8           80
However, when it came to knowledge about how the routines
were introduced, 50% of the developers did not know anything
about this process. On the other hand, only one manager did not
know about the introduction process. All in all, it turned out that
about 30% of the developers and 70% of the managers had
actively participated in the introduction of routines (Table 2).
Table 2. Degree of involvement during introduction of routines.

              Degree of involvement
              Low                   High
              Freq.   Percent       Freq.   Percent
Developers    9       69            4       31
Managers      3       30            7       70
Furthermore, there seemed to be a common understanding regarding
the objective of having formal routines. Most respondents said
that such routines were useful with respect to quality assurance.
Other respondents said that they would enable a more unified way
of working. However they emphasized that:
"Routines should not be formalistic, but rather useful and necessary."
Respondents in the three ISO-9001 certified companies claimed
that their routines were first and foremost established to get the
certificate on the wall, and that the quality of their software
processes had gained little or nothing from the ISO certification.
One of the respondents expressed his views on this by the
following example:
"You might be ISO certified to produce lifebelts in concrete, as long as you put the exact same amount of concrete in each lifebelt."
Although some of the respondents were critical of the routines,
stating that:
"10% of the routines are useful, while the remaining 90% is nonsense."
Most respondents, nevertheless, had a good impression of the
routines, typically stating that:
"Routines are a prerequisite for internal collaboration."
"Routines are a reassurance and of great help."
4.2 Use of Routines
Software developers and managers agreed on the degree to which
the routines were used. In general, they answered that about 50%
of the routines were in use, and that the more experienced
developers used the routines to a lesser extent than the less experienced developers did. Furthermore, it was commonly agreed that:
"There is no point in having routines that are not considered useful."
However, the status of the routines among the software developers
was highly divergent, as seen from the following statements:
"The routines are generally good and useful, but some developers are frustrated regarding their use."
"The system is bureaucratic - it was better before, when we had more freedom to decide for ourselves what should best be done."
"The routines are easy to use."
"Routines are uninteresting and revision meetings are boring."
4.3 Updating of Routines
None of the companies had scheduled revisions as part of the
process for updating their routines. Most answers to this issue
were rather vague. Some respondents explained that such
revisions were informally triggered, while other respondents did
not know how to propose and implement changes to existing
routines.
However, respondents from all of the companies, both managers
and software developers, said that all employees in their
respective companies could participate in the revision activities if
they wanted to.
4.4 Routines as a Medium for Transfer of
Knowledge and Experience
The answers to this issue varied a lot, and indicated highly
different attitudes regarding the effectiveness of formal routines
for knowledge and experience transfer. In particular, there seemed to be a clear difference in judgment between software developers
and managers. While seven of the ten managers regarded written
routines as an efficient medium for knowledge transfer, none of
the developers did! Furthermore, half of the developers considered
such routines to be inefficient for knowledge transfer, while only
one of the managers shared this view.
Typically, managers said that written routines were important as
means for replacing the knowledge of the people that had left the
company. Software developers, on the other hand, did not make
such a clear connection between experience, knowledge transfer
and formal routines. One software developer said that different
groups within the company never read each other's reports, while
another developer maintained that it would take too much time to
learn about the experience of the other groups. Several of the
developers explained their views by stating that the
documentation was not good enough, was hard to find, was boring to use, and took too much time.
When asked about useful alternatives to written routines, the
respondents answered that they regarded some kind of
"Experience base" or "Newsgroup" as the highest ranked alternative. Other high-ranked alternatives were "Socialization", "Discussion groups", "Experience reports", "Group meetings", and "On-the-job training". Table 3 shows these alternatives in
rank order (1 is best) for software developers and managers
respectively.
We also asked the respondents about what they regarded as the
most important barriers against transfer of knowledge and
experience. Nearly all of them said that such transfer, first and
foremost, is a personal matter depending on how much each
individual wishes to teach their lessons-learned to others.
Furthermore, the willingness to share depends on available time,
personality, self-interest, and company culture.
Table 3. Alternative media for knowledge transfer.

Medium                        Rank (Developers)   Rank (Managers)
Experience base/newsgroups    1                   1
Socialization                 2                   4
Discussion groups             3                   2
Experience reports            4                   3
On-the-job-training           5                   6
Work with ext. consultants    6                   -
Group meetings                7                   5
As shown in Table 4, seven (six developers and one manager) of
the 23 respondents answered a definite "No" to the question
regarding the efficiency of written routines as a medium for
transfer of knowledge and experience. Likewise, seven
respondents (managers only) answered an equally clear "Yes" to
the same question. The last nine respondents answered somewhere
in between, and said that in some cases written routines could be
effective, while in other situations they would rather be a barrier.
Table 4. Do you regard written routines as an efficient medium for transfer of knowledge and experience?

        Software developers       Managers
        Frequency   Percent       Frequency   Percent
Yes     -           -             7           70
No      6           46            1           10
Both    7           54            2           20
Due to the rather small sample size in this study, and the low
expected frequency in several of the cells in Table 4, we
compared the respondents' assessments of the routines and their
job function using Fisher's exact probability test. With this test,
the exact probability (or significance level) that the obtained result
is purely a product of chance is calculated [23]. The test statistic
of 13.02 was highly significant (p=0.002, two-tailed). Thus, we
rejected the hypothesis of independence and concluded that there
is a difference in the distribution of assessment of the usefulness
of formal routines as an efficient medium for transfer of
knowledge and experience between software developers and
managers.
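For a 3x2 table such as Table 4, the exact test can be computed by enumerating all tables with the same margins (the Freeman-Halton extension of Fisher's test). The Python sketch below, with the cell counts taken from Table 4 (rows Yes/No/Both, columns developers/managers), is illustrative rather than the tool the authors used; it should give a two-tailed p-value in the neighborhood of the reported 0.002.

from math import factorial

def _prob(cells, row_sums, col_sums, n):
    # multivariate hypergeometric probability of a table with fixed margins
    num = 1
    for s in row_sums + col_sums:
        num *= factorial(s)
    den = factorial(n)
    for c in cells:
        den *= factorial(c)
    return num / den

def fisher_freeman_halton_3x2(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    p_obs = _prob([c for r in table for c in r], rows, cols, n)
    p_val = 0.0
    for a in range(min(rows[0], cols[0]) + 1):           # cell (Yes, developers)
        for b in range(min(rows[1], cols[0] - a) + 1):   # cell (No, developers)
            c = cols[0] - a - b                          # cell (Both, developers)
            if 0 <= c <= rows[2]:
                cells = [a, rows[0] - a, b, rows[1] - b, c, rows[2] - c]
                p = _prob(cells, rows, cols, n)
                if p <= p_obs * (1 + 1e-9):              # tables at least as extreme
                    p_val += p
    return p_val

print(fisher_freeman_halton_3x2([[0, 7], [6, 1], [7, 2]]))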
Since software developers had been involved in the process of
introducing the routines to a much lesser extent than the
managers, we compared the respondents' assessment of the
routines with the level of involvement using Fisher's exact test
Table
5). The test statistic of 14.71 was highly significant
(p<0.0005, two-tailed). Thus, we concluded that there is a
difference in the assessment of the usefulness of formal routines
as an efficient medium for transfer of knowledge and experience
with respect to the degree of involvement in the introduction
process.
Table 5. Degree of involvement vs. assessment of formal
routines as an efficient medium for transfer of knowledge and
experience.

Efficient medium?   Low involvement   High involvement
No                        7                  -
Yes                       -                  7
Both                      5                  4

5. DISCUSSION
In this section, we restrict the discussion to possible explanations
of why none of the software developers in our study regarded
formal routines as an efficient medium for transfer of knowledge
and experience. The reason for this is that we regard formalization
and participation as important issues for the relevance of much of
the SPI work done today by both researchers and practitioners.
The respondents in the study were software engineers and
managers with an engineering background. Furthermore, software
and quality managers with an engineering background wrote most
of the routines. Thus, the routines were for a large part written by
engineers - for engineers. Still, there was a highly significant
difference in attitudes regarding the usefulness of the routines for
transferring knowledge and experience between software
engineers and managers.
As seen from our point of view, there are three main reasons for
the observed diversity regarding the assessment of the efficiency
of routines. One is the potential conflict between the occupational
cultures of software developers and managers. The second reason
has to do with the degree of developer participation in developing
and introducing the routines. The third explanation has to do with
the views of working and learning and thus, the ability of written
routines in general to transfer human knowledge and experience.
These reasons are discussed in 5.1-5.3 below.

5.1 Occupational Culture
There was a general agreement among all respondents that the
intention of introducing formal routines was to contribute to an
efficient process of developing quality software. In other words,
the intention behind the formal routines was to provide
appropriate methods and techniques, and standardize work
processes needed to solve the problems at hand.
The differences we observed in attitude to the efficiency of formal
routines between software developers and managers have a close
resemblance to the lack of alignment among executives,
engineers, and operators described by Schein [34]. He explained
these differences from a cultural perspective, defining culture as
"a set of basic tacit assumptions about how the world is and ought
to be, that a group of people share and that determines their
perceptions, thoughts, feelings, and, to some degree, their overt
behavior" (ibid., p. 11). Schein claimed that major occupational
communities do not really understand each other, and that this
leads to failures in organizational learning. According to Schein,
the engineering culture and the executive culture have a common
preference to see people as impersonal resources that generate
problems rather than solutions. Furthermore, the engineers' need
to do real engineering will drive them toward simplicity and
routinized solutions that often ignore the social realities of the
workplace; see work by Kunda [24] and Thomas [38].
Against this background, we can more easily understand the
preference for formal routines within the SPI community as
espoused by quality managers or members of Software
Engineering Process Groups (SEPGs). Likewise, managers will
rather put emphasis on rules, procedures, and instructions than on
dialog, discussion and employee participation.
Software development, however, is radically different from
manufacturing. The former is not a mechanical process with
strong causal models, where we just need to establish the right
formal routines. Rather, the developers view software
development largely as an intellectual and social activity.
Therefore, we cannot apply a rationalistic, linear model to
software engineering. We should admit that reality for most
software organizations is a non-deterministic, multi-directional
flux that involves constant negotiation and renegotiation among
and between the social groups shaping the software [17].
This does not mean that we should discard discipline and
formalization altogether. What is needed is to balance the odd
couple of discipline and creativity in software development [21].
This balance can be challenging, since losing sight of the creative,
design-intense nature of software work leads to stifling rigidity,
while losing sight of the need for discipline leads to chaos.
This leads us to the second possible reason for the divergent
attitudes between developers and managers; that of employee
participation around formal routines.

5.2 Participation
Employee participation, and the way people are treated, has been
noted as a crucial factor in organizational management and
development ever since the famous productivity studies at
Western Electric's Hawthorne plant in the 1920s. The results of
these studies started a revolution in management thinking,
showing that even routine jobs can be improved if the workers are
treated with respect.
Interestingly, our study shows that not only did managers
participate significantly more during the introduction of routines,
but also during the actual development of the routines. However,
no one is more expert in the realities of a software company's
business than the software developers themselves. They are not
only experts on how to do the work - they are also the experts on
how to improve it. Thus, the developers are a software
organization's most important source of productivity and profits -
the human capital view. It is therefore important to involve all
the people that take part in a problem or its solution, and have
decisions made by these. In this respect, all of the companies
violated one of the most important aspects of employee
involvement on their own work environment. They may even
have violated the Norwegian work environment legislation!
Formalization is a central feature of Weber's [39] bureaucratic
ideal type. Viewed in the light of our results, it is not surprising
that research on formalization often presents conflicting empirical
findings regarding its efficiency. Adler and Borys [2] explained
this divergence by saying that prior research has focused on
different degrees of formalization, and has paid insufficient
attention to different types of formalization. They emphasize an
enabling type of formalization, where procedures provide
organizational memory as a resource to capture lessons-learned or
best practice. The opposite is the coercive type of formalization,
where procedures are presented without a motivating rationale and
thus tend to be disobeyed, resulting in a non-compliant process.
Our results regarding the developers' assessment of the routines
closely resemble the coercive type of formalization. The
developers are clearly not against formal routines. In fact, they
expressed views in favor of such routines, especially those that
captured prior project experience. Contrary to the existing
routines, which they deemed coercive, they wanted routines of the
enabling type of formalization. Thus, the highest ranked
alternative to formal routines was some sort of experience base
or newsgroup.
5.3 Working and Learning
Another aspect of our results is that they support Brown and
Duguid's [10] perspective on learning-in-working. That is, we
should emphasize informal, as opposed to formal learning. The
same authors referred to these learning modes, respectively, as
non-canonical and canonical practices. They suggested that
training and socialization processes are likely to be ineffective if
based on canonical practice, instead of the more realistic non-canonical
practice:
People are typically viewed as performing their jobs
according to formal job descriptions, despite the fact that
daily evidence points to the contrary. They are held
accountable to the map, not to road conditions. (ibid., p. 42)
Thus, formal routines alone are inadequate, and might very well
demand more improvisational skills among developers. This is
because of the rigidities of the routines, and the fact that they do
not reflect actual experience [17]. Although many routines are
prescriptive and simple, they are still hard to change, and they
cannot help in all the complex situations of actual practice from
which they are abstracted.
It is not surprising, therefore, that socialization and discussion
groups were among the highest ranked alternatives to formal
routines. This is also in agreement with Brown and Duguid's
finding that story-telling is of utmost importance for dealing
with the complexities of day-to-day practice. Furthermore these
authors highlighted story telling as a means of diagnosing
problems and as shared repositories of accumulated wisdom. This
is similar to Zuboff's [40] emphasis on story-telling to deal with
smart machines, and to Greenwood and Levin's [22] use of
narratives in action research. Thus, contrary to the rigidities of
formal routines, stories and the tacit social activities are seen as
more flexible, adaptable and relevant by the software developers
in our study.
Furthermore, our results support the assertion that significant
learning should not be divorced from its specific context - so-called
situated learning. Therefore, any routines, generalizations
or other means that strip away context should be examined with
caution. Indeed, it seems that learning could be regarded as a
product of a community, i.e. organizational learning, rather than
of the individual in it. Thus, lessons-learned cannot easily be
transferred from one setting to another; see Lave and Wenger [25].
5.4 Implications
Although the study is limited, the discussion above suggests
several implications. First, studies of the effects of formalization,
whether they are enabling or coercive, should focus on the
features of the actual routines as well as their implementation. In
addition, we should pay attention to the process of designing the
features and the goals that govern this process.
Second, we must recognize and confront the implications of the
deeply embedded and tacit assumptions of the different
occupational cultures. And, furthermore, learn how to establish
better cross-cultural dialogues in order to enable organizational
learning and SPI.
Third, a major practical implication is that managers should
recognize the needs of balancing discipline and creativity, in
order to supplement formal routines with collaborative, social
processes. Only by a deep and honest appreciation of this, can
managers expect effective dissemination of knowledge and
experience within their organization.
Based on the findings of this study, we conclude that both
software managers and developers must maintain an open
dialogue regarding the utility of formal routines. Such a dialogue
will open the way for empirically based learning and SPI, and thus
attain the rewards of an enabling type of formalization.
5.5 Limitations and Recommendations for
Future Research
This study focused on the utility of formal routines to transfer
knowledge and experience. Although it can provide valuable
insights for introduction of formal routines in the software
industry, our study is not without limitations.
First, the small sample and lack of randomness in the choice of
respondents may be a threat to external validity. In general, most
work on SPI suffers from non-representative participation, since
companies that voluntarily engage in systematic improvement
activities must be assumed to be better-than-average.
Second, a major threat to internal validity is that we have not
assessed the reliability of our measures. Variables such as degree
of involvement and efficiency of routines are measured on a
subjective ordinal scale. An important issue for future studies is
therefore to ensure reliability and validity of all measures used,
see [18]. We may also ask if the respondents were truthful in their
answers. For instance, they may have sensed we were looking for
trouble, and thus giving us what we wanted - i.e. exaggerating
possible problems. However, their answers to the four main
questions and their added qualitative comments show a consistent
picture of skepticism and lack of participation concerning formal
routines. We therefore choose to generally believe their answers.
Despite the mentioned limitations and lack of cross-checks, we
feel that this study makes an important contribution to the
understanding of formal routines and their role in organizational
learning and SPI.
Future studies should examine the enabling features of formal
routines in much more detail. The features could be refined and
operationalized and used for cross-sectional and longitudinal
studies of a much larger number of companies. Furthermore, such
studies should include a multiple respondent approach to cover all
major occupational cultures. They should also perform
supplementary, ethnographic studies on how developers really
work and how their work relates to formal routines - see [31] on
observational studies of developers at AT&T.
6. CONCLUSION
Results from the survey reported in this paper show that software
developers do not perceive formal routines alone as an efficient
way to transfer knowledge and experience. Furthermore, the study
confirms our suspicions about large differences in perception of
the utility of formal routines to transfer experiences and
knowledge. That is, developers are skeptical of adopting formal
routines found in traditional quality systems. They also want
such routines to be introduced and updated in a more cooperative
manner.
These results are not revolutionary and are in line with many other
investigations on similar themes [2], [24], [34]. See also Parnas
and Clements' article [29] on how to fake a rational design
process. So in spite of a small sample, we think that the results are
representative for a large class of software companies.
The remedy seems to be to create a more cooperative and open work
atmosphere, with strong developer participation in designing and
promoting future quality systems. The developers also seem open
to start exploiting new electronic media as a means for
collaboration and linking to newer SEB technologies - see also
our previous studies [13] on this. However, the major and most
difficult work remains non-technical, that is, to build a learning
organization.
Lastly, we were not able to make precise hypotheses on our four
issues beforehand, so the study has the character of a preliminary
investigation. Later studies may be undertaken with more precise
hypotheses and on a larger sample.
7.
ACKNOWLEDGEMENTS
Thanks to colleagues in the SPIQ project, to colleagues at NTNU
and SINTEF, and not least to the students Jon E. Carlsen and
Marius Fornæss, who did the fieldwork.
8.
--R
Wolfgang M
Carlsen and Marius Fornæss
"Software Experience Bases: A Consolidated Evaluation and Status Report"
Out of the crisis
Tore Dybå
Tore Dybå
Tore Dybå
ESSI project office
Chapter on
Introduction to Action Research: Social Research for Social Change
Control and Commitment in a High-Tech Corporation
Legitimate Peripheral Participation
Marciniak, editor, Encyclopedia of Software Engineering - 2 Volume <Volume>Set</Volume>
The Knowledge-Creating Company
The Capability Maturity Model for Software: Guidelines for Improving the Software Process
"Discipline of Market Leaders and Other Accelerators to Measurement"
The Fifth Discipline: The Art and Practice of the Learning Organization
What Machines Can't Do
Makt og byråkrati
In the Age of the Smart Machine
--TR
A rational design process: How and why to fake it
In the age of the smart machine: the future of work and power
Encyclopedia of software engineering
People, Organizations, and Process Improvement
Software creativity
The capability maturity model
An ISO 9000 approach to building quality software
From data mining to knowledge discovery
Reexamining organizational memory
An Instrument for Measuring the Key Factors of Success in Software Process Improvement
Evaluating software engineering standards
Improvisation in Small Software Organizations
Software Experience Bases
<I>Coda</I>
--CTR
Tore Dybå, Enabling Software Process Improvement: An Investigation of the Importance of Organizational Issues, Empirical Software Engineering, v.7 n.4, p.387-390, December 2002
Reidar Conradi , Alfonso Fuggetta, Improving Software Process Improvement, IEEE Software, v.19 n.4, p.92-99, July 2002
Tore Dybå, An Empirical Investigation of the Key Factors for Success in Software Process Improvement, IEEE Transactions on Software Engineering, v.31 n.5, p.410-424, May 2005 | knowledge transfer;software process improvement;knowledge management;formal routines;developer attitudes |
503505 | Featherweight Java. | Several recent studies have introduced lightweight versions of Java: reduced languages in which complex features like threads and reflection are dropped to enable rigorous arguments about key properties such as type safety. We carry this process a step further, omitting almost all features of the full language (including interfaces and even assignment) to obtain a small calculus, Featherweight Java, for which rigorous proofs are not only possible but easy. Featherweight Java bears a similar relation to Java as the lambda-calculus does to languages such as ML and Haskell. It offers a similar computational "feel," providing classes, methods, fields, inheritance, and dynamic typecasts with a semantics closely following Java's. A proof of type safety for Featherweight Java thus illustrates many of the interesting features of a safety proof for the full language, while remaining pleasingly compact. The minimal syntax, typing rules, and operational semantics of Featherweight Java make it a handy tool for studying the consequences of extensions and variations. As an illustration of its utility in this regard, we extend Featherweight Java with generic classes in the style of GJ (Bracha, Odersky, Stoutamire, and Wadler) and give a detailed proof of type safety. The extended system formalizes for the first time some of the key features of GJ. | Introduction
"Inside every large language is a small language
struggling to get out."
Formal modeling can offer a significant boost to the design
of complex real-world artifacts such as programming
languages. A formal model may be used to describe
some aspect of a design precisely, to state and
prove its properties, and to direct attention to issues
that might otherwise be overlooked. In formulating a
model, however, there is a tension between completeness
and compactness: the more aspects the model addresses
at the same time, the more unwieldy it becomes. Often
it is sensible to choose a model that is less complete but
more compact, offering maximum insight for minimum
investment. This strategy may be seen in a flurry of
recent papers on the formal properties of Java, which
omit advanced features such as concurrency and reflection
and concentrate on fragments of the full language
to which well-understood theory can be applied.
We propose Featherweight Java, or FJ, as a new contender
for a minimal core calculus for modeling Java's
type system. The design of FJ favors compactness over
completeness almost obsessively, having just five forms
of expression: object creation, method invocation, field
access, casting, and variables. Its syntax, typing rules,
and operational semantics fit comfortably on a single
page. Indeed, our aim has been to omit as many features
as possible - even assignment - while retaining
the core features of Java typing. There is a direct correspondence
between FJ and a purely functional core of
Java, in the sense that every FJ program is literally an
executable Java program.
FJ is only a little larger than Church's lambda calculus
[3] or Abadi and Cardelli's object calculus [1],
and is significantly smaller than previous formal models
of class-based languages like Java, including those put
forth by Drossopoulou, Eisenbach, and Khurshid [?],
Syme [20], Nipkow and Oheimb [17], and Flatt, Krish-
namurthi, and Felleisen [14]. Being smaller, FJ lets us
focus on just a few key issues. For example, we have
discovered that capturing the behavior of Java's cast
construct in a traditional "small-step" operational semantics
is trickier than we would have expected, a point
that has been overlooked or underemphasized in other
models.
One use of FJ is as a starting point for modeling
languages that extend Java. Because FJ is so compact,
we can focus attention on essential aspects of the exten-
sion. Moreover, because the proof of soundness for pure
FJ is very simple, a rigorous soundness proof for even a
significant extension may remain manageable. The second
part of the paper illustrates this utility by enriching
FJ with generic classes and methods à la GJ [7]. Although
the model omits a few important aspects of GJ
(such as "raw types" and type argument inference for
generic method calls), it has already revealed portions
of the design that were underspecified and bugs in the
GJ compiler.
Our main goal in designing FJ was to make a proof of
type soundness ("well-typed programs don't get stuck'')
as concise as possible, while still capturing the essence
of the soundness argument for the full Java language.
Any language feature that made the soundness proof
longer without making it significantly different was a
candidate for omission. As in previous studies of type
soundness in Java, we don't treat advanced features
such as concurrency, inner classes, and reflection. Other
Java features omitted from FJ include assignment, in-
terfaces, overloading, messages to super, null pointers,
base types (int, bool, etc.), abstract method declara-
tions, shadowing of superclass fields by subclass fields,
access control (public, private, etc.), and exceptions.
The features of Java that we do model include mutually
recursive class definitions, object creation, field access,
method invocation, method override, method recursion
through this, subtyping, and casting.
One key simplification in FJ is the omission of as-
signment. We assume that an object's fields are initialized
by its constructor and never changed afterwards.
This restricts FJ to a "functional" fragment of Java,
in which many common Java idioms, such as use of
enumerations, cannot be represented. Nonetheless, this
fragment is computationally complete (it is easy to encode
the lambda calculus into it), and is large enough
to include many useful programs (many of the programs
in Felleisen and Friedman's Java text [12] use a purely
functional style). Moreover, most of the tricky typing
issues in both Java and GJ are independent of assign-
ment. An important exception is that the type inference
algorithm for generic method invocation in GJ has some
twists imposed on it by the need to maintain soundness
in the presence of assignment. This paper treats a simplified
version of GJ without type inference.
The remainder of this paper is organized as follows.
Section 2 introduces the main ideas of Featherweight
Java, presents its syntax, type rules, and reduction
rules, and sketches a type soundness proof. Section 3
extends Featherweight Java to Featherweight GJ, which
includes generic classes and methods. Section 4 presents
an erasure map from FGJ to FJ, modeling the techniques
used to compile GJ into Java. Section 5 discusses
related work, and Section 6 concludes.
2 Featherweight Java

In FJ, a program consists of a collection of class definitions
plus an expression to be evaluated. (This expression
corresponds to the body of the main method in
Java.) Here are some typical class definitions in FJ.
class Pair extends Object {
  Object fst;
  Object snd;
  Pair(Object fst, Object snd) {
    super(); this.fst = fst; this.snd = snd;
  }
  Pair setfst(Object newfst) {
    return new Pair(newfst, this.snd);
  }
}
class A extends Object { A() { super(); } }
class B extends Object { B() { super(); } }
For the sake of syntactic regularity, we always include
the supertype (even when it is Object), we always
write out the constructor (even for the trivial classes
A and B), and we always write the receiver for a field
access (as in this.snd) or a method invocation. Constructors
always take the same stylized form: there is
one parameter for each field, with the same name as the
field; the super constructor is invoked on the fields of
the supertype; and the remaining fields are initialized
to the corresponding parameters. Here the supertype is
always Object, which has no fields, so the invocations
of super have no arguments. Constructors are the only
place where super or = appears in an FJ program. Since
FJ provides no side-effecting operations, a method body
always consists of return followed by an expression, as
in the body of setfst().
In the context of the above definitions, the expres-
sion
new Pair(new A(), new B()).setfst(new B())
evaluates to the expression
new Pair(new B(), new B()).
There are five forms of expression in FJ. Here, new A(),
new B(), and new Pair(e1,e2) are object constructors,
and e3.setfst(e4) is a method invocation. In the body
of setfst, the expression this.snd is a field access, and
the occurrences of newfst and this are variables. FJ
differs from Java in that this is an ordinary variable
rather than a special keyword.
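Since every FJ program is literally an executable Java program, the classes above can be compiled and run unchanged. The harness below is our addition for illustration; the Main class and the println calls are not part of the calculus.

class Main {
  public static void main(String[] args) {
    // Corresponds to the example expression; p plays the role of
    // the result new Pair(new B(), new B()).
    Pair p = new Pair(new A(), new B()).setfst(new B());
    System.out.println(p.fst.getClass().getName());  // prints "B"
    System.out.println(p.snd.getClass().getName());  // prints "B"
  }
}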
The remaining form of expression is a cast. The
expression
((Pair)new Pair(new Pair(new A(), new B()),
new A()).fst).snd
evaluates to the expression
new B().
Here, ((Pair)e7), where e7 is new Pair(...).fst, is
a cast. The cast is required, because e7 is a field access
to fst, which is declared to contain an Object, whereas
the next field access, to snd, is only valid on a Pair. At
run time, it is checked whether the Object stored in the
fst field is a Pair (and in this case the check succeeds).
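The run-time behavior of casts can likewise be observed in full Java; the snippet below is our illustration, not the paper's. Where FJ's semantics simply gets stuck on a failed cast, full Java throws ClassCastException.

class CastDemo {
  public static void main(String[] args) {
    Object e7 = new Pair(new Pair(new A(), new B()), new A()).fst;
    Object snd = ((Pair) e7).snd;  // run-time check succeeds: e7 holds a Pair
    Object o = new B();
    A a = (A) o;                   // compiles, but throws ClassCastException
    // Note: writing (A) new B() directly is rejected by javac as
    // inconvertible types -- a "stupid cast" in the paper's terminology.
  }
}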
In Java, one may prefix a field or parameter declaration
with the keyword final to indicate that it may not
be assigned to, and all parameters accessed from an inner
class must be declared final. Since FJ contains
no assignment and no inner classes, it matters little
whether or not final appears, so we omit it for brevity.
Dropping side effects has a pleasant side effect: evaluation
can be easily formalized entirely within the syntax
of FJ, with no additional mechanisms for modeling
the heap. Moreover, in the absence of side effects,
the order in which expressions are evaluated does not
affect the final outcome, so we can define the operational
semantics of FJ straightforwardly using a nondeterministic
small-step reduction relation, following long-standing
tradition in the lambda calculus. Of course,
Java's call-by-value evaluation strategy is subsumed by
this more general relation, so the soundness properties
we prove for reduction will hold for Java's evaluation
strategy as a special case.
There are three basic computation rules: one for field
access, one for method invocation, and one for casts.
Recall that, in the lambda calculus, the beta-reduction
rule for applications assumes that the function is first
simplified to a lambda abstraction. Similarly, in FJ the
reduction rules assume the object operated upon is first
simplified to a new expression. Thus, just as the slogan
for the lambda calculus is "everything is a function,"
here the slogan is "everything is an object."
Here is the rule for field access in action:
new Pair(new A(), new B()).snd → new B()
Because of the stylized form for object constructors, we
know that the constructor has one parameter for each
field, in the same order that the fields are declared. Here
the fields are fst and snd, and an access to the snd field
selects the second parameter.
Here is the rule for method invocation in action ([d/x]e
denotes the result of substituting d for x in e):

new Pair(new A(), new B()).setfst(new B())
  → [new B()/newfst, new Pair(new A(),new B())/this]
      new Pair(newfst, this.snd)
  i.e., new Pair(new B(),
                 new Pair(new A(), new B()).snd)
The receiver of the invocation is the object
new Pair(new A(), new B()), so we look up the
setfst method in the Pair class, where we find
that it has formal parameter newfst and body
new Pair(newfst, this.snd). The invocation reduces
to the body with the formal parameter replaced by
the actual, and the special variable this replaced
by the receiver object. This is similar to the beta
rule of the lambda calculus, (λx.e0)e1 → [e1/x]e0.
The key differences are the fact that the class of
the receiver determines where to look for the body
(supporting method override), and the substitution of
the receiver for this (supporting "recursion through
self"). Readers familiar with Abadi and Cardelli's
Object Calculus will see a strong similarity to their ς
reduction rule [1]. In FJ, as in the lambda calculus and
the pure Abadi-Cardelli calculus, if a formal parameter
appears more than once in the body, this may lead to
duplication of the actual, but since there are no side
effects this causes no problems.
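To make the substitution in this rule concrete, here is a minimal sketch, in Java, of FJ expression trees and the substitution map used by method invocation. All names here (Exp, Var, subst, and so on) are our own inventions, not part of the paper; FJ expressions contain no binders, so the walk needs no capture-avoidance machinery.

import java.util.List;
import java.util.Map;

// Minimal sketch (ours) of FJ expressions and the substitution
// [d/x, new C(e)/this]e0 used by the method-invocation rule.
sealed interface Exp permits Var, FieldAccess, Invoke, New, Cast {
  Exp subst(Map<String, Exp> s);  // replace variables according to s
}
record Var(String name) implements Exp {
  public Exp subst(Map<String, Exp> s) { return s.getOrDefault(name, this); }
}
record FieldAccess(Exp target, String field) implements Exp {
  public Exp subst(Map<String, Exp> s) {
    return new FieldAccess(target.subst(s), field);
  }
}
record Invoke(Exp target, String method, List<Exp> args) implements Exp {
  public Exp subst(Map<String, Exp> s) {
    return new Invoke(target.subst(s), method,
                      args.stream().map(a -> a.subst(s)).toList());
  }
}
record New(String className, List<Exp> args) implements Exp {
  public Exp subst(Map<String, Exp> s) {
    return new New(className,
                   args.stream().map(a -> a.subst(s)).toList());
  }
}
record Cast(String className, Exp body) implements Exp {
  public Exp subst(Map<String, Exp> s) {
    return new Cast(className, body.subst(s));
  }
}

Invoking setfst on a receiver r with argument d then amounts to evaluating body.subst(Map.of("newfst", d, "this", r)), exactly the substitution shown above.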
Here is the rule for a cast in action:

(Pair)new Pair(new A(), new B())
  → new Pair(new A(), new B())
Once the subject of the cast is reduced to an object, it
is easy to check that the class of the constructor is a
subclass of the target of the cast. If so, as is the case
here, then the reduction removes the cast. If not, as in
the expression (A)new B(), then no rule applies and the
computation is stuck, denoting a run-time error.
There are three ways in which a computation may
get stuck: an attempt to access a field not declared for
the class, an attempt to invoke a method not declared
for the class ("message not understood"), or an attempt
to cast to something other than a superclass of the class.
We will prove that the first two of these never happen
in well-typed programs, and the third never happens
in well-typed programs that contain no downcasts (or
"stupid casts"-a technicality explained below).
As usual, we allow reductions to apply to any subexpression
of an expression. Here is a computation for the
second example expression, where the next subexpression
to be reduced is underlined at each step.
((Pair)new Pair(new Pair(new A(), new B()), new A()).fst).snd
  → ((Pair)new Pair(new A(), new B())).snd
  → new Pair(new A(), new B()).snd
  → new B()
We will prove a type soundness result for FJ: if an expression
e reduces to expression e′, and if e is well typed,
then e′ is also well typed and its type is a subtype of
the type of e.
With this informal introduction in mind, we may
now proceed to a formal definition of FJ.
2.1 Syntax
The syntax, typing rules, and computation rules for FJ
are given in Figure 1, with a few auxiliary functions in
Figure 2.
The metavariables A, B, C, D, and E range over class
names; f and g range over field names; m ranges over
method names; x ranges over parameter names; d and
e range over expressions; CL ranges over class decla-
rations; K ranges over constructor declarations; and M
ranges over method declarations. We write f as shorthand
for f1,...,fn (and similarly for C, x, e, etc.) and
write M as shorthand for M1 ... Mn (with no commas). We
write the empty sequence as • and denote concatenation
of sequences using a comma. The length of a sequence x
is written #(x). We abbreviate operations on pairs of sequences
in the obvious way, writing "C f" as shorthand
for "C1 f1,...,Cn fn", and similarly "C f;" as shorthand
for "C1 f1;...;Cn fn;", and "this.f=f;" as shorthand
for "this.f1=f1;...;this.fn=fn;". Sequences of
field declarations, parameter names, and method declarations
are assumed to contain no duplicate names.
A class table CT is a mapping from class names C
to class declarations CL. A program is a pair (CT ; e) of
a class table and an expression. To lighten the notation
in what follows, we always assume a fixed class table
CT .
The abstract syntax of FJ class declarations, constructor
declarations, method declarations, and expressions
is given at the top left of Figure 1. As in Java, we
assume that casts bind less tightly than other forms of
expression. We assume that the set of variables includes
the special variable this, but that this is never used
as the name of an argument to a method.
Every class has a superclass, declared with extends.
This raises a question: what is the superclass of the
Syntax:

    CL ::= class C extends C {C f; K M}
    K  ::= C(C f) {super(f); this.f = f;}
    M  ::= C m(C x) {return e;}
    e  ::= x | e.f | e.m(e) | new C(e) | (C)e

Subtyping:

    C <: C

    C <: D    D <: E
    ----------------
         C <: E

    CT(C) = class C extends D {...}
    -------------------------------
               C <: D

Computation:

        fields(C) = C f
    ----------------------- (R-Field)
     (new C(e)).fi → ei

            mbody(m, C) = (x, e0)
    --------------------------------------------- (R-Invk)
    (new C(e)).m(d) → [d/x, new C(e)/this] e0

             C <: D
    ---------------------------- (R-Cast)
    (D)(new C(e)) → new C(e)

Expression typing:

    Γ ⊢ x ∈ Γ(x)   (T-Var)

    Γ ⊢ e0 ∈ C0    fields(C0) = C f
    -------------------------------- (T-Field)
           Γ ⊢ e0.fi ∈ Ci

    Γ ⊢ e0 ∈ C0    mtype(m, C0) = D→C    Γ ⊢ e ∈ C    C <: D
    --------------------------------------------------------- (T-Invk)
                       Γ ⊢ e0.m(e) ∈ C

    fields(C) = D f    Γ ⊢ e ∈ C    C <: D
    --------------------------------------- (T-New)
               Γ ⊢ new C(e) ∈ C

    Γ ⊢ e0 ∈ D    D <: C
    --------------------- (T-UCast)
       Γ ⊢ (C)e0 ∈ C

    Γ ⊢ e0 ∈ D    C <: D    C ≠ D
    ------------------------------ (T-DCast)
            Γ ⊢ (C)e0 ∈ C

    Γ ⊢ e0 ∈ D    C ≮: D    D ≮: C    stupid warning
    ------------------------------------------------- (T-SCast)
                    Γ ⊢ (C)e0 ∈ C

Method typing:

    x : C, this : C ⊢ e0 ∈ E0    E0 <: C0
    CT(C) = class C extends D {...}    override(m, D, C→C0)
    --------------------------------------------------------
            C0 m(C x) {return e0;} OK IN C

Class typing:

    K = C(D g, C f) {super(g); this.f = f;}
    fields(D) = D g    M OK IN C
    -----------------------------------------
       class C extends D {C f; K M} OK

Figure 1: FJ: Main definitions
Field lookup:

    fields(Object) = •

    CT(C) = class C extends D {C f; K M}    fields(D) = D g
    --------------------------------------------------------
                    fields(C) = D g, C f

Method type lookup:

    CT(C) = class C extends D {C f; K M}
    B m(B x) {return e;} ∈ M
    -------------------------------------
            mtype(m, C) = B→B

    CT(C) = class C extends D {C f; K M}    m is not defined in M
    --------------------------------------------------------------
                   mtype(m, C) = mtype(m, D)

Method body lookup:

    CT(C) = class C extends D {C f; K M}
    B m(B x) {return e;} ∈ M
    -------------------------------------
           mbody(m, C) = (x, e)

    CT(C) = class C extends D {C f; K M}    m is not defined in M
    --------------------------------------------------------------
                   mbody(m, C) = mbody(m, D)

Valid method overriding:

    mtype(m, D) = D→D0 implies C = D and C0 = D0
    ---------------------------------------------
              override(m, D, C→C0)

Figure 2: FJ: Auxiliary definitions
Object class? There are various ways to deal with this
issue; the simplest one that we have found is to take
Object as a distinguished class name whose definition
does not appear in the class table. The auxiliary functions
that look up fields and method declarations in the
class table are equipped with special cases for Object
that return the empty sequence of fields and the empty
set of methods. (In full Java, the class Object does have
several methods. We ignore these in FJ.)
By looking at the class table, we can read off the subtype
relation between classes. We write C <: D when C is
a subtype of D - i.e., subtyping is the reflexive and transitive
closure of the immediate subclass relation given
by the extends clauses in CT. Formally, it is defined in
the middle of the left column of Figure 1.
The given class table is assumed to satisfy some
sanity conditions: (1) CT(C) = class C ... for every
C ∈ dom(CT); (2) Object ∉ dom(CT); (3) for every
class name C (except Object) appearing anywhere in
CT, we have C ∈ dom(CT); and (4) there are no cycles
in the subtype relation induced by CT - that is, the <:
relation is antisymmetric.
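Condition (4) makes the subtype check a terminating walk up the extends chain. A minimal sketch (ours), assuming the class table is represented as a map from each class name to its declared superclass, with Object absent from the map:

import java.util.Map;

// Sketch (ours): C <: D as the reflexive, transitive closure of the
// immediate-subclass relation read off the class table.
class Subtyping {
  static boolean isSubtype(Map<String, String> superOf, String c, String d) {
    for (String t = c; t != null; t = superOf.get(t)) {
      if (t.equals(d)) return true;  // found D on the chain up from C
    }
    return false;                    // Object maps to null: chain ends
  }
}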
For the typing and reduction rules, we need a few
auxiliary definitions, given in Figure 2. The fields of a
class C, written fields(C), is a sequence C f pairing the
class of a field with its name, for all the fields declared
in class C and all of its superclasses. The type of the
method m in class C, written mtype(m, C), is a pair, written
B→B, of a sequence of argument types B and a result
type B. Similarly, the body of the method m in class C,
written mbody(m, C), is a pair, written (x, e), of a sequence
of parameters x and an expression e. The predicate
override(m, D, C→C0) judges whether a method m with
argument types C and a result type C0 may be defined
in a subclass of D. In case of overriding, if a method
with the same name is declared in the superclass then
it must have the same type.
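The lookup functions of Figure 2 walk the same superclass chain. Below is a sketch (ours, reusing the hypothetical Exp type from the earlier sketch) of mbody(m, C), returning the parameters and body of the nearest definition of m at or above C:

import java.util.List;
import java.util.Map;

// Sketch (ours) of mbody(m, C): search C, then its superclasses.
record MethodDecl(List<String> params, Exp body) {}
record ClassDecl(String superName, Map<String, MethodDecl> methods) {}

class Lookup {
  static MethodDecl mbody(Map<String, ClassDecl> ct, String m, String c) {
    for (String t = c; t != null; ) {
      ClassDecl decl = ct.get(t);
      if (decl == null) return null;       // reached Object: undefined
      MethodDecl md = decl.methods().get(m);
      if (md != null) return md;           // nearest definition wins
      t = decl.superName();
    }
    return null;
  }
}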
2.2 Typing
The typing rules for expressions, method declarations,
and class declarations are in the right column of Figure
1. An environment \Gamma is a finite mapping from variables
to types, written x:C.
The typing judgment for expressions has the form
Γ ⊢ e ∈ C, read "in the environment Γ, expression e has
type C." The typing rules are syntax directed, with one
rule for each form of expression, save that there are three
rules for casts. The typing rules for constructors and
method invocations check that each actual parameter
has a type that is a subtype of the corresponding formal.
We abbreviate typing judgments on sequences in the
obvious way, writing Γ ⊢ e ∈ C for Γ ⊢ e1 ∈ C1, ...,
Γ ⊢ en ∈ Cn, and writing C <: D for C1 <: D1, ...,
Cn <: Dn.
One technical innovation in FJ is the introduction
of "stupid" casts. There are three rules for type casts:
in an upcast the subject is a subclass of the target, in
a downcast the target is a subclass of the subject, and
in a stupid cast the target is unrelated to the subject.
The Java compiler rejects as ill typed an expression containing
a stupid cast, but we must allow stupid casts in
FJ if we are to formulate type soundness as a subject
reduction theorem for a small-step semantics. This is
because a sensible expression may be reduced to one
containing a stupid cast. For example, consider the fol-
lowing, which uses classes A and B as defined in the
previous section:

(A)(Object)new B() → (A)new B()
We indicate the special nature of stupid casts by including
the hypothesis stupid warning in the type rule for
stupid casts (T-SCast); an FJ typing corresponds to a
legal Java typing only if it does not contain this rule.
(Stupid casts were omitted from Classic Java [14], causing
its published proof of type soundness to be incorrect;
this error was discovered independently by ourselves and
the Classic Java authors.)
The typing judgment for method declarations has
the form M OK IN C, read "method declaration M is ok
if it occurs in class C." It uses the expression typing
judgment on the body of the method, where the free
variables are the parameters of the method with their
declared types, plus the special variable this with type
C.
The typing judgment for class declarations has the
form CL OK, read "class declaration CL is ok." It checks
that the constructor applies super to the fields of the
superclass and initializes the fields declared in this class,
and that each method declaration in the class is ok.
The type of an expression may depend on the type
of any methods it invokes, and the type of a method
depends on the type of an expression (its body), so it
behooves us to check that there is no ill-defined circularity
here. Indeed there is none: the circle is broken
because the type of each method is explicitly declared.
It is possible to load and use the class table before all
the classes in it are checked, so long as each class is
eventually checked.
2.3 Computation
The reduction relation is of the form e → e′, read
"expression e reduces to expression e′ in one step." We
write →* for the reflexive and transitive closure of →.
The reduction rules are given in the bottom left column
of Figure 1. There are three reduction rules, one
for field access, one for method invocation, and one for
casting. These were already explained in the introduction
to this section. We write [d/x, e/y]e0 for the result
of replacing x1 by d1, ..., xn by dn, and y by e in expression
e0.
The reduction rules may be applied at any point in
an expression, so we also need the obvious congruence
rules (if e → e′ then e.f → e′.f, and the like), which
we omit here.
2.4 Properties
Formal definitions are fun, but the proof of the pudding
is in... well, the proof. If our definitions are sensible, we
should be able to prove a type soundness result, which
relates typing to computation. Indeed we can prove
such a result: if a term is well typed and it reduces to
a second term, then the second term is well typed, and
furthermore its type is a subtype of the type of the first
term.
2.4.1 Theorem [Subject Reduction]: If Γ ⊢ e ∈ C
and e → e′, then Γ ⊢ e′ ∈ C′ for some C′ <: C.

Proof sketch: The main property required in the
proof is the following term-substitution lemma:

If Γ, x : B ⊢ e ∈ D and Γ ⊢ d ∈ A where A <: B,
then Γ ⊢ [d/x]e ∈ C for some C <: D.

This is proved by induction on the derivation of Γ, x :
B ⊢ e ∈ D. The only interesting case is when
the final rule used in the derivation is T-DCast. Suppose
e = (C)e0, where the type of e0 is C0 and C <: C0. By the induction
hypothesis, [d/x]e0 has some type D0 with D0 <: C0;
since D0 and C may or may not be in the subtype relation,
the derivation of Γ ⊢ (C)([d/x]e0) ∈ C may involve a
stupid warning. On the other hand, if (C)e0 is derived
using T-UCast, then (C)([d/x]e0) will also be an upcast.

The theorem itself is now proved by induction on the
derivation of e → e′, with a case analysis on the last
rule used. The case for R-Invk is easy, using the lemma
above. Other base cases are also straightforward, as are
most of the induction steps. The only interesting case is
the congruence rule for casting - that is, the case where
(C)e → (C)e′ is derived using e → e′. Using an
argument similar to the term substitution lemma above,
we see that a downcast expression may be reduced to
a stupid cast and an upcast expression will always be
reduced to an upcast. □
We can also show that if a program is well typed,
then the only way it can get stuck is if it reaches a
point where it cannot perform a downcast.
2.4.2 Theorem [Progress]: Suppose e is a well-typed
expression.
(1) If e includes new C0(e).f as a subexpression, then
fields(C0) = C f and f ∈ f.
(2) If e includes new C0(e).m(d) as a subexpression,
then mbody(m, C0) = (x, e0) and #(x) = #(d).
To state a similar property for casts, we say that an
expression e is safe in Γ if the type derivations of the
underlying CT and of Γ ⊢ e ∈ C contain no downcasts
or stupid casts (uses of rules T-DCast or T-SCast).
In other words, a safe program includes only upcasts.
Then we see that a safe expression always reduces to
another safe expression, and, moreover, typecasts in a
safe expression will never fail, as shown in the following
pair of theorems.
2.4.3 Theorem [Reduction preserves safety]: If e
is safe in Γ and e → e′, then e′ is safe in Γ.

2.4.4 Theorem [Progress of safe programs]:
Suppose e is safe in Γ. If e has (C)new C0(e) as a
subexpression, then C0 <: C.
3 Featherweight GJ
Just as GJ adds generic types to Java, Featherweight
GJ (or FGJ, for short) adds generic types to FJ. Here
is the class definition for pairs in FJ, rewritten with
type parameters in FGJ.
class Pair<X extends Object, Y extends Object>
    extends Object {
  X fst;
  Y snd;
  Pair(X fst, Y snd) {
    super(); this.fst = fst; this.snd = snd;
  }
  <Z extends Object> Pair<Z,Y> setfst(Z newfst) {
    return new Pair<Z,Y>(newfst, this.snd);
  }
}
class A extends Object { A() { super(); } }
class B extends Object { B() { super(); } }

Both classes and methods may have generic type parameters.
Here X and Y are parameters of the class, and
Z is a parameter of the setfst method. Each type parameter
has a bound; here X, Y, and Z are each bounded
by Object.
In the context of the above definitions, the expression

new Pair<A,B>(new A(), new B()).setfst<B>(new B())

evaluates to the expression

new Pair<B,B>(new B(), new B())
If we were being extraordinarily pedantic, we would
write A<> and B<> instead of A and B, but we allow the
latter as an abbreviation for the former in order that FJ
is a proper subset of FGJ.
In GJ, type parameters to generic method invocations
are inferred. Thus, in GJ the expression above
would be written

new Pair<A,B>(new A(), new B()).setfst(new B())

with no <B> in the invocation of setfst. So while FJ is
a subset of Java, FGJ is not quite a subset of GJ. We
regard FGJ as an intermediate language - the form that
would result after type parameters have been inferred.
While parameter inference is an important aspect of GJ,
we chose in FGJ to concentrate on modeling other aspects
of GJ.
The bound of a type variable may not be a type
variable, but may be a type expression involving type
variables, and may be recursive (or even, if there are
several bounds, mutually recursive). For example, if
C<X> and D<Y> are classes with one parameter each,
one may have bounds such as <X extends C<X>> or
even <X extends C<Y>, Y extends D<X>>. For more
on bounds, including examples of the utility of recursive
bounds, see the GJ paper [7].
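Recursive bounds of this kind are the familiar "F-bounded" pattern that makes comparable-style interfaces typable. A small Java sketch (ours; the Ord interface and Max class are invented for illustration):

// Sketch (ours): a recursive bound X extends Ord<X>, as permitted
// by FGJ's well-formedness rules; the bound licenses a.lessThan(b).
interface Ord<X> { boolean lessThan(X other); }

class Max {
  static <X extends Ord<X>> X max(X a, X b) {
    return a.lessThan(b) ? b : a;
  }
}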
GJ and FGJ are intended to support either of two
implementation styles. They may be implemented di-
rectly, augmenting the run-time system to carry information
about type parameters, or they may be implemented
by erasure, removing all information about type
parameters at run-time. This section explores the first
style, giving a direct semantics for FGJ that maintains
type parameters, and proving a type soundness theo-
rem. Section 4 explores the second style, giving an erasure
mapping from FGJ into FJ and showing a correspondence
between reductions on FGJ expressions and
reductions on FJ expressions. The second style corresponds
to the current implementation of GJ, which compiles
GJ into the Java Virtual Machine (JVM), which of
course maintains no information about type parameters
at run-time; the first style would correspond to using
an augmented JVM that maintains information about
type parameters.
3.1 Syntax

In what follows, for the sake of conciseness we abbreviate
the keyword extends to the symbol ◁.
The syntax, typing rules, and computation rules for
FGJ are given in Figure 3, with a few auxiliary functions
in Figure 4. The metavariables X, Y, and Z range over
type variables; T, U, and V range over types; and N and
O range over nonvariable types (types other than type
variables). We write X as shorthand for X1,...,Xn (and
similarly for T, N, etc.), and assume sequences of type
variables contain no duplicate names.
The abstract syntax of FGJ is given at the top left
of Figure 3. We allow C<> and m<> to be abbreviated as
C and m, respectively.
As before, we assume a fixed class table CT , which is
a mapping from class names C to class declarations CL,
obeying the same sanity conditions as given previously.
3.2 Typing
A type environment Δ is a finite mapping from type
variables to nonvariable types, written X <: N, which
takes each type variable to its bound.

Bounds of types

We write boundΔ(T) for the upper bound of T in Δ, as
defined in Figure 4. Unlike calculi such as F<: [9], this
promotion relation does not need to be defined recursively:
the bound of a type variable is always a nonvariable
type.
Subtyping

The subtyping relation is defined in the left column of
Figure 3. As before, subtyping is the reflexive and transitive
closure of the ◁ relation. Type parameters are invariant
with regard to subtyping (for reasons explained
in the GJ paper), so Δ ⊢ T <: U does not imply
Δ ⊢ C<T> <: C<U>.

Well-formed types

If the declaration of a class C begins class C<X ◁ N>,
then a type like C<T> is well formed only if substituting
T for X respects the bounds N, that is, if Δ ⊢ T <: [T/X]N.
We write Δ ⊢ T ok if T is well formed in context
Δ. The rules for well-formed types appear in Figure 3.
Note that we perform a simultaneous substitution, so
any variable in X may appear in N, permitting recursion
and mutual recursion between variables and bounds.
A type environment Δ is well formed if Δ ⊢ Δ(X) ok
for all X in dom(Δ). We also say that an environment
Γ is well formed with respect to Δ, written Δ ⊢ Γ ok,
if Δ ⊢ Γ(x) ok for all x in dom(Γ).
Field and method lookup

For the typing and reduction rules, we need a few auxiliary
definitions, given in Figure 4; these are fairly
straightforward adaptations of the lookup rules given
previously. The fields of a nonvariable type N, written
fields(N), are a sequence of corresponding types and
field names, T f. The type of the method invocation m
at nonvariable type N, written mtype(m, N), is a type of
the form <X ◁ N>U→U. Similarly, the body of the method
invocation m at nonvariable type N with type parameters
V, written mbody(m<V>, N), is a pair, written (x, e), of a
sequence of parameters x and an expression e.
Syntax:

    T  ::= X | N
    N  ::= C<T>
    CL ::= class C<X ◁ N> ◁ N {T f; K M}
    K  ::= C(T f) {super(f); this.f = f;}
    M  ::= <X ◁ N> T m(T x) {return e;}
    e  ::= x | e.f | e.m<T>(e) | new N(e) | (N)e

Subtyping:

    Δ ⊢ T <: T

    Δ ⊢ S <: T    Δ ⊢ T <: U
    -------------------------
          Δ ⊢ S <: U

    Δ ⊢ X <: Δ(X)

    CT(C) = class C<X ◁ N> ◁ N {...}
    ---------------------------------
       Δ ⊢ C<T> <: [T/X]N

Well-formed types:

    Δ ⊢ Object ok

    X ∈ dom(Δ)
    -----------
    Δ ⊢ X ok

    CT(C) = class C<X ◁ N> ◁ N {...}    Δ ⊢ T ok    Δ ⊢ T <: [T/X]N
    ----------------------------------------------------------------
                          Δ ⊢ C<T> ok

Computation:

        fields(N) = T f
    ----------------------- (GR-Field)
     (new N(e)).fi → ei

             mbody(m<V>, N) = (x, e0)
    ------------------------------------------------ (GR-Invk)
    (new N(e)).m<V>(d) → [d/x, new N(e)/this] e0

          ∅ ⊢ N <: O
    ---------------------------- (GR-Cast)
    (O)(new N(e)) → new N(e)

Expression typing:

    Δ; Γ ⊢ x ∈ Γ(x)   (GT-Var)

    Δ; Γ ⊢ e0 ∈ T0    fields(boundΔ(T0)) = T f
    ------------------------------------------- (GT-Field)
              Δ; Γ ⊢ e0.fi ∈ Ti

    Δ; Γ ⊢ e0 ∈ T0    mtype(m, boundΔ(T0)) = <Y ◁ P>U→U0
    Δ ⊢ V ok    Δ ⊢ V <: [V/Y]P    Δ; Γ ⊢ e ∈ S    Δ ⊢ S <: [V/Y]U
    ---------------------------------------------------------------- (GT-Invk)
                 Δ; Γ ⊢ e0.m<V>(e) ∈ [V/Y]U0

    Δ ⊢ N ok    fields(N) = T f    Δ; Γ ⊢ e ∈ S    Δ ⊢ S <: T
    ---------------------------------------------------------- (GT-New)
                    Δ; Γ ⊢ new N(e) ∈ N

    Δ; Γ ⊢ e0 ∈ T0    Δ ⊢ boundΔ(T0) <: N
    --------------------------------------- (GT-UCast)
              Δ; Γ ⊢ (N)e0 ∈ N

    (GT-DCast and GT-SCast are the analogues of T-DCast and T-SCast;
    GT-SCast again carries the hypothesis "stupid warning", and
    GT-DCast carries an extra premise discussed in the text below.)

Method typing and class typing: as in FJ, with all declared types
checked for well-formedness under the bounds in scope and overriding
checked by the override predicate of Figure 4.

Figure 3: FGJ: Main definitions
Bound of type:

    boundΔ(X) = Δ(X)
    boundΔ(N) = N

Field lookup:

    fields(Object) = •

    CT(C) = class C<X ◁ N> ◁ D<U> {S f; K M}
    fields([T/X]D<U>) = U g
    ------------------------------------------
       fields(C<T>) = U g, [T/X]S f

Method type lookup:

    CT(C) = class C<X ◁ N> ◁ D<U> {S f; K M}
    <Y ◁ P> U m(U x) {return e;} ∈ M
    ------------------------------------------
    mtype(m, C<T>) = [T/X](<Y ◁ P>U→U)

    CT(C) = class C<X ◁ N> ◁ D<U> {S f; K M}    m is not defined in M
    ------------------------------------------------------------------
              mtype(m, C<T>) = mtype(m, [T/X]D<U>)

Method body lookup:

    CT(C) = class C<X ◁ N> ◁ D<U> {S f; K M}
    <Y ◁ P> U m(U x) {return e;} ∈ M
    ------------------------------------------
    mbody(m<V>, C<T>) = (x, [T/X, V/Y]e)

    CT(C) = class C<X ◁ N> ◁ D<U> {S f; K M}    m is not defined in M
    ------------------------------------------------------------------
            mbody(m<V>, C<T>) = mbody(m<V>, [T/X]D<U>)

Valid method overriding:

    mtype(m, N) = <Z ◁ Q>U→U0 implies
    P, T = [Y/Z](Q, U) and Y <: P ⊢ T0 <: [Y/Z]U0
    ----------------------------------------------
           override(m, N, <Y ◁ P>T→T0)

Figure 4: FGJ: Auxiliary definitions
Typing rules

Typing rules for expressions, methods, and classes appear
in Figure 3.
The typing judgment for expressions is of the form
Δ; Γ ⊢ e ∈ T, read as "in the type environment Δ and
the environment Γ, e has type T." Most of the subtleties
are in the field and method lookup relations that
we have already seen; the typing rules themselves are
straightforward.
In the rule GT-DCast, the last premise ensures that
the result of the cast will be the same at run time, no
matter whether we use the high-level (type-passing) reduction
rules defined later in this section or the erasure
semantics considered in Section 4.
As in FJ, method overriding is checked by the override
predicate: the result type of a method may be a subtype of
the result type of the corresponding method in the superclass,
although the bounds of type variables and the
argument types must be identical (modulo renaming of
type variables).
As before, a class table is ok if all its class definitions
are ok.
3.3 Reduction
The operational semantics of FGJ programs is only a
little more complicated than what we had in FJ. The
rules appear in Figure 3.
3.4 Properties
FGJ programs enjoy subject reduction and progress
properties exactly like programs in FJ (2.4.1 and 2.4.2).
The basic structures of the proofs are similar to those
of Theorem 2.4.1 and 2.4.2. For subject reduction, how-
ever, since we now have parametric polymorphism combined
with subtyping, we need a few more lemmas. The
main lemmas required are a term substitution lemma as
before, plus similar lemmas about the preservation of
subtyping and typing under type substitution. (Read-
ers familiar with proofs of subject reduction for typed
lambda-calculi like F [9] will notice many similarities).
We begin with the three substitution lemmas, which are
proved by straightforward induction on a derivation of
3.4.1 Lemma [Type substitution preserves subtyping]:
If Δ1, X <: N, Δ2 ⊢ S <: T and Δ1 ⊢ U <: [U/X]N
with Δ1 ⊢ U ok and none of X appearing in Δ1, then
Δ1, [U/X]Δ2 ⊢ [U/X]S <: [U/X]T.

3.4.2 Lemma [Type substitution preserves typing]:
If Δ1, X <: N, Δ2; Γ ⊢ e ∈ T and Δ1 ⊢ U <: [U/X]N
with Δ1 ⊢ U ok and none of X appears in Δ1, then
Δ1, [U/X]Δ2; [U/X]Γ ⊢ [U/X]e ∈ S for some S
such that Δ1, [U/X]Δ2 ⊢ S <: [U/X]T.

3.4.3 Lemma [Term substitution preserves typing]:
If Δ; Γ, x : T ⊢ e ∈ T0 and Δ; Γ ⊢ d ∈ S with
Δ ⊢ S <: T, then Δ; Γ ⊢ [d/x]e ∈ S0 for some S0
such that Δ ⊢ S0 <: T0.

3.4.4 Theorem [Subject reduction]: If Δ; Γ ⊢ e ∈
T and e → e′, then Δ; Γ ⊢ e′ ∈ T′ for some T′ such
that Δ ⊢ T′ <: T.

Proof: By induction on the derivation of
e → e′ with a case analysis on the reduction rule used.
We show in detail just the base case where e is a method
invocation. From the premises of the rule GR-Invk, we
have

e = (new N(e)).m<V>(d)    mbody(m<V>, N) = (x, e0)
e′ = [d/x, new N(e)/this]e0.

By the rules GT-Invk and GT-New, we also have
Δ; Γ ⊢ new N(e) ∈ N and mtype(m, boundΔ(N)) =
<Y ◁ P>U→U0, with Δ ⊢ V <: [V/Y]P, Δ; Γ ⊢ d ∈ S,
and Δ ⊢ S <: [V/Y]U.
By examining the derivation of mtype(m, boundΔ(N)),
we can find a supertype C<T> of N in which m is defined,
and none of the Y appear in T. Now, by Lemma 3.4.2,
the body remains well typed under the substitution of V
for Y. From this, a straightforward weakening lemma (not
shown here), plus Lemma 3.4.3 and Lemma 3.4.1, gives
Δ; Γ ⊢ e′ ∈ T′ with Δ ⊢ T′ <: [V/Y]U0. Letting T =
[V/Y]U0 finishes the case, since Δ ⊢ T′ <: T by S-Trans. □

3.4.5 Theorem [Progress]: Suppose e is a well-typed
expression.
(1) If e includes new N0(e).f as a subexpression, then
fields(N0) = T f and f ∈ f.
(2) If e includes new N0(e).m<V>(d) as a subexpression,
then mbody(m<V>, N0) = (x, e0) and #(x) = #(d).
FGJ is backward compatible with FJ. Intuitively,
this means that an implementation of FGJ can be used
to typecheck and execute FJ programs without changing
their meaning. We can show that a well-typed FJ program
is always a well-typed FGJ program and that FJ
and FGJ reduction correspond. (Note that it isn't quite
the case that the well-typedness of an FJ program under
the FGJ rules implies its well-typedness in FJ, because
FGJ allows covariant overriding and FJ does not.) In
the statement of the theorem, we use →FJ and →FGJ
to show which set of reduction rules is used.

3.4.6 Theorem [Backward compatibility]: If an
FJ program (e, CT) is well typed under the typing
rules of FJ, then it is also well typed under the rules of
FGJ. Moreover, for all FJ programs e and e′ (whether
well typed or not), e →FJ e′ if and only if e →FGJ e′.

Proof: The first half is shown by straightforward induction
on the derivation of Γ ⊢ e ∈ C (using FJ typing
rules), followed by an analysis of the rules GT-Method
and GT-Class. In the second half, both directions are
shown by induction on a derivation of the reduction relation,
with a case analysis on the last rule used. □
4 Compiling FGJ to FJ
We now explore the second implementation style for GJ
and FGJ. The current GJ compiler works by translation
into the standard JVM, which maintains no information
about type parameters at run-time. We model this
compilation in our framework by an erasure translation
from FGJ into FJ. We show that this translation maps
well-typed FGJ programs into well-typed FJ programs,
and that the behavior of a program in FGJ matches (in
a suitable sense) the behavior of its erasure under the
FJ reduction rules.
A program is erased by replacing types with their
erasures, inserting downcasts where required. A type is
erased by removing type parameters, and replacing type
variables with the erasure of their bounds. For example,
the class Pair<X,Y> in the previous section erases to the
following class.

class Pair extends Object {
  Object fst;
  Object snd;
  Pair(Object fst, Object snd) {
    super(); this.fst = fst; this.snd = snd;
  }
  Pair setfst(Object newfst) {
    return new Pair(newfst, this.snd);
  }
}
Similarly, the field selection
new Pair<A,B>(new A(), new B()).snd

erases to

(B)new Pair(new A(), new B()).snd
where the added downcast (B) recovers type information
of the original program. We call such downcasts
inserted by erasure synthetic.
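This is the same translation that the GJ compiler performs; the sketch below (ours) shows the source-level view of a client before and after erasure:

// Our sketch: a client of Pair<A,B> before and after erasure.
class ErasureDemo {
  static void demo() {
    Pair<A, B> typed = new Pair<A, B>(new A(), new B());
    B before = typed.snd;                      // no cast in the source

    Pair erased = new Pair(new A(), new B());  // raw, erased view
    B after = (B) erased.snd;                  // synthetic downcast inserted
  }
}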
4.1 Erasure of Types

To erase a type, we remove any type parameters and
replace type variables with the erasure of their bounds.
Write |T|Δ for the erasure of type T with respect to type
environment Δ: |T|Δ = C, where boundΔ(T) = C<T>.
4.2 Field and Method Lookup
In FGJ (and GJ), a subclass may extend an instantiated
superclass. This means that, unlike in FJ (and Java),
the types of the fields and the methods in the subclass
may not be identical to the types in the superclass. In
order to specify a type-preserving erasure from FGJ to
FJ, it is necessary to define additional auxiliary functions
that look up the type of a field or method in the
highest superclass in which it is defined.
For example, we previously defined the generic class
Pair<X,Y>. We may declare a specialized subclass
PairOfA as a subclass of the instantiation Pair<A,A>,
which instantiates both X and Y to a given class A.

class PairOfA extends Pair<A,A> {
  PairOfA(A fst, A snd) {
    super(fst, snd);
  }
  PairOfA setfst(A newfst) {
    return new PairOfA(newfst, this.fst);
  }
}
Note that, in the setfst method, the argument type
A matches the argument type of setfst in Pair<A,A>,
while the result type PairOfA is a subtype of the result
type in Pair<A,A>; this is permitted by FGJ's covariant
subtyping, as discussed in the previous section. Erasing
the class PairOfA yields the following:
class PairOfA extends Pair {
  PairOfA(Object fst, Object snd) {
    super(fst, snd);
  }
  Pair setfst(Object newfst) {
    return new PairOfA(newfst, this.fst);
  }
}
Here arguments to the constructor and the method are
given type Object, even though the erasure of A is itself;
and the result of the method is given type Pair, even
though the erasure of PairOfA is itself. In both cases,
the types are chosen to correspond to types in Pair, the
highest superclass in which the fields and method are
defined.
We define variants of the auxiliary functions that
find the types of fields and methods in the highest superclass
in which they are defined. The maximum field
types of a class C, written fieldsmax(C), is the sequence
of pairs of a type and a field name defined as follows:

    fieldsmax(Object) = •

    CT(C) = class C<X ◁ N> ◁ D<U> {T f; ...}
    -------------------------------------------
    fieldsmax(C) = fieldsmax(D), |T|(X<:N) f

The maximum method type of m in C, written
mtypemax(m, C), is defined as follows:

    CT(C) = class C<X ◁ N> ◁ D<U> {...}    mtypemax(m, D) defined
    --------------------------------------------------------------
               mtypemax(m, C) = mtypemax(m, D)

    CT(C) = class C<X ◁ N> ◁ D<U> {...}    mtypemax(m, D) undefined
    mtype(m, C<X>) = <Y ◁ P>U→U0
    ----------------------------------------------------------------
    mtypemax(m, C) = |U|(X<:N, Y<:P) → |U0|(X<:N, Y<:P)

We also need a way to look up the maximum type
of a given field: if fieldsmax(C) = T f, then we set
fieldsmax(C)(fi) = Ti.
4.3 Erasure of Expressions

The erasure of an expression depends on the typing of
that expression, since the types are used to determine
which downcasts to insert. The erasure rules are optimized
to omit casts when it is trivially safe to do so;
this happens when the maximum type is equal to the
erased type.
Write |e|Δ;Γ for the erasure of a well-typed expression
e with respect to environment Γ and type environment
Δ:

    |x|Δ;Γ = x

    |e0.fi|Δ;Γ = |e0|Δ;Γ.fi               if the maximum type of fi
                                          equals |Ti|Δ
    |e0.fi|Δ;Γ = (|Ti|Δ)(|e0|Δ;Γ.fi)      otherwise
      (where Δ; Γ ⊢ e0 ∈ T0 and fields(boundΔ(T0)) = T f)

    |e0.m<V>(e)|Δ;Γ = |e0|Δ;Γ.m(|e|Δ;Γ)   if the maximum result type
                                          of m equals |T|Δ
    |e0.m<V>(e)|Δ;Γ = (|T|Δ)(|e0|Δ;Γ.m(|e|Δ;Γ))   otherwise
      (where Δ; Γ ⊢ e0.m<V>(e) ∈ T)

    |new N(e)|Δ;Γ = new |N|Δ(|e|Δ;Γ)

    |(N)e0|Δ;Γ = (|N|Δ)|e0|Δ;Γ

(Strictly speaking, one should think of the erasure
operation as acting on typing derivations rather than
expressions. Since well-typed expressions are in 1-1 correspondence
with their typing derivations, the abuse of
notation creates no confusion.)
4.4 Erasure of Methods and Classes

The erasure of a method M with respect to type environment
Δ in class C, written |M|Δ,C, replaces the argument
and result types by the corresponding maximum types
and erases the body:

    mtypemax(m, C) = D→D0
    ------------------------------------------------------------------
    |<Y ◁ P> T0 m(T x) {return e0;}|Δ,C = D0 m(D x) {return |e0|Δ′;Γ;}
      (where Δ′ = Δ, Y <: P and Γ = x : T, this : C<X>)

(In GJ, the actual erasure is somewhat more complex,
involving the introduction of bridge methods, so that
one ends up with two overloaded methods: one with
the maximum type, and one with the instantiated type.
We don't model that extra complexity here, because it
depends on overloading of method names, which is not
modeled in FJ; a source-level sketch appears below.)

The erasure of constructors and classes is:

    |class C<X ◁ N> ◁ N {T f; K M}| =
        class C extends |N|Δ {|T|Δ f; |K|C |M|Δ,C}
      (where Δ = X <: N)
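The bridge methods mentioned in the parenthetical above can be written out at the source level. A hedged sketch (ours) of what GJ's erasure of PairOfA from Section 4.2 would look like with its bridge; real compilers mark the bridge as synthetic rather than writing it in source:

// Sketch (ours) of GJ's erasure of PairOfA with a bridge method,
// which FJ does not model.
class PairOfA extends Pair {
  PairOfA(Object fst, Object snd) { super(fst, snd); }
  PairOfA setfst(A newfst) {       // method at its instantiated type
    return new PairOfA(newfst, this.fst);
  }
  Pair setfst(Object newfst) {     // bridge method at the maximum type
    return this.setfst((A) newfst);
  }
}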
4.5 Properties of Erasure

Having defined erasure, we may investigate some of its
properties. First, a well-typed FGJ program erases to a
well-typed FJ program, as expected:

4.5.1 Theorem [Erasure preserves typing]: If an
FGJ class table CT is ok and Δ; Γ ⊢ e ∈ T, then the
erased class table |CT| is ok and |Γ|Δ ⊢ |e|Δ;Γ ∈ |T|Δ
using FJ rules.

Proof: By induction on the derivation of Δ; Γ ⊢ e ∈ T,
using lemmas stating (1) that erasure maps the FGJ
subtype relation into the FJ subtype relation; (2) that
if Δ ⊢ N ok and mtype(m, N) is defined in FGJ, then
mtypemax(m, |N|Δ) is defined and consistent with it; and
(3) that the erasure of a type well formed in some
well-formed type environment Δ is a well-formed FJ type. □
More interestingly, we would intuitively expect that
erasure from FGJ to FJ should also preserve the reduction
behavior of FGJ programs:
            erase
      e ----------> |e|_{Δ;Γ}
      |                 |
   reduce            reduce
      |                 |
      v                 v
      e' ---------> |e'|_{Δ;Γ}
            erase
Unfortunately, this is not quite true. For example, consider
the FGJ expression

  e = new Pair<A,B>(a,b).fst

where a and b are expressions of type A and B, respec-
tively, and its erasure:

  (A)new Pair(|a|_{Δ;Γ}, |b|_{Δ;Γ}).fst

In FGJ, e reduces to a, while the erasure |e|_{Δ;Γ} reduces
to (A)|a|_{Δ;Γ} in FJ; it does not reduce to |a|_{Δ;Γ}
when a is not a new expression. (Note that it is not
an artifact of our nondeterministic reduction strategy:
it happens even if we adopt a call-by-value reduction
strategy, since, after method invocation, we may obtain
an expression like (A)e where e is not a new expres-
sion.) Thus, the above diagram does not commute even
if one-step reduction (−→) at the bottom is replaced
with many-step reduction (−→*). In general, synthetic
casts can persist for a while in the FJ expression, although
we expect those casts will eventually turn out
to be upcasts when a reduces to a new expression.
In the example above, an FJ expression d reduced
from |e|_{Δ;Γ} had more synthetic casts than |e'|_{Δ;Γ}. How-
ever, this is not always the case: d may have fewer casts
than |e'|_{Δ;Γ} when the reduction step involves method
invocation. Consider the following class and its erasure:
class C<X extends Object> extends Object {
  X f;
  C(X f) { super(); this.f = f; }
  C<X> m() { return new C<X>(this.f); }
}

class C extends Object {
  Object f;
  C(Object f) { super(); this.f = f; }
  C m() { return new C(this.f); }
}
Now consider the FGJ expression
  e = new C<A>(new A()).m()

and its erasure

  |e|_{Δ;Γ} = new C(new A()).m().
In FGJ,
  e −→_FGJ new C<A>(new C<A>(new A()).f).

In FJ, on the other hand, |e|_{Δ;Γ} reduces to
new C(new C(new A()).f), which has fewer synthetic
casts than new C((A)new C(new A()).f), which is the
erasure of the reduced expression in FGJ. The subtlety
we observe here is that, when the erased term is re-
duced, synthetic casts may become "coarser" than the
casts inserted when the reduced term is erased, or may
be removed entirely as in this example. (Removal of
downcasts can be considered as a combination of two
operations: replacement of (A) with the coarser cast
(Object) and removal of the upcast (Object), which
does not affect the result of computation.)
To formalize both of these observations, we define an
auxiliary relation that relates FJ expressions differing
only by the addition and replacement of some synthetic
casts. Let us call a well-typed expression d an expansion
of a well-typed expression e, written e ⇒ d, if d is
obtained from e by some combination of (1) addition of
zero or more synthetic upcasts, (2) replacement of some
synthetic casts (D) with (C), where C is a supertype of
D, or (3) removal of some synthetic casts.
4.5.2 Theorem [Erasure preserves reduction
modulo expansion]: If Δ; Γ ⊢ e ∈ T and e −→_FGJ e',
then there exists some FJ expression d' such that
|e'|_{Δ;Γ} ⇒ d' and |e|_{Δ;Γ} −→*_FJ d'. In other words, the
following diagram commutes.

            erase
      e ----------> |e|_{Δ;Γ}
      |                 |
   reduce            reduce*
      |                 |
      v     erase       v
      e' ---------> |e'|_{Δ;Γ} ==> d'
As easy corollaries of this theorem, it can be shown that,
if an FGJ expression e reduces to a "fully-evaluated
expression," then the erasure of e reduces to exactly the
erasure of that expression, and that if FGJ reduction gets
stuck at a stupid cast, then FJ reduction also gets stuck
because of the same typecast. We use the metavariable v
for fully evaluated expressions, defined as follows:

  v ::= new N(v)

4.5.3 Corollary [Erasure preserves execution
results]: If Δ; Γ ⊢ e ∈ T and e −→*_FGJ v, then
|e|_{Δ;Γ} −→*_FJ |v|_{Δ;Γ}.
4.5.4 Corollary [Erasure preserves typecast errors]: If
Δ; Γ ⊢ e ∈ T and e −→*_FGJ e', where e' has a stuck
subexpression (C<S>)new D<T>(e''), then |e|_{Δ;Γ} −→*_FJ d'
for some d' that has a stuck subexpression (C)new D(d),
where the d are expansions of the erasures of e'', in the
same position (modulo synthetic casts) as the stuck
subexpression of e'.
5 Related Work
Core calculi for Java. There are several known
proofs in the literature of type soundness for subsets
of Java. In the earliest, Drossopoulou and Eisenbach
[11] (using a technique later mechanically checked
by Syme [20]) prove soundness for a fairly large subset
of sequential Java. Like us, they use a small-step operational
semantics, but they avoid the subtleties of "stupid
casts" by omitting casting entirely. Nipkow and Oheimb
[17] give a mechanically checked proof of soundness
for a somewhat larger core language. Their language
does include casts, but it is formulated using a "big-
step" operational semantics, which sidesteps the stupid
cast problem. Flatt, Krishnamurthi, and Felleisen [14]
use a small-step semantics and formalize a language
with both assignment and casting. Their system is
somewhat larger than ours (the syntax, typing, and operational
semantics rules take perhaps three times the
space), and the soundness proof, though correspondingly
longer, is of similar complexity. Their published
proof of subject reduction is slightly flawed - the case
that motivated our introduction of stupid casts is not
handled properly - but the problem can be repaired by
applying the same refinement we have used here.
Of these three studies, that of Flatt, Krishnamurthi,
and Felleisen is closest to ours in an important sense:
the goal there, as here, is to choose a core calculus that
is as small as possible, capturing just the features of
Java that are relevant to some particular task. In their
case, the task is analyzing an extension of Java with
Common Lisp style mixins - in ours, extensions of the
core type system. The goal of the other two systems, on
the other hand, is to include as large a subset of Java
as possible, since their primary interest is proving the
soundness of Java itself.
Other class-based object calculi. The literature
on foundations of object-oriented languages contains
many papers formalizing class-based object-oriented
languages, either taking classes as primitive (e.g., [21, 8,
6, 5]) or translating classes into lower-level mechanisms
(e.g., [13, 4, 1, 19]). Some of these systems (e.g. [19, 8])
include generic classes and methods, but only in fairly
simple forms.
Generic extensions of Java. A number of extensions
of Java with generic classes and methods have
been proposed by various groups, including the language
of Agesen, Freund, and Mitchell [2]; PolyJ, by
Myers, Bank, and Liskov [16]; Pizza, by Odersky and
Wadler [18]; GJ, by Bracha, Odersky, Stoutamire, and
Wadler [7]; and NextGen, by Cartwright and Steele [10].
While all these languages are believed to be typesafe,
our study of FGJ is the first to give a rigorous proof of
soundness for a generic extension of Java. We have used
GJ as the basis for our generic extension, but similar
techniques should apply to the forms of genericity found
in the rest of these languages.
6 Discussion
We have presented Featherweight Java, a core language
for Java modeled closely on the lambda-calculus and
embodying many of the key features of Java's type sys-
tem. FJ's definition and proof of soundness are both
concise and straightforward, making it a suitable arena
for the study of ambitious extensions to the type sys-
tem, such as the generic types of GJ. We have developed
this extension in detail, stated some of its fundamental
properties, and sketched their proofs.
FJ itself is not quite complete enough to model some
of the interesting subtleties found in GJ. In particular,
the full GJ language allows some parameters to be instantiated
by a special "bottom type" *, using a slightly
delicate rule to avoid unsoundness in the presence of as-
signment. Capturing the relevant issues in FGJ requires
extending it with assignment and null values (both of
these extensions seem straightforward, but cost us some
of the pleasing compactness of FJ as it stands). The
other somewhat subtle aspect of GJ that is not accurately
modeled in FGJ is the use of bridge methods in
the compilation from GJ to JVM bytecodes. To treat
this compilation exactly as GJ does, we would need to
extend FJ with overloading.
Our formalization of GJ also does not include raw
types, a unique aspect of the GJ design that supports
compatibility between old, unparameterized code and
new, parameterized code. We are currently experimenting
with an extension of FGJ with raw types.
Formalizing generics has proven to be a useful application
domain for FJ, but there are other areas where its
extreme simplicity may yield significant leverage. For
example, work is under way on formalizing Java 1.1's
inner classes using FJ [15].
Acknowledgments
This work was supported by the University of Pennsylvania
and the National Science Foundation under grant
CCR-9701826, Principled Foundations for Programming
with Objects. Igarashi is a research fellow of the Japan
Society for the Promotion of Science.
--R
[1] A Theory of Objects.
[2] Adding type parameterization to the Java language.
[3] The Lambda Calculus.
[4] An imperative first-order calculus with object extension.
[5] A core calculus of classes and mixins.
[6] A core calculus of classes and objects.
[7] Making the future safe for the past: Adding genericity to the Java programming language.
[8] Safe type checking in a statically typed object-oriented programming language.
[9] An extension of system F with subtyping.
[10] Compatible genericity with run-time types for the Java programming language.
[11] Is the Java Type System Sound?
[12] A little Java, a few patterns.
[13] On the relationship between classes, objects, and data abstraction.
[14] Classes and mixins.
[15] On inner classes.
[16] Parameterized types for Java.
[17] Java light is type-safe - definitely.
[18] Pizza into Java: Translating theory into practice.
[19] Simple type-theoretic foundations for object-oriented programming.
[20] Proving Java type soundness.
[21] Type inference for objects with instance variables and inheritance.
--TR
An extension of system F with subtyping
A syntactic approach to type soundness
Parameterized types for Java
Pizza into Java
Adding type parameterization to the Java language
Java light is type-safe - definitely
Classes and mixins
A little Java, a few patterns
Making the future safe for the past
Compatible genericity with run-time types for the Java programming language
On the relationship between classes, objects, and data abstraction
Is the Java type system sound?
Modular type-based reverse engineering of parameterized types in Java code
Parametric polymorphism in Java
Types and programming languages
A Theory of Objects
Partial Evaluation for Class-Based Object-Oriented Languages
An Imperative, First-Order Calculus with Object Extension
A Core Calculus of Classes and Mixins
On Inner Classes
True Modules for Java-like Languages
--CTR
Maurizio Cimadamore , Mirko Viroli, Reifying wildcards in Java using the EGO approach, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Peter Hui , James Riely, Typing for a minimal aspect language: preliminary report, Proceedings of the 6th workshop on Foundations of aspect-oriented languages, p.15-22, March 13-13, 2007, Vancouver, British Columbia, Canada
Giovanni Rimassa , Mirko Viroli, Understanding access restriction of variant parametric types and Java wildcards, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Atsushi Igarashi , Hideshi Nagira, Union types for object-oriented programming, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Marco Bellia , M. Eugenia Occhiuto, Higher order Programming in Java: Introspection, Subsumption and Extraction, Fundamenta Informaticae, v.67 n.1-3, p.29-44, January 2005
Tomoyuki Aotani , Hidehiko Masuhara, Towards a type system for detecting never-matching pointcut compositions, Proceedings of the 6th workshop on Foundations of aspect-oriented languages, p.23-26, March 13-13, 2007, Vancouver, British Columbia, Canada
Jeffrey Fischer , Rupak Majumdar , Todd Millstein, Tasks: language support for event-driven programming, Proceedings of the 2007 ACM SIGPLAN symposium on Partial evaluation and semantics-based program manipulation, January 15-16, 2007, Nice, France
Jaakko Jrvi , Jeremiah Willcock , Andrew Lumsdaine, Associated types and constraint propagation for mainstream object-oriented generics, ACM SIGPLAN Notices, v.40 n.10, October 2005
Gustavo Bobeff , Jacques Noy, Component specialization, Proceedings of the 2004 ACM SIGPLAN symposium on Partial evaluation and semantics-based program manipulation, p.39-50, August 24-25, 2004, Verona, Italy
Alex Potanin , James Noble , Robert Biddle, Generic ownership: practical ownership control in programming languages, Companion to the 19th annual ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications, October 24-28, 2004, Vancouver, BC, CANADA
Neal Glew , Jens Palsberg, Type-safe method inlining, Science of Computer Programming, v.52 n.1-3, p.281-306, August 2004
Tian Zhao , Jens Palsberg , Jan Vitek, Lightweight confinement for featherweight Java, ACM SIGPLAN Notices, v.38 n.11, November 2003
Vijay S. Menon , Neal Glew , Brian R. Murphy , Andrew McCreight , Tatiana Shpeisman , Ali-Reza Adl-Tabatabai , Leaf Petersen, A verifiable SSA program representation for aggressive compiler optimization, ACM SIGPLAN Notices, v.41 n.1, p.397-408, January 2006
Alex Potanin , James Noble , Dave Clarke , Robert Biddle, Featherweight generic confinement, Journal of Functional Programming, v.16 n.6, p.793-811, November 2006
Nathaniel Nystrom , Stephen Chong , Andrew C. Myers, Scalable extensibility via nested inheritance, ACM SIGPLAN Notices, v.39 n.10, October 2004
G. M. Bierman, Formal semantics and analysis of object queries, Proceedings of the ACM SIGMOD international conference on Management of data, June 09-12, 2003, San Diego, California
Yuri Gurevich , Benjamin Rossman , Wolfram Schulte, Semantic essence of AsmL, Theoretical Computer Science, v.343 n.3, p.370-412, 17 October 2005
Erik Ernst , Klaus Ostermann , William R. Cook, A virtual class calculus, ACM SIGPLAN Notices, v.41 n.1, p.270-282, January 2006
Todd Millstein , Colin Bleckner , Craig Chambers, Modular typechecking for hierarchically extensible datatypes and functions, ACM SIGPLAN Notices, v.37 n.9, p.110-122, September 2002
Suresh Jagannathan , Jan Vitek , Adam Welc , Antony Hosking, A transactional object calculus, Science of Computer Programming, v.57 n.2, p.164-186, August 2005
Lorenzo Bettini , Sara Capecchi , Elena Giachino, Featherweight wrap Java, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Adriaan Moors , Frank Piessens , Wouter Joosen, An object-oriented approach to datatype-generic programming, Proceedings of the 2006 ACM SIGPLAN workshop on Generic programming, September 16-16, 2006, Portland, Oregon, USA
Christian Skalka, Trace effects and object orientation, Proceedings of the 7th ACM SIGPLAN international conference on Principles and practice of declarative programming, p.139-150, July 11-13, 2005, Lisbon, Portugal
Alessandro Warth , Milan Stanojevi , Todd Millstein, Statically scoped object adaptation with expanders, ACM SIGPLAN Notices, v.41 n.10, October 2006
Dan Grossman , Jeremy Manson , William Pugh, What do high-level memory models mean for transactions?, Proceedings of the 2006 workshop on Memory system performance and correctness, October 22-22, 2006, San Jose, California
Matthew S. Tschantz , Michael D. Ernst, Javari: adding reference immutability to Java, ACM SIGPLAN Notices, v.40 n.10, October 2005
Juan Chen , David Tarditi, A simple typed intermediate language for object-oriented languages, ACM SIGPLAN Notices, v.40 n.1, p.38-49, January 2005
Polyvios Pratikakis , Jaime Spacco , Michael Hicks, Transparent proxies for java futures, ACM SIGPLAN Notices, v.39 n.10, October 2004
Marko van Dooren , Eric Steegmans, Combining the robustness of checked exceptions with the flexibility of unchecked exceptions using anchored exception declarations, ACM SIGPLAN Notices, v.40 n.10, October 2005
Franz Achermann , Oscar Nierstrasz, A calculus for reasoning about software composition, Theoretical Computer Science, v.331 n.2-3, p.367-396, 25 February 2005
Alexander Ahern , Nobuko Yoshida, Formalising Java RMI with explicit code mobility, ACM SIGPLAN Notices, v.40 n.10, October 2005
Gerwin Klein , Tobias Nipkow, A machine-checked model for a Java-like language, virtual machine, and compiler, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.4, p.619-695, July 2006
Philip W. L. Fong, Reasoning about safety properties in a JVM-like environment, Science of Computer Programming, v.67 n.2-3, p.278-300, July, 2007
Chris Andreae , Yvonne Coady , Celina Gibbs , James Noble , Jan Vitek , Tian Zhao, Scoped types and aspects for real-time Java memory management, Real-Time Systems, v.37 n.1, p.1-44, October 2007
Nobuko Yoshida, Channel dependent types for higher-order mobile processes, ACM SIGPLAN Notices, v.39 n.1, p.147-160, January 2004
Atsushi Igarashi , Mirko Viroli, Variant parametric types: A flexible subtyping scheme for generics, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.5, p.795-847, September 2006
Alex Potanin , James Noble , Dave Clarke , Robert Biddle, Generic ownership for generic Java, ACM SIGPLAN Notices, v.41 n.10, October 2006
Daniel J. Dougherty , Pierre Lescanne , Luigi Liquori, Addressed term rewriting systems: application to a typed object calculus, Mathematical Structures in Computer Science, v.16 n.4, p.667-709, August 2006
Tian Zhao , Jens Palsberg , Jan Vitek, Type-based confinement, Journal of Functional Programming, v.16 n.1, p.83-128, January 2006
Todd Millstein , Colin Bleckner , Craig Chambers, Modular typechecking for hierarchically extensible datatypes and functions, ACM Transactions on Programming Languages and Systems (TOPLAS), v.26 n.5, p.836-889, September 2004
Chris Andreae , James Noble , Shane Markstrum , Todd Millstein, A framework for implementing pluggable type systems, ACM SIGPLAN Notices, v.41 n.10, October 2006
Radha Jagadeesan , Alan Jeffrey , James Riely, Typed parametric polymorphism for aspects, Science of Computer Programming, v.63 n.3, p.267-296, 15 December 2006
Martin Abadi , Cormac Flanagan , Stephen N. Freund, Types for safe locking: Static race detection for Java, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.2, p.207-255, March 2006
Einar Broch Johnsen , Olaf Owe , Ingrid Chieh Yu, Creol: a type-safe object-oriented model for distributed concurrent systems, Theoretical Computer Science, v.365 n.1, p.23-66, 10 November 2006
Anindya Banerjee , David A. Naumann, Ownership confinement ensures representation independence for object-oriented programs, Journal of the ACM (JACM), v.52 n.6, p.894-960, November 2005 | language design;compilation;generic classes;language semantics |
504212 | Analysis and comparison of two general sparse solvers for distributed memory computers. | This paper provides a comprehensive study and comparison of two state-of-the-art direct solvers for large sparse sets of linear equations on large-scale distributed-memory computers. One is a multifrontal solver called MUMPS, the other is a supernodal solver called superLU. We describe the main algorithmic features of the two solvers and compare their performance characteristics with respect to uniprocessor speed, interprocessor communication, and memory requirements. For both solvers, preorderings for numerical stability and sparsity play an important role in achieving high parallel efficiency. We analyse the results with various ordering algorithms. Our performance analysis is based on data obtained from runs on a 512-processor Cray T3E using a set of matrices from real applications. We also use regular 3D grid problems to study the scalability of the two solvers. | Introduction
We consider the direct solution of sparse linear equations on distributed memory computers
where communication is by message passing, normally using MPI. We study in detail two
state-of-the-art solvers, MUMPS (Amestoy, Duff, L'Excellent and Koster 1999, Amestoy, Duff
and L'Excellent 2000) and SuperLU (Li and Demmel 1999). The first uses a multifrontal
approach with dynamic pivoting for stability while the second is based on a supernodal
technique with static pivoting and iterative refinement. We discuss the detailed algorithms
used in these two codes in Section 3.
Two very important factors affecting the performance of both codes are the use of
preprocessing to preorder the matrix so that the diagonal entries are large relative to the
off-diagonals and the strategy used to compute an ordering for the rows and columns of
the matrix to preserve sparsity. We discuss these aspects in detail in Section 4.
We compare the performance of the two codes in Section 5, where we show that such
a comparison can be fraught with difficulties even though the authors of both codes are
involved in the study. In Section 6, regular grids problems are used to further illustrate
and analyse the difference between the two approaches. We had originally planned a
comparison of more sparse codes but, given the difficulties we have found in assessing
codes that we know well, we have for the moment shelved this more ambitious project.
However, we feel that the lessons that we have learned in this present exercise are both
invaluable to us in our future wider study and have given us some insight into the behaviour
of sparse direct codes which we feel is useful to share with a wider audience at this
stage. In addition to valuable information on the comparative merits of multifrontal versus
supernodal approaches, we have examined the parameter space for such a comparison
exercise and have identified several key parameters that influence to a differing degree the
two approaches.
Test environment
Throughout this paper, we will use a set of test problems to evaluate the performance
of our algorithms. Our test matrices come from the forthcoming Rutherford-Boeing
Sparse Matrix Collection (Duff, Grimes and Lewis 1997)^1, the industrial partners of the
PARASOL Project^2, Tim Davis' collection^3, SPARSKIT2^4 and the EECS Department of
UC Berkeley^5. The PARASOL test matrices are available from Parallab, Bergen, Norway^6.
Two smaller matrices (garon2 and lnsp3937) are included in our set of matrices but will
be used only in Section 4.1 to illustrate differences in the numerical behaviour of the two
solvers.
Note that, for most of our experiments, we do not consider symmetric matrices in
our test set because SuperLU cannot exploit the symmetry and is unable to produce an
LDL T factorization. However, since our test examples in Section 6 are symmetric, we do
^1 Web page http://www.cse.clrc.ac.uk/Activity/SparseMatrices/
^2 PARASOL is ESPRIT IV Long Term Research Project 20160
^3 Web page http://www.cise.ufl.edu/~davis/sparse/
^4 Web page http://math.nist.gov/MatrixMarket/data/SPARSKIT/
^5 Matrix ecl32 is included in the Rutherford-Boeing Collection
^6 Web page http://www.parallab.uib.no/parasol/
Real Unsymmetric Assembled (rua)

Matrix name    Order    No. of entries   StrSym(*)   Origin
bbmat           38744       1771722        0.54      Rutherford-Boeing (CFD)
ecl32                                                EECS Department of UC Berkeley
garon2          13535        390607        1.00      Davis collection (CFD)
lhr71c          70304       1528092        0.00      Davis collection (Chem Eng)
                                                     Rutherford-Boeing (CFD)
mixtank         29957       1995041        1.00      PARASOL (Polyflow S.A.)
                                                     collection (CFD)
twotone        120750       1224224        0.28      Rutherford-Boeing (circuit sim)
wang4           26068        177196        1.00      Rutherford-Boeing (semiconductor)

Table 2.1: Test matrices. (*) StrSym is the number of nonzeros matched by nonzeros
in symmetric locations divided by the total number of entries (so that a structurally
symmetric matrix has value 1.0).
show results with both the symmetric and unsymmetric factorization versions of MUMPS.
Matrices mixtank and invextr1 have been modified because of out-of-range (underflow)
values in matrix files. To keep the same sparsity pattern, we did not want to replace those
underflow values by zeros. Instead, we have replaced all entries with an exponent smaller
than -300 by numbers with the same mantissa but with an exponent of -300. For each
linear system, the right-hand side vector is generated so that the true solution is a vector
of all ones.
All results presented in this paper have been obtained on the Cray T3E-900 (512
DEC EV-5 processors, 256 Mbytes of memory per processor, 900 peak Megaflop rate per
processor) from NERSC at Lawrence Berkeley National Laboratory. We will also refer
to experiments on a 35 processor IBM SP2 (66.5 MHertz processor with 128 Mbytes
of physical memory and 512 Mbytes of virtual memory and 266 peak Megaflop rate
per processor) at GMD in Bonn, Germany, used during the PARASOL Project. The
performance characteristics of the two machines are listed in Table 2.2.
Computer                            CRAY T3E-900     IBM SP2
Frequency of the processor          450 MHertz       66 MHertz
Peak uniproc. performance           900 Mflops       264 Mflops
Effective uniproc. performance      340 Mflops       150 Mflops
Peak communication bandwidth        300 Mbytes/sec   36 Mbytes/sec
Latency
Bandwidth/Effective performance     0.88             0.24

Table 2.2: Characteristics of the CRAY T3E-900 and the IBM SP2. The factorization of
matrix wang4 using MUMPS was used to estimate the effective uniprocessor performance of
the computers.
3 Description of the algorithms used
In this section, we briefly describe the main characteristics of the algorithms used in the
solvers and highlight the major differences between them. For a complete description of the
algorithms, the reader should consult previous papers by the authors of these algorithms
(Amestoy et al. 1999, Amestoy et al. 2000, Li and Demmel 1998, Li and Demmel 1999).
Both algorithms can be described by a computational tree whose nodes represent
computations and whose edges represent transfer of data. In the case of the multifrontal
method, MUMPS, some steps of Gaussian elimination are performed on a dense frontal matrix
at each node and the Schur complement (or contribution block) that remains is passed for
assembly at the parent node. In the case of the supernodal code, SuperLU, the distributed
memory version uses a right-looking formulation which, having computed the factorization
of a block of columns corresponding to a node of the tree, then immediately sends the
data to update the block columns corresponding to ancestors in the tree.
Both codes can accept any pivotal ordering and both have a built-in capability
to generate an ordering based on an analysis of the pattern of A + A^T, where the
summation is performed symbolically. However, for the present version of MUMPS, the
symbolic factorization is markedly less efficient if an input ordering is given since different
logic is used in this case. The default ordering used by MUMPS is approximate minimum
degree (AMD) (Amestoy, Davis and Duff 1996a) while the default for SuperLU is multiple
minimum degree (MMD) (Liu 1985). However, in our experiments using a minimum
degree ordering, we considered only the AMD ordering since both codes can generate this
using the subroutine MC47 from HSL (2000). It is usually far quicker than MMD and
produces a symbolic factorization close to that produced by MMD. We also use nested
dissection orderings (ND). Sometimes we use the ON-MeTiS ordering from MeTiS (Karypis
and Kumar 1998), and sometimes the nested dissection/haloamd ordering from SCOTCH
(Pellegrini, Roman and Amestoy 1999) depending on which performs better on each
particular problem. In addition, it is sometimes very beneficial to precede the ordering
by performing an unsymmetric permutation to place large entries on the diagonal and
then scaling the matrix so that the diagonals are all of modulus one and the off-diagonals
have modulus less than or equal to one. We use the MC64 code of HSL to perform this
preordering and scaling (Duff and Koster 1999) and indicate clearly when this is done.
The effect of using this preordering of the matrix is discussed in detail in Section 4.1.
Finally, when MC64 is not used, our matrices are always scaled.
In both approaches, a pivot order is defined by the analysis and symbolic factorization
stages. In MUMPS, the modulus of the prospective pivot is compared with the largest
modulus of an entry in the row and is only accepted if this is greater than a threshold
value, typically between 0.001 and 0.1 (our default value is 0.01). Note that, although
MUMPS can choose pivots from off the diagonal, the largest entry in the column might
be unavailable for pivoting at this stage if all entries in its row are not fully summed.
This threshold pivoting strategy is common in sparse Gaussian elimination and helps
to avoid excessive growth in the size of entries during the matrix factorization and so
directly reduces the bound on the backward error. If a prospective pivot fails the test,
all that happens is that it is kept in the Schur complement and is passed to the parent
node. Eventually all rows with entries in the column will be available for pivoting, at the
root if not before, so that a pivot can be chosen from the column. Thus the numerical
factorization can respect the threshold criterion but at the cost of increasing the size of
the frontal matrices and causing more work and fill-in than were forecast. For the SuperLU
approach, a static pivoting strategy is used and we keep to the pivotal sequence chosen in
the analysis. The magnitude of the potential pivot is tested against a threshold of ε^{1/2}‖A‖,
where ε is the machine precision and ‖A‖ is the norm of A. If it is less than this value
it is immediately set to this value (with the same sign) and the modified entry is used as
pivot. This corresponds to a half-precision perturbation to the original matrix entry. The
result is that the factorization is not exact and iterative refinement may be needed. Note
that, after iterative refinement, we obtained an accurate solution in all the cases that we
tested. If problems were still to occur then extended precision BLAS (Li, Demmel, Bailey,
Henry, Hida, Iskandar, Kahan, Kapur, Martin, Tung and Yoo 2000) could be used.
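To make the contrast concrete, the two pivot tests can be sketched as follows (illustrative Java, not code from either package; u is the MUMPS-style threshold and normA an estimate of ‖A‖):

class PivotSketch {
  static final double EPS = Math.ulp(1.0);  // machine precision, about 2.2e-16

  // MUMPS-style threshold partial pivoting: accept the candidate only if it is
  // at least u times the largest entry of its row (u is typically 0.001-0.1);
  // a rejected pivot is delayed and passed to the parent node.
  static boolean acceptByThreshold(double pivot, double maxInRow, double u) {
    return Math.abs(pivot) >= u * maxInRow;
  }

  // SuperLU-style static pivoting: never delay a pivot; if it is smaller in
  // magnitude than sqrt(eps)*||A||, replace it by that value with the same
  // sign (a half-precision perturbation) and rely on iterative refinement.
  static double perturbIfTiny(double pivot, double normA) {
    double tiny = Math.sqrt(EPS) * normA;
    return (Math.abs(pivot) < tiny) ? Math.copySign(tiny, pivot) : pivot;
  }
}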
3.1 MUMPS main parallel features
The parallelism within MUMPS is at two levels. The first uses the structure of the assembly
tree, exploiting the fact that computations at nodes that are not ancestors or descendents
are independent. The initial parallelism from this source (tree parallelism) is the number
of leaf nodes but this reduces to one at the root. The second level is in the subdivision
of the elimination operations through blocking of the frontal matrix. This blocking gives
rise to node parallelism and is either by rows (referred to as 1D-node parallelism) or by
rows and columns (at the root and referred to as 2D-node parallelism). Node parallelism
depends on the size of the frontal matrix which, because of delayed pivots, is only known
at factorization time. Therefore, this is determined dynamically. Each tree node is
assigned a processor a priori, but the subassignment of blocks of the frontal matrix is
done dynamically.
Most of the machine dependent parameters in MUMPS that control the efficiency of
the code are designed to take into account both the uniprocessor and multiprocessor
characteristics of the computers. Because of the dynamic distributed scheduling approach,
we do not need as precise a description of the performance characteristics of the computer
as for approaches based on static scheduling such as PaStiX (Henon, Ramet and Roman
1999). Most of the machine dependent parameters in MUMPS are associated with the block
sizes involved in the parallel blocked factorization algorithms of the dense frontal matrices.
Our main objective is to maintain a minimum granularity to efficiently exploit the potential
of the processor while providing sufficient tasks to exploit the available parallelism. Our
target machines differ in several respects. The most important ones are illustrated in
Table
2.2. We found that smaller granularity tasks could be used on the CRAY T3E
than on the IBM SP2 because of the relatively faster rate of communication to Megaflop
rate on the CRAY T3E than on the IBM SP2 (see Table 2.2). That is to say that the
communication is relatively more efficient on the CRAY T3E.
Dynamic scheduling is a major and original feature of the approach used in MUMPS.
A critical part of this algorithm is when a process associated with a tree node decides
to reassign some of its work, corresponding to a partitioning of the rows, to a set of so-called
worker processes. We call such a node a one-dimensional parallel node. In earlier
versions of MUMPS, a fixed block size is used to partition the rows and work is distributed
to processes starting with the least loaded process. (The load of a process is determined
by the amount of work (number of operations) allocated to it and not yet processed,
which can be determined very cheaply.) Since the block size is fixed, it is possible for a
process in charge of a one-dimensional parallel node to give additional work to processes
that are already more loaded than itself. This can happen near the leaf nodes of the tree
where sparsity provides enough parallelism to keep all processes busy. On the other hand,
insufficient tasks might be created to provide work to all idle processes. This situation is
more likely to occur close to the root of the tree.
In the new algorithm (available since Version 4.1 of MUMPS), the block size for the one-dimensional
partitioning can be dynamically adjusted by the process in charge of the node.
Early in the processing of the tree (that is, near the leaves) this gives a relatively bigger
block size so reducing the number of worker processes; whereas close to the root of the
tree the block size will be automatically reduced to compensate for the lack of parallelism
in the assembly tree. We bound the block size for partitioning a one-dimensional parallel
node by an interval. The lower bound is needed to maintain a minimum task granularity
and control the volume of messages. The upper bound of the interval is less critical (it is
by default chosen to be about eight times the lower bound) but it is used in estimating
the maximum size of the communication buffers and of the factors and so should not be
too large.
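A minimal sketch of such a dynamically adjusted 1D block size follows (hypothetical names and heuristic; the actual MUMPS rule also feeds into communication-buffer and factor-size estimates):

class BlockSizeSketch {
  // Keep the 1D block size within [minBlock, 8*minBlock]: larger blocks
  // (fewer workers) while the tree still provides parallelism near the
  // leaves, smaller blocks (more workers) near the root where it does not.
  static int chooseBlockSize(int frontRows, int idleProcs, int minBlock) {
    int maxBlock = 8 * minBlock;                       // default upper bound
    if (idleProcs <= 0) {
      return maxBlock;                                 // nobody idle: keep tasks big
    }
    int target = Math.max(1, frontRows / idleProcs);   // spread rows over idle procs
    return Math.min(maxBlock, Math.max(minBlock, target));
  }
}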
This "all dynamic" strategy of both partitioning and distributing work onto the
processors could cause some trouble on a large number of processors (more than 128).
In that case, it can be quite beneficial to take into account some "global" information to
help the local decisions. For example one could restrict the choice of worker processes to a
set of candidate processors determined statically during the analysis phase. This notion,
commonly used in the design of static scheduling algorithms such as that in Henon et
al. (1999), could reduce the overhead of the dynamic scheduling algorithm, reduce the
increase in the communication volume when increasing the number of processors, and
improve the local decision. The tuning of the parameters controlling the block size for
1D partitioning would then be easier and the estimation of the memory required during
factorization would be more accurate. On a large number of processors, both performance
and software improvements could thus be expected. This feature is not available in the
current Version 4.1 of MUMPS but will be implemented in a future release. We will see that
by adding this feature, one could address some of the current limitations of the MUMPS
approach, see Section 5.2.
The solution phase is also performed in parallel and uses asynchronous communications
both for the forward elimination and the back substitution. In the case of the forward
elimination, the tree is processed from the leaves to the root, in a similar way to the
factorization, while the back substitution requires a different algorithm that processes the
tree from the root to the leaves. A pool of ready-to-be-activated tasks is used. We do not
change the distribution of the factors as generated in the factorization phase. Hence, the
1D (type 2) and 2D (type 3) node parallelism is also exploited in the solution phase.
3.2 SUPERLU main parallel features
SuperLU also uses two levels of parallelism although more advantage is taken of the
node parallelism through blocking of the supernodes. Because the pivotal order is fully
determined at the analysis phase, the assignment of blocks to processors can be done
statically a priori before the factorization commences. A 2D block-cyclic layout is used
and the execution can be pipelined since the sequence is predetermined. The matrix
partitioning is based on the notion of an unsymmetric supernode introduced in Demmel,
Eisenstat, Gilbert, Li and Liu (1999). The supernode is defined over the matrix factor L.
A supernode is a range of columns of L with the triangular block just below the
diagonal being full, and the same nonzero structure elsewhere (this is either full or zero).
This supernode partition is used as the block partition in both row and column dimensions,
that is the diagonal blocks are square. If there are N supernodes in an n-by-n matrix, there
will be N^2 blocks of non-uniform size. Figure 3.1 illustrates such a block partition. The
off-diagonal blocks may be rectangular and need not be full. Furthermore, the columns in
a block of U do not necessarily have the same row structure. We call a dense subvector
in a block of U a segment. The P processes are also arranged as a 2D mesh of dimension
P_r × P_c = P. By block-cyclic layout, we mean block (I, J) (of L or U) is mapped onto the
process at coordinate (I mod P_r, J mod P_c) of the process mesh. During the
factorization, block L(I, J) is only needed by the processes on the process row I mod P_r.
Similarly, block U(I, J) is only needed by the processes on the process column J mod P_c.
This partitioning and mapping can be controlled by the user. First,
the user can set the maximum block size parameter. The symbolic factorization algorithm
identifies supernodes, and chops the large supernodes into smaller ones if their sizes exceed
this parameter. The supernodes may be smaller than this parameter due to sparsity and
the blocks are then defined by the supernode boundaries. (That is, supernodes can be
smaller than the maximum block size but never larger.) Our experience has shown that a
good value for this parameter on the IBM SP2 is around 40, while on the Cray T3E it is
around 24. Second, the user can set the shape of the process grid, such as 2 × 3 or 3 × 2.
The more square the grid, the better the performance expected. This rule of thumb was
used on the Cray T3E to define the grid shapes.

Figure 3.1: The 2D block-cyclic layout used in SuperLU: the blocks of the global matrix
are distributed cyclically over a 2-by-3 process mesh (processes 0-5).
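For illustration, the owner of a block under this layout can be computed as follows (a sketch; the row-major numbering of the P_r × P_c mesh is an assumption consistent with Figure 3.1):

class BlockCyclic {
  // Block (I, J) of L or U lives on the process at mesh coordinate
  // (I mod Pr, J mod Pc); processes are numbered row-major over the mesh.
  static int ownerOfBlock(int I, int J, int Pr, int Pc) {
    int procRow = I % Pr;   // the process row that needs every block L(I, *)
    int procCol = J % Pc;   // the process column that needs every block U(*, J)
    return procRow * Pc + procCol;
  }
}

With the 2 × 3 mesh of Figure 3.1 (Pr = 2, Pc = 3), the blocks of column J = 1 fall on processes 1 and 4, matching the column processes {1, 4} mentioned below.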
In this 2D mapping, each block column of L resides on more than one process, namely,
a column of processes. For example in Figure 3.1, the second block column of L resides
on the column processes {1, 4}. Process 1 only owns two nonzero blocks, which are not
contiguous in the global matrix.
The main numerical kernel involved during numerical factorization is a block update
corresponding to the rank-k update to the Schur complement:
  A(I, J) ← A(I, J) − L(I, K) × U(K, J),

see Figure
3.2. In the earlier versions of SuperLU, this computation was based on Level
2.5 BLAS. That is, we call the Level 2 BLAS routine GEMV (matrix-vector product) but
with multiple vectors (segments), and the matrix L(I; K) is kept in cache across these
multiple calls. This to some extent mimics the Level 3 BLAS GEMM (matrix-matrix
product) performance. However, the difference between Level 2.5 and Level 3 is still quite
large on many machines, e.g. the IBM SP2. This motivated us to modify the kernel in
the following way in order to use Level 3 BLAS. For best performance, we distinguish two
cases corresponding to the two shapes of a U(K, J) block.
- The segments in U(K, J) are of the same height, as shown in Figure 3.2 (a).
Since the nonzero segments are stored contiguously in memory, we can call GEMM
directly, without performing operations on any zeros.
- The segments in U(K, J) are of different heights, as shown in Figure 3.2 (b).
In this case, we first copy the segments into a temporary working array T , with
some leading zeros padded if necessary. We then call GEMM using L(I, K) and
T (instead of U(K, J)). We perform some extra floating-point operations for those
padding zeros. The copying itself does not incur a run time cost, because the data
must be loaded in the cache anyway. The working storage T is bounded by the
maximum block size, which is a tunable parameter. For example, we usually use
40 × 40 on the IBM SP2 and 24 × 24 on the Cray T3E.
Depending on the matrix, this Level 3 BLAS kernel improved the uniprocessor
factorization time by about 20% to 40% on the IBM SP2. A performance gain was
also observed on the Cray T3E. It is clear that the extra operations are well offset by the
benefit of the more efficient Level 3 BLAS routines.
Figure 3.2: Illustration of the numerical kernels used in SuperLU: the update of A(I, J)
by L(I, K) and U(K, J), with (a) segments in U(K, J) of the same height and (b) segments
of different heights.
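The copy-and-pad step of case (b) can be sketched as follows (illustrative Java; in SuperLU the packed array is then passed to the BLAS GEMM routine):

class PaddedCopy {
  // Gather the ragged segments of U(K,J) into a k-by-m rectangular array T,
  // padding each segment with leading zeros, so that the Schur update
  // A(I,J) -= L(I,K) * T becomes a single Level 3 (GEMM-style) product.
  // segs[j] is the dense subvector of column j; its height is segs[j].length.
  static double[][] packSegments(double[][] segs, int k) {
    double[][] T = new double[k][segs.length];      // zero-initialized
    for (int j = 0; j < segs.length; j++) {
      int lead = k - segs[j].length;                // leading zeros to pad
      for (int i = 0; i < segs[j].length; i++) {
        T[lead + i][j] = segs[j][i];
      }
    }
    return T;   // extra flops on the padded zeros are offset by GEMM speed
  }
}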
The current factorization algorithm has two limitations to parallelism. Here we explain,
by examples, what the problems are and speculate how the algorithm may be improved
in the future. In the following matrix notation, the zero blocks are left blank. For each
nonzero block we mark in box the process which owns the block.
- Parallelism from the sparsity.
Consider a matrix with 4-by-4 blocks mapped onto a 2-by-2 process mesh, where each
nonzero block is owned by the process given by the cyclic mapping.
Although node 2 is the parent of node 1 in the elimination tree (associated with
A^T + A), not all processes in column 2 depend on column 1. Only process 1 depends
on the L block on process 0. Process 3 could start factorizing column 2 at the
same time as process 0 is factorizing column 1, before process 1 starts factorizing
column 2. But the current algorithm requires all the column processes to factorize
the column synchronously, thereby introducing idle time for process 3. We can relax
this constraint by allowing the diagonal process (3 in this case) to factorize the
diagonal block and then send the factored block down to the off-diagonal processes
(using MPI_Isend), even before the off-diagonal processes are ready for this column.
This would eliminate some artificial interprocess dependencies and potentially reduce
the length of the critical path.
Note that this kind of independence comes from not only the sparsity but also the 2D
process-to-matrix mapping. An even more interesting study would be to formalize
these 2D task dependencies into a task graph, and perform some optimal scheduling
on it.
- Parallelism from the directed acyclic elimination graphs (Gilbert and Liu 1993), often
referred to as elimination dags or edags.
Consider another matrix with 6-by-6 blocks mapped onto a 2-by-3 process mesh.
Columns 1 and 3 are independent in the elimination dags. The column process sets
{0, 3} and {2, 5} could start factorizing columns 1 and 3 simultaneously. However,
since process 2 is also involved in an update task of block row 5 associated with
Step 1, and our algorithm gives precedence to all the tasks in Step 1 over any task
in Step 3, process 2 does not factorize column 3 immediately. We may change this
task precedence by giving the factorization task of a later step higher priority than
the update tasks of the previous steps, because the former is more likely to be on
the critical path (a sketch of such a priority follows this list). This would better
exploit the task independence coming from the elimination dags.
We expect the above improvements will have a large impact for very sparse and/or
very unsymmetric matrices, and for the orderings that give wide and bushy elimination
trees, such as nested dissection.
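The precedence change suggested above can be sketched as a task ordering (hypothetical; the current SuperLU scheduler does not implement it):

import java.util.Comparator;

class TaskPriority {
  // A ready task in the 2D factorization: its elimination step and whether it
  // is a factorization task (as opposed to a Schur-complement update task).
  record Task(int step, boolean isFactorization) {}

  // Factorization tasks outrank update tasks; among factorizations, later
  // elimination steps come first, as they are more likely on the critical path.
  static final Comparator<Task> PRIORITY =
      Comparator.comparing((Task t) -> !t.isFactorization())
                .thenComparing((Task t) -> -t.step);
}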
The triangular solution algorithm is also designed around the same distributed 2D
data structure. The forward substitution proceeds from the bottom of the elimination
tree to the root, whereas the back substitution proceeds from the root to the bottom.
The algorithm is based on a sequential variant called "inner product" formulation.
The execution of the program is completely message-driven. Each process is in a self-scheduling
loop, performing appropriate local computation depending on the type of the
message received. The entirely asynchronous approach enables large overlap between
communication and computation and helps to overcome the much higher communication
to computation ratio in this phase.
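The self-scheduling structure of the solve phase can be sketched as follows (schematic Java; a local queue stands in for the MPI message stream, and the message types and handlers are made up for illustration):

import java.util.ArrayDeque;
import java.util.Queue;

class SolveLoopSketch {
  enum MsgType { X_BLOCK, DIAG_READY }
  record Message(MsgType type, int block) {}

  final Queue<Message> inbox = new ArrayDeque<>();

  void run() {
    Message msg;
    while ((msg = inbox.poll()) != null) {       // message-driven: no global sync
      switch (msg.type()) {
        case X_BLOCK ->                          // a solution piece arrived:
            System.out.println("update local rhs with block " + msg.block());
        case DIAG_READY ->                       // a diagonal block is ready:
            System.out.println("solve diagonal block " + msg.block());
      }
    }
  }
}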
3.3 First comments on the algorithmic differences
Both approaches use Level 3 BLAS to perform the elimination operations. However, in
MUMPS the frontal matrices are always square. It is possible that there are zeros in the
frontal matrix especially if there are delayed pivots or the matrix structure is markedly
asymmetric but the present implementation takes no advantage of this sparsity and all the
counts measured assume the frontal matrix is dense. It is shown in Amestoy and Puglisi
(2000) that one can detect and exploit the structural asymmetry of the frontal matrices.
With this new algorithm, significant gains both in memory and in time to perform the
factorization can be obtained. For example, using MUMPS with the new algorithm, the
number of operations to factorize matrices lhr71c and twotone would be reduced by
30% and 37%, respectively. The approach, tested on a shared memory multifrontal code
(Amestoy and Duff 1993) from HSL (2000), is however not yet available in the current
version of MUMPS. In SuperLU, advantage is taken of sparsity in the blocks and usually the
dense matrix blocks are smaller than those used in MUMPS. In addition, SuperLU uses a
more sophisticated data structure to keep track of the irregularity in sparsity. Thus, the
uniprocessor Megaflop rate of SuperLU is much worse than that of MUMPS. This performance
penalty is to some extent alleviated by the reduction in floating-point operations because
of the better exploitation of sparsity. As a rule of thumb, MUMPS will tend to perform
particularly well when the matrix structure is close to symmetric while SuperLU can better
exploit asymmetry. We note that, even if the same ordering is input to the two codes,
the computational tree generated in each case will be different. In the case of MUMPS, the
assembly tree generated by MC47 is used to drive the MUMPS factorization phase, while, for
SuperLU, the directed acyclic computational graphs (dags) are built implicitly.
In Figures 3.3 and 3.4, we use a VAMPIR trace (Nagel, Arnold, Weber, Hoppe and
Solchenbach 1996) to illustrate the typical parallel behaviour of both approaches. These
traces correspond to a zoom in the middle of the factorization phase of matrix bbmat on
8 processors of the CRAY T3E. Black areas correspond to time spent in communications
and related MPI calls. Each line between two processes corresponds to one message
transfer. From the plots we can see that SuperLU has distinct phases for local computation
and interprocess communication, whereas for MUMPS, it is hard to distinguish when the
process performs computation and when it transfers a message. This is due to the
asynchronous scheduling algorithm used in MUMPS which may have a better chance of
overlapping communication with computation.
4 Impact of preprocessing and numerical issues
In this section, we first study the impact on both solvers of the preprocessing of the matrix.
In this preprocessing, we first use row or column permutations to permute large entries
onto the diagonal. In Section 4.1, we report and compare both the structural and the
numerical impact of this preprocessing phase on the performance and accuracy of our
solvers. After this phase, a symmetric ordering (minimum degree or nested dissection)
is used and we study the relative influence of these orderings on the performance of the
solvers in Section 4.2. We also comment on the relative cost of the analysis phase of the
two solvers.
4.1 Use of a preordering to place large entries onto the diagonal and
the cost of the analysis phase
Duff and Koster (1999) developed an algorithm for permuting a sparse matrix so that the
diagonal entries are large relative to the off-diagonal entries. They have also written a
computer code, MC64 (available from HSL (2000)), to implement this algorithm. Here, we
use option 5 of MC64 which maximizes the product of the modulus of the diagonal entries
and then scales the permuted matrix so that it has diagonal entries of modulus one and
all off-diagonals of modulus less than or equal to one.
The importance of this preordering and scaling is clear. For MUMPS it should limit the
amount of numerical pivoting during the factorization, which increases the overall cost of
the factorization. For SuperLU, we expect such a permutation to be even more crucial,
reducing the number of small pivots that are modified and set to ε^{1/2}‖A‖.
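In matrix terms, this preprocessing can be summarized as follows (a compact restatement of the transformation just described, not a formula quoted from the MC64 documentation):

\[
\hat{A} = P \, D_r \, A \, D_c , \qquad |\hat{a}_{ii}| = 1 , \qquad |\hat{a}_{ij}| \le 1 ,
\]

where P is the permutation maximizing the product of the moduli of the diagonal entries and D_r, D_c are diagonal row and column scaling matrices; the solvers then factorize \hat{A} in place of A.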
The MC64 code of Duff and Koster (1999) is quite efficient and so should normally
require little time relative to the matrix factorization even if the latter is executed on many
processors while MC64 runs on only one processor. Results in this section will show that it
is not always the case. Moreover, matrices which are unsymmetric but have a symmetric
or nearly symmetric structure are a very common problem class. The problem with these
is that MC64 performs an unsymmetric permutation and will tend to destroy the symmetry
of the pattern. Since both codes use a symmetrized pattern for the sparsity ordering (see
Section 4.2) and MUMPS uses one also for the symbolic and numerical factorization, the
overheads in having a markedly unsymmetric pattern can be high. Conversely, when the
initial matrix is very unsymmetric (as for example lhr71c) the unsymmetric permutation
may actually help to increase structural symmetry thus giving a second benefit to the
subsequent matrix factorization.
We show the effects of using MC64 on some examples in Table 4.1. In Table 4.4, we
illustrate the relative cost of the main steps of the analysis phase when MC64 is used to
preprocess the matrix.
We see in Table 4.1 that, for very unsymmetric matrices (lhr71c and twotone), MC64
is really needed by MUMPS and SuperLU to factorize these matrices efficiently. Both matrices
have zeros on the diagonal. Because of the static pivoting approach used by SuperLU, unless
these zeros are made nonzero by fill-in and are then large enough, they will be perturbed
Figure 3.3: Illustration of the asynchronous behaviour of the MUMPS factorization phase
(VAMPIR trace of eight processes; black areas correspond to time spent in MPI).
Figure 3.4: Illustration of the relatively more synchronous behaviour of the SuperLU
factorization phase (VAMPIR trace of eight processes; black areas correspond to time
spent in MPI).
Matrix     Solver   Ordering   StrSym   Nonzeros in factors   Flops
bbmat      MUMPS    AMD         0.54      46.1                  41.5
fidapm11   MUMPS    AMD         1.00      16.1                   9.7
garon2     MUMPS    AMD         1.00       2.4                   0.3
mixtank    MUMPS    AMD         1.00      39.1                  64.4
twotone    MUMPS    AMD         0.28     235.0                1221.1
wang4      MUMPS    AMD         1.00      11.6                  10.5

Table 4.1: Impact of permuting large entries onto the diagonal (using MC64) on the size
of the factors and the number of operations. (*) estimation given by the analysis (not
enough memory to perform factorization). StrSym denotes the structural symmetry after
ordering.
during factorization and a factorization of a nearby matrix is obtained. In the case of
MUMPS, the dramatically higher fill-in obtained without MC64 makes it also necessary to use
MC64. For MUMPS, the main benefit from using MC64 is more structural than numerical. The
permuted matrix has in fact a larger structural symmetry (see column 4 of Table 4.1)
so that a symmetric permutation can be obtained on the permuted matrix that is more
efficient in preserving sparsity. SuperLU benefits in a similar way from symmetrization
because the computation of the symmetric permutation is based on the same assumption
even if SuperLU preserves better the asymmetric structure of the factors by performing a
symbolic analysis on a directed acyclic graph and exploiting asymmetry in the factorization
phase (compare, for example, results with MUMPS and SuperLU on matrices lhr71c, mixtank
and twotone).
Table 4.2: Illustration of the convergence of iterative refinement.
The use of MC64 can also improve the quality of the factors and the numerical behaviour
of the factorization phase, and can reduce the number of steps of iterative refinement
required to reduce the backward error to machine precision. This is illustrated in
Table 4.2, where we show the number of steps of iterative refinement required to reduce
the componentwise relative backward error

  Berr = max_i |Ax − b|_i / (|A| |x| + |b|)_i

(Arioli, Demmel and Duff 1989) to machine precision (ε ≈ 2.2 × 10^{-16} on the CRAY
T3E). Iterative refinement will stop when either the required accuracy is reached or the
convergence rate is too slow (Berr does not decrease by at least a factor of two). The true
error is reported as

  Err = ‖x − x_true‖ / ‖x_true‖.

This table illustrates the impact of the use of MC64 on the quality of
Without MC64:
Matrix    Solver    WITHOUT Iter. Ref.      WITH Iterative Refinement
                    Berr       Err          Nb   Berr       Err
bbmat     MUMPS     7.4e-11    1.3e-06      2    3.2e-16    3.0e-09
lhr71c    MUMPS     Not enough memory
          SuperLU   Not enough memory
mixtank   MUMPS     1.9e-12    4.8e-09      2    5.9e-16    1.4e-11
twotone   MUMPS     5.0e-07    1.3e-05      3    1.3e-15    2.1e-11

With MC64:
Matrix    Solver    WITHOUT Iter. Ref.      WITH Iterative Refinement
                    Berr       Err          Nb   Berr       Err
bbmat     MUMPS     1.2e-11    6.5e-08      2    2.7e-16    3.5e-09
lhr71c    MUMPS     1.1e-05    9.9e+00      3    3.2e-13    1.0e+00
mixtank   MUMPS     4.8e-12    2.3e-08      2    4.2e-16    4.0e-11
twotone   MUMPS     3.2e-13    1.6e-10      2    1.6e-15    2.3e-11

Table 4.3: Comparison of the numerical behaviour, backward error (Berr) and forward
error (Err), of the solvers. Nb indicates the number of steps of iterative refinement.
the initial solution obtained with both solvers prior to iterative refinement. Additionally, it
shows that, thanks to numerical partial pivoting, the initial solution is almost always more
accurate with MUMPS than with SuperLU and is usually markedly so. These observations are
further confirmed on a larger number of test matrices in Table 4.3. The same stopping
criterion was applied for these runs as for the runs in Table 4.2. In the case of MUMPS, MC64
can also result in a reduction in the number of off-diagonal pivots and in the number of
delayed pivots. For example on the matrix invextr1 the number of off-diagonal pivots
drops from 1520 to 109 and the number of delayed pivots drops from 2555 to 42. One
can also see in Table 4.2 (e.g., bbmat) that MC64 does not always improve the numerical
accuracy of the solution obtained with SuperLU.
As expected, we see that, for matrices with a fairly symmetric pattern (e.g., matrix
fidapm11 in Table 4.1), the use of MC64 leads to a significant decrease in symmetry which,
for both solvers, results in a significant increase in the number of operations during
factorization. We additionally recollect that the time spent in MC64 can dominate the
analysis time of either solver (see Table 4.4), even for matrices such as fidapm11 and
invextr1 for which it does not provide any gain for the subsequent steps. Thus, for both
solvers, the default should be to not use MC64 on fairly symmetric matrices. In practice,
the default option of the MUMPS package is such that MC64 is automatically invoked when
the structural symmetry is found to be less than 0.5. For SuperLU, zeros on the diagonal
and numerical issues must also be considered so that an automatic decision during the
analysis phase is more difficult.
We finally compare, in Figure 4.1, the time spent by the two solvers during the analysis
phase when reordering is based only on AMD (MC64 is not invoked). Since the time spent
Figure 4.1: Time comparison of the analysis phases of MUMPS and SuperLU on matrices
bbmat, ecl32, invextr1, fidapm11, mixtank, rma10 and wang4 (times in seconds). MC64
preprocessing is NOT used and AMD ordering is used.
in AMD is very similar in both cases, this gives a good estimation of the cost difference
Matrix    Solver   Preprocess.   Total   MC64   AMD
bbmat     MUMPS    AMD             4.7    -      3.0
          -        MC64+AMD        7.2    2.1    3.1
mixtank   MUMPS    AMD             3.2    -      0.8
twotone   MUMPS    AMD            12.7    -      8.7

Table 4.4: Influence of permuting large entries onto the diagonal (using MC64) on the time
(in seconds) for the analysis phase of MUMPS and SuperLU.
of the analysis phase of the two solvers. Note that SuperLU is not currently tied to any
specific ordering code and does not take advantage of all the information available from
an ordering algorithm. A tighter coupling with an ordering, as is the case with MUMPS and
AMD, should reduce the analysis time for SuperLU. However, during the analysis phase of
SuperLU, all the asymmetric structures needed for the factorization are computed and the
directed acyclic graph (Gilbert and Liu 1993) of the unsymmetric matrix must be built
and mapped onto the processors. With MUMPS, the main data structure handled during
analysis is the assembly tree which is produced directly as a by-product of the ordering
phase. No further data structures are introduced during this phase. Dynamic scheduling
will be used during factorization so that only simple massaging of the tree and a partial
mapping of the computational tasks onto the processors are performed during analysis.
4.2 Use of orderings to preserve sparsity
On matrices for which MC64 is not used we show, in Table 4.5, the impact of the choice of
the symmetric permutation on the fill-in and floating-point operations for the factorization.
As was observed in Amestoy et al. (1999), the use of nested dissection can significantly
improve the performance of MUMPS. We see here that SuperLU will also, although to a lesser
extent, benefit from the use of a nested dissection ordering. We examine the influence
of the ordering on the performance further in Section 5. We also notice that, for both
orderings, SuperLU exploits the asymmetry of the matrix somewhat better than MUMPS (see
bbmat with structural symmetry 0.53). We expect the asymmetry of the problem to be
better exploited by MUMPS when the approach described in Amestoy and Puglisi (2000) is
implemented.
Matrix Ordering Solver NZ in LU Flops
bbmat AMD MUMPS 46.1 41.5
41.2 34.0
ND MUMPS 35.8 25.7
ND MUMPS 24.8 20.9
ND MUMPS 16.2 8.1
mixtank AMD MUMPS 39.1 64.4
ND MUMPS 19.6 13.2
Table 4.5: Influence of the symmetric sparsity orderings on the fill-in and floating-point operations on the factorization of unsymmetric matrices. (MC64 is not used.)
5 Performance analysis on general matrices
5.1 Performance of the numerical phases
In this section, we compare the performance and study the behaviour of the numerical
phases (factorization and solve) of the two solvers.
For the sake of clarity, we will only report results with the best (in terms of factorization
time) sparsity ordering for each approach. When the best ordering for MUMPS is different
from that for SuperLU, results with both orderings will be provided. This means that results
with both nested dissection and minimum degree orderings are given, which illustrates the
different sensitivity of the codes to the choice of the ordering. We note that, even when the
same ordering is given to each solver, they will not usually perform the same number of
operations. In general, SuperLU performs fewer operations than MUMPS because it better exploits
the asymmetry of the matrix, although the execution time is less for MUMPS because
of the Level 3 BLAS effect.
Although results are very often matrix dependent, we will try, as much as possible, to
identify some general properties of the two solvers. We should point out that the maximum
dimension of our unsymmetric test matrices is only 120750 (see Table 2.1).
5.1.1 Study of the factorization phase
We show in Table 5.1 the factorization time of both solvers. On the smaller matrices, we
only report in Table 5.2 results with up to 64 processors.
We observe that MUMPS is usually faster than SuperLU and is significantly so on a small
number of processors. We believe there are two reasons. First, MUMPS handles symmetric
and more regular data structures better than SuperLU, because MUMPS uses Level 3 BLAS
kernels on bigger blocks than those used within SuperLU. As a result, the Megaflop rate of
MUMPS on one processor is on average about twice that of the SuperLU factorization. This
is also evident in the results on smaller test problems in Table 5.2 and from the results
on 3D grid problems in Section 6. Note that, even on the matrix twotone, for which
SuperLU performs three times fewer operations than MUMPS, MUMPS is over 2.5 times faster
than SuperLU on four processors. On a small number of processors, we also notice that
SuperLU does not always fully benefit from the reduction in the number of operations due
to the use of a nested dissection ordering (see bbmat with SuperLU using 4 processors).
Furthermore, one should notice that, with matrices that are structurally very
asymmetric, SuperLU can be much less scalable than MUMPS. For example, on matrix lhr71c
in Table 5.2, speedups of 2.5 and 8.3 are obtained with SuperLU and MUMPS, respectively.
This is due to the two parallel limitations of the current SuperLU algorithm described in
Section 3.2. First, SuperLU does not fully exploit the parallelism of the elimination dags.
Second, the pipelining mechanism does not fully benefit from the sparsity of the factors
(a blocked column factorization should be implemented). This also explains why SuperLU
does not fully benefit, as in the case for MUMPS, from the better balanced tree generated by
a nested dissection ordering.
We see that the ordering very significantly influences the performance of the codes (see
results with matrices bbmat and ecl32) and, in particular, MUMPS generally outperforms
SuperLU, even on a large number of processors, when a nested dissection ordering is used.
On the other hand, if we use the minimum degree ordering, SuperLU can be faster than
MUMPS on a large number of processors. We also see that, on most of our unsymmetric
problems, neither solver provides enough parallelism to benefit from using more than 128
processors. The only exception is matrix ecl32 using the AMD ordering, for which
only SuperLU continues to decrease the factorization
time up to 512 processors. Our lack of other large unsymmetric systems gives us few data
points in this regime but one might expect that, independently of the ordering, the 2D
distribution used in SuperLU should provide better scalability (and hence eventually better
performance) on a large number of processors than the mixed 1D and 2D distribution used
in MUMPS. To further analyse the scalability of our solvers, we consider three dimensional
regular grid problems in Section 6.
Matrix Ord. Solver Number of processors
bbmat AMD MUMPS - 45.7 24.0 16.5 13.7 11.9 11.2 9.1 12.6
SuperLU 68.2 23.1 13.3 9.1 6.7 5.7 4.7 6.1 5.8
mixtank ND MUMPS 40.8 13.0 7.8 5.6 4.4 3.9 4.2 4.2 5.4
twotone MC64 MUMPS - 40.3 22.6 18.6 14.7 14.4 14.3 14.0 14.3
Table 5.1: Factorization time (in seconds) of large test matrices on the CRAY T3E. "-" indicates not enough memory.
Matrix Ordering Solver Number of processors
fidapm11 AMD MUMPS 31.6 11.7 8.4 6.5 5.7 5.7
lhr71c MC64+AMD MUMPS 13.3 4.3 2.9 1.7 1.5 1.6
rma10 AMD MUMPS 8.1 3.1 2.2 2.1 2.0 2.1
wang4 AMD MUMPS 30.6 11.1 7.0 5.2 4.3 3.9
56.3 19.4 13.9 7.9 5.8 5.6
Table 5.2: Factorization time (in seconds) of small test matrices on the CRAY T3E. "-" indicates not enough memory.
To better understand the performance differences observed in Tables 5.1 and 5.2 and
to identify the main characteristics of our solvers we show, in Table 5.3, the average
communication volume. The speed of communication can depend very much on the
number and the size of the messages and we also indicate the maximum size of the messages
and the average number of messages. To overlap communication by computation, MUMPS
uses fully asynchronous communications (during both sends and receives). The use of
non-blocking sends during the more synchronous scheduled approach used by SuperLU also
enables overlapping between communication and computation.
Matrix Ord Solver Number of processors
Max Vol. #Mess Max Vol. #Mess Max Vol. #Mess
bbmat AMD MUMPS 4.9 44 3240 3.3 63 1700 2.9 20 2257
ND MUMPS 2.2 7 2214 2.8 43 1441 1.5 48 3228
fidapm11 AMD MUMPS 2.5 28 3000 2.4 22 1471 2.4 6 1323
mixtank ND MUMPS 3.5
twotone MC64 MUMPS 8.8 61 5076 2.9 139 4144 2.1
Table 5.3: Maximum size of the messages (Max, in Mbytes), average volume of communication (Vol., in Mbytes) and number of messages per processor (#Mess) for large matrices during factorization.
From the results in Table 5.3, it is difficult to make any definitive comment on the
average volume of communication. Overall it is broadly comparable with sometimes
MUMPS and sometimes SuperLU having lower volume, occasionally by a significant amount.
However, although the average volume of messages with 64 processors can be comparable
with both solvers, there is between one and two orders of magnitude difference in the
average number of messages and therefore in the average size of the messages. This is due
to the much larger number of messages involved in a fan-out approach (SuperLU) compared
to a multifrontal approach (MUMPS). Note that, with MUMPS, the number of messages includes
the messages (one integer) required by the dynamic scheduling algorithm to update the
load on the processes.
The average volume of communication per processor of each solver depends very much
on the number of processors. While, with SuperLU, increasing the number of processors will
generally decrease the communication volume per processor, this is not always the case with
MUMPS. Note that adding some global information to the local dynamic scheduling algorithm
of MUMPS will help to increase the granularity of the level 2 node subtasks without losing
parallelism (see Section 3.1) and thus can result in a decrease in the average volume of
communication on a large number of processors.
5.1.2 Study of the solve phase
We already discussed in Section 4.1 the difference in the numerical behaviour of the two
solvers, showing that, in general, SuperLU will involve more steps of iterative refinement
than MUMPS to obtain the same accuracy in the solution.
In this section, we focus on the time spent to obtain the solution. We apply enough
steps of iterative refinement to ensure that the componentwise relative backward error
(Berr) is less than √ε ≈ 10^-8, where ε denotes the machine precision. Each step of iterative refinement involves not
only a forward and a backward solve but also a matrix-vector product with the original
matrix. With MUMPS, the user can provide the input matrix in a very general distributed
format (Amestoy et al. 1999). This functionality was used to parallelize the matrix-vector
products. With SuperLU, the parallelization of the matrix-vector product was easier
because the input matrix is duplicated on all the processors.
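For reference, one step of iterative refinement and the stopping criterion can be written as follows; this is a standard formulation (the Oettli-Prager componentwise backward error), consistent with the description above, where x denotes the current computed solution and L, U the computed factors:

$$ r = b - Ax, \qquad x \leftarrow x + U^{-1}L^{-1}r, \qquad \mathrm{Berr} = \max_i \frac{|b - Ax|_i}{\left(|A|\,|x| + |b|\right)_i} \le \sqrt{\varepsilon}. $$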
In Table 5.4, we report both the time to perform one solution step (using the factorized
matrix) and, when necessary (Berr greater than √ε), the time to improve
the solution using iterative refinement (lines with "+ IR"). With SuperLU, except on ecl32
and mixtank which did not require any iterative refinement, one step of iterative refinement
was required and was always enough to reduce the backward error to √ε. With MUMPS,
iterative refinement was only required on the matrix invextr1, and the backward error was
already so close to √ε on one processor that on 4 and 8 processors no
step of iterative refinement was required (Berr for the initial solution was already below
√ε). In this case, the time reported in the row "+ IR" corresponds to the time to
perform the computation of the backward error. We first observe (compare, for example,
Tables 5.1 and 5.4) that, on a small number of processors (less than 8), the solve phase
is almost two orders of magnitude less costly than the factorization. On a large number
of processors, because our solve phases are relatively less scalable than the factorization
phases, the difference drops to one order of magnitude. On applications for which a large
number of solves might be required per factorization this could become critical for the
performance and might have to be addressed in the future. We show solution times for
our smaller matrices in Table 5.5 where we have not run iterative refinement.
The performance reported in Tables 5.4 and 5.5 would appear to suggest that the
regularity in the structure of the matrix factors generated by the factorization phase of
MUMPS is responsible for a faster solve phase than that of SuperLU for up to 256 processors.
On 512 processors, the solve phase of SuperLU is sometimes faster than that of MUMPS,
although in all cases the fastest solve time is recorded by MUMPS, usually on a smaller number of
processors. The cost of iterative refinement can significantly increase the cost of obtaining a
solution. With SuperLU, because of static pivoting, it is more likely that iterative refinement
will be required to obtain an accurate solution on numerically difficult matrices (see bbmat
and twotone). With MUMPS, the use of partial pivoting during the factorization
will reduce the number of matrices for which iterative refinement is required. (In our
set, only invextr1 requires iterative refinement.) For both solvers, the use of MC64 to
preprocess the matrix can also be considered to reduce the number of steps of iterative
refinement and even avoid the need to use it in some cases (see Section 4.1).
Matrix Order. Solver Number of processors
bbmat AMD MUMPS - 0.53 0.38 0.31 0.32 0.32 0.36 0.40 0.56
twotone MC64 MUMPS - 1.03 0.92 0.97 0.98 0.98 1.03 1.13 1.41
Table 5.4: Solve time (in seconds) for large matrices on the CRAY T3E. "+ IR" shows the time spent improving the initial solution using iterative refinement. "-" indicates not enough memory.
Matrix Ord. Solver Number of processors
Table 5.5: Solve time (in seconds) for small matrices on the CRAY T3E.
5.2 Memory usage
In this section, we study the memory used during factorization as a function of both the
solver used and the number of processors, see Table 5.6.
We want first to point out that, because of the dynamic scheduling approach and the
threshold pivoting used in MUMPS, the analysis phase cannot fully predict the space that
will be required on each processor and an upper bound is therefore used for the memory
allocation. With the static task mapping approach used in SuperLU, the memory used
can be predicted during the analysis phase. In this section, we only compare the memory
actually used by the solvers during the factorization phase. This includes reals, integers
and communication buffers. Storage for the initial matrix is, however, not included but
we have seen, in Amestoy et al. (1999), that the input matrix can also be provided in a
general distributed format and can be handled very efficiently by the solver. This option is
available in MUMPS. In SuperLU the initial matrix is currently duplicated on all processors [7].
Matrix Ordering Solver Number of processors
Avg. Max. Avg. Max. Avg. Max.
bbmat AMD MUMPS 147 176 52
ND MUMPS 114 118 44 53 28 35
43 44
ND MUMPS 132 139 39 44 25 28
28 17 22
mixtank ND MUMPS 84 87 29 31 19 21
twotone MC64 MUMPS 167 180 57 67 42
Table 5.6: Memory used during factorization (in Megabytes, per processor).
We notice, in Table 5.6, a significant reduction in the memory required when increasing
the number of processors. We also see that, in general, SuperLU usually requires less
memory than MUMPS, although this is less apparent when many processors are used, showing
the better memory scalability of MUMPS. One can observe that there is little difference
[7] For MUMPS, note that the storage reported still includes another internal copy of the initial matrix in
a distributed arrowhead form, necessary for the assembly operations during the multifrontal algorithm.
between the average and maximum memory usage, showing that both algorithms are well
balanced, with SuperLU the better of the two.
Note that memory scalability can be critical on globally addressable platforms where
parallelism increases the total memory used. On purely distributed machines such as the
T3E, the main factor remains the memory used per processor which should allow large
problems to be solved when enough processors are available.
6 Performance analysis on 3-D grid problems
To further analyse and understand the scalability of our solvers, we report in this section
on results obtained for the 11-point discretization of the Laplacian operator on three-dimensional
(NX, NY, NZ) grid problems.
We consider a set of 3D cubic (NX=NY=NZ) and rectangular (NX, NX/4, NX/8)
grids on which a nested dissection ordering is used. The size of the grids used, the number
of operations and the timings are reported in Table 6.1. When increasing the number of
processors, we have tried as much as possible to maintain a constant number of operations
per processor while keeping, as far as possible, the same shape of grids. It was not
possible to satisfy all these constraints simultaneously, so the number of operations per processor is not
completely constant.
Nprocs  Grid size    MUMPS-SYM (LDL^T)   MUMPS-UNS (LU)   SuperLU (LU)
                     flops    time       flops   time     flops   time
Cubic grids (nested dissection)
4       36x36x36     13.4     19.9       26.8    28.1     26.8    53.3
Rectangular grids (nested dissection)
128     208x52x26    243.1    27.4       485.8   53.6     485.6   60.7

Table 6.1: Factorization time (in seconds) on the Cray T3E. LU factorization is performed for MUMPS-UNS and SuperLU, LDL^T for MUMPS-SYM.
Since all our test matrices are symmetric, we can use MUMPS to compute either an LDL^T
factorization, referred to as MUMPS-SYM, or an LU factorization, referred to as MUMPS-UNS.
SuperLU always computes an LU factorization. Note that, for a given matrix, the unsymmetric
solvers (SuperLU and MUMPS-UNS) perform roughly twice as many operations as MUMPS-SYM,
since an LU factorization computes both triangular factors whereas LDL^T exploits symmetry.
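The factor of two mirrors the classical dense operation counts (a standard result, used here only as an analogy for the sparse case):

$$ \mathrm{flops}(LU) \approx \tfrac{2}{3}n^3, \qquad \mathrm{flops}(LDL^T) \approx \tfrac{1}{3}n^3. $$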
To overcome the problem of the number of operations per processor being non-constant,
we first report in Figures 6.1 and 6.2 the Megaflop rate per processor for our three
approaches on cubic and rectangular grids, respectively. In our context, the Megaflop
rate is meaningful because on those grid problems the number of operations is almost
identical for MUMPS-UNS and SuperLU (see Table 6.1), thus it corresponds to the absolute
performance of the approach used for a given problem. We first notice that on up to
8 processors, and independently of the grid shape, MUMPS-UNS is about twice as fast as
SuperLU and also has a much higher Megaflop rate than MUMPS-SYM. On 128 processors
on both rectangular and cubic grids, all three solvers have similar Megaflop rates per
processor.
In Figures 6.3 and 6.4, we show the parallel efficiency on cubic and rectangular grids
respectively. The efficiency of a solver on p processors is computed as the ratio of its
Megaflop rate per processor on p processors over its Megaflop rate on 1 processor.
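In symbols, writing F_p for the flop count of the problem run on p processors and t(p) for the corresponding factorization time, the per-processor rate and the efficiency are

$$ R(p) = \frac{F_p}{p\,t(p)}, \qquad E(p) = \frac{R(p)}{R(1)}, $$

so that E(p) = 1 corresponds to perfect scaling.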
In terms of efficiency, SuperLU is generally more efficient on cubic grids than MUMPS-UNS
even on a relatively small number of processors. MUMPS-SYM is relatively more efficient than
MUMPS-UNS and the MUMPS-SYM efficiency is very comparable to that of SuperLU. On a large
number of processors SuperLU is significantly more efficient than MUMPS-UNS. The peak ratio
between the methods is reached on cubic grids (128 processors) for which SuperLU is about
three and two times more efficient than MUMPS-UNS and MUMPS-SYM, respectively.
Finally, we report in Table 6.2 a quantitative evaluation of the overhead due to
parallelism on cubic grids, using the analysis tool vampir (Nagel et al. 1996). In the rows
"computation", we report the percentage of the time spent doing numerical factorization.
MPI calls and idle time due to communications or synchronization are reported in rows
"overhead" of the table.
Grid size          MUMPS-SYM  MUMPS-UNS  SuperLU
NX=36  computation    69%        76%       87%
       overhead       31%        24%       13%
NX=46  computation    67%        69%       75%
       overhead       33%        31%       25%
NX=57  computation    50%        36%       56%
       overhead       50%        64%       44%

Table 6.2: Percentage of the factorization time (cubic grids) spent in computation and in overhead due to communication and synchronization.
Table 6.2 shows that SuperLU has less overhead than either version of MUMPS. We also
observe a better parallel behaviour of MUMPS-SYM with respect to MUMPS-UNS, as analysed in
Amestoy et al. (2000), which is mainly due to the fact that node level parallelism provides
relatively more parallelism in a symmetric context.

Figure 6.1: Megaflop rate per processor (cubic grids, nested dissection).
Figure 6.2: Megaflop rate per processor (rectangular grids, nested dissection).
Figure 6.3: Parallel efficiency (cubic grids, nested dissection).
Figure 6.4: Parallel efficiency (rectangular grids, nested dissection).
[Plots of Megaflop rate and parallel efficiency versus number of processors for MUMPS-SYM, MUMPS-UNS and SuperLU.]
7 Concluding remarks
In this paper, we have presented a detailed analysis and comparison of two state-of-the-art
parallel sparse direct solvers: a multifrontal solver, MUMPS, and a supernodal solver,
SuperLU. Our analysis is based on experiments using a massively parallel distributed-memory
machine, the Cray T3E, and a dozen matrices from different applications. Our
analysis addresses the efficiency of the solvers in many respects, including the role of
preordering steps and their costs, the accuracy of the solution, sparsity preservation,
the total memory required, the amount of interprocessor communication, the times for
factorization and triangular solves, and scalability. We found that both solvers have
strengths and weaknesses. We summarize our observations as follows.
- Both solvers can benefit from a numerical preordering scheme implemented in MC64,
although SuperLU benefits to a greater extent than MUMPS. For MUMPS, this helps reduce
the number of off-diagonal pivots and the number of delayed pivots. For SuperLU, this
may reduce the need for small diagonal perturbations and the number of steps of iterative
refinement. However, since this permutation is asymmetric, it may destroy the
structural symmetry of the original matrix, and cause more fill-in and operations.
This tends to introduce a greater performance penalty for MUMPS than for SuperLU,
although recent work by Amestoy and Puglisi (2000) might affect this conclusion.
This is why, by default, MUMPS does not use MC64 on fairly symmetric matrices.
- MUMPS usually provides a better initial solution; this is due to the effect of dynamic
versus static pivoting. With one step of iterative refinement, SuperLU usually obtains
a solution with about the same level of accuracy.
- Both solvers can accept as input any fill-in reducing ordering, which is applied
symmetrically to both the rows and columns. MUMPS performs better with nested
dissection than minimum degree, because it can exploit the better tree parallelism
provided by a nested dissection ordering, whereas SuperLU does not exploit this level
of parallelism and its parallel efficiency is less sensitive to different orderings.
- Given the same ordering, SuperLU preserves the sparsity and the asymmetry of the L
and U factors better. SuperLU usually requires less memory than MUMPS, and more so
with smaller numbers of processors. On 64 processors, MUMPS requires 25-30% more
memory on average.
- Although the total volume of communication is comparable for both solvers, MUMPS
requires many fewer messages, especially with large numbers of processors. The
difference can be up to two orders of magnitude. This is partly intrinsic to the
algorithms (multifrontal versus fan-out), and partly due to the 1D (MUMPS) versus
2D (SuperLU) matrix partitioning.
- MUMPS is usually faster in both the factorization and solve phases. The speed penalty for
SuperLU partly comes from the code complexity needed to preserve the irregular
sparsity pattern, and partly from the larger number of communication messages. With more
processors, SuperLU shows better scalability, because its 2D partitioning scheme does
a better job in keeping all the processors busy despite the fact that it introduces more
messages.
As we said in the introduction, we started this exercise with the intention of comparing
a wider range of sparse codes. However, as we have demonstrated in the preceding sections,
the task of conducting such a comparison is very complex. We do feel though that the
experience we have gained in this task will be useful in extending the comparisons in the
future.
In the following tables, we summarize the major characteristics of the parallel sparse
direct codes of which we are aware. A clear description of the terms used in the tables is
given by Heath, Ng and Peyton (1991).
Code     Technique     Scope    Availability                      Reference
CAPSS    Multifrontal  SPD      www.netlib.org/scalapack          (Heath and Raghavan 1997)
MUMPS    Multifrontal  SYM/UNS  www.enseeiht.fr/apo/MUMPS         (Amestoy et al. 1999)
PaStiX   Fan-in        SPD      see caption (x)                   (Hénon et al. 1999)
PSPASES  Multifrontal  SPD      www.cs.umn.edu/~mjoshi/pspases    (Gupta, Karypis and Kumar 1997)
SPOOLES  Fan-in        SYM/UNS  www.netlib.org/linalg/spooles     (Ashcraft and Grimes 1999)
SuperLU  Fan-out       UNS      www.nersc.gov/~xiaoye/SuperLU     (Li and Demmel 1999)
S+       Fan-out (y)   UNS      www.cs.ucsb.edu/research/S+       (Fu, Jiao and Yang 1998)
WSMP (z) Multifrontal  SYM      IBM product                       (Gupta 2000)

Table 7.1: Distributed memory codes.
(x) www.dept-info.labri.u-bordeaux.fr/~ramet/pastix
(y) Uses QR storage to statically accommodate any LU fill-in
(z) Only object code for IBM is available. No numerical pivoting performed.
Code       Technique           Scope  Availability                     Reference
GSPAR      Interpretative      UNS    Grund                            (Borchardt, Grund and Horn 1997)
           Multifrontal        UNS    www.cse.clrc.ac.uk/Activity/HSL  (Amestoy and Duff 1993)
           Multifrontal QR     RECT   www.cse.clrc.ac.uk/Activity/HSL  (Amestoy, Duff and Puglisi 1996b)
PanelLLT   Left-looking        SPD    Ng                               (Ng and Peyton 1993)
PARDISO    Left-right looking  UNS    Schenk                           (Schenk, Gärtner and Fichtner 2000)
PSLDLT (y) Left-looking        SPD    SGI product                      (Rothberg 1994)
PSLDU (y)  Left-looking        UNS    SGI product                      (Rothberg 1994)
           Left-looking        UNS    www.nersc.gov/~xiaoye/SuperLU    (Demmel et al. 1999)

Table 7.2: Shared memory codes.
(y) Only object code for SGI is available.
Acknowledgments
We want to thank James Demmel, Jacko Koster and Rich Vuduc for very helpful
discussions. We are grateful to Chiara Puglisi for her comments on an early version
of this article and her help with the presentation. We also want to thank John Reid for
his comments on the first version of this paper.
--R
An unsymmetrized multifrontal LU factorization
A fully asynchronous multifrontal solver using distributed dynamic scheduling
SPOOLES: An object-oriented sparse matrix library
Parallel numerical methods for large systems of differential-algebraic equations in industrial applications
On algorithms for permuting large entries to the diagonal of a sparse matrix. To appear in SIAM Journal on Matrix Analysis and Applications.
WSMP: Watson Sparse Matrix Package Part I - direct solution of symmetric sparse systems, Version 1.0
A mapping and scheduling algorithm for parallel sparse fan-in numerical factorization
Making sparse Gaussian elimination scalable by static pivoting
A scalable sparse direct solver using static pivoting
Hybridizing nested dissection and halo approximate minimum degree for efficient sparse matrix ordering
Efficient sparse Cholesky factorization on distributed-memory multiprocessors
Keywords: sparse direct solvers; multifrontal and supernodal factorizations; parallelism; distributed-memory computers
504225 | The look of the link - concepts for the user interface of extended hyperlinks. | The design of hypertext systems has been subject to intense research. Apparently, one topic was mostly neglected: how to visualize and interact with link markers. This paper presents an overview of pragmatic historical approaches, and discusses problems evolving from sophisticated hypertext linking features. Blending the potential of an XLink-enhanced Web with old ideas and recent GUI techniques, a vision for browser link interfaces of the future is developed. We hope to stimulate the development of a standard for hyperlink marker interfaces which is easy to use, feasible for extended linking features, and more consistent than current approaches. |
Figure 1: HyperTIES used cyan to highlight links [from Shneiderman & Kearsley 1989]
HyperTIES (Fig. 1) avoided this problem by using a distinct
text color for link markers, similar to Hyper-G's
browser Harmony (Fig. 7), which utilized background
colors [Shneiderman & Kearsley 1989; Andrews 1996].
This has the advantage that the typeface and style of the
text can be chosen freely.
Figure 2: Intermedia's link marker arrows
IRIS' "Intermedia" marked hyperlinks with little arrow
icons between lines of text (Fig. 2), showing the start of the
link span but not its endpoint [Yankelovich et al. 1988].
"Emacs Info" brackets link markers with asterisk symbols.
These methods occupy extra screen space and change the
layout of the text by inserting additional elements.
Figure 3: Link markup by the Neptune Document Viewer [from Delisle & Schwartz 1987]
Bernstein's Hypergate, the Neptune hypertext system (Fig. 3)
and some early Web browsers like UdiWWW (Fig. 4) drew
boxes around the link marker text. This works with the
layout but is quite obtrusive and distracting. An improvement
to this technique could be found in HyperCard and
Storyspace. They drew the boxes only when the reader
pressed particular keys, making links evident on request
thus keeping the text pristine the rest of the time.
Figure 4: Anchor highlighting in UdiWWW
In fact, this was the consensus solution after the Hypertext
'87 demo sessions, when hypertext designers could first
compare all existing systems side by side [Bernstein 1996].
Hiding links has many advantages: pages stay uncluttered,
text stays readable, and page design is less influenced by
link appearance. However, when using "links on demand"
the interface designer must be aware of a potential dis-
advantage: since links are not always visible, possibly
distracting mode switches have to be applied. Therefore
this link trigger has to be seamlessly integrated into the
interface.
A less desirable variant of this method was used by Symbolics
Document Examiner [Walker 1987]: link markers
were hidden so well that they were only highlighted when
the mouse passed over them. This forces a "hunt and peck"
search for active regions. In Microcosm and DLS the user
can query the system for invisible links by marking a word
or some text and then issue a search for matching links
[Carr et al. 1995]. Though anchors are not marked, this
model is well applicable to generic links, given that most
words can be selected as anchors.
Ignoring previous experiences, Mosaic returned to colored
(blue) and underlined text to indicate link markers. This is
not the optimal solution as it emphasizes the link marker
text permanently. The underlined markers stick out from
the surrounding text and decrease its readability as underlining
interferes with descenders, letters that drop below
the line like p, q and j. The blue color is also an imperfect
choice, since especially elderly people have problems perceiving
it; the human eye is less sensitive to the color blue
than to other colors [Lythgoe, 1979]. The reasons for
Mosaic's link marker appearance were obviously of technical
nature: it was quite simple to implement, and at that
time most computers had only 16 colors or a black and
white display. Blue was the darkest of the available colors,
closest to the black text; for monochrome displays, the text
was underlined [NCSA 1996].
The pervasiveness of the Web has led us to accept underlined
colored text as the de facto marker standard for all
mainstream hypertext systems today. It can be found in
help systems on various platforms and even in operating
system components like the Microsoft Windows Explorer.
Though the use of a standard is desirable from a consistency
point of view, user interfaces have otherwise improved,
and we are still stuck with this historical hack.
More recent technologies like Cascading Style Sheets [Bos
et al. 1998] allow authors to define the appearance of text links in
various ways. Also, the look of links can be configured
to some extent in current browsers; however, the standard
setting is still blue and underlined, and links on demand are
still not possible. Even worse, for the visualization of link
maps in graphics no standard method can be found, nor is one
implemented in Web browsers. This shows how important
it is to design user interfaces well, considering earlier
experiences, and that even an interface with obvious design
mistakes can become a standard.
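To illustrate what CSS already makes possible, a small rule set like the following (a sketch; the color value is arbitrary) replaces the default blue underline with a per-anchor rollover highlight:

  /* remove the hard-wired blue underline */
  a:link, a:visited { color: inherit; text-decoration: none; }
  /* reveal an anchor only while the mouse is over it */
  a:hover, a:focus { background-color: #cce8ff; }

Such per-anchor rollover styling does not, however, provide a global links-on-demand mode that reveals all anchors of a page at once.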
ENTER XML LINKING
While the concept of links made it possible to create non-sequential
texts, rich hypertext systems offer a more
sophisticated functionality, i.e., they support comprehensive
structuring, editing and navigation features
[Yankelovich et al. 1988; Bieber et al. 1997]. However, to
date the Web itself only supports embedded one-way links.
This limitation made the authoring of Web pages and the
development of Web servers and browsers simple, enabling
the Web to grow extremely fast. On the other hand, all
approaches that try to integrate some extended functionality
into the Web have to utilize workarounds to overcome the
weaknesses of this simple approach. It was also hard to
compete with the well-known standard browsers, as these
often were better suited to display existing stylishly designed
Web pages.
With the advent of XML linking, the Web will be able to
offer many of the features hypertext experts are missing
today [Vitali & Bieber 1999]. Standard browsers already
migrate to XML and XSLT [Clark 1999] and hopefully
linking will become widely available soon.
The linking potential of XML linking is based on two key
standards which are necessary to create and describe links
and link anchors:
XLink itself defines links as relations between resources or
portions thereof. Syntactically, a link consists of an arbitrary
number of resources (local or remote) and arcs. A
resource is any addressable unit of information or service,
while arcs create directed connections between two resources
each [DeRose, Maler, Orchard 2000].
XPointer allows addressing different kinds of spans in XML
documents [see DeRose, Maler, Daniel 2001]. They can
vary from points to complex regions and can even be
distributed over the document; e.g., an XPointer could be
used to address the string "Goethe" in all citations in an
XML file:
xpointer(string-range(//cite, "Goethe")).
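Further XPointer forms of this kind range from whole elements to spans crossing element boundaries (a sketch; the element names and ids are invented):

  xpointer(/article/sect[2]/para[3])               (a whole element)
  xpointer(id("intro")/range-to(id("methods")))    (a range spanning from one element to another)
  xpointer(string-range(//abstract, "link", 1, 4)) (each occurrence of the substring "link" in abstracts)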
Two types of semantic attributes are defined: machine-readable
information is stored in the role and arcrole
attributes and the corresponding human-readable textual
information is kept in title attributes. This type information
can be specified for the link as a whole, each
endpoint of a link and for every arc.
XML linking also presents a solution for linking into other
authors' read-only material, by addressing parts of the
documents' structure. There is no need for tailored target
anchors, which are embedded in the target document, any
more. The importance of this can be seen from printed
media, i.e., referring to distinct pages or paragraphs.
XML linking may not only be used for hypertext links,
but for any kind of application describing relations,
associations or compositions of XML documents.
However, this paper focuses on its use for hypertext.
To summarize, XML linking will allow a multitude of new
hyperlink features, among them (see the sketch after this list):
- structure and contents may be separated;
- links may be bi-directional;
- links may be typed;
- links may have multiple endpoints;
- anchors may be complex or overlap.
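The following sketch shows what such an out-of-line link in a third-party linkbase might look like. Only the xlink:* attributes are defined by the XLink standard; all element names, URIs and role values are invented for illustration. One remote resource is connected to two alternative targets, and a second arc makes the relation traversable backwards:

  <linkbase xmlns:xlink="http://www.w3.org/1999/xlink">
    <ext xlink:type="extended" xlink:title="Goethe links">
      <loc xlink:type="locator" xlink:label="mention" xlink:title="Mention of Goethe"
           xlink:href="essay.xml#xpointer(string-range(//cite,'Goethe'))"/>
      <loc xlink:type="locator" xlink:label="info" xlink:title="Biography"
           xlink:role="http://example.org/rel/biography"
           xlink:href="http://example.org/goethe/bio.xml"/>
      <loc xlink:type="locator" xlink:label="info" xlink:title="Portrait"
           xlink:role="http://example.org/rel/image"
           xlink:href="http://example.org/goethe/portrait.svg"/>
      <go xlink:type="arc" xlink:from="mention" xlink:to="info"
          xlink:arcrole="http://example.org/rel/explains"/>
      <go xlink:type="arc" xlink:from="info" xlink:to="mention"/>
    </ext>
  </linkbase>

Since both target locators carry the label info, the single outgoing arc defines two alternative traversals; a browser could present them in a pop-up menu, as discussed below.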
While the syntax of XLink has been elaborately defined,
most presentation and behavioral aspects of links have been
deliberately excluded from the model. Only a few hints on
how to implement and interact with these features can be
found in the XLink definition, and few ideas exist to enable
the user to cope with this extended functionality. There is a
vague notion of displaying title attributes to enrich link
anchors semantically and computing role attributes to
realize typed links [DeRose, Maler, Orchard 2000]. Pop-up
windows are suggested for links with multiple endpoints
[DeRose 1989; Bosak & Bray 1999]. Though this might
suggest that not much has changed, for the user interface,
these new features create a lot of new questions for link
visualization.
The chances seem to be good that XLink will succeed. The
changes in browser capabilities since Mosaic suggest that
more extensive hypertext features will eventually be accepted:
Forms, JavaScript, Java Applets and Flash animations
are now widely used. Some extended link functionality
is already being simulated with DHTML, showing pop-up
menus with multiple destinations or extra information.
VISUALIZING EXTENDED HYPERTEXT FEATURES
The two well-known hypertext models Dexter [Halasz
1990] and HyTime [DeRose & Durand 1994] had features
almost matching, and sometimes exceeding, XLink, but no
system ever fully implemented them [Grønbæk & Trigg
1999, p. 42]. Nonetheless, many systems existed that were
far ahead of their time and offered functionality that is not
available in the Web today.
This enables us to find ideas and detect problems by looking
at all these systems and how they implemented and
visualized hyperlinks. We map the user interfaces of these
programs to the linking features that are made available
with the introduction of XLink. We can thus discuss what
is needed by the user to profit from the extended functionality.
Separation of Structure and Content
Several former Open Hypermedia Systems like Microcosm/
DLS [Carr et al. 1995] or the Devise Hypermedia System
permitted links to be stored separately
from documents in dedicated linkbases. Likewise,
XLink will allow the separation of structure and content for
the Web.
The external storage of links permits multiple linkbases to
be used for single Web pages. These links may originate
from the original author but also from other authors without
write access to the original document, like a single user or
members of a group [2].
The use of several different linkbases can result in an
unintentionally great number of links [3]. Therefore, the user
must be enabled to select the employed linkbases. An
example for such a method can be found in "Third Voice",
a browser plug-in that adds annotations to Web pages [4] (Fig.
5). A part of its functionality is a service that adds links
from an external linkbase to keywords. These annotation
link markers are distinguished by orange underlines. Third
Voice offers an extra tool bar in the head of the browser
where the presentation of its links can easily be toggled.
Figure 5: Third Voice adds additional links to an existing Web page and offers a choice of targets.
Microcosm/DLS already permitted the use of several
linkbases. It offered a configuration screen to select the
utilized link database from a given set [Carr et al. 1996].
Unfortunately this menu was not directly integrated into the
browser interface and the addition of new linkbases was
quite complicated. An XLink browser will also need the
potential to find new linkbases [5] and add them to a personal
list. So far, no standard means has been established to do
so.
Bi-directional Links
From the technical point of view bi-directional links help to
links consistent and to avoid broken links. From the
usability viewpoint they also permit following links backwards,
as opposed to the uni-directional "goto" links of the
Web. A user could use this feature to find e.g. more recent
information which refers to an old but valuable
document.
To benefit from bi-directional linking, the user interface
has to support the backward traversal of links. Most hyper-text
systems with bi-directional links like Sepia, MacWeb
or Hyper-G offered a "local map", showing nodes and
connecting links. This visualizes the topology and permits
the user to select source objects directly on the map.
[2] These additional links can be used to annotate and supplement the existing information with other information of personal importance.
[3] These links may also overlap (see following sections).
[4] Third Voice is available at http://www.thirdvoice.com
[5] Furthermore, the primary linkbase will frequently change when browsing the Web, as it usually will be provided by the server hosting the current document.
For the Web, the retrieval of links that refer to the current
document poses a serious problem. A prototype Web
browser tool described in [Chakrabarti et al. 1999] gathered
this information from search engines. They alternatively
proposed to extend the HTTP protocol to send backlink
information gathered from the referrer URLs in the server
log. The prototype offered a list of titles of Web documents
that were linking to the current document.
Both approaches have their limitations if the number of
links is high. Especially graphical maps use a lot of screen
space if dozens of nodes and links are displayed. Thus, the
number of objects has to be limited, e.g. by filtering the
most appropriate ones.
Typed Links
A link type describes the relationship between source and
destination of a link, often derived from semantic categories
like "explanation" or "example" [Trigg 1983]. They
were introduced to help users to get a better idea of a link
target. Streitz et al. list semantic link information as their
first principle of useful hypermedia system design [Streitz
et al. 1992]. However, typed links are only helpful if the
user can distinguish the different types.
Tim Berners-Lee's WWW proposal [Berners-Lee 1989]
included typed links, and HTML allows Web authors to set
the link type attributes rel and rev. Though, this feature
is not supported by any current Web browser.
Sepia [Streitz et al. 1992] and MacWeb [Nanard & Nanard
1993] displayed the link type in an overview map close to
the arrow visualizing the link. Once more, this link information
is only available to the user if he considers two
areas at the same time: a document view and a link map.
He has to join these two information segments cognitively.
Other systems use text style to distinguish different link
anchor types: the current Microsoft help system displays
explanatory pop-up links in green with a dotted underline
and uses icons to indicate specific actions such as the execution
of a program. However, the potential of text style is
quite limited, and inline icons can be distracting and create
problems with the layout.
Figure 6: Different mouse pointers utilized by the Guide system.
The Guide system utilized different mouse pointers to make
link characteristics apparent [Brown 1987]. The pointer
changed according to the link type if it hovered over a link
(Fig. 6). Since mouse pointers are independent from screen
and text layout, this may be an interesting option for Web
clients, too. Standard software, like word processors and
graphics programs, and also operating systems, commonly
employ these differently shaped mouse pointers as it is possible
to indicate many different actions in a non-obtrusive,
yet immediately visible manner.
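On the Web, a comparable effect could be approximated with CSS cursor rules keyed to link types, for instance via HTML's rel attribute (a sketch; the rel values are invented and no current browser derives any semantics from them):

  a[rel="download"] { cursor: progress; }  /* indicates a file transfer */
  a[rel="glossary"] { cursor: help; }      /* indicates an explanatory pop-up */
  a[rel="external"] { cursor: alias; }     /* indicates leaving the site */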
Multiple endpoints
Links with multiple endpoints do not connect only two, but
a set of related nodes. Thus different alternative destinations
can be provided. When a user initiates the traversal
of a link with multiple endpoints, he can be requested to
choose between the available options. This solution was
preferred by most former hypertext systems. Microcosm
and DLS presented a list of generated link targets on an
intermediary page as the result of a user query [Hall, Davis,
Hutchings 1996; Carr et al. 1995]. Intermedia displayed a
dialogue box with a list of link titles.
Likewise, the preferred idea for XLink seems to be a pop-up
menu [Halsey, Anderson 2000; DeRose 2000]. Though
lists of targets are probably the most straightforward
approach, they may slow down Web navigation. A user has
to make an additional selection from the pop-up list each
time he follows a link.
Multiple links can also be used to automatically select the
most suitable destination by applying a filter. Already the
father of hypertext, Vannevar Bush, suggested filters for
links. If the user follows a Guided Tour, links of the displayed
documents should be hidden [Bieber 97; Bush 45].
Links could be filtered by link attributes in some systems, and in Hyper-G
by user rights. It would be even more desirable to filter by
semantic criteria like a user's task or profile.
Complex Link Anchors
Many Web usability guidelines confine the setting and the
length of link markers, e.g. Nielsen recommends that link
markers should be about 5 words long [Nielsen 2000]. This
restriction is a concession to the limited link visualization
potentials of current Web browsers, where extended link
spans result in hardly readable underlined text regions.
Hypertext systems that displayed links only on demand
avoided these readability problems.
The XML linking standard allows arbitrarily complex link
anchors. As explained before, it is even possible to create
discontinuous anchors, i.e., anchors that consist of several
distinct regions. To the user this may appear like multiple
anchors that share the same destination, which can be
irritating. In Web system evaluations, links that span
more than one line have already been found confusing,
as the beginning and end of the anchor were not indicated
by the browsers used [Spool et al. 1999].
Consequently, the extent of a link marker should be visualized.
This is possible in recent Web browsers: the link marker
can be highlighted if the mouse hovers over the link.
However, the browser configuration has to be changed or
an appropriate CSS must be defined.
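A single rule of this kind (a sketch complementing the one shown earlier; the color value is arbitrary) reveals the full extent of an anchor, including line breaks, whenever the mouse is over any part of it:

  a:hover { background-color: #ffef9e; }  /* the background marks the whole anchor span */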
Overlapping Link Markers
Link markers may overlap, either because an author creates
two anchors at two intersecting text sections which are
related to different destinations [6], or because other authors
create anchors overlapping with the link spans of the
original author.
Hardly any current Web user will be familiar with the idea
of overlapping link markers as they cannot be found on the
Web or any popular hypertext system. Currently, it is not
possible to create such constructs in HTML, since there is
no way to distinguish different opening and closing anchor
tags. This technological problem can easily be solved even
with embedded links as Hyper-G's markup language HTF
demonstrated. It used link identifiers to associate opening
and closing link tags [Maurer 1996].
Nonetheless it is much harder to find a usable solution for
the visualization of overlapping link spans. Harmony,
Hyper-G's browser, used overlapping colored background
boxes to mark the beginning and end of up to six overlapping
markers (Fig. 7). But even two overlapping links
are hardly readable and this method will finally fail if a
larger number of anchors intersects: the increasing number
of boxes will shrink to pixel height before they finally
disappear.
Figure 7: Link Overlap in Harmony.
The user must also be able to choose a desired link in the
overlapping section. Third-Voice (Fig. 5) displays a pop-up
window where the user can pick the link to follow. Harmony
lets the user first select an overlapping link by single-
left clicks and then follow it by a double-left click
[Andrews 1996, p. 54]. Both solutions are not optimal, as
the first one always needs two clicks and the second one may
even need an uncertain number of clicks to follow a link.
The current version of Hyper-G does not support overlapping
links any more [7].
A VISION FOR IMPROVED HYPERTEXT USER INTERFACES
The Web, undoubtedly the most successful distributed
hypertext system ever, already has serious usability problems
despite its simplicity. This must be prevented from getting
worse when extended linking features are introduced.
We would like to revive a discussion by presenting ideas of
a user interface strategy for extended links. To accomplish
this we consolidated experiences of earlier hypertext
research with established and innovative GUI techniques to
create a consistent vision. These thoughts are largely based
[6] Example: the phrase "psycholinguistics department" might be a link to the department home page, while another link explains the meaning of "psycholinguistics".
[7] HyperWave Information Server Version 5.5 uses HTML as markup language.
Figure 8: Mockup: Outgoing XLinks can overlap and are marked by transparency. Note the marked scrollbar.
on the analogy of the hypertext reader as a traveler, introduced
in Landow's authoritative "Rhetoric of Hypertext"
paper. He divides the interaction with links into two key
parts: departure and arrival [Landow 1987].
Landow's ideas were based on his experiences with Inter-
media. Against the background of later hypertext research and
the enriched linking capabilities of XLink, a further distinction
is possible. The action of departure can be split
into two sub-actions: first, the problem of locating the point
of departure (identifying link markers) and second, the
problem of getting sufficient information about the destination
of the journey (understanding the link relationship).
Considering the arrival procedure, the reader must get a
reception at the destination to understand the extent and the
context of the referenced material. The direction he came
from, i.e., the origin of the journey, is the last page he
visited and therefore known.
Finally, XLink does not only allow for links that connect
just two endpoints; it is also possible to build XLinks that
represent whole paths or structures. Thus, XLink at last
embodies a standard Web storage format for structural
information, e.g. for guided paths or for hierarchical site
maps. We will discuss the uses of these hidden links (hid-
den in the sense that they are not originating from rendered
page content) in a separate section.
Point of Departure
Current methods of Web authors (emphasizing text anchors
by using color and style, and using specially tailored
graphics to mark graphical link anchors) are already so
common that they will probably continue to exist when
XLink is introduced. However, as illustrated above, these
methods do not have the potential to identify extended or
externally defined XLinks. Furthermore, no prevalent
standard visualization method can be found to identify
graphical or image map links. Consequently, new schemata
are needed to display supplementary links, e.g. from an
external linkbase provided by an XLink service.
From the usability point of view, a consistent and uniform
technique is desirable that does not distract from reading
and does not interfere with text and graphical layout tech-
niques, but enables the user to identify even complex
anchors clearly. We think that an appropriate way to
accomplish this might be the use of transparent areas
overlaid on the hypertext document. Overlays have the
advantage to be feasible with text and graphics, indicating
active areas directly by masking them. They can be applied
also at places where the document author did not plan a
link. A possible distraction can be reduced by using soft
and light colors for bright background and shady colors for
dark background. User tests with more sophisticated
transparent user interfaces showed promising results
[Harrison 1995; Cox 1998].
An important factor that has to be considered is link den-
sity. If the ratio of marker area to unlinked area is high, the
distinctive anchor appearance may overwhelm the "normal"
text. Since an arbitrary number of "alien" links can relate to
an XML Web page, a selection mechanism will have to
prevent a phenomenon we would call "link overload",
similar to information overload, which could overshadow
the interaction potentials of the approaching XLink Web.
Therefore, the user interface must provide means to select
which links will be put on view.
Once more, this calls for links-on-demand display techni-
ques. The selection mechanism may be provided by an
additional tool bar or window. A "link database browser"
could be displayed at the left side of the window like the
history list of Microsoft Internet Explorer or the Sidebar of
Netscape 6. The tool would not only allow to select new
link databases8, it would also permit to enable and disable
[8] XLink offers a standard storage mechanism for external links. This permits the construction of hyperbase systems that offer compiled collections of links, e.g. as the result of a query [Grønbæk & Trigg 1999, p. 167]. These services could be provided just like today's search engines or Web catalogues.
linkbases, making them appear or disappear. Colors may be
used to associate listed linkbases to the anchors on the
screen (Fig. 8).
When a link starts from an anchor longer than a few words,
the overall readability of the text rapidly decreases, at least
as long as persistent highlighting is used. As, with the
introduction of XLink, source anchors can become arbitrarily
long, this question becomes increasingly important. We
suggest a simple method to reduce the impact on readability:
a narrow bar on the right side of the anchored
paragraph. The use of different techniques for short and
long anchors is suggested by looking at the use of conventional
paper: markup on paper consists of highlighting
words by coloring them with a transparent marker and,
when longer passages need to be distinguished, marking
whole paragraphs by using vertical lines on the page
border. Sometimes a title is given to help recognize the
underlying concepts of the passage. Markup of this style
has the advantage of being apparent but not as distracting as
long underlined text. It uses only little screen space and
goes along well with most layouts.
Since this simple technique does not show the exact location
of link marker start and end, it should be supplemented
by a rollover effect. The scrollbar, or optionally a small-scale
overview window could be used to show the location
of link anchors that are outside the currently visible page
section. Using the scrollbar to locate particular areas on
long Web pages was already suggested by Laakso and Laakso.
Overlapping links of several linkbases could be visualized
by transparent overlays in a defined neutral color, like
light gray. If the user clicks on such an area, a transparent
pop-up appears, showing a list of the available link titles in
the color of their associated database. Moving the mouse
over these transparent items will highlight the related link
markers in the document.
Destinations
At first sight the more or less uniform looking links of the
Web are not typed. Apart from the marked text, the only
preview a user can always get in a Web browser is the
destination URI [Spool et al. 1999]. Even this scant information
is frequently utilized by Web users. Sometimes link
titles or alternative descriptions to graphics are provided to
hint at the content of the target document.
Looking closer, the current Web could already provide
much richer information. Link targets can differ in type
("mailto:" links, downloads), availability (broken links),
size, and connection speed (affecting download time).
Further information of semantic nature like title, author and
language of the target document or structural hints like
indicating out-of-site links could be used to automatically
enhance link preview. In a paper on our project HyperScout
we already suggested techniques to display such information
in pop-ups [Weinreich 2000].
Figure 9: A pop-up menu that renders both XLink-specific and other automatically gathered information.
While XLink's title information can also be displayed straightforwardly in such a pop-up window, the machine-readable information is provided to compute type
information. This can be used to induce alternative traversal
behavior, or to get advance information about file
types of target documents. It could also be used to filter
links according to a specified user profile. If this leads to
alternative browser behavior, this must not be hidden from
the user. In addition to pop-ups, we suggest the use of
different mouse pointers to immediately indicate link
actions, comparable to the method of the Guide system.
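As a minimal sketch of how such type information could be derived automatically from a URI alone, consider the following C++ fragment; the categories and heuristics (scheme prefix, file extension, host comparison) are our own illustration, not part of any browser or of the XLink standard:

#include <iostream>
#include <string>

// Possible link-target categories (illustrative only).
enum class LinkKind { Mail, Download, External, Internal };

// A minimal heuristic classifier; a real browser would combine this
// with HTTP metadata (size, availability) and document properties.
LinkKind classify(const std::string& uri, const std::string& currentHost) {
    if (uri.rfind("mailto:", 0) == 0) return LinkKind::Mail;
    for (std::string ext : {".zip", ".pdf", ".exe"})
        if (uri.size() >= ext.size() &&
            uri.compare(uri.size() - ext.size(), ext.size(), ext) == 0)
            return LinkKind::Download;
    std::size_t p = uri.find("://");
    if (p == std::string::npos) return LinkKind::Internal;   // relative URI
    std::string host = uri.substr(p + 3, uri.find('/', p + 3) - (p + 3));
    return host == currentHost ? LinkKind::Internal : LinkKind::External;
}

int main() {
    std::cout << (classify("mailto:x@y.org", "w3.org") == LinkKind::Mail) << "\n";
}

Such a classification could then drive the choice of mouse pointer or pop-up content described above.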
If an XLink offers several destinations, the problem of
selection occurs. Pop-up menus with a list of available links
are suggested in the XLink definition and some publications
[DeRose 2000; Bosak & Bray 1999]. They have, however, the disadvantage of requiring additional user action: the user first has to choose a link and click, and then has to choose a target anchor and click again. We would suggest
to use the role attributes to allow filtering, thus displaying
only part of the link targets available. In certain cases it
might even be desirable that a default destination is automatically
selected when the left mouse button is clicked.
Indicated by the mouse pointer, a pop-up appears only on
right mouse click, presenting a choice of complementary
link targets.
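The suggested role-based filtering could look like the following C++ sketch; the Arc structure and its field names are hypothetical, chosen only to mirror XLink's title, href, and role attributes:

#include <iostream>
#include <string>
#include <vector>

// One outgoing arc of a multi-ended XLink (field names are illustrative).
struct Arc { std::string title, href, role; };

// Keep only arcs whose role matches the user's filter; the first match
// could serve as the default destination for a plain left click.
std::vector<Arc> filterByRole(const std::vector<Arc>& arcs,
                              const std::string& role) {
    std::vector<Arc> out;
    for (const Arc& a : arcs)
        if (a.role == role) out.push_back(a);
    return out;
}

int main() {
    std::vector<Arc> arcs = {
        {"Definition", "def.xml", "definition"},
        {"An example", "ex.xml",  "example"}};
    for (const Arc& a : filterByRole(arcs, "example"))
        std::cout << a.title << " -> " << a.href << "\n";
}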
Arrival
The rhetoric of arrival in the sense of Landow requires that
the reader gets the feeling of welcome at the destination
document: "One must employ devices that enforce hypertext capacity to establish intellectual relations." [Landow 1987].
Establishing such an intellectual relation requires the user
to determine the target of a link and its context. The method
of today's Web browsers to present the target is simple: the
whole document is shown, or, if a fragment identifier was
specified, the browser tries to scroll to the position of the
fragment anchor. In fact this is a known usability problem
of the Web: as the span of the link target is not visualized
the user cannot identify the extent of the destination. If the
fragment locator is near the end of a page, the browser
often cannot scroll sufficiently down to display the link
target at the top of the window.
Tim Berners-Lee's first graphical Web browser on the
NextStep system and later versions of Mosaic did already
highlight the target anchor [Caillau & Ashman 1999], a
feature lost in current mainstream browsers. Since we
already suggested transparent highlighting areas to identify
starting anchors, a different method would be advisable to
prevent misconceptions.
Figure 10: A mock-up showing the incoming link target in its context and information about co-references.
We again suggest a technique long known to work on paper: lateral marker lines. A narrow bar on the far left side of the window is used to indicate the target span. The chance of confusing incoming and outgoing links (incoming links: left vs. outgoing links: right) is thus kept to a minimum. The Devise Hypertext System utilized a similar technique to indicate target anchors [Grønbæk & Trigg 1999, p. 314]. When a more precise visualization of the
target becomes necessary (e.g. for tables), an on-demand
method may be used: moving the mouse over the marker
bar will shade the rest of the document except for the target
area; a method already used in Harmony's PostScript
viewer [Andrews 1996].
If the target section is larger than the visible window, clicking on the bar will "pin" the shading and the user may
scroll the page. Additionally, the scrollbar may be used to
show the extent of the target span compared to the whole
page, especially useful if it does not fit into the window.
Although a very precise notion of the target anchor can be
specified with XML linking, a weakness of XLink
emerges: what the standard lacks is a definition of the link
context. Nanard and Nanard argue for a distinction of link
anchor (as trigger) and link context (as minimum reading
context) at both ends of the link. The link anchor is usually
quite short and focused, while the link context embeds the anchor, enabling the reader to understand the relationship
of a link better [Nanard & Nanard 1993; Hardman et
al. 1994]. The meaning of a sequence of words can change
severely when torn out of its context, i.e., the surrounding
sentence, paragraph or chapter. XLink misses an explicit
definition of such context spans. This is a serious disadvantage that could easily be fixed by an additional
attribute for the resource tags. Then, if an anchor is
selected, the context should be made visible.
It is also sometimes useful to supplement other links
pointing to the target anchor, called co-references. These
links could have been collected in earlier sessions, retrieved
from search engines or compiled by linkbase systems. They
can stem from material that was not visited in the course of
the current search, and, if followed in the reverse direction,
they can provide material related to the current target
anchor.
We suggest to make the most appropriate co-references of
the last navigated link available by right clicking on the
lateral marker bar, just as the right click opens a pop-up on
link anchors. A double click can be used to open a larger
list in the left side of the browser with more references
pointing to that document. Finally, we think it would be
feasible to apply filtering mechanisms utilizing the arc
roles, e.g. to display only links that use the current anchor
as an ?example?.
The Use of Hidden Links
So far we have tried to optimize the visualization of hyperlinks
with markers in documents. However, XLink can also
be used to describe relations without markers, e.g. non-associative
structural XLinks. These links can either be
supplied by the author of a site or by an external source,
e.g. a guided tour or a trail [Hammond & Allinson 1987;
Bush 1945]. Automatically generated link overview maps
(local map, fisheye view, 3D landscapes) often seem to be
more confusing than helpful, when used in large hyperspaces
[Utting & Yankelovich 1989]. Because of the
immense size and the distribution of the Web, structural
information has to be provided for an overview that truly
can help to find semantically related content.
A special link type was introduced by HTML 2.0 (<LINK REL>) to distinguish between structural and associative links [Berners-Lee 1995]. Though this made it possible to
separate structure-related and content-related navigation, it
is poorly supported by current Web browsers. Only some
less widely spread browsers like Lynx and iCab (Fig. 11)
support the use of structural links. Thus, so far there are
only a handful of Web sites that offer structural links. Yet,
this information could often be easily provided, especially
for generated Web content or sites created with an authoring
tool.
Figure 11: iCab's structural link navigation toolbar.
To support structural information in XLinks special link
roles and arc roles would have to be defined. This, however, would use the role attributes not for semantic but
rather for syntactical information. Then again, it would be
possible to provide complete structures, e.g. Guided Tours
or Site Maps, in a single link, something not possible with
the LINK element.
As for link markers, a consistent interface is needed for
structural navigation: we suggest that XLink-aware browsers
should provide an iCab-like toolbar for basic structural
navigation. Furthermore a hierarchical view, like Hyper-
G's collection tree browser, can be provided on demand.
This additional navigation tool should be displayed in the
same browser window, e.g. in place of the sidebar. The
interface should also provide a standard interface for
Guided Tours or other meta-structures, thereby eliminating
the need for workarounds.
We can also imagine hybrid XLinks which bear structure,
and have link markers in the Web page9. This implicit
structure could be extracted and displayed in the standard-
9 Such structural links include: links on a homepage
pointing into the site, site logos pointing to the homepage,
arrows for next and last pages, etc.
ized user interface. The original embedded links should not
be hidden: the user can thus either use the consistent
standard interface (without having to search for navigation elements) or follow the rendered structural links (without
having to leave the page context).
CONCLUSION
Usability has become a key factor for the success of software. Despite the intensive research on hypertext systems,
no standard hyperlink user interface has been agreed on.
We are thus bound to the de-facto standard of the Web, a
design with many inherent weaknesses that does not agree
with extended linking features.
Experiences from software engineering have shown how to
do better: the initial design of a system has to include its
user interface as well as its functionality [Nielsen 1993].
The representation of data is only of secondary importance.
This demonstrates the need of reconsidering currently
developed standards: the XLink standard hardly mentions the user interface. The same lack of consideration
of link interfaces is apparent in other W3C activities:
Neither HTML nor its present descendants nor other
standards like SMIL or the Semantic Web Initiative mention
design issues regarding the user interface.
In this paper we try to stimulate a discussion on the visualization
of and the interaction with extended hyperlink
features. We believe that this is necessary to prevent an
impairment of Web usability when new linking features are
introduced.
Experiences from historical systems can help to avoid
mistakes and to provide solutions that are still topical. This
paper presents problems and solutions for the presentation
of and the interaction with extended hyperlink features.
Though we are aware that the developed vision can still be
enhanced, we gathered well-tried methods to create a consistent
and easy-to-use interface.
In this process, design issues for XLink arose: we found
some open issues, i.e., the missing definition of contexts,
default arcs, syntax attributes or attributes needed to carry
preview information (like the size of a target document).
Some issues were completely left out, like the distribution
of links via linkbases or an exact specification for the use
of the semantic attributes.
Nonetheless, XLink can be used even today: when XLinks
are used on Web servers, the centralized storage makes link
management much easier [Markos 2000]. Using XSL
Transformations, XML or XHTML documents and XLink
linkbases can be converted to HTML and be accessed by
conventional browsers – right now.
In the long run, however, this functionality should be moved to the client – only then will the browser be able to exploit the full power of XLink. The success of XLink or a
similar standard will eventually depend on two factors:
decent tools for authors and readers.
--R
Information Management: A Proposal.
Hypertext Markup Language - 2.0
HypertextNow: Showing Links
Fourth Generation Hypermedia: Some Missing Links for the World Wide Web.
Cascading Style Sheets Level 2 Specification.
XML and the Second-Generation Web
Turning Ideas into Products: The Guide System
As We May Think
Hypertext in the Web
The Distributed Link Service: A Tool For Publishers
Web Links as User Artefacts
Surfing the Web Backwards
XSL Transformations (XSLT) Version 1.0
XML Path Language (XPath) Version 1.0
Hypertext: An Introduction and Survey
The Usability of Transparent Overview Layers.
Neptune: A Hypertext System for CAD Applications
XML Pointer Language (XPointer) Version 1.0
Linking Language (XLink)
Making Hypermedia Work: A User's Guide to HyTime
The Dexter Hypertext Reference Model
XLink and open hypermedia systems: a preliminary investigation
The Travel Metaphor as Design Principle and Training Aid for Navigating around Complex Systems.
Adding Time and Context to the Dexter Model
An Experimental Evaluation of Transparent User Interface Tools and Information Content
Relationally Encoded Links and the Rhetoric of Hypertext
The Ecology of Vision
--TR
Neptune: a hypertext system for CAD applications
Hypertext: an introduction and survey
The travel metaphor as design principle and training aid for navigating around complex systems
Hypertext hands-on: an introduction to a new way of organizing and accessing information
Context and orientation in hypermedia networks
SEPIA
Should anchors be typed too?
The Amsterdam hypermedia model
An experimental evaluation of transparent user interface tools and information content
As we may think
Fourth generation hypermedia
Fluid links for informed and incremental link transitions
The usability of transparent overview layers
Web site usability
Surfing the Web backwards
Turning ideas into products
Document Examiner
Relationally encoded links and the rhetoric of hypertext
XLink and open hypermedia systems
Hypermedia on the Web
Hypertext in the Web: a history
Concepts for improved visualization of Web link attributes
Designing Web Usability
Hyperwave
Usability Engineering
Rethinking Hypermedia
Making Hypermedia Work
A network-based approach to text handling for the on-line scientific community
--CTR
Duncan Martin , Helen Ashman, Goate: XLink and beyond, Proceedings of the thirteenth ACM conference on Hypertext and hypermedia, June 11-15, 2002, College Park, Maryland, USA
Duncan Martin , Mark Truran , Helen Ashman, The end-point is not enough, Proceedings of the fifteenth ACM conference on Hypertext and hypermedia, August 09-13, 2004, Santa Cruz, CA, USA
Paolo Ciancarini , Federico Folli , Davide Rossi , Fabio Vitali, XLinkProxy: external linkbases with XLink, Proceedings of the 2002 ACM symposium on Document engineering, November 08-09, 2002, McLean, Virginia, USA
Hartmut Obendorf , Harald Weinreich, Comparing link marker visualization techniques: changes in reading behavior, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
Delfina Malandrino , Vittorio Scarano, Tackling web dynamics by programmable proxies, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.10, p.1564-1580, 14 July 2006
Niels Olof Bouvin, Augmenting the web through open hypermedia, The New Review of Hypermedia and Multimedia, v.8 n.1, p.3-25, January
Whitehead, As we do write: hyper-terms for hypertext, ACM SIGWEB Newsletter, v.9 n.2-3, p.8-18, June-October 2000 | XLink;user interface;distributed hypertext;link marker |
504364 | Class-is-type is inadequate for object reuse. | It is well known that class and type are two different concepts in object-oriented programming languages (OOPLs). However, in many popular OOPLs, classes are used as types. In this paper, we argue that the class-is-type principle is a major obstacle to software reuse, especially to object reuse. The concepts of the basic entities, i.e., objects, object classes, object types, and object kinds, in the type hierarchy of OOPLs are revisited. The notion of object reuse is defined and elaborated. In addition, we show that parameterized types and generic functions are better served by using kind-bounded quantification than by universal quantification and other mechanisms. | Introduction
Object-oriented languages and technology have propelled software reuse to an unprecedented level. However,
software reuse at the current stage is still mostly (1) within a single software project and (2) in the form
of source-code reuse and as libraries. The dream that software construction would be similar to assembling
Integrated Circuits (ICs) in hardware is still not realized. Programming in many essential aspects is the same as
twenty years ago. Software construction is still basically at the stage of program writing, and has not advanced
to the stage of software integration (SI).
We consider that object reuse (rather than class reuse) is a key concept in software integration. Objects
are the basic building blocks of an object-oriented system. They are first-class citizens. Programming via
software integration means that we can reuse and integrate objects, which are developed independently (by other projects or object vendors) and in an executable form, into our program conveniently without knowing their implementation details. (This work is supported by the Natural Sciences and Engineering Research Council of Canada grant OGP0041630.)
We consider that object reuse is a much more important concept than class reuse in software integration.
Consider an example in the physical world: When we purchase an air-conditioner for a house, we do not need or
want to know the design and technical details of the air-conditioner. It is only necessary for us to know whether
the air-conditioner satisfies a few specifications, e.g., cooling capacity, physical dimensions, and voltage. In this
case, the object (the air-conditioner) and its type (a few specifications) are our concern and interest. We do not
need or even want to know the class of the object (the technical details of the air-conditioner).
We consider that one major cause of the stagnation in reuse is that, in most major (class-based) object-oriented
programming languages, object classes are directly used as object types. For example, when we specify
the object type of a function parameter, we use a class in place of a type. This approach may be due to the
fact that the semantics of an object type is easy to define by a specific implementation. It is important to
notice that classes are implementation-dependent entities in general and also objects cannot exist autonomously
apart from their class definitions in current class-based languages. In contrast, object types are implementation-independent
entities. The class-is-type principle used in many object-oriented languages restricts an object type
to a specific implementation. We argue in this paper that the class-is-type principle, not the classes themselves,
is an obstacle to object reuse.
Another issue that concerns the type hierachy of object-oriented languages is about parameterized types.
Note that a C++ class template is a parameterized type under the class-is-type principle. We consider that a
parameterized type is just a type function which maps a given type (or types) to a type. The domain of such a
type function, i.e., the domain of its type parameter(s), is commonly represented by T (T n ), where T is the set
of all types. However, in many cases, not all types can be used to replace the type parameter of a parameterized
type. There are restrictions which are implicitly imposed by the parameterized type definition. We consider
that object kinds, which are higher-level entities than object types [31, 32, 26, 25, 34], are best suited for defining
the domains of type parameters of parameterized types as well as for those of generic functions.
Objects, object classes, object types and object kinds are basic entities in the type hierarchies of various
object-oriented languages [31]. In the next section, we review the basic concepts of those entities as well as the
relationships among them. We also discuss the similarities and differences between abstract classes, interfaces,
and object types. We focus our attention on the class-is-type principle, giving an initial analysis on its advantages
as well as its potential problems. We also distinguish object kinds from supertypes.
Object reuse is defined and analyzed in Section 3. We consider that object reuse concerns mainly the
following five issues: (1) object creation, (2) object autonomy, (3) object application, (4) object integration,
and (5) object modification. We describe each of the five issues as well as its role in object reuse.
In Section 4, we discuss what roles classes and the class-is-type principle play in each of the five issues of
object reuse. We find that while classes can be apt at object creation and object autonomy, the class-is-type
principle is inadequate for object application, object integration, and object modification, i.e., all the last three
issues of object reuse. We also give our insight analysis on why this is the case.
In Section 5, we argue that parameterized types and generic functions are best served by kind-bounded quantification, i.e., each type parameter is qualified by a higher-level entity, an "object kind".
We conclude our paper in Section 6 by summarizing our suggestions on the further development of object-oriented
languages.
2 Type hierarchy of object-oriented systems
In this section, we give our view on the basic type entities in object-oriented systems. We will consider objects,
object classes, object types, as well as the concept of object kinds. We will be brief on the commonly accepted
concepts and pay more attention on the points that are related to the topics of the subsequent sections of
the paper. Note that object classes and object types are very different entities; however, the approach that
object classes are used in place of object types, i.e., the class-is-type principle, is used in many object-oriented
languages, including C++ and Java. We will also describe the concept of object kind and the fundamental
differences between supertypes and object kinds. The three seemingly equivalent terms, abstract class, interface,
and object type, are also compared.
2.1 Objects
An object is an integrated unit of data and methods; it is an abstraction of an application-domain or implementation-domain entity that has crisp boundaries and meaning for the problem concerned.
Each object has an identity, data fields, and method fields. The data fields store the internal state of the
object and act as data storage, while the method fields are functions that modify the internal state of the object,
access the data storage, and interface the object with the outside world.
The internal state of an object is modified dynamically during execution. Normally, this change can be made
only through its own methods. Not all methods may be accessed from the outside of the object. Some may only
be used by other methods of the object. We call those methods that can be directly accessed from the outside
the properties of the object.
Clearly, objects are first-class entities of object-oriented systems.
2.2 Object classes
Classes serve as the descriptions of objects as well as a mechanism to create objects. Objects are instances of classes. In class-based languages, the only way to describe an object is through its class. A class is also considered a set of objects which share the same description. Thus, the relation between an object O and its class C can be denoted O ∈ C.
In general, object classes are implementation-dependent entities. For example, a stack class with an array
implementation and a stack class with a linked-list implementation are considered as different classes. Different
objects of the same class have the same implementation, but different identities and maybe different internal
states.
An abstract class may be independent of any implementation. However, an abstract class cannot be the
description of an object directly. An object can only be instantiated directly from a concrete class but not from
an abstract class. So, an abstract class as a set has no direct object members.
Object classes are static entities. Unlike objects, classes cannot be changed at runtime once they are defined.
The relationships between classes are static, too. We can modify the relationships between objects but not their
classes at runtime.
A property of a class is a method of the class that can be accessed directly from the outside. All
objects of a class (direct members) share exactly the same set of methods and, thus, they have exactly the same
properties.
2.3 Object types
Intuitively speaking, an object type is a description of a collection of properties without giving any implementation. Thus, an object type is also considered a set of objects that have the same properties, but possibly
different implementations. Clearly, object types are implementation independent entities. All objects of the
same class are also of the same object type. However, all objects of the same object type may not belong to
the same class. For example, two integer-stack classes S 1 and S 2 are implemented with a linked list and an
array, respectively. They have the same set of operations: push(int), pop(), top(), empty(). S 1 and S 2 are two
different classes, but of the same type, say T_S. In this case, we also say that S_1 (and likewise S_2) is an implementation of T_S. A stack object O_S of type T_S is considered a member of the set T_S, denoted O_S ∈ T_S. The fact that S_1 and S_2 are of the same type does not imply that they are the same class.
An object type may be defined in many ways. For example, an object type can be defined by listing all its
properties (the names and signatures of all its methods) [18], by extracting a type from a class using an operator
[1], or by implicitly deleting the implementation details of a class when a type is needed. Similar to classes,
subtypes can be defined in terms of their supertypes.
We say that an object O is of type T if O possesses the properties of T. Note that O can be of type T despite the fact that the creation of O may be independent of the definition of T.
In some cases, an object O actually possesses the properties of an object type T , but O and T use different
names for their properties. For example, O has the two properties below:
void insert-top( Content );
Content delete-top();
and T is defined as:
Type T {
Content pop();
void push( Content );
}
and we know that insert-top is the same property as push and delete-top corresponds to pop. Then it would be convenient to have an "is of" language construct like the following:
Object O is-of Type T {
insert-top() is push();
delete-top() is pop();
}
which allows the programmer to claim that O is an object of type T and to specify the correspondence between
the properties of O and those of T . Type checking and type equivalence are usually quite complex. The above
construct would be helpful in reducing the work of the compiler.
More formally, the "is of " is a binary operator which links an object to an object type. The "is" operator
inside the "is of " construct is a propterty matching operator, which links a property of the object to a property
of the object type. A simple interpretaion of the "is of " construct is that we require the two properties on the
both side of the "is" operator to have exactly the same signature although they may have different names. This
interpretaion is in general good enough for practical purposes. A more flexible and complex interpretaion is to
require that the object type of the property on the lefthand side, which may be a function type, is a subtype
of that of the property on the righthand side. This interpretation would involve contravariance and covariance
rules.
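To make the intended semantics concrete, the following C++ sketch shows by hand what a compiler might generate for the "is of" claim above under the simple same-signature interpretation; the names are taken from the example (hyphens replaced by underscores), and Content is fixed to int for illustration:

#include <iostream>

using Content = int;

// The object O: its class uses its own method names, as in the text.
struct O {
    Content data[100]; int n = 0;
    void insert_top(Content c) { data[n++] = c; }
    Content delete_top() { return data[--n]; }
};

// The object type T, rendered as an abstract interface.
struct T {
    virtual void push(Content) = 0;
    virtual Content pop() = 0;
    virtual ~T() = default;
};

// What "O is-of T { insert_top is push; delete_top is pop; }" could
// expand to: a thin adapter matching properties by the declared pairs.
struct O_as_T : T {
    O& o;
    explicit O_as_T(O& obj) : o(obj) {}
    void push(Content c) override { o.insert_top(c); }
    Content pop() override { return o.delete_top(); }
};

int main() {
    O o;
    O_as_T t(o);
    t.push(7);
    std::cout << t.pop() << "\n";   // prints 7
}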
The relationship between classes and object types is an interesting one. They are both collections or sets of
objects. But, object types are implementation independent entities while classes are not.
We say that an object type defines a set of objects that have the same external behaviors. However, the
precise meaning of behaviors would become extremely complicated to define if the behaviors were not restricted
to a specific implementation. We may use, e.g., axiomatic, denotational, or logical way to specify the behaviors
of an object type, but all those methods are too complex to be practical at the moment. The easiest way to
precisely define the semantics of the properties of a type is, perhaps, to give a specific implementation of the
operations. In other words, it is easy and safe to define an object type by an object class. This is how the class-is-type principle, i.e., using object classes as object types, came to be adopted in many popular object-oriented languages.
The main advantage of using the class-is-type principle is that object types defined this way are rigorous,
unambiguous, precise, and easy to define, although not necessarily easy to comprehend. The main disadvantage
is that each object type is restricted to only one particular implementation and, thus, becomes implementation-
dependent. We will argue later that this is a major obstacle to object reuse and software integration.
In some object-oriented languages, e.g., Theta [18], object types are separate entities from classes. In Theta,
a new object type, or simply a new type, is defined by the names and the signatures of its methods (properties).
The semantics of the methods (properties) is not formally defined and is left for the programmer to interpret
from their names and possibly informal definitions in the comments. For example, a stack type and its operations
push, pop, top, etc. are well understood simply from their names. A graph type can be easily understood with a
few lines of explanation in the comments. This informal approach can be dangerous if used without care. However, this is perhaps the only practical way of defining an object type in a program without restricting it to a specific
implementation, and this way works well in the physical world.
Abstract classes are available in many major object-oriented languages. Abstract classes can play the role
of object types to a certain extent. However, abstract classes are not object types. The main differences between
an abstract class and an object type are the following:
(1) Abstract classes may or may not be implementation independent while object types are implementation
independent with no exception. There is no consistency in this aspect on abstract classes.
(2) For an object O and an abstract class C, we say that O ∈ C if there exists a concrete class C' that is a descendant of C such that O is an instance of C'. In other words, there has to exist a declaration link or family relation between O and C; that is, we have to declare that C_1 is a subclass of C, C_2 is a subclass of C_1, ..., C' is a subclass of C_n, and O is an instance of C'. In contrast, the fact that O is of type T can be simply implied by the fact that O has all the properties of T. The relationship can be direct. There does not necessarily exist a family relation here.
Interfaces in Java are one-step closer to object types than abstract classes are. Interfaces are implementation-independent
entities. Thus, the above (1) does not apply to them, but (2) still does.
2.4 Object kinds
All object types that share a certain set of common properties form an object kind and it is specified by those
properties. Hence, object kinds are at a higher-level than object types. The members of an object kind, if any,
are object types. Object kinds have been introduced and studied in [31, 32, 26, 25, 34]. The name kind has also
appeared in [16, 21, 8, 9, 10]; however, the connotations of this word are not the same. We will show later in
Section 5 that object kinds are useful in defining parameterized types and generic functions.
The relation between an object kind and its members, i.e., object types, cannot be represented as a supertype
relation. Consider two object types: word (or character string) and acyclic graph, where a word can be
implemented by an array or a linked list and an acyclic graph by an adjacency matrix or an adjacency list, etc.
Here, we name them type W and type G, respectively. These two different types have a common property: the
distance between two words, respectively two acyclic graphs, can be measured, i.e., each of the two object types
has a distance function. Naturally, we can represent the set of types that have a set of common properties by a
higher-level entity. In this case, we define an object kind K to be a set of all types that have a distance function.
Clearly, both W and G are types in K. It is important to notice that K is not a supertype of W and G, and K
cannot be replaced by a supertype of W and G in this case. Assume that we define a supertype T (instead of a
kind K) such that T has a distance function which measures the distance between each pair of objects of type
T . This would imply that there is a distance function between a word and an acyclic graph by the subtyping
principle. This is clearly not what we intend to define.
3 Object reuse
In this section, we consider the issues that concern object reuse.
We classify objects into the following two categories: internal objects and external objects. In a program,
we call the objects created within the program internal objects and those created elsewhere but used in the
program external objects. Since the reuse of internal objects is relatively straightforward and involved in fewer
issues than that of external objects, our discussion is focused on the reuse of external objects. The sources of
the external objects may include other projects and object vendors.
Object reuse is different from class reuse or source code reuse. By object reuse, we mean that we reuse
objects that are already created and in an executable form, especially those that are developed independently,
in a larger programming environment.
Object reuse may involve the following five interrelated issues:
(1) object creation: how an object is created,
(2) object autonomy: whether an object can survive autonomously,
(3) object application: how an object is used,
(4) object integration: how two or more objects are integrated into larger objects, and
(5) object modification: whether an individual object can be modified and what the side effects are.
An object has to be created before any application. Objects can be created in many ways. In class-based languages like C++ [28], Java [3], Eiffel [22], and Modula-3 [23], objects are instantiated from classes. In
object-based languages, e.g., Cecil [11], Self [30], and Omega [6], objects are created by cloning and extension of
prototypical objects without the existence of classes. Apparently, how an object is created does not necessarily
affect how an object is used internally or externally. We consider that object creation is not an essential issue
for object reuse.
In order for an object to be used not only internally but also externally in terms of the environment where
it is created, it is essential that the object should be able to survive autonomously. This means that the object
is executable without its class or other objects of its creation environment. Many issues may involve object
autonomy, which include embedding versus delegation, global variables and class variables, and visibility of
attributes. However, we feel that it is not difficult for any object-oriented languages to be modified to safely
export objects, which can live autonomously and be reused safely in other runtime environments. For example,
objects that are going to be exported can be specially marked as "exporting" objects, and the compiler will check
whether they satisfy certain conditions for autonomy, e.g., no global variables, and if they do, then generate the
autonomous objects using embedding instead of delegation, etc.
Object application by definition is clearly the most important issue in object reuse. What conditions are
necessary for an external object to be used in (or linked to) a programming environment? Let us again consider
the air-conditioner example. When we purchase an air-conditioner for a house, it is necessary for us to know
whether the air-conditioner satisfies a few specifications, e.g., cooling capacity, physical dimensions, and voltage.
However, we do not need or want to know the design and technical details of the air-conditioner. Also, any
air-conditioner that satisfies our requirements, whatever its internal implementation is, would do the job. The
internal implementation is not part of our requirement. In this case, what we need are the object and its object
type, and whether the object type conforms to the required object type; we do not need or want to know the object class. Thus, an object type system which is separate from the class system needs to be set up for object
reuse. In this type system, for example, an object type may be explicitly declared, or obtained by using an
operator which, given a class C, removes all the implementation details of C and returns an
object type. Object types may have their own inheritance hierarchies. Most importantly, the system should be
able to check, for a given object and a given object type, whether the object is of the given object type.
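Such an implementation-independent check is approximated today by structural constraints such as C++20 concepts. The following sketch is our illustration, not the paper's proposal; the REGISTRAR type with its single enroll property is hypothetical:

#include <concepts>
#include <iostream>

// A hypothetical REGISTRAR object *type*: only the required properties
// are listed; no implementation (class) is prescribed.
template <typename R>
concept RegistrarType = requires(R r, int studentId) {
    { r.enroll(studentId) } -> std::convertible_to<bool>;
};

// Any object whose class satisfies the properties is accepted; no
// declaration link to a particular REGISTRAR class is required.
bool registerStudent(RegistrarType auto& registrar, int id) {
    return registrar.enroll(id);
}

struct VendorRegistrar {                 // one independent implementation
    bool enroll(int) { return true; }
};

int main() {
    VendorRegistrar r;
    std::cout << registerStudent(r, 42) << "\n";
}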
When an external object is used in a programming environment, it may not just act as a server providing
one-way services, it may also call the methods of (or send messages to) some internal objects. In other words,
the object may interact with internal objects. Also consider the situation when we use two or more external
objects in our programming environment. Those objects may need to interact with each other. In a strongly typed object-oriented system (as we have assumed), each object involved in a multi-object interaction needs to
know the type information of the other objects. However, we may be at a situation that external objects are
developed independently by different vendors. Thus, in general, the developer of each object does not know how
the types of other objects are defined. There are many different ways to solve this problem. Example solutions
are (1) standardization of object types for object interfaces, (2) writing wrapper or adapter programs in order
for independently developed objects to interact, and (3) using the "is of " construct suggested in Section 2 to
match the equivalent types that have different appearances.
Solution (1) above may be a long-term solution. However, even in the long term, it is difficult to standardize
all the types for reusable objects. Other mechanisms have to be in place. The wrapper and adapter programs
are complicated, and the programmer has to know the type declarations of each object involved (and has to have their class declarations under the current class-is-type principle). It would be much more convenient to
use the "is of " construct together with the standardization.
It would also be convenient if objects could be modified after creation. In several object-based languages,
both data fields and methods of an object can be modified in a uniform way without any side-effect. Object
modification and object autonomy are two closely related issues. However, we consider object modification a convenience issue.
The notion of object reuse appears to be similar to that of component-based computing [29] in many ways.
Both a component and an object (in object reuse) are independently developed and in an executable form. In our terms, a component is just an object or a collection of closely related objects. However,
component-based computing mechanisms in general are not part of an object-oriented language; they impose
an external structure on the programs written in current programming languages rather than change them.
IDL can be considered a standard way to specify object types. In some sense, the component-based computing
mechanisms are meant to alleviate the problems of current object-oriented languages in object reuse from the
outside rather than from the inside.
4 Class-is-type is inadequate for object reuse
Classes play two major roles in class-based languages: (1) as an object description and creation mechanism,
and (2) as object types.
As an object creation mechanism, classes can serve the first two issues of object reuse well. With some minor
modifications, the current popular class-based languages can be used to develop autonomous objects. Objects
can be also created in many other ways in object-based [1] or prototype-based languages [27, 24, 13], e.g., by
direct declaration and by cloning and extension from prototypical objects. We consider that classes as an object creation mechanism are as adequate as any other mechanism, whether for creating a single object or a stock
of objects.
We now consider the class-is-type principle, i.e., classes playing the role of object types, in relation to object
reuse.
Let us assume that we are going to use a "registrar" object, from an external source, in our program. In
our program, we instantiate many student objects from the "STUDENT" class (type) in our program, each of
which has a method for registration. When a STUDENT s1 registers, it calls the registration method and uses
the external object registrar for registration:
s1.register( registrar );
This implies that we have the following definition of a method for registration in STUDENT class:
Boolean register( REGISTRAR );
where REGISTRAR is the class, as well as the object type, of registrar. The class REGISTRAR has to be
declared in our program. Since the class-is-type principle is used, we have the following problems:
• The class REGISTRAR (and possibly its descendants), which describes all the implementation details
of the object registrar, can be very complicated and large (this is why we reuse it instead of developing
it by ourselves). Since we have to include the class REGISTRAR (and its descendants) in our program,
it would be more economical just to instantiate a registrar object in our program rather than to use the
external object registrar. Then object reuse degenerates into class reuse (or source code reuse).
• If both registrar and student s1 are external objects and are created independently, then the register method of the s1 object may not be defined as above, since the student object does not have the class definition of a registrar when it is created. (Also, the registrar object does not know the class definition of s1
objects.) This seriously restricts the way the registrar and s1 objects are defined and makes integration
of those objects extremely complex.
• Neither the registrar object nor the s1 object can be modified to have additional features or more efficient
implementations once it is created.
From the above example, we can observe that there are the following three general problems of the class-is-
type principle for object reuse:
(1) Classes are implementation dependent entities. The class-is-type principle unnecessarily restricts an object
type to a specific implementation. It is not compatible with the general idea of reuse.
(2) Under the class-is-type principle, an object O can be used where object type (class) T is specified if O is
an instance of T or an instance of a descendant class of T . In other words, O has to have a declaration
link or a family relation with T . Since external objects are developed (programmed) independently, this
necessary link makes the reuse of external objects complicated and difficult.
(3) Classes are detailed descriptions of objects. When the information about the type of an object is required, the class-is-type principle makes the object type information too cumbersome and even sometimes redundant to the object itself.
The class-is-type principle makes the realization of object application, object integration, and object modification
difficult and even impossible in some cases.
5 Parameterized types and generic functions
A parameterized type is a function which maps a type (or several types) into another type. Let T denote the set of all types. Then a parameterized type P is a function P : T → T (or, in general, P : T^n → T). For example, a C++ Sorted Vector template declaration [28] is the following:
template<class T> class Sorted_Vector {
T* v;
int sz;  // size of the vector
int n;   // current number of elements
public:
explicit Sorted_Vector( int );
T& operator[]( int i );
bool insert( T a ) {                  // insert a, keeping v sorted
if (n == sz) return false;
int i = 0;
while (i < n && v[i] < a) i++;        // find the insertion point
for (int j = n; j > i; j--) v[j] = v[j-1];
v[i] = a; n++;
return true;
}
bool delete( int i );                 // delete ith element
};
where T in the above is the type parameter of the template. The template maps the int type to an integer
Sorted_Vector type, maps the char type to the character Sorted_Vector type, etc. It appears that, given any type, the template would map it to a Sorted_Vector of that type. However, this is not totally true. Notice that there is a comparison v[i] < a between two objects of type T in the function insert. This implies that T cannot be any type, but only a type that has a '<' operator. So, the domain of this type function (the template) is not T (the set of all types) but a subset of T. We call this subset a KIND K. Then the Sorted_Vector template is a type function K → T. (In general, a parameterized type is a function K_1 × ... × K_n → T, where K_1, ..., K_n are KINDs.) The KIND K may be defined as follows:
KIND K: for T in K {
bool operator< (T, T);
}
Then the template may be written as follows:
template<K T> class Sorted_Vector { ... };
where <K T> denotes that type T is bound to KIND K.
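C++ acquired a comparable mechanism only much later, with C++20 concepts; the following sketch renders the KIND K and the kind-bounded template in that style (our illustration, not the paper's proposal):

#include <concepts>
#include <iostream>
#include <vector>

// The KIND K of the text, approximated as a C++20 concept: the set of
// all types providing a boolean '<' comparison.
template <typename T>
concept K = requires(T a, T b) {
    { a < b } -> std::convertible_to<bool>;
};

// template<K T>: the type parameter is bound to K, so the constraint
// buried in insert() is stated up front.
template <K T>
class Sorted_Vector {
    std::vector<T> v;
public:
    bool insert(T a) {
        auto it = v.begin();
        while (it != v.end() && *it < a) ++it;   // uses the '<' of K
        v.insert(it, a);
        return true;
    }
    const T& operator[](int i) const { return v[(std::size_t)i]; }
};

int main() {
    Sorted_Vector<int> s;   // int is in K
    s.insert(3); s.insert(1);
    std::cout << s[0] << "\n";   // 1
    // Sorted_Vector<X> for an X without '<' is rejected at compile time.
}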
Similarly, a generic function is a function that maps from a type to a function and, thus, kinds can be used to
define the domain of such type functions, too. We call such type of polymorphism KIND-bounded polymorphism.
Note that we have discussed in Section 2 that KINDs cannot be replaced by supertypes. KIND and supertype
are clearly two different concepts.
Consider this situation: a type T has a comparison operator which is essentially the same as < but uses a different name, say less_than. A language construct like the following may be introduced to explicitly link the corresponding names:
Type T is-in KIND K {
less_than is operator<;
}
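In C++ terms, the effect of the "is in" construct can be approximated by a forwarding operator, as in this sketch (the type Version and its less_than method are hypothetical):

#include <concepts>
#include <iostream>

template <typename T>
concept K = requires(T a, T b) { { a < b } -> std::convertible_to<bool>; };

// A type whose comparison uses another name, as in the text.
struct Version {
    int n;
    bool less_than(const Version& o) const { return n < o.n; }
};

// Roughly what "Type Version is-in KIND K { less_than is operator<; }"
// would generate: a forwarding operator<.
bool operator<(const Version& a, const Version& b) { return a.less_than(b); }

static_assert(K<Version>);   // Version now belongs to KIND K

int main() { std::cout << (Version{1} < Version{2}) << "\n"; }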
By using KIND-bounded polymorphism, there are at least the following advantages:
(1) The restrictions, on type variables, that are implicitly imposed by the definition of a parameterized type
or a generic function are explicitly and clearly stated by the definition of a KIND. The user of the
parameterized type or generic function need not read the detailed definition of the parameterized type or
generic function to find out all the buried constraints.
(2) KINDs are at a higher level than types. A KIND does not affiliate to a specific parameterized type or
a generic function. KINDs are natural entities to represent constraints on types, which make generic
constructs conceptually more transparent.
Notions similar to kinds have been studied in [16, 21, 20, 8, 9, 10]. The where clause of parameterized
procedures and parameterized types in CLU [19] and Theta [18], as well as the with statement of generic
procedures in Ada [2] are all similar to the definition of KINDs. However, unlike kinds, the where clause and the
with statement are not independent entities. They are an inseparable part of a specific parameterized type etc.
In contrast, kinds are treated uniformly as other entities in our type hierarchy. Several parameterized types and
generic functions that have the same restrictions on their type parameters can be expressed by the same KIND.
Comparisons between KIND and opaque types in Modula-3, signatures in G++ [5], "kinds" in Quest [10], etc.
can be found in [31].
The concept of object kinds has shown to be useful in realizing algorithmic abstraction [31] in programming
languages. We conjecture that object kinds will also be useful in formal or semi-formal descriptions of design
patterns as well as in their implementation in object-oriented languages.
6 Conclusion
In the past half century, programming languages have been continuously evolving to higher levels. There exist
two opposite directions of the development: programming languages are getting more and more complicated
conceptually, but easier and easier technically. It takes a much longer time to learn a programming language
now than forty years ago, but programming for the same task takes much less time.
In this paper, we have made several suggestions to the development of object-oriented programming lan-
guages, which are summarized in the following:
1. Separation of object types from object classes.
2. Exporting autonomous objects with their type information.
3. Introducing a type-matching construct like the "is of " construct in Section 2, which matches the type of
a given object to a given object type.
4. Introducing object kinds, kind-bounded polymorphism, and the "is in" construct for fitting a object type
into an object kind.
An object type system that is separated from classes would be much more complicated to implement.
However, the system would make an object-oriented language much more flexible and feasible for object reuse
and software integration.
--R
A Theory of Objects
Ada 9X mapping/Revision Team
The Java Programming Language
Programming in Ada
"Type-Safe OOP with Prototypes: The Concept of Omega"
Algebraic Specification Techniques in Object Oriented Programming Environments
"A Modest Model of Records, Inheritance, and Bounded Quantification"
"The Semantics of Second-order Lambda Calculus"
Formal Description of Programming Concepts - IFIP State-of-art Report
"The Cecil Language Specification and Rationale"
"F-Bounded Polymorphism for Object-Oriented Programming"
The Interpretation of Object-Oriented Programming Languages
Data Types Are Values
Principles of OBJ2
Interprétation fonctionnelle et élimination des coupures de l'arithmétique d'ordre supérieur
The Algebraic Specification of Abstract Data Types
Theta Reference Manual - Preliminary Version
Abstraction and Specification in Program Development
A Semantic Model of Types For applicative Languages
An Investigation of a Programming Language with a Polymorphic Type Structure
Eiffel: The Language
Systems Programming with Modula-3
Introducing KINDS to C
"A shared view of sharing: The treaty of Orlando"
Component Software - Beyond Object-Oriented Programming
"Self: The Power of Simplicity"
"Algorithmic Abstraction in Object-Oriented Languages"
"Software Reuse via Algorithm Abstraction"
On Parametric Polymorphism in Object-Oriented Languages
Algorithm Abstraction via Polymorphism In Object-Oriented Languages
--TR
Data types are values
Abstraction and specification in program development
A shared view of sharing: the treaty of Orlando
The semantics of second-order lambda calculus
A modest model of records, inheritance, and bounded quantification
Inheritance is not subtyping
F-bounded polymorphism for object-oriented programming
Systems programming with Modula-3
Eiffel: the language
Component software
Principles of OBJ2
The C++ Programming Language
The Integration of Object-Oriented Programming Languages
Programming ADA
A Theory of Objects
A semantic model of types for applicative languages
An investigation of a programming language with a polymorphic type structure.
Algorithmic abstraction via polymorphism in object-oriented programming languages
--CTR
Chitra Babu , D. Janakiram, Method driven model: a unified model for an object composition language, ACM SIGPLAN Notices, v.39 n.8, August 2004 | kinds;object reuse;classes;kind-bounded polymorphism;parameterized types;generic functions;types;objects;class-is-type principle |
504535 | Minimal cover-automata for finite languages. | A cover-automaton A of a finite language L ⊆ Σ* is a finite deterministic automaton (DFA) that accepts all words in L and possibly other words that are longer than any word in L. A minimal deterministic finite cover automaton (DFCA) of a finite language L usually has a smaller size than a minimal DFA that accepts L. Thus, cover automata can be used to reduce the size of the representations of finite languages in practice. In this paper, we describe an efficient algorithm that, for a given DFA accepting a finite language, constructs a minimal deterministic finite cover-automaton of the language. We also give algorithms for the boolean operations on deterministic cover automata, i.e., on the finite languages they represent. | Introduction
Regular languages and finite automata are widely used in many areas such as
lexical analysis, string matching, circuit testing, image compression, and parallel
processing. However, many applications of regular languages use actually
only finite languages. The number of states of a finite automaton that accepts
a finite language is at least one more than the length of the longest word in
the language, and can even be in the order of exponential to that number.
If we do not restrict an automaton to accept the exact given finite language
but allow it to accept extra words that are longer than the longest word in
the language, we may obtain an automaton such that the number of states
is significantly reduced. (This research is supported by the Natural Sciences and Engineering Research Council of Canada grant OGP0041630.) In most applications, we know the maximum
length of the words in the language, and the systems usually keep track of the
length of an input word anyway. So, for a finite language, we can use such an
automaton plus an integer to check the membership of the language. This is
the basic idea behind cover automata for finite languages.
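The following C++ sketch shows this division of labor: the automaton checks the pattern while a plain integer does the length control. The transition table and the example language are our own illustration:

#include <iostream>
#include <string>
#include <vector>

// A complete DFA used as a cover-automaton: the length check is done
// with an integer instead of extra length-counting states.
struct DFCA {
    std::vector<std::vector<int>> delta;   // delta[state][letter - 'a']
    std::vector<bool> accepting;
    int l;                                 // length of the longest word in L

    // w is in L iff |w| <= l and the cover-automaton accepts w.
    bool member(const std::string& w) const {
        if ((int)w.size() > l) return false;   // the integer does the rest
        int q = 0;                             // initial state is 0
        for (char c : w) q = delta[q][c - 'a'];
        return accepting[q];
    }
};

int main() {
    // Two-state DFCA covering L = {a, aa} over {a}: it accepts all of a+,
    // but together with l = 2 it represents exactly L.
    DFCA A{{{1}, {1}}, {false, true}, 2};
    std::cout << A.member("aa") << A.member("aaa") << "\n";   // prints 10
}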
Informally, a cover-automaton A of a finite language L ⊆ Σ* is a finite automaton
that accepts all words in L and possibly other words that are longer
than any word in L. In many cases, a minimal deterministic cover automaton
of a finite language L has a much smaller size than a minimal DFA that accepts
L. Thus, cover automata can be used to reduce the size of automata for finite
languages in practice.
Intuitively, a finite automaton that accepts a finite language (exactly) can be
viewed as having structures for the following two functionalities:
(1) checking the patterns of the words in the language, and
(2) controlling the lengths of the words.
In a high-level programming language environment, the length-control function
is much easier to implement by counting with an integer than by using
the structures of an automaton. Furthermore, the system usually does the
length-counting anyway. Therefore, a DFA accepting a finite language may
leave out the structures for the length-control function and, thus, reduce its
complexity.
The concept of cover automata is not totally new. Similar concepts have
been studied in different contexts and for different purposes. See, for exam-
ple, [1,7,4,10]. Most of the previous work has been in the study of a descriptive
complexity measure of arbitrary languages, which is called "automaticity" by
Shallit et al. [10]. In our study, we consider cover automata as an implementing
method that may reduce the size of the automata that represent finite
languages.
In this paper, as our main result, we give an efficient algorithm that, for a
given finite language (given as a deterministic finite automaton or a cover
automaton), constructs a minimal cover automaton for the language. Note
that for a given finite language, there might be several minimal cover automata
that are not equivalent under a morphism. We will show that, however, they
all have the same number of states.
2 Preliminaries
Let T be a set. Then by #T we mean the cardinality of T. The elements of T* are called strings or words. The empty string is denoted by λ. If w ∈ T*, then |w| is the length of w.

We define T^l = {w ∈ T* : |w| = l} and T^{≤l} = T^0 ∪ T^1 ∪ ... ∪ T^l. If Σ = {a_1, ..., a_k} is an ordered set, k > 0, the quasi-lexicographical order on Σ*, denoted ≺, is defined by: x ≺ y iff |x| < |y|, or |x| = |y| and x = z a_i u, y = z a_j v for some z, u, v ∈ Σ* and i < j.

We say that x is a prefix of y, denoted x ≤_p y, if y = xz for some z ∈ T*.
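As a small sketch, the quasi-lexicographical order can be expressed as a comparator (assuming the alphabet's order coincides with the characters' order):

#include <string>

// Quasi-lexicographical order: shorter words come first; equally long
// words are compared letter by letter.
bool quasiLexLess(const std::string& x, const std::string& y) {
    if (x.size() != y.size()) return x.size() < y.size();
    return x < y;
}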
A deterministic finite automaton (DFA) is a quintuple A = (Q, Σ, δ, q_0, F), where Σ and Q are finite nonempty sets, q_0 ∈ Q, F ⊆ Q, and δ : Q × Σ → Q is the transition function. We can extend δ from Q × Σ to Q × Σ* by δ(q, λ) = q and δ(q, wa) = δ(δ(q, w), a); we usually denote the extension by δ as well.

The language recognized by the automaton A is L(A) = {w ∈ Σ* : δ(q_0, w) ∈ F}. For simplicity, we assume that the initial state is q_0 = 0. In what follows we assume that δ is a total function, i.e., the automaton is complete.
Let l be the length of the longest word(s) in the finite language L. A DFA A such that L(A) ∩ Σ^{≤l} = L is called a deterministic finite cover-automaton (DFCA) of L. Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. We say that A is a minimal DFCA of L if for every DFCA B = (Q', Σ, δ', 0, F') of L we have #Q ≤ #Q'.
a) q ∈ Q is said to be accessible if there exists w ∈ Σ* such that δ(0, w) = q;
b) q is said to be useful (coaccessible) if there exists w ∈ Σ* such that δ(q, w) ∈ F.
It is clear that for every DFA A there exists an automaton A' such that L(A) = L(A'), all the states of A' are accessible, and at most one of the states is not useful (the sink state). The DFA A' is called a reduced DFA.
3 Similarity sequences and similarity sets
In this section, we describe the L-similarity relation on Σ*, which is a generalization of the equivalence relation ≡_L (x ≡_L y iff, for all z ∈ Σ*, xz ∈ L iff yz ∈ L). The notion of L-similarity was introduced in [7] and studied in [4]
etc. In this paper, L-similarity is used to establish our algorithms.
Let Σ be an alphabet, L ⊆ Σ* a finite language, and l the length of the longest word(s) in L. Let x, y ∈ Σ*. We define the following relations:
(1) x ∼_L y if for all z ∈ Σ* such that |xz| ≤ l and |yz| ≤ l, xz ∈ L iff yz ∈ L;
(2) x ≁_L y if x ∼_L y does not hold.
The relation ∼_L is called the similarity relation with respect to L.
Note that the relation ∼_L is reflexive and symmetric, but not transitive. For example, let L = {aab, aabb}, so l = 4. It is clear that aab ∼_L aabb (since the only z with |aabz| ≤ 4 and |aabbz| ≤ 4 is z = λ, and both aab ∈ L and aabb ∈ L), but aab ≁_L baa (since for z = λ we have aab ∈ L and baa ∉ L).
The following lemma is proved in [4]:
Lemma 1 Let L ⊆ Σ* be a finite language and x, y, z ∈ Σ* with |x| ≤ |y| ≤ |z|. The following statements hold:
(1) If x ∼_L y and x ∼_L z, then y ∼_L z.
(2) If x ∼_L y and y ∼_L z, then x ∼_L z.
(3) If x ∼_L y and y ≁_L z, then x ≁_L z.
If x ≁_L y and y ∼_L z, we cannot say anything about the similarity relation between x and z.
Example 2 Let x, y, z ∈ Σ* with |x| ≤ |y| ≤ |z|. We may have x ≁_L y, y ∼_L z and x ∼_L z, or x ≁_L y, y ∼_L z and x ≁_L z. Indeed, both cases occur for suitable choices of L, x, y and z.
Definition 3 Let L ⊆ Σ* be a finite language.
(1) A set S ⊆ Σ* is called an L-similarity set if x ∼_L y for every pair x, y ∈ S.
(2) A sequence of words [x₁, ..., x_n] over Σ is called a dissimilar sequence of L if x_i ≁_L x_j for each pair i, j, 1 ≤ i < j ≤ n.
(3) A dissimilar sequence [x₁, ..., x_n] is called a canonical dissimilar sequence of L if there exists a partition π = {X₁, ..., X_n} of Σ* such that for each i, 1 ≤ i ≤ n, x_i ∈ X_i and X_i is an L-similarity set.
(4) A dissimilar sequence [x₁, ..., x_n] of L is called a maximal dissimilar sequence of L if for any dissimilar sequence [y₁, ..., y_m] of L, m ≤ n.
Theorem 4 A dissimilar sequence of L is a canonical dissimilar sequence of
L if and only if it is a maximal dissimilar sequence of L.
PROOF. Let L be a finite language. Let [x₁, ..., x_n] be a canonical dissimilar sequence of L and π = {X₁, ..., X_n} the corresponding partition of Σ* such that for each i, x_i ∈ X_i and X_i is an L-similarity set. Let [y₁, ..., y_m] be an arbitrary dissimilar sequence of L. Assume that m > n. Then there are y_i and y_j, i ≠ j, in the same set X_t of the partition. Since X_t is an L-similarity set, y_i ∼_L y_j. This is a contradiction. Hence the assumption that m > n is false, and we conclude that [x₁, ..., x_n] is a maximal dissimilar sequence.
Conversely, let [x₁, ..., x_n] be a maximal dissimilar sequence of L. Without loss of generality we can suppose that |x₁| ≤ ··· ≤ |x_n|. For each i, let X_i = {y ∈ Σ* | y ∼_L x_i and y ≁_L x_j for all j < i}. Note that for each y ∈ Σ*, y ∼_L x_i for at least one i, since [x₁, ..., x_n] is a maximal dissimilar sequence. Thus, {X₁, ..., X_n} is a partition of Σ*. The remaining task of the proof is to show that each X_i, 1 ≤ i ≤ n, is an L-similarity set.
We assume the contrary, i.e., for some i, 1 ≤ i ≤ n, there exist y, z ∈ X_i such that y ≁_L z. We know that x_i ∼_L y and x_i ∼_L z by the definition of X_i. We have the following three cases: (1) |x_i| ≤ |y|, |z|; (2) |y| ≤ |x_i| ≤ |z| (or |z| ≤ |x_i| ≤ |y|); (3) |y|, |z| ≤ |x_i|. If (1) or (2), then y ∼_L z by Lemma 1. This would contradict our assumption. If (3), then it is easy to prove that y ≁_L x_j and z ≁_L x_j for all j ≠ i, using Lemma 1 and the definition of X_i. Then we can replace x_i by both y and z to obtain a longer dissimilar sequence. This contradicts the fact that [x₁, ..., x_n] is a maximal dissimilar sequence of L. Hence, y ∼_L z and X_i is a similarity set.
Corollary 5 For each finite language L, there is a unique number N(L) which
is the number of elements in any canonical dissimilar sequence of L.
Theorem 6 Let S₁ and S₂ be two L-similarity sets and x₁ and x₂ the shortest words in S₁ and S₂, respectively. If x₁ ∼_L x₂, then S₁ ∪ S₂ is an L-similarity set.
PROOF. It suffices to prove that y₁ ∼_L y₂ holds for an arbitrary word y₁ ∈ S₁ and an arbitrary word y₂ ∈ S₂. Without loss of generality, we assume that |x₁| ≤ |x₂|. We know that |x₁| ≤ |y₁| and |x₂| ≤ |y₂|. Since x₁ ∼_L x₂, x₁ ∼_L y₁, and x₂ ∼_L y₂, we have y₁ ∼_L y₂ (Lemma 1 (1)).
4 Similarity relations on states
Let A = (Q, Σ, δ, 0, F) be a DFA. Since δ is a function, it is clear that if δ(0, x) = δ(0, y), then xz ∈ L(A) iff yz ∈ L(A) for all z ∈ Σ*, i.e., x ≡_{L(A)} y.
Therefore, we can also define similarity as well as equivalence relations on states.
Definition 7 Let A = (Q, Σ, δ, 0, F) be a DFA. We define, for each state q ∈ Q, level(q) = min{|w| | δ(0, w) = q}, i.e., level(q) is the length of the shortest path from the initial state to q.
For each q ∈ Q, we denote x_A(q) = min{w ∈ Σ* | δ(0, w) = q}, where the minimum is taken according to the quasi-lexicographical order, and L_A(q) = {w ∈ Σ* | δ(q, w) ∈ F}. When the automaton A is understood, we write x_q instead of x_A(q) and L_q instead of L_A(q). The length of x_q is equal to level(q); therefore level(q) is defined for each accessible q ∈ Q.
Definition 8 Let A = (Q, Σ, δ, 0, F) be a DFA. We say that p ≡_A q (state p is equivalent to q in A) if for every w ∈ Σ*, δ(p, w) ∈ F iff δ(q, w) ∈ F.
Definition 9 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. Let m = max{level(p), level(q)}. We say that p ∼_A q (state p is L-similar to q in A) if for every w ∈ Σ^{≤l−m}, δ(p, w) ∈ F iff δ(q, w) ∈ F.
Lemma 10 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. Let p, q ∈ Q and x, y ∈ Σ* be such that δ(0, x) = p and δ(0, y) = q. If p ∼_A q, then x ∼_L y.
PROOF. Let i = level(p), j = level(q), and m = max{i, j}. Choose an arbitrary w ∈ Σ* such that |xw| ≤ l and |yw| ≤ l. Because i ≤ |x| and j ≤ |y|, it follows that |w| ≤ l − m. Since p ∼_A q we have that δ(p, w) ∈ F iff δ(q, w) ∈ F, which means that xw ∈ L(A) iff yw ∈ L(A), and hence, A being a DFCA of L, that xw ∈ L iff yw ∈ L. Hence x ∼_L y.
Lemma 11 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. Let p, q ∈ Q and x, y ∈ Σ* be such that δ(0, x) = p, δ(0, y) = q, |x| = level(p), and |y| = level(q), and put m = max{level(p), level(q)}. If x ∼_L y, then p ∼_A q.
PROOF. Let x ∼_L y and w ∈ Σ^{≤l−m}. If δ(p, w) ∈ F, then δ(0, xw) ∈ F. Because x ∼_L y, it follows that δ(0, yw) ∈ F, so δ(q, w) ∈ F. Using the symmetry we get that p ∼_A q.
Corollary 12 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. Let x₁, x₂, y₁, y₂ ∈ Σ* be such that δ(0, x₁) = δ(0, x₂), δ(0, y₁) = δ(0, y₂), |x₁| = level(δ(0, x₁)), and |y₁| = level(δ(0, y₁)). If x₁ ∼_L y₁, then x₂ ∼_L y₂.
Example 13 If x₁ and y₁ are not minimal, i.e. |x₁| > level(δ(0, x₁)) or |y₁| > level(δ(0, y₁)), then the conclusion of Corollary 12 is not true.
Take l = 3. The automaton of Fig. 1 is a DFCA of L, and we have that b ∼_L bab, but b ≁_L a (since ba ∈ L, aa ∉ L, and |ba|, |aa| ≤ l).

Fig. 1. If x ∼_L y, then we do not always have that δ(0, x) ∼_A δ(0, y). (Transition diagram omitted.)
Corollary 14 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L and p, q ∈ Q. If p ∼_A q, level(p) ≤ level(q), and q ∈ F, then p ∈ F.
Lemma 15 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. Let s, p, q ∈ Q with level(s) ≤ level(p) ≤ level(q). The following statements are true:
(1) If s ∼_A p and s ∼_A q, then p ∼_A q.
(2) If s ∼_A p and p ∼_A q, then s ∼_A q.
(3) If s ∼_A p and p ≁_A q, then s ≁_A q.
PROOF. We apply Lemma 1 and Corollary 14.
Lemma 16 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. If p ∼_A q and m = max{level(p), level(q)}, then L_p ∩ Σ^{≤l−m} = L_q ∩ Σ^{≤l−m}, and {x_p, x_q} is an L-similarity set.
PROOF. Let w ∈ L_p ∩ Σ^{≤l−m}. Therefore δ(p, w) ∈ F and |w| ≤ l − m. Hence, because p ∼_A q, δ(q, w) ∈ F, so w ∈ L_q ∩ Σ^{≤l−m}. The converse inclusion holds by symmetry, and x_p ∼_L x_q follows from Lemma 10.
Lemma 17 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. If p ∼_A q for some p, q ∈ Q, p ≠ q, and level(p) ≤ level(q), then we can construct a DFCA A′ = (Q′, Σ, δ′, 0, F′) of L such that Q′ = Q − {q}, F′ = F − {q}, and
δ′(s, a) = δ(s, a) if δ(s, a) ≠ q, and δ′(s, a) = p if δ(s, a) = q,
for each s ∈ Q′ and a ∈ Σ. Thus, A is not a minimal DFCA of L.
PROOF. It suffices to prove that A′ is a DFCA of L. Let l be the length of the longest word(s) in L and assume that level(p) ≤ level(q).
Consider a word w ∈ Σ^{≤l}. We now prove that w ∈ L iff δ′(0, w) ∈ F′.
If there is no prefix w₁ of w such that δ(0, w₁) = q, then δ′(0, w) = δ(0, w) and we are done. Otherwise, write w = w₁w₂, where w₁ is the shortest prefix of w such that δ(0, w₁) = q; then δ′(0, w₁) = p. In the remaining, it suffices to prove that δ′(p, w₂) ∈ F′ iff δ(q, w₂) ∈ F.
We prove this by induction on the length of w₂. First consider the case |w₂| = 0. Since p ∼_A q, p ∈ F iff q ∈ F by Corollary 14, and p ∈ F′ iff p ∈ F by the construction of A′; thus the claim holds. Suppose that the statement holds for |w₂| < n.
Consider the case that |w₂| = n. If there does not exist a nonempty prefix u of w₂ such that δ(p, u) = q, then δ′(p, w₂) = δ(p, w₂), and δ(p, w₂) ∈ F iff δ(q, w₂) ∈ F because p ∼_A q and |w₂| ≤ l − level(q). Otherwise, let u be the shortest nonempty prefix of w₂ such that δ(p, u) = q; then δ′(p, u) = p. By induction hypothesis, the claim follows for the remaining suffix.
Lemma 18 Let A be a DFCA of L and L′ = L(A). If x ≡_{L′} y, then x ∼_L y.
PROOF. Let l be the length of the longest word(s) in L. Let x ≡_{L′} y. So, for each z ∈ Σ*, xz ∈ L′ iff yz ∈ L′. We now consider all words z ∈ Σ* such that |xz| ≤ l and |yz| ≤ l. Since L = L′ ∩ Σ^{≤l}, we have xz ∈ L iff yz ∈ L. Therefore, x ∼_L y by the definition of ∼_L.
Corollary 19 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L, and L′ = L(A). Then p ≡_A q implies p ∼_A q.
Corollary 20 A minimal DFCA of L is a minimal DFA.
PROOF. Let A = (Q, Σ, δ, 0, F) be a minimal DFCA of a finite language L. Suppose that A is not minimal as a DFA for L(A). Then there exist p ≠ q such that p ≡_A q, so p ∼_A q by Corollary 19. By Lemma 17 it follows that A is not a minimal DFCA, a contradiction.
Remark 21 Let A be a DFCA of L such that A is a minimal DFA. Then A may not be a minimal DFCA of L.
Example 22 We take the two DFA's of Fig. 2.

Fig. 2. A minimal DFA is not always a minimal DFCA. (Transition diagrams of the automata A and B omitted.)

The DFA A in Figure 2 is a minimal DFA and a DFCA of L = {a, aa}, but not a minimal DFCA of L, since the DFA B in Figure 2 is a minimal DFCA of L.
Theorem 23 Any minimal DFCA of L has exactly N(L) states.
PROOF. Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L, and #Q = n.
Suppose that n > N(L). Then there exist p ≠ q such that x_p ∼_L x_q (because of the definition of N(L)). Then p ∼_A q by Lemma 11. Thus, A is not minimal, by Lemma 17. A contradiction.
Suppose that N(L) > n. Let [y₁, ..., y_{N(L)}] be a canonical dissimilar sequence of L. Then there exist i ≠ j such that δ(0, y_i) = δ(0, y_j), and therefore y_i ∼_L y_j by Lemma 10. Again a contradiction.
Therefore, we have n = N(L).
5 The construction of minimal DFCA
The first part of this section describes an algorithm that determines the similarity relation between states. The second part shows how to construct a minimal DFCA once the similarity relation between states is known.
An ordered DFA is a DFA where δ(i, a) = j implies that i ≤ j, for all states i, j and all letters a. Obviously, for such a DFA, #Q − 1 is the sink state.
5.1 Determining similarity relation between states
The aim is to present an algorithm which determines the similarity relations
between states.
Definition Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L. Define D_i(A) = {s ∈ Q | |γ_s(A)| = i}, where γ_s(A) = min{w ∈ Σ* | δ(s, w) ∈ F} and the minimum is taken according to the quasi-lexicographical order; set D_{−1}(A) = {s ∈ Q | L_s = ∅}. If the automaton A is understood then we write D_i and γ_s instead of D_i(A) and γ_s(A), respectively.
Lemma 24 Let A = (Q, Σ, δ, 0, F) be a DFCA of a finite language L, and p ∈ D_i(A), q ∈ D_j(A), i ≠ j. Then p ≁_A q.
PROOF. We can assume that i < j. Then obviously δ(p, γ_p) ∈ F, while δ(q, γ_p) ∉ F, since |γ_p| = i < j = |γ_q| and γ_q is a minimal word accepted from q. So, since |γ_p| ≤ l − max{level(p), level(q)}, the word γ_p witnesses p ≁_A q.
Lemma 25 Let A = (Q, Σ, δ, 0, F) be a DFCA accepting L, and p, q ∈ Q such that p ∈ F iff q ∈ F. If for all a ∈ Σ, δ(p, a) ∼_A δ(q, a), then p ∼_A q.
PROOF. Let a ∈ Σ, δ(p, a) = r, and δ(q, a) = s. If r ∼_A s, then x_A(r)w ∈ L iff x_A(s)w ∈ L for all w ∈ Σ*, |w| ≤ l − max{|x_A(r)|, |x_A(s)|}; we also have x_A(p)aw ∈ L iff x_A(r)w ∈ L, and x_A(q)aw ∈ L iff x_A(s)w ∈ L, for all such w. Hence x_A(p)aw ∈ L iff x_A(q)aw ∈ L for all w ∈ Σ*, |aw| ≤ l − max{|x_A(p)|, |x_A(q)|}, because |x_A(r)| ≤ |x_A(p)| + 1 and |x_A(s)| ≤ |x_A(q)| + 1.
Since a ∈ Σ is chosen arbitrarily, and p ∈ F iff q ∈ F, we conclude that x_A(p)w ∈ L iff x_A(q)w ∈ L for all w ∈ Σ*, |w| ≤ l − max{|x_A(p)|, |x_A(q)|}, i.e., x_A(p) ∼_L x_A(q). Therefore, by using Lemma 11, we get that p ∼_A q.
Lemma 26 Let A = (Q, Σ, δ, 0, F) be a DFCA accepting L such that level(δ(s, a)) = level(s) + 1 for every s ∈ Q and a ∈ Σ. If there exists a ∈ Σ such that δ(p, a) ≁_A δ(q, a), then p ≁_A q.
PROOF. Suppose that p ∼_A q, and let m = max{level(p), level(q)}. Then for all aw ∈ Σ^{≤l−m}, δ(p, aw) ∈ F iff δ(q, aw) ∈ F, i.e., δ(δ(p, a), w) ∈ F iff δ(δ(q, a), w) ∈ F for all w ∈ Σ^{≤l−m−1}, which means by definition that δ(p, a) ∼_A δ(q, a). This is a contradiction.
Our algorithm for determining the similarity relation between the states of a DFA (DFCA) of a finite language is based on Lemmas 25 and 26. However, most DFA (DFCA) do not satisfy the condition of Lemma 26. So, we shall first transform the given DFA (DFCA) into one that does.
Let A = (Q_A, Σ, δ_A, 0, F_A) be a DFCA of L. We construct the minimal DFA B = (Q_B, Σ, δ_B, 0, F_B) for the language Σ^{≤l}, i.e., Q_B = {0, ..., l + 1}, δ_B(i, a) = min{i + 1, l + 1} for all a ∈ Σ, and F_B = {0, ..., l}. The DFA B will have exactly l + 2 states.
Now we use the standard Cartesian product construction (see, e.g., [3], for details) for the DFA C = A × B (taking the automata in this order) and we eliminate all inaccessible states. Obviously, C satisfies the condition of Lemma 26.
Lemma 27 For the DFA C constructed above, L(C) = L.
PROOF. We have L(C) = L(A) ∩ L(B) = L(A) ∩ Σ^{≤l} = L.
Lemma 28 For the DFA C constructed above we have (p, q) ∼_C (p, r) for every p ∈ Q_A and q, r ∈ Q_B.
PROOF. If p ∈ D_{−1}(A), the lemma is obvious. Suppose now that p ∉ D_{−1}(A) and q ≤ r. If r = l + 1, the similarity condition is vacuous. Otherwise r ≤ l, and for every w ∈ Σ^{≤l−r} we have δ_C((p, q), w) ∈ F_C iff δ_A(p, w) ∈ F_A iff δ_C((p, r), w) ∈ F_C. It follows that (p, q) ∼_C (p, r).
Lemma 29 For the DFA C constructed above we have that (#Q_A − 1, q) ∼_C (p, l + 1) for every p ∈ Q_A and q ∈ Q_B.
PROOF. We have that δ_C((#Q_A − 1, q), w) ∉ F_C and δ_C((p, l + 1), w) ∉ F_C for every w ∈ Σ*, from which the conclusion follows.
Now we are able to present an algorithm which determines the similarity relation between the states of C. Note that Q_C is ordered by (p_A, p_B) < (q_A, q_B) iff p_A < q_A, or p_A = q_A and p_B < q_B. Attached to each state of C is a list of similar states. For α, β ∈ Q_C, if α ∼_C β and α < β, then β is stored on the list of similar states for α.
We assume that A is reduced (so #Q_A − 1 is the sink state of A).
(1) Compute D_i(C) for each i.
(2) Initialize the similarity relation by specifying:
(a) (n − 1, q) ∼_C (n − 1, r) for all q, r ∈ Q_B, where n = #Q_A;
(b) (n − 1, q) ∼_C (p, l + 1) for all p ∈ Q_A and q ∈ Q_B.
(3) For each D_i(C), create a list List_i, which is initialized to ∅.
(4) For each α ∈ Q_C, following the reversed order of Q_C, do the following, assuming α ∈ D_i(C):
(a) For each β ∈ List_i, if δ_C(α, a) ∼_C δ_C(β, a) for all a ∈ Σ, then α ∼_C β.
(b) Put α on the list List_i.
By Lemma 24 we need to determine only the similarity relations between states of the same D_i(C) set. Step 2(a) follows from Lemma 28, Step 2(b) from Lemma 29, and Step 4 from Lemmas 15, 25 and 26.
Remark 30 The above algorithm has complexity O((n · l)²), where n is the number of states of the initial DFA (DFCA) and l is the length of the longest word(s) in the finite language L.
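For illustration, here is a direct (but inefficient) way to decide p ∼_A q from the definition; the O((n · l)²) algorithm above refines this by comparing states level by level instead of comparing whole bounded languages. This is our sketch, and levels is a hypothetical dict from states to their levels.

from functools import lru_cache

def make_similarity(dfa, levels, l):
    @lru_cache(maxsize=None)
    def lang_upto(state, budget):
        # frozen set of words of length <= budget accepted from `state`
        if budget < 0:
            return frozenset()
        words = {''} if state in dfa.finals else set()
        for a in sorted(dfa.alphabet):
            for w in lang_upto(dfa.delta[(state, a)], budget - 1):
                words.add(a + w)
        return frozenset(words)
    def similar(p, q):
        m = max(levels[p], levels[q])
        return lang_upto(p, l - m) == lang_upto(q, l - m)
    return similar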
5.2 The construction of a minimal DFCA
As input we have the above DFA C and, attached to each α ∈ Q_C, a set S_α = {β ∈ Q_C | α ∼_C β, α ≤ β} ∪ {α}. The output is D = (Q_D, Σ, δ_D, 0, F_D), a minimal DFCA for L.
We define the following: set T := Q_C and i := 0, and while T ≠ ∅ do the following: let α_i := min T, T_i := S_{α_i} ∩ T, x_i := x_C(α_i), T := T − T_i, and i := i + 1. Let m be the number of sets T_i so obtained; the states of D are Q_D = {0, ..., m − 1}, with δ_D(i, a) = j iff δ_C(α_i, a) ∈ T_j, and F_D = {i | α_i ∈ F_C}.
Note that the constructions of x_i above are useful only for the proofs in what follows; the min (minimum) operator for x_i is taken according to the quasi-lexicographical order.
According to the algorithm we have a total ordering of the states Q_C: (p, q) < (r, s) iff p < r, or p = r and q < s. Also, using the construction (i.e. the total order on Q_C), it follows that δ_D(0, x_i) = i for each 0 ≤ i ≤ m − 1.
Lemma 31 The sequence [x₀, ..., x_{m−1}] constructed above is a canonical L-dissimilar sequence.
PROOF. We construct the sets X_i = {w ∈ Σ* | δ_C(0, w) ∈ T_i}. Obviously x_i ∈ X_i, and it follows that X_i is an L-similarity set for all 0 ≤ i ≤ m − 1, by Lemma 10 and Theorem 6. Let w ∈ Σ*. Because (T_i)_{0≤i≤m−1} is a partition of Q_C, w belongs to exactly one X_i. Hence (X_i)_{0≤i≤m−1} is a partition of Σ*, and therefore [x₀, ..., x_{m−1}] is a canonical L-dissimilar sequence.
Corollary 32 The automaton D constructed above is a minimal DFCA for L.
PROOF. Since the number of states of D is equal to the number of elements of a canonical L-dissimilar sequence, we only have to prove that D is a cover automaton for L. Let w ∈ Σ^{≤l}. We have that δ_D(0, w) ∈ F_D iff δ_C((0, 0), w) ∈ F_C, i.e., iff w ∈ L (because C is a DFCA for L).
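A sketch of the greedy merge that this construction performs (our rendering, with hypothetical field names); its well-definedness rests on Lemmas 15 and 31.

def minimal_dfca(dfa_c, similar, order):
    # order: the states of C listed in increasing total order
    classes, owner = [], {}
    for p in order:
        if p in owner:
            continue
        idx = len(classes)
        block = [q for q in order if q not in owner and similar(p, q)]
        for q in block:
            owner[q] = idx
        classes.append(block)
    reps = [c[0] for c in classes]
    delta = {(owner[p], a): owner[dfa_c.delta[(p, a)]]
             for p in reps for a in dfa_c.alphabet}
    finals = {owner[p] for p in reps if p in dfa_c.finals}
    return delta, finals              # the initial state is class 0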
6 Boolean operations
We shall use constructions similar to those in [3] for constructing DFCA of languages which are the result of boolean operations on finite languages. The modifications are suggested by the previous algorithm. We first construct a DFCA which satisfies the hypothesis of Lemma 26, and afterwards we can minimize it using the general algorithm. Since the minimization follows in a natural way, we shall present only the construction of the necessary DFCA.
Let A_i = (Q_i, Σ, δ_i, 0, F_i) be two DFCA of the finite languages L_i, and let l_i be the length of the longest word(s) in L_i, for i = 1, 2. Set l = max{l₁, l₂}.
6.1 Intersection
We construct the following DFA: A = (Q, Σ, δ, 0, F), where Q = Q₁ × Q₂ × {0, ..., l + 1}, δ((s, p, q), a) = (δ₁(s, a), δ₂(p, a), min{q + 1, l + 1}), the initial state is (0, 0, 0), and F = {(s, p, q) ∈ Q | s ∈ F₁, p ∈ F₂, q ≤ min{l₁, l₂}}.
Theorem 33 The automaton A constructed above is a DFA for L₁ ∩ L₂.
PROOF. We have the following relations: w ∈ L(A) iff δ₁(0, w) ∈ F₁, δ₂(0, w) ∈ F₂, and |w| ≤ min{l₁, l₂}, i.e., iff w ∈ L₁ ∩ L₂. The rest of the proof is obvious.
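All four boolean constructions share the same product-with-counter state space and differ only in the choice of final states; here is our sketch for intersection, with hypothetical field names as before.

def intersection_dfca(a1, a2, l1, l2, alphabet):
    l = max(l1, l2)
    states = [(s, p, q) for s in range(a1.n)
                        for p in range(a2.n) for q in range(l + 2)]
    delta = {((s, p, q), a): (a1.delta[(s, a)], a2.delta[(p, a)],
                              min(q + 1, l + 1))
             for (s, p, q) in states for a in alphabet}
    finals = {(s, p, q) for (s, p, q) in states
              if s in a1.finals and p in a2.finals and q <= min(l1, l2)}
    return delta, finals              # the initial state is (0, 0, 0)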
6.2 Union
We construct the following DFA: A = (Q, Σ, δ, 0, F), where Q and δ are as in the previous construction, and F = {(s, p, q) ∈ Q | (s ∈ F₁ and q ≤ l₁) or (p ∈ F₂ and q ≤ l₂)} (where r is such that l_r = l).
Theorem 34 The automaton A constructed above is a DFA for L₁ ∪ L₂.
PROOF. We have the following relations: w ∈ L(A) iff (w ∈ L(A₁) and |w| ≤ l₁) or (w ∈ L(A₂) and |w| ≤ l₂), i.e., iff w ∈ L₁ ∪ L₂. The rest of the proof is obvious.
6.3 Symmetric difference
We construct the following DFA: A = (Q, Σ, δ, 0, F), where Q and δ are as above, and F = {(s, p, q) ∈ Q | either s ∈ F₁ and q ≤ l₁, or p ∈ F₂ and q ≤ l₂, but not both} (where r is such that l_r = l).
Theorem 35 The automaton A constructed above is a DFA for L₁ Δ L₂ = (L₁ − L₂) ∪ (L₂ − L₁).
PROOF. We have the following relations: w ∈ L(A) iff w ∈ L₁ or exclusive w ∈ L₂, i.e., iff w ∈ L₁ Δ L₂. The rest of the proof is obvious.
6.4 Difference
We construct the following DFA: A = (Q, Σ, δ, 0, F), where Q and δ are as above, and F = {(s, p, q) ∈ Q | s ∈ F₁, q ≤ l₁, and not (p ∈ F₂ and q ≤ l₂)}.
Theorem 36 The automaton A constructed above is a DFA for L₁ − L₂.
PROOF. We have the following relations: w ∈ L(A) iff w ∈ L₁ and w ∉ L₂. The rest of the proof is obvious.
7 Open Problems
1) Find a better algorithm for minimization, or prove a matching lower bound on the complexity of any minimization algorithm.
2) Find a better algorithm for determining similar states in an arbitrary DFCA of L.
3) Find better algorithms for boolean operations on DFCA.
--R
Uniform characterisations of non-uniform complexity measures
"zone"
Regular languages and programming languages
time complexity gap for two-way probabilistic finite-state automata
Two memory bounds for the recognition of primes by automata
Introduction to Automata Theory
Minimal Nontrivial Space Complexity of Probabilistic One-Way Turing Machines
Running time to recognise non-regular languages by 2- way probabilistic automata
A class of measures on formal languages
Properties of a Measure of Descriptional Complexity
Theory of Automata
The state complexities of some basic operations on regular languages
Finite Automata: Behaviour and Synthesis
On the State Complexity of Intersection of Regular Languages
Handbook of Formal Languages
--TR
Uniform characterizations of non-uniform complexity measures
Minimal nontrivial space complexity of probabilistic one-way turing machines
time complexity gap for two-way probabilistic finite-state automata
Running time to recognize nonregular languages by 2-way probabilistic automata
On the state complexity of intersection of regular languages
The state complexities of some basic operations on regular languages
Automaticity I
Regular languages
Introduction To Automata Theory, Languages, And Computation
Theory of Automata
--CTR
Martin Kappes , Frank Niener, Succinct representations of languages by DFA with different levels of reliability, Theoretical Computer Science, v.330 n.2, p.299-310, 2 February 2005
Martin Kappes , Chandra M. R. Kintala, Tradeoffs between reliability and conciseness of deterministic finite automata, Journal of Automata, Languages and Combinatorics, v.9 n.2-3, p.281-292, September 2004 | finite languages;deterministic cover automata;deterministic finite automata;cover language;finite automata |
504537 | Normal form algorithms for extended context-free grammars. | We investigate the complexity of a variety of normal-form transformations for extended context-free grammars, where by extended we mean that the set of right-hand sides for each nonterminal in such a grammar is a regular set. The study is motivated by the implementation project GraMa which will provide a C++ toolkit for the symbolic manipulation of context-free objects just as Grail does for regular objects. Our results generalize known complexity bounds for context-free grammars but do so in nontrivial ways. Specifically, we introduce a new representation scheme for extended context-free grammars (the symbol-threaded expression forest), a new normal form for these grammars (dot normal form) and new regular expression algorithms. Copyright 2001 Elsevier Science B.V. | Introduction
In the 1960's, extended context-free grammars were introduced, based on Backus-Naur form,
as a useful abbreviatory notation that made context-free grammars easier to write. More
recently, the Standardized General Markup Language (SGML) [16] used a similar abbrevia-
tory notation to define extended context-free grammars for documents. Currently, Extensible
Markup Language (XML) [6], which is a simplified version of SGML, is being promoted as
the markup language for the web, instead of HTML (a specific grammar or document type
definition (DTD) specified using SGML). These developments led to the investigation of how
notions applicable to context-free grammars could be carried over to extended context-free
grammars. There does not appear to have been any consolidated effort to study extended
context-free grammars in their own right. We begin such an investigation with the most basic
This research was supported under a grant from the Research Grants Council of Hong Kong SAR. It
was carried out while the first and second authors were visiting HKUST.
† Lehrstuhl für Informatik II, Universität Würzburg, Am Hubland, D-97074 Würzburg, Germany. E-mail: albert@informatik.uni-wuerzburg.de.
‡ Dipartimento di Matematica Applicata e Informatica, Università Ca' Foscari di Venezia, via Torino 155, 30173 Venezia Mestre, Italy. E-mail: dora@dsi.unive.it.
§ Department of Computer Science, Hong Kong University of Science & Technology, Clear Water Bay, Kowloon, Hong Kong SAR. E-mail: dwood@cs.ust.hk.
problems for extended context-free grammars: reduction and normal-form transformations.
There has been some related work that is more directly motivated by SGML issues; see the
proof of decidability of structural equivalence for extended context-free grammars [4] and the
demonstration that SGML exceptions do not add expressive power to extended context-free
grammars [17].
We are currently designing a manipulation system toolkit GraMa for extended context-free
grammars, pushdown machines and context-free expressions. It is an extension of
Grail [20, 19], a similar toolkit for regular expressions and finite-state machines. As a re-
sult, we need to choose appropriate representations of grammars and machines that admit
efficient transformation algorithms (as well as other algorithms of interest).
Earlier results on context-free grammars were obtained by Harrison and Yehudai [12, 13,
26] and by Hunt et al. [15] among others. Harrison's chapter on normal form transformations
[12] provides an excellent survey of early results. Cohen and Gotlieb [5] suggested a
specific representation for context-free grammars and demonstrated how it aided the programming
of various operations on them.
We first define extended context-free grammars using the notion of production schemas
that are based on regular expressions. In a separate paper [9], we discuss the algorithmic
impact of basing the schemas on finite-state machines. Since finite-state machines and
regular expressions are both first-class objects in Grail, they can be used interchangeably as
we expect they will be in GraMa.
We next describe algorithms for the fundamental normal-form transformations in Section
3. Before doing so, we propose a representation for extended context-free grammars as
regular expression forests with symbol threads. We then discuss some algorithmic problems
for regular expressions before tackling the various normal forms. We close the presentation,
in Section 4, with a brief discussion of our ongoing investigations.
2 Notation and terminology
We treat extended context-free grammars as context-free grammars in which the right-hand
sides of productions are regular expressions. Let V be an alphabet. Then, we define a
regular expression over V and its language in the usual way [1, 25] with the Kleene plus as
an additional operator. The symbol λ denotes the null string.
An extended context-free grammar G is specified by a tuple G = (N, Σ, P, S), where N and Σ are disjoint finite alphabets of nonterminal symbols and terminal symbols, respectively, P is a finite set of production schemas, and the nonterminal S is the sentence symbol. Each production schema has the form A → E_A, where A is a nonterminal and E_A is a regular expression over V = N ∪ Σ that does not contain the empty-set symbol. When α ∈ L(E_A), the string β₁αβ₂ can be derived from the string β = β₁Aβ₂, and we denote this fact by writing β ⇒ β₁αβ₂. The language L(G) of an extended context-free grammar G is the set of terminal strings derivable from the sentence symbol of G. Formally, L(G) = {w ∈ Σ* : S ⇒⁺ w}, where ⇒⁺ denotes the transitive closure of the derivability relation.
Even though a production schema may correspond to an infinite number of ordinary
context-free productions, it is known that extended and standard context-free grammars
describe exactly the same languages; for example, see the texts of Salomaa [23] and of
Wood [25].
We denote the size of a regular expression E by |E| and define it as the number of symbols and operators in E. We denote the size of a set A also by |A|. To measure the complexity of any grammatical transformation we need to define the size of a grammar. There are two traditional measures of the size of a context-free grammar that we generalize to extended context-free grammars as follows. Given an extended context-free grammar G = (N, Σ, P, S), we define the size |G| of G to be
|G| = Σ_{A→E_A ∈ P} |E_A|,
and we define the norm ‖G‖ of G to be
‖G‖ = Σ_{A→E_A ∈ P} |E_A| · log(|N| + |Σ|).
Clearly, the norm is a more realistic measure of a grammar's size as it takes into account the size of the encoding of the symbols of the grammar. We use only the size measure, however, since the extension of our results to the norm measure is straightforward.
3 Normal-form transformations
We introduce the notion of an expression forest, a tree-based representation for the set of regular expressions that appear as right-hand sides of production schemas. Each production schema's right-hand side is represented as an expression tree in the usual way: internal nodes are labeled with operators and external nodes are labeled with symbols. In addition, we represent the nonterminal left-hand side of a production schema with a single node labeled with that nonterminal. The node also references the root of the expression tree of its corresponding right-hand side. In Fig. 1, we give an example forest of two regular expressions.
Since an extended context-free grammar has a number of production schemas that are
regular expressions, we represent such grammars as an expression forest, where each tree in
the forest corresponds to one production schema and each tree is named by its corresponding
nonterminal. (The naming avoids the tree repetition problem.) We now add threads such
that the thread for symbol X connects all appearances of the symbol X in the expression
forest.
3.1 Reachability and usefulness
Recall that a symbol X is reachable if it appears in some string derived from the sentence symbol; that is, if there is a derivation S ⇒* αXβ, where α and β are (possibly null) strings over N ∪ Σ.
means of a digraph traversal. More precisely, we construct a digraph whose vertices are
symbols in N [ \Sigma and there is an edge from A to B if and only if B labels an external node
of the expression tree named A. (We assume that the production schemas do not contain
Figure 1: An expression forest for the extended context-free grammar with the production schemas shown (trees omitted). We have omitted the symbol threads for clarity.
the empty-set symbol.) Then, a depth-first traversal of this digraph starting from S gives
all reachable symbols of the grammar. The times taken by the digraph construction and
traversal are both linear in the size of the grammar.
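The digraph traversal can be sketched as follows (ours), where schemas maps each nonterminal to the set of symbols labelling the external nodes of its expression tree:

def reachable_symbols(schemas, start):
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in schemas.get(x, ()):   # terminals have no schema
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen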
A nonterminal symbol A is useful if there is a derivation A ⇒⁺ w, where w is a terminal string. The set of useful symbols can be computed recursively as follows. Compute all nonterminals B such that L(E_B) contains a string of terminal symbols (possibly the null string). All such B are useful symbols. Then, a symbol A is useful if L(E_A) contains a string of terminals and currently detected useful symbols, and so on, until no newly useful symbols are identified. We can formalize this inductive process with a marking algorithm such as
described by Wood [24] for context-free grammars. The major difference between previous
work and the approach taken here is that we want to obtain an efficient algorithm. Yehudai
[26] designed an efficient algorithm for determining usefulness for context-free grammars;
our approach can be viewed as a generalization of his algorithm.
To explain the marking algorithm, we assume that we have one bit available at each
node of the expression forest to indicate the marking. We initialize these bits in a preorder
traversal of the forest as follows: The bits of all nodes are set to zero (unmarked) except
for nodes that are labeled with a Kleene star symbol, a terminal symbol or a null-string
symbol-the bits of these nodes are set to one (marked). In the algorithm, whenever a
node u is marked, it is useful and it satisfies the condition: The language of the subtree
rooted at u contains a string that is completely marked. A Kleene-star node is marked since
its subtree's language contains the null string; that is, a Kleene-star node is always useful.
After completing the initial marking, we bubble markings up the trees in a propagation
phase as follows: Repeatedly examine newly marked nodes as follows until no newly marked
nodes are obtained. For each newly marked node u, where p(u) is u's parent if u is not the
root, perform one of the following actions:
if p(u) is a plus node and p(u) is not marked, then mark p(u).
if p(u) is a dot node, p(u) is not marked and u's sibling is marked, then mark p(u).
if p(u) is a Kleene-plus node, then mark p(u).
if p(u) is a Kleene-star node, it is already marked.
if u is a root node and the expression tree's nonterminal symbol is not marked, then mark
the expression tree's nonterminal symbol.
If there are newly marked nonterminals after this initial round, then we mark all their
appearances in the expression forest and repeat the propagation phase which bubbles the
markings of newly marked symbols up the trees. If there are no newly marked nonterminals,
then the algorithm terminates.
The algorithm has, therefore, a number of rounds and at the beginning of each round it
marks all appearances of newly discovered useful nonterminals (discovered in the previous
round) and then bubbles the newly marked nonterminals up the trees. As long as a round
marks new nodes, the propagation process is repeated. To implement this process efficiently,
we construct, at the beginning of each round, a queue of newly marked nodes. Note that the
queue is a catenation of appearance lists after the first round. The algorithm then repeatedly
deletes a newly marked node from the queue and, using the preceding propagation rules, it
may also add newly marked nodes to the queue. A round terminates when the queue is
empty.
Observe that each node of the expression forest is visited at most twice, because a dot node can be visited twice. Thus, the marking algorithm runs in O(|G|) time and space.
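One propagation round of the marking algorithm can be sketched as follows (ours); kind and children describe the expression forest, already_useful carries the nonterminals discovered in earlier rounds, and each newly marked root reports its nonterminal to the next round.

from collections import deque

def bubble_up(nodes, parent, children, kind, already_useful=frozenset()):
    # kind[u] is one of '+', '.', 'plus', 'star', 'terminal', 'null', 'nonterminal'
    marked = {u for u in nodes
              if kind[u] in ('star', 'terminal', 'null') or u in already_useful}
    queue = deque(marked)
    while queue:
        u = queue.popleft()
        p = parent.get(u)              # None when u is a root
        if p is None or p in marked:
            continue
        if kind[p] in ('+', 'plus'):   # one marked child suffices
            marked.add(p); queue.append(p)
        elif kind[p] == '.' and all(c in marked for c in children[p]):
            marked.add(p); queue.append(p)
        # Kleene-star parents were marked at initialization
    return marked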
Recall that a grammar G is reduced if all its symbols are both useful and reachable. As
for standard context-free grammars, to reduce a grammar we first identify all useful symbols
and then select (together with the corresponding schemas) those that are also reachable.
More formally, after identifying the useless nonterminals (terminals are always useful),
we first remove their production schemas from G. Second, we remove all productions (not
schemas) that contain a useless nonterminal in their right-hand sides. In both steps we
have to ensure that the threads are maintained correctly. In the first step, we need not
only to remove the production schemas, but also to reconnect the threads of of all symbol
appearances that are removed. We can use a traversal of each schema to identify the symbols
in it and remove their appearances from the appropriate threads. In the second step, we use
the threads of useless symbols to remove the corresponding productions. We simply replace
each useless symbol with the empty-set symbol and remove it from its thread, and then
apply the empty-set removal algorithm for regular expressions to each production schema.
Thus, we obtain an equivalent grammar in which all symbols are useful.
We next identify the unreachable symbols of this grammar and then remove the production schemas of the unreachable nonterminals and, once more, maintain the threads correctly. Observe that an unreachable terminal symbol can only appear in production schemas of unreachable nonterminals and that reachable symbols can only appear in production schemas of reachable nonterminals. Thus, we obtain the reduced grammar G′ from G in linear time.
We summarize the result of this section as follows.
Theorem 1 Let be an extended context-free grammar represented as an expression
forest. Then, an equivalent, reduced extended context-free grammar G
can be constructed from G in time O(jGj) such that jG
represented as an expression forest.
3.2 Null-free form
Given a reduced extended context-free grammar G = (N, Σ, P, S), we can determine the nullable nonterminals (the ones that derive the null string) using an algorithm similar to the one we used for usefulness in Section 3.1. This algorithm takes O(|G|) time. Given the nullability information we can then make the given grammar null free in two steps.
First, we replace all appearances of each nullable symbol A with the regular expression (A + λ). This step takes time proportional to the total number of appearances of nullable symbols in G; we use the symbol threads for fast access to them. Second, we transform each production schema A → E_A, where A is nullable, into a null-free production schema A → E′_A, where λ ∉ L(E′_A). Unfortunately, this step can take time O(2^{|G|}) in the worst case when we use the typical textbook algorithm and each production schema has nested dotted subexpressions in which each operand of the dot can produce the null string. We replace each dotted subexpression F · G with F′ · G + G′, where F′ (respectively, G′) is the transformed version of F (respectively, G) that does not produce the null string. Note that we at least double the length
of the dotted subexpressions. Because similar doubling can occur in the subexpressions of
F and G and of their subexpressions, we obtain the exponential worst-case bound. (Note
that this is the same case that occurs with a standard context-free grammar in which every
nonterminal is nullable.)
We want, however, to obtain at most a linear blowup in the size of the resulting grammar.
Since nested dot expressions cause the nonlinearity, we modify the grammar to remove nested
dot expressions. This approach was first suggested by Yehudai [13, 26] for standard context-free
grammars-he converted a given grammar into Chomsky normal form to avoid the dot
problem. We take a similar approach by removing nested dot, Kleene-plus and Kleene-star
subexpressions from production schemas. The removal generates new nonterminals and their
production schemas; however, the size of the resulting grammar is only linearly larger than
the original grammar.
We replace each dot, Kleene-plus and Kleene-star node of an expression tree that has a
dot, Kleene-plus or Kleene-star ancestor with a new nonterminal and add a new production
schema to the grammar. We repeat this local modification until no such nested nodes exist.
For example, given the production schema
A → a · (b · (a + b))* · b,
we can replace the nested starred subexpression to obtain the new production schemas
A → a · C · b and C → (b · (a + b))*.
Repeating the transformation for C, we obtain
C → B* and B → b · (a + b),
and now no dot, Kleene-plus or Kleene-star node has a dot, Kleene-plus or Kleene-star ancestor.
We say that the resulting grammar is in dot normal form. Its size is of the same order
as the original size and the number of nonterminals is increased by at most the size of the
original grammar.
Theorem 2 Let G = (N, Σ, P, S) be a reduced extended context-free grammar represented as an expression forest. Then, an equivalent, reduced extended context-free grammar G′ in dot normal form can be constructed from G in time O(|G|) such that |G′| is O(|G|), |N′| is O(|G|) and |P′| is O(|G|). Moreover, G′ is also represented as an expression forest.
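A sketch of the rewriting into dot normal form over tuple-encoded expression trees (ours; fresh is a hypothetical generator of new nonterminal names):

def to_dot_normal_form(schemas, fresh):
    # trees are symbols (str) or tuples ('+', l, r), ('.', l, r), ('star', e), ('plus', e)
    new_schemas = {}
    def walk(node, under):             # under: inside a dot/plus/star subtree?
        if isinstance(node, str):
            return node
        op = node[0]
        nested = op in ('.', 'star', 'plus')
        if under and nested:           # pull the offending subtree out
            name = fresh()
            new_schemas[name] = walk(node, False)
            return name
        return (op,) + tuple(walk(c, under or nested) for c in node[1:])
    for a, rhs in list(schemas.items()):
        schemas[a] = walk(rhs, False)
    schemas.update(new_schemas)
    return schemas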
We now apply the simple null-removal construction to a grammar G in dot normal form to produce a new grammar that has size O(|G|). The algorithm runs in time O(|G|).
Theorem 3 Let G = (N, Σ, P, S) be a reduced extended context-free grammar in dot normal form represented as an expression forest. Then, an equivalent, reduced, null-free extended context-free grammar G′ in dot normal form can be constructed from G in time O(|G|) such that |G′| is O(|G|), |N′| is O(|N|) and |P′| is O(|P|). Moreover, G′ is also represented as an expression forest.
3.3 Unit-free form
A unit production is a production of the form A!B. We transform an extended context-free
grammar into unit-free form in three steps. First, we identify all instances of unit
productions. Second, we remove each unit-production instance from its schema. Third and
last, for each modified schema, we add the unit-free schemas of the unit-production instances
to the modified schema.
We now discuss these three steps in more detail. We assume that each reduced, null-free extended context-free grammar G is also in dot normal form. To identify instances of unit productions, observe that, for each schema E_A, each root-to-frontier path contains at most one dot or Kleene-plus node, and no Kleene-star nodes. Now, assume that there is a unit-production instance of B in E_A (that is, A → B is in A → E_A). Immediately, none of B's ancestors can be dot nodes; an ancestor can be a plus node and at most one ancestor can be a Kleene-plus node. To identify unit productions, we carry out a preorder traversal of E_A and identify root-to-frontier paths that satisfy the necessary conditions for unit-production instances and also have a nonterminal at their frontier nodes. This step takes O(|E_A|) time.
Whenever the traversal meets a Kleene-plus node or a plus node it continues the traversal
in the corresponding subtrees. When it meets a dot node it terminates that part of the
traversal. When eventually the traversal reaches a node labeled with a nonterminal B, then
that occurrence of B corresponds to a unit production for A. The overall running time for
the first step is O(jGj).
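The path conditions can be enforced during the traversal itself; a sketch (ours):

def unit_instances(root, kind, children):
    # follow only '+' nodes and at most one Kleene-plus node per path
    found, stack = [], [(root, False)]
    while stack:
        u, seen_plus = stack.pop()
        k = kind[u]
        if k == 'nonterminal':
            found.append(u)            # a unit-production instance A -> B
        elif k == '+':
            stack.extend((c, seen_plus) for c in children[u])
        elif k == 'plus' and not seen_plus:
            stack.extend((c, True) for c in children[u])
        # dot nodes and terminal leaves terminate this branch
    return found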
Second, we remove the instances of unit productions from their schemas. That is, we transform each production schema A → E_A into a production schema A → E′_A such that E′_A contains no unit-production instances. We define a new threading, which we refer to as the unit thread,
that connects all occurrences of nonterminals that correspond to unit productions in the
schemas. The threading can be constructed during the identification step but it is used in
the second step. Furthermore, while identifying unit productions, we determine, for each
nonterminal A, the set UA of nonterminals that are unit reachable from A. (Note that UA
may contain A.) We use these sets to modify the production schemas in the third step.
We traverse the expression trees from their frontiers to their roots and, in particular, we
follow the paths that start from the nodes labeled with nonterminals that correspond to unit
productions (we access them by following the unit threads). Assume that we are removing
an instance of B. Then, its ancestors are only plus nodes with the possible exception that
one ancestor is a Kleene-plus node.
To remove unit appearances from Kleene-plus subtrees, we globally transform all Kleene-plus subexpressions of the form F⁺ in the expression forest into (F + (F · F⁺)). The idea behind this global transformation is that we have separated the unit appearances in F from the non-unit appearances of the same symbols in F⁺, since the unit appearances now occur only in the first F and the non-unit appearances of the same symbols appear in the subexpression F · F⁺. If node u is a Kleene-plus node in some expression tree, then we make two copies of u's only subtree R (we call them S and T) and ensure we maintain all threads in S and T except for the unit threads. We then remove the Kleene-plus node and reconnect R, S and T as (R + (S · T⁺)).
The removal of all unit appearances of each nonterminal B is now straightforward. We
arrive at a node labeled B by following the unit thread and we replace B and B's parent
with B's sibling and terminate the process for this occurrence of B. The only case we have
not covered is when A!B is the only production for A. In this case, B has no parent;
therefore, we are left, temporarily, with an empty expression tree for A. (Note that B 6= A
since A is useful.)
The time complexity of this second step is the same as that of the first step.
Third and last, we modify the production schemas such that, for each nonterminal A, if B₁, ..., B_k are the nonterminal symbols that are unit reachable from A and do not include A, then the new production schema for A is
A → E′_A + E′_{B₁} + ··· + E′_{B_k}.
The resulting grammar has size O(|G|²), a quadratic blow up, since we must make copies of the E′_{B_i} subtrees to give an expression tree for A. The algorithm takes, therefore, O(|G|²) time.
Theorem 4 Let G = (N, Σ, P, S) be a reduced, null-free extended context-free grammar in dot normal form that is represented as an expression forest. Then, an equivalent, reduced, dot-normal-form, null-free, unit-free extended context-free grammar G′ can be constructed from G in time O(|G|²) such that |G′| is O(|G|²) and |P′| is O(|G|). Moreover, G′ is also represented as an expression forest.
Note that we can ensure that the blow up is linear if we do not make multiple copies of the various subtrees, but merely provide links to one copy of each distinct subtree. This approach adds O(|N|²) space to the grammar G.
3.4 Greibach form
This normal-form result for context-free grammars was established by Sheila Greibach [10] in the 1960's; it was a key result in the use of the multiple-path syntactic analyzer developed at Harvard University at that time. An extended context-free grammar is in Greibach normal form if its productions are of only the following form:
A → aα,
where a is a terminal symbol and α is a possibly empty string of nonterminal symbols. The transformation of an extended context-free grammar into Greibach normal form requires two giant steps: left-recursion removal and back left substitution. Recall that a grammar is left recursive if there is a nonterminal A such that A ⇒⁺ Aα in the grammar, for some string α.
We consider the second step first.
Assume that the given extended context-free grammar G = (N, Σ, P, S) is factored: for each nonterminal A, a string x in L(E_A) is either completely nonterminal or it is a single terminal symbol. (It is straightforward to factor a grammar, and if we do it before we make the grammar null free, we avoid the possible introduction of unit productions.)
In addition, for the second step we also assume that the grammar is non-left-recursive. Since the grammar is non-left-recursive, there is a partial order on the nonterminals, left reachability, defined by A ⊑ B if there is a leftmost derivation A ⇒⁺ Bα. As usual, we can consider the nonterminals to be enumerated as A₁, ..., A_n such that whenever A_i ⊑ A_j, then i < j. Observe that A_n is already in Greibach normal form since it has only productions of the form A_n → a, where a ∈ Σ. We now convert the nonterminals one at a time, from A_{n−1} down to A₁. The conversion is conceptually simple, yet computationally expensive. When converting A_i, we replace all nonterminals that can appear in the first positions of the strings in L(E_{A_i}) with their already converted schemas. Thus, the resulting schema A_i → E′_{A_i} is in Greibach normal form. To be able to carry out this substitution efficiently we first convert each schema E_{A_i} into first normal form; that is, we express each schema as a sum of regular expressions each of which begins with a unique symbol. More precisely, letting N ∪ Σ = {a₁, ..., a_{n+m}} and using the notation E_i instead of E_{A_i}, for simplicity, we replace E_i with an equivalent expression of the form
a₁ · E_{i,1} + ··· + a_{n+m} · E_{i,n+m},
where each E_{i,j} is a regular expression over N ∪ Σ. To convert E_i into an equivalent regular expression in Greibach normal form, we need only replace the first A_k of each term A_k · E_{i,k} with the already converted schema E′_k.
If the grammar is left recursive, we first need to make it non-left recursive. We use a
technique introduced by Greibach [11], investigated in detail by Hotz and his co-workers [14,
21, 22] and rediscovered by others [7]. It involves producing, for each nonterminal, a distinct
subgrammar of G that is left linear; hence, it can be converted into an equivalent right linear
grammar. This conversion changes left recursion into right recursion and does not introduce
any new left recursion. For more details, see Wood's text [25]. The important property of
the left-linear subgrammars is that every sentential leftmost derivation sequence in G can
be mimicked by a sequence of leftmost derivation sequences, each of which is a sentential
leftmost derivation sequence in one of the left-linear subgrammars. Once we convert the
left-linear grammars into right-linear grammars this property is weakened in that we mimic
the original derivation sequence with a sequence of sentential rightmost derivation sequences
in the right-linear grammars. The new grammar that is equivalent to G is the collection of
the distinct right-linear grammars, one for each nonterminal in G.
As the modified grammar is no longer left recursive, we can now apply back left substitution
to obtain a final grammar in Greibach normal form.
How well does this algorithm perform? Left recursion removal causes a blow up of |N|·|G| at worst. Converting the production schemas into first normal form causes an additional blow up of |N|·|G|. We use the derivative dE/dX of a regular expression E by a symbol X to give a new expression F such that L(F) = {w : Xw ∈ L(E)}. The derivative of a regular expression was introduced by Brzozowski [3], who defined it inductively. Now, given a schema E_A, we obtain its derivatives for each symbol X ∈ N ∪ Σ. When we catenate X with its derivative we obtain one of the terms in the first normal form. Since G is null free, the only derivative that can cause exponential blow up is
d(F⁺)/dX = (dF/dX) · F*.
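For null-free operands, the inductive definition specializes as in the following sketch (ours), where None stands for the empty set and ('eps',) for the null string:

def derivative(e, x):
    if isinstance(e, str):
        return ('eps',) if e == x else None
    op = e[0]
    if op == '+':
        d1, d2 = derivative(e[1], x), derivative(e[2], x)
        if d1 is None: return d2
        if d2 is None: return d1
        return ('+', d1, d2)
    if op == '.':                      # e[1] is null free, so its partner is unreachable
        d1 = derivative(e[1], x)
        return None if d1 is None else ('.', d1, e[2])
    if op in ('plus', 'star'):         # d(F+)/dX = (dF/dX) . F*
        d1 = derivative(e[1], x)
        return None if d1 is None else ('.', d1, ('star', e[1]))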
We transform G such that no Kleene-plus subexpression is nested within any other Kleene-
plus expression-a similar transformation to the one we used for conversion to dot normal
form. This modification ensures that exponential blow up does not occur. The back left
substitution causes, in the worst case, an additional blow up of jN jjGj in the size of the
Greibach normal form grammar.
As all three operations take time proportional to the sizes of their output grammars, essentially, the transformation to Greibach normal form takes O(|N|⁵ · |G|) time in the worst case. The reason for the |N|⁵ term is that we first remove left recursion, which not only increases the size of the grammar but also squares the number of nonterminals from |N| to |N|². The number of nonterminals is crucial in the size bound for the grammar obtained by first-normal-form conversion and by back left substitution.
We can, however, reduce the worst-case time and space by using indirection as we did for unit-production removal. Rather than performing the back left substitution for a specific nonterminal, we use a reference to its schema. This technique gives a blowup of only |G| + |N|² at most; thus, it reduces the complete conversion time and size to O(|N|³ · |G|) in the worst case.
We may also apply the technique that Koch and Blum [18] suggested; namely, leave unit-
production removal until after we have obtained a Greibach-like normal form. Moreover,
transforming an extended context-free grammar into dot normal form appears to be a very
useful technique to avoid undesirable blow up in grammar size. We are currently investigating
this and other approaches.
4 Concluding remarks
The results that we have presented are truly a generalization of similar results for context-free
grammars. The time and space bounds are similar when relativized to the grammar sizes.
The novelty of the algorithms is four-fold. First, we have introduced the regular expression
forest with symbol threads as an efficient data representation for context-free grammars and
extended context-free grammars. We believe that this representation is new. The only previously
documented representations are those of Cohen and Gotlieb [5] and of Barnes [2] and
they are more simplistic. Second, we have demonstrated how indirection using referencing
can save time and space in the null-production removal and back left substitution algorithms.
Although the use of the technique is novel in this context, it is well known technique in other
applications. It is an application of lazy evaluation or evaluation on demand. Third, we have
introduced dot normal form for extended context-free grammars that plays a role similar to
normal form for standard context-free grammars. Fourth, we have generalized the
left-linear to right-linear grammatical conversion for extended grammars.
We are currently investigating whether we can obtain Greibach normal form more efficiently
and whether we can improve the performance of unit-production removal.
We would like to mention some other applications of the regular expression forest with
threads. First, we can reduce usefulness determination to nullability determination.
Given an extended context-free grammar G = (N, Σ, P, S), we can replace every appearance of every terminal symbol with the null string to give G′. A nonterminal A in G is useful if and only if it is nullable in G′.
determine the length of the shortest terminal strings generated by each nonterminal symbol.
The idea is that we replace each appearance of a terminal symbol with the integer 1 and each appearance of the null string with 0. We then repeatedly replace: each node labeled '+' that has two integer children with the minimum of the two integers; each node labeled '·' that has two integer children with the sum of the two integers; and each node labeled '*' with 0. The root value is the required length. We can use the same "generic" algorithm to
compute the smallest terminal alphabet for the terminal strings derived from a nonterminal,
the LL(1) first and follow sets, and so on.
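A sketch of the generic bottom-up evaluation, instantiated for shortest terminal lengths (ours):

import math

def shortest_lengths(schemas):
    val = {a: math.inf for a in schemas}         # one value per nonterminal
    def evaluate(node):
        if isinstance(node, str):
            if node == '':
                return 0                         # the null string
            return val.get(node, 1)              # a terminal costs 1
        op = node[0]
        if op == '+':
            return min(evaluate(c) for c in node[1:])
        if op == '.':
            return sum(evaluate(c) for c in node[1:])
        if op == 'star':
            return 0
        return evaluate(node[1])                 # Kleene-plus
    changed = True
    while changed:
        changed = False
        for a, rhs in schemas.items():
            v = evaluate(rhs)
            if v < val[a]:
                val[a], changed = v, True
    return val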
Last, the careful reader will have observed that the space-efficient algorithms for unit freeness
and Greibach normal form produce output grammars that are not represented as expression
forests-they are represented as a set of expression dags (directed acyclic graphs). The
dags have as many roots as there are nonterminals. Not surprisingly, each root-to-frontier
traversal yields a tree since we have reduced space by sharing common subtrees among trees
in the underlying expression forest. Clearly, we may also share common subtrees within the
original trees in the expression forest, although we do not know of any "practical" computation
that would benefit from such sharing. We are currently investigating the complexity
of the transformations we have discussed when we are given a collection of expression dags as the representation of an extended grammar. Although a collection of dags is a dag, the dags we are dealing with have three properties.
a tree that corresponds to a production schema; second, there are as many roots as there are
nonterminals; and, third, the dags are threaded. For this reason, we call such a collection of
expression dags, a dagwood 1 .
--R
The Theory of Parsing
Exploratory steps towards a grammatical manipulation package (GRAMPA).
Derivatives of regular expressions.
Structural equivalence of extended context-free and extended E0L grammars
A list structure form of grammars for syntactic analysis.
W3C web page on XML.
An easy proof of Greibach normal form.
A system for manipulating polynomials given by straight-line programs
Transition diagram systems and normal form transformations
A new normal form theorem for context-free phrase structure grammars
A simple proof of the standard-form theorem for context-free grammars
Introduction to Formal Language Theory.
Eliminating null rules in linear time.
ISO 8879: Information processing-Text and office systems-Standard Generalized Markup Language (SGML)
SGML and exceptions.
Greibach normal form transformation revisited.
Grail: Engineering automata in C++
Grammar Transformations Based on Regular Decompositions of Context-Free Derivations
A general Greibach normal form transformation
Formal Languages.
Theory of Computation.
Theory of Computation.
On the Complexity of Grammar and Language Problems.
--TR
An easy proof of Greibach normal form
Formal languages
<italic>Grail</italic>: a C++ library for automata and expressions
Dagwood
Derivatives of Regular Expressions
A New Normal-Form Theorem for Context-Free Phrase Structure Grammars
A List Structure Form of Grammars for Syntactic Analysis
Theory of Computation
Introduction to Formal Language Theory
The Theory of Parsing, Translation, and Compiling
SGML and Exceptions
Greibach Normal Form Transformation, Revisited
On the complexity of grammar and language problems.
Grammar transformations based on regular decompositions of context-free derivations.
--CTR
Marcelo Arenas , Leonid Libkin, A normal form for XML documents, Proceedings of the twenty-first ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 03-05, 2002, Madison, Wisconsin
Frank Neven, Attribute grammars for unranked trees as a query language for structured documents, Journal of Computer and System Sciences, v.70 n.2, p.221-257, March 2005
Marcelo Arenas , Leonid Libkin, A normal form for XML documents, ACM Transactions on Database Systems (TODS), v.29 n.1, p.195-232, March 2004 | grammatical representations;complexity;efficient algorithms;normal forms;symbolic manipulation;extended context-free grammars |
504540 | Using acceptors as transducers. | We wish to use a given nondeterministic two-way multi-tape acceptor as a transducer by supplying the contents for only some of its input tapes, and asking it to generate the missing contents for the other tapes. We provide here an algorithm for assuring beforehand that this transduction always results in a finite set of answers. We also develop an algorithm for evaluating these answers whenever the previous algorithm indicated their finiteness. Furthermore, our algorithms can be used for speeding up the simulation of these acceptors even when not used as transducers. Copyright 2001 Elsevier Science B.V. | Introduction
In this paper we study the following problem: assume that we are given a
nondeterministic two-way multi-tape acceptor A and a subset X of its tapes.
We would like to use A no longer as an acceptor which receives input on all
its tapes, but instead as a kind of transducer [15, Chapter 2.7] which receives
input on tapes X only and generates as output the set of missing inputs onto
the other tapes. We then face the following two problems:
Problem 1 Can it be guaranteed that given any choice of input strings for
tapes X the set of corresponding outputs of A will always remain finite?
Problem 2 In those cases where Problem 1 can be solved positively, how can
the actual set of outputs corresponding to a given choice of input strings be
Supported by the Academy of Finland, grant number 42977.
To appear in the Theoretical Computer Science special issue on the Third International
Workshop on Implementing Automata (WIA'98).
Preprint submitted to Elsevier Preprint 18th January 2000
The motivation for studying these two problems came from string databases
[3,7,11] which manipulate strings instead of indivisible atomic entities. Such
databases are of interest for example in bioinformatics, because they allow
the direct representation and manipulation of the stored nucleotide (DNA or
sequences. While one can base a query language for these databases
on a fixed set of sequence comparison predicates, such as in for example the
PROXIMAL system [5], it would be more flexible to allow user-defined string
processing predicates as well.
If we assume an SQL-like notation [1, Chapter 7.1] for the query language,
then one possible query for such a string database might be stated as follows.
Here # rev user-defined expression which compares the strings w 1
and w 2 denoted by the variables x 1 and x 2 , say "w 2 is the reversal of w 1 ". Then
this query requests every string w 2 that is the reversal of some string w 1 currently
stored in the database table R. Note in particular that these strings w 2
need (and in general can) not be stored anywhere in the database; the query
evaluation machinery must generate them instead as needed.
We have developed elsewhere [10,11,17] a logical framework for such a query
language. This framework accommodates expressions like # rev
multidimensional extension of the modal Extended Temporal Logic suggested
by Wolper [29]. The multi-tape acceptors studied here are exactly the computational
counterparts to these logical expressions.
A given query to a database is considered to be "safe" for execution if there is a
way to evaluate its answer finitely [1, pages 75-77]. One safe plan for evaluating
the aforementioned query would be as follows, where is the string relation
accepted by A rev , a multi-tape acceptor corresponding to the expression
such acceptor is shown as Figure 1 below.)
for all strings w 1 in table R do
output every string in V
end for
Our two problems stem from these safe evaluation plans. Problem 1 is "How
could we infer from # rev that the set V is always going to be finite for
every string w 1 that could possibly appear in R?" Problem 2 is in turn "Once
the finiteness of every possible V has been ensured, how can we simulate
this A rev (e#ciently) for each w 1 to generate the V corresponding to this
We have studied elsewhere [9][17, Chapter 4.4] how solutions for Problem 1
can be used to guide the selection of appropriate safe execution plans. To this
end, Section 1.2 presents the problem in not only automata but also database
theoretic terms.
One possible solution would have been to restrict beforehand the language
for the string handling expressions such as # rev into one which ensures
this finiteness by definition, say by fixing x 1 to be the input variable, which is
mapped into the output variable x 2 as a kind of transduction [3,7]. However,
in logic-based data models [1], the use of transducers seems less natural than
acceptors, because the concept of information flow from input to output is
alien to the logical level, and of interest only in the query evaluation level.
But we must eventually also evaluate our string database queries, and then
we must infer which of our acceptors can be used as transducers, and how
to perform these inferred transductions, and thus we face the aforementioned
problems.
The rest of this paper is organized as follows. Section 1.1 presents the acceptors
we wish to use as transducers, while Section 1.2 formalizes Problem 1. Section 2
first reviews what is already known about its decidability, and then presents
our algorithms, which give su#cient conditions for answering Problem 1 in the
a#rmative. Section 3 presents then an explicit evaluation method for those
acceptors that these algorithms accept, answering Problem 2 in turn. Finally,
Section 4 concludes our presentation.
1.1 The Automaton Model
Let the alphabet Σ be a finite set of characters fixed in advance, let Σ* denote the set of all finite sequences of these characters, let ε denote the empty sequence of this kind, and let w^t denote concatenating t ∈ ℕ copies of w ∈ Σ*, as usual. We shall study relations on these sequences, or strings, drawn from Σ* in what follows.
On the other hand, database theory often studies sequence database models where Σ is taken to be conceptually infinite instead, as in for example [7,19,22,24,25]. Then the emphasis is on data given as lists of data items provided by the user. Conversely, our emphasis is on data given as strings in an alphabet fixed beforehand by the database designer. In other words, our approach fixes an appropriate alphabet Σ as part of the database schema, while the list approach considers Σ as part of the data instead. However, our approach has been employed even for managing list data [2].
We furthermore assume left and right tape end-markers '[' and ']' not in Σ. Then we define the n-th character of a given string w ∈ Σ* with length m = |w| as
w[n] = '[' if n = 0; the n-th character of w if 1 ≤ n ≤ m; and ']' if n = m + 1.
Intuitively our automaton model is a "two-way multi-tape nondeterministic
finite state automaton with end-markers"; similar devices have been studied
by for example Harrison and Ibarra [14] and Rajlich [20]. Formally, a k-tape
Finite State Automaton (k-FSA) [11, Section 3][17, Chapter 3.1] is a tuple
A = ⟨Σ, k, Q_A, s_A, F_A, T_A⟩, where
(1) Σ is the finite alphabet as explained above;
(2) k ≥ 1 is the number of tapes;
(3) Q_A is a finite set of states;
(4) s_A ∈ Q_A is a distinguished start state;
(5) F_A ⊆ Q_A is the set of final states; and
(6) T_A is a set of transitions of the form p -c_1,…,c_k / d_1,…,d_k→ q, where p, q ∈ Q_A,
each c_i ∈ Σ ∪ {[, ]}, and each d_i ∈ {-1, 0, +1}.
We moreover require that d_i ≠ -1 whenever c_i = '[' and d_i ≠ +1 whenever
c_i = ']'; this ensures that the heads do indeed stay within the tape area
limited by these end-markers.
A configuration of A on input ⟨w_1,…,w_k⟩ ∈ (Σ*)^k is of the form
C = ⟨p; n_1,…,n_k⟩ with 0 ≤ n_i ≤ |w_i| + 1. It
corresponds intuitively to the situation where A is in state p, and each head i is
scanning square number n_i of the tape containing string w_i.
Hence we say that ⟨q; n_1 + d_1,…,n_k + d_k⟩ is a possible next configuration
of C if and only if T_A contains the transition p -w_1[n_1],…,w_k[n_k] / d_1,…,d_k→ q.
Now +1 can be interpreted as
"read forward", while -1 means "rewind the tape to read the preceding square
again", and 0 "stand still". We call tape i of A unidirectional if no transition in
specifies direction -1 for it; otherwise tape i is called bidirectional instead.
A computation of A on input ⟨w_1,…,w_k⟩ is a sequence C = C_0, C_1, C_2,… of these
configurations, which starts with the initial configuration C_0 = ⟨s_A; 0,…,0⟩, and in which
each C_{j+1} is a possible next configuration of the preceding configuration C_j.
This computation C is accepting if and only if it is finite, its last configuration
C_f has no possible next configurations, and the state of this C_f belongs to
F_A. The language L(A) accepted by A consists of those inputs ⟨w_1,…,w_k⟩ for which
there exists an accepting computation C. Note that this language is a k-fold
relation on strings in the general case.
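For concreteness, the following Python sketch (all identifiers ours, not from the paper) encodes a k-FSA transition as a tuple (p, chars, dirs, q) with tapes indexed from 0, together with the successor relation on configurations just defined.

```python
from typing import Iterator, List, Tuple

State = str
Transition = Tuple[State, Tuple[str, ...], Tuple[int, ...], State]

def bracketed(w: str, n: int) -> str:
    """The character w[n] of Eq. (1): square n of the tape holding w."""
    if n == 0:
        return '['
    if n == len(w) + 1:
        return ']'
    return w[n - 1]

def next_configs(trans: List[Transition], ws: List[str],
                 conf: Tuple[State, Tuple[int, ...]]) -> Iterator[Tuple[State, Tuple[int, ...]]]:
    """All possible next configurations of <p; n_1,...,n_k> on input ws."""
    p, ns = conf
    for (p1, chars, dirs, q) in trans:
        if p1 == p and all(bracketed(w, n) == c
                           for w, n, c in zip(ws, ns, chars)):
            yield (q, tuple(n + d for n, d in zip(ns, dirs)))
```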
Because A is nondeterministic, we can without loss of generality assume that
no transitions leave the final states F_A. We can for example introduce a new
state f_A into Q_A and set F_A = {f_A}: for every state p previously in
F_A, and every character combination c_1,…,c_k ∈ (Σ ∪ {[, ]})^k on which there
is no transition leaving p, we add the transition p -c_1,…,c_k / 0,…,0→ f_A. In this way,
whenever a computation of A would halt in state p, it performs instead one
extra transition into the (now unique) new final state f_A, and halts there
instead.

Figure 1. A 2-FSA for recognizing strings and their reversals.
We often view A as a transition graph G_A with nodes Q_A and edges T_A. In
particular, a computation of A can be considered to trace a path P within
G_A starting from node s_A. It is furthermore expedient to restrict attention
to non-redundant A where each state is either s_A itself or on some path P
from it into some state in F_A. Figure 1 presents a 2-FSA A_rev in this form
over the alphabet Σ = {a, b}. The language accepted by it consists of the pairs
⟨u, v⟩ of strings where string v is the reversal of string u: looping in state II finds the
right end of the bidirectional tape 1 without moving the unidirectional tape 2,
while looping in state III compares the contents of these two tapes in opposite
directions.
Another often useful simplification is the following way to detect mutually
incompatible transition pairs.

Definition 1 Tape i of k-FSA A is locally consistent if and only if every
consecutive pair p -c_1,…,c_k / d_1,…,d_k→ q, q -c'_1,…,c'_k / d'_1,…,d'_k→ r
of transitions in T_A satisfies the condition that c'_i = c_i whenever d_i = 0.

This ensures that there are configurations in which this pair can indeed be
taken; whether these configurations do ever occur in any computation is quite
another matter. For example, both tapes in Figure 1 are locally consistent. If
in particular tape i is both unidirectional and locally consistent, then given
any path

    P = q_1 -c_(1,1),…,c_(k,1) / d_(1,1),…,d_(k,1)→ q_2 -c_(1,2),…,c_(k,2) / d_(1,2),…,d_(k,2)→ q_3 -c_(1,3),…,c_(k,3) / d_(1,3),…,d_(k,3)→ ⋯

in G_A we can construct an input string w_i = w_(i,1) w_(i,2) w_(i,3) ⋯
for tape i, which allows P to be followed, if we choose

    w_(i,j) = c_(i,j) if d_(i,j) = +1 and c_(i,j) ∈ Σ, and w_(i,j) = ε otherwise.    (2)
For example, in Figure 1 the w_2 from Eq. (2) spells out the sequence of transitions
taken when looping in state III. Harrison and Ibarra provide a related
construction for deleting unidirectional input tapes from multi-tape pushdown
automata [14, Theorem 2.2], while Rajlich [20, Definition 1.1] allows the reading
head to scan two adjacent squares at the same time for similar purposes.
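The construction of Eq. (2) is mechanical, so we sketch it as well; the helper below (hypothetical, ours) takes a path P given as a list of (characters, directions) transition labels and returns the contents for the unidirectional, locally consistent tape i.

```python
def tape_content_for_path(path, i):
    """Eq. (2): collect the characters read on tape i while its head moves +1."""
    w = []
    for chars, dirs in path:
        if dirs[i] == +1 and chars[i] not in '[]':   # skip the end-markers
            w.append(chars[i])
    return ''.join(w)
```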
Again the nondeterminism of A allows us to enforce Definition 1 for tape i
at the cost of expanding the size of A by a factor of (|Σ| + 2):
construct a k-FSA B with state space Q_A × (Σ ∪ {[, ]}) which remembers the
character under tape head i. Add for each transition p -c_1,…,c_k / d_1,…,d_k→ q
the transitions ⟨p, c_i⟩ -c_1,…,c_k / d_1,…,d_k→ ⟨q, c'⟩, where c' = c_i if d_i = 0,
and c' ranges over all of Σ ∪ {[, ]} otherwise,
to complete our construction.
1.2 The Limitation Problem
This section introduces our limitation problem [11, Definition 3.1][17, Definition
3.3] concerning the automata defined in Section 1.1.
Definition 2 Given a (k+l)-FSA A, determine if there exists a limitation
function W: N^k → N with the following property: if ⟨u_1,…,u_k, v_1,…,v_l⟩ ∈
L(A), then |v_j| ≤ W(|u_1|,…,|u_k|) for every 1 ≤ j ≤ l.

If this is the case, then we say that A satisfies the finiteness dependency [21]
{1,…,k} ⇝ {k+1,…,k+l}. These dependencies generalize the functional
dependencies of database theory [1, Section 8.2]. Intuitively, A
is a finite representation of the conceptually infinite database table L(A),
and the dependency assures that if we select rows from this table by supplying
values for the columns 1,…,k, we do always receive a finite answer. In this
way A can be used both safely and declaratively as a string processing tool
within our string database model. Thus our goal is to treat the (user-defined)
string processing operation A as just another relation as far as the database
query language is concerned; such transparency is in fact being advocated
for the forthcoming object/relational database proposal [4, pages 49-55]. We
discuss elsewhere [12] how this overall string processing mechanism of ours
relates to this proposal and how it could be incorporated into such database
management systems.
In terms of automata theory we require that for any input ⟨u_1,…,u_k⟩ ∈ (Σ*)^k
the possible outputs ⟨v_1,…,v_l⟩ ∈ (Σ*)^l must remain a finite set. This is what
is meant by "using acceptors as transducers": we supply strings for only some
tapes (here 1,…,k) of the acceptor A, and ask it to produce us all those
contents for the missing tapes (here k+1,…,k+l) which A would have accepted
given the known tape contents. The limitation problem is then to determine
beforehand whether this computation will always return a finite result or not.
Weber [27,28] has studied the related question whether the set of all possible
outputs on any inputs of a given transducer remains finite, and if so, what is
the maximal output length.
2 Solving the Limitation Problem
The hardness of the limitation problem has been shown to depend crucially on
the number of bidirectional tapes in A. The problem has been shown elsewhere
to be undecidable for FSAs with two bidirectional tapes [11, Theorem 5.1][17,
Chapter 4.1]: given a Turing machine [15, Chapter 7] M one can write a corresponding
3-FSA A_M with two bidirectional tapes, which accepts exactly the
tuples ⟨u, v, w⟩, where v and w together encode a sequence of computation
steps taken by M on input u. Here v and w must be read twice, requiring
bidirectionality. Then asking whether A_M satisfies {1} ⇝ {2, 3} amounts to
asking whether M is total. This read-twice construction is reminiscent of representing
asking whether M is total. This read-twice construction is reminiscent of representing
the valid computations of a given Turing machine as an intersection
of two Context-Free Languages [15, Chapter 8.6], and shows that it is also
undecidable to determine whether a given finiteness dependency is satisfied
by the intersection of the relations denoted by two given FSAs, even when
these FSAs have no bidirectional tapes at all [17, Corollary 6.1].
On the other hand, the limitation problem becomes decidable if we restrict attention
to those FSAs with at most one bidirectional tape [11, Theorem 5.2][17,
Chapter 4.2]. Intuitively, all the unidirectional tapes are first made locally con-
sistent, after which Eq. (2) allows us to construct their contents at will, so that
we can concentrate on the sole bidirectional tape. This tape can in turn be
studied by using an extension of the well-known crossing sequence construction
[15, Chapter 2.6] for converting two-way finite automata into classical one-way
finite automata.

Figure 2. A crossing behavior of the 2-FSA in Figure 1.

This method is clearly impractical, however. Therefore this
paper presents in Section 2.1 a practical partial solution, which furthermore
applies even in some cases involving multiple bidirectional tapes. Section 2.2
then develops this solution further to yield yet more explicit limitation information.

Example 3 The 2-FSA A_rev in Figure 1 satisfies both {1} ⇝ {2} and {2} ⇝
{1} with the same limitation function W(m) = m, because the reversal of a
string is no longer than the string itself. This is moreover decidable, because
only tape 1 is bidirectional in A_rev. To see how limitation inference proceeds,
consider Figure 2, which exhibits the crossing behavior of A_rev when tape 1
contains the string ab. For example, determining {2} ⇝ {1} involves checking
that every character written onto the bidirectional output tape 1 is "paid for"
by reading something from the unidirectional input tape 2 as well, although this
payment may occur much later during the computation; here it occurs when
tape 1 is reread in reverse. This can in turn be seen from the automaton B
produced by the crossing sequence construction by noting that the loops of B
around the repeating crossing sequence indicated in Figure 2 consume tape 2
as well.

The 2-FSA A_rev is also considered to satisfy the trivial finiteness dependency
{1, 2} ⇝ ∅ by definition. On the other hand, A_rev does not satisfy ∅ ⇝ {1, 2},
because L(A_rev) is an infinite set.
2.1 An Algorithm for Determining Limitation
Our technique for solving the limitation problem given in Definition 2 is
based on the following two observations. Let A be the (k+l)-FSA and
φ = {1,…,k} ⇝ {k+1,…,k+l} the finiteness dependency in question.

Observation 1 If A accepts some input ⟨w_1,…,w_{k+l}⟩ with some computation
C during which some output tape head k+j never visits the corresponding right
end-marker ']', then A also accepts all the suffixed inputs ⟨w_1,…,w_{k+j} u,…,w_{k+l}⟩,
where u ∈ Σ*, with the same C. Hence, A cannot satisfy φ in this case.

Observation 2 If on the other hand every accepting computation of A visits
the right end-marker ']' on all output tapes, then the only way A can violate
φ is by looping while generating output onto some output tape but without
"consuming" any of the inputs at the same time - that is, by returning again
and again to read the same squares of the input tapes.
However, the (un)decidability results mentioned in the beginning of this section
indicate that reasoning about actual computations is infeasible. Thus we
reason instead about the structure of the transition graph G_A. Therefore, instead
of Observation 1, the algorithm in Figure 3 merely tests that there is
no path P from the start state s_A into a final state which never requires ']'
to appear on some output tape, whereas it would have sufficed to show that
no such P is ever traversed during any accepting computation. (B denotes the
Boolean type with values 0 as 'false' and 1 as 'true'.)
Similarly, the algorithm in Figure 4 enforces a more stringent condition than
Observation 2: every cycle L in G_A, during which some output tape is advanced
into direction +1, must also move some input tape i into a direction ±1
but not back into the opposite direction ∓1. Then this tape i acts as a clock,
which eventually terminates the potentially dangerous repetitive traversals
of L. Again, if A violates φ, then some cycle L' failing this condition must exist in
G_A, but the converse need not hold, because repetitions of L' need not necessarily
occur during any accepting computation of A. Figure 6 presents a loop
which seems at first glance to generate arbitrarily many copies of character 'a'
onto its output tape 2, because it seems to move back and forth on its input
tape 1. However, closer scrutiny reveals that this behavior is in fact impossible,
because the same square on tape 1 must first contain character 'a' in order to
get into state q, but later character 'b' in order to get back into state p.
Making the tapes locally consistent as in Definition 1 will catch some of these
impossible transition sequences, including all cases that arise due to the demands
on the contents of the unidirectional tapes. On the other hand, Figure 6
presents a bidirectional tape 1 which is already locally consistent but still im-
possible. If there is just one bidirectional tape altogether, then the crossing
sequence construction alluded to above in Example 3 can be seen as a method
for detecting these impossibilities and eliminating them from further consid-
eration. Unfortunately, we have no method of this kind for the general case.
The more stringent condition given above is enforced by repeatedly deleting
those transitions, which can justifiably be argued not to take part in any
loops of the kind mentioned in Observation 2. This technique is related to
analyzing the input-output behavior of logic programs [16,26], which analyze
the call graph of the given program component by component. However, our
technique remains simpler, because our automata are more restricted than
general logic programs.
function halting( G: transition graph G_A of a k-FSA A;
                  X: subset of {1,…,k} ): B;
1: b ← 1
2: for all i ∈ {1,…,k} \ X do
3:   H ← G without transitions that specify ']' for tape i
4:   b ← b ∧ (H contains no path from s_A into any state in F_A)
5: end for
6: return b

Figure 3. An algorithm for testing Observation 1.
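A direct Python transcription of this test reads as follows (identifiers ours, same tuple encoding of transitions as in the earlier sketch, tapes indexed from 0):

```python
def halting(start, finals, trans, k, X):
    """Figure 3: reject if some start-to-final path avoids ']' on an output tape."""
    for i in set(range(k)) - set(X):
        edges = [(p, q) for (p, chars, dirs, q) in trans if chars[i] != ']']
        seen, stack = {start}, [start]       # reachability in the reduced graph H
        while stack:
            v = stack.pop()
            for (p, q) in edges:
                if p == v and q not in seen:
                    seen.add(q)
                    stack.append(q)
        if seen & set(finals):               # such a path exists: b becomes 0
            return False
    return True
```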
function looping( G: subgraph of G_A for a k-FSA A;
                  X: subset of {1,…,k} ): B;
1:  b ← 1
2:  H_1,…,H_m ← the maximal strongly connected components of G
3:  delete from G all transitions between different components (a.)
4:  for all i ∈ 1,…,m do
5:    if some tape j ∈ X winds into a direction ±1 in H_i but not into ∓1 then
6:      delete from H_i all transitions that wind this tape j (b.)
7:      d ← looping(H_i, X)
8:    else
9:      d ← (no tape in {1,…,k} \ X winds into direction +1 in H_i)
10:   end if
11:   b ← b ∧ d
12: end for
13: return b

Figure 4. An algorithm for testing Observation 2.
function limited( A: k-FSA;
                  X: subset of {1,…,k} ): B;
1: return halting(G_A, X) ∧ looping(G_A, X)

Figure 5. An algorithm for determining limitation.
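The recursion of Figures 4 and 5 can be sketched likewise; the strongly connected components are computed by an ordinary Tarjan routine, and halting() is the transcription given after Figure 3. All identifiers are ours.

```python
def sccs(trans):
    """Maximal strongly connected components of the transition graph (Tarjan)."""
    graph = {}
    for (p, _, _, q) in trans:
        graph.setdefault(p, set()).add(q)
        graph.setdefault(q, set())
    index, low, stack, on, comps, count = {}, {}, [], set(), [], [0]
    def visit(v):
        index[v] = low[v] = count[0]; count[0] += 1
        stack.append(v); on.add(v)
        for w in graph[v]:
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = set()
            while True:
                w = stack.pop(); on.discard(w); comp.add(w)
                if w == v:
                    break
            comps.append(comp)
    for v in graph:
        if v not in index:
            visit(v)
    return comps

def looping(trans, X, k):
    """Figure 4 on the subgraph given by 'trans'; tapes indexed from 0."""
    b = True
    for comp in sccs(trans):
        inner = [t for t in trans if t[0] in comp and t[3] in comp]   # step 3
        clock = next((j for j in X                                    # steps 5-7:
                      if len({t[2][j] for t in inner} - {0}) == 1),   # winds one
                     None)                                            # way only
        if clock is not None:
            b &= looping([t for t in inner if t[2][clock] == 0], X, k)
        else:                                                         # step 9
            b &= not any(t[2][j] == +1 for t in inner
                         for j in set(range(k)) - set(X))
    return bool(b)

def limited(start, finals, trans, k, X):                              # Figure 5
    return halting(start, finals, trans, k, X) and looping(trans, X, k)
```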
Figure 6. A loop that cannot be traversed repeatedly.
More precisely, the edge deletions made by the algorithm in Figure 4 can be
justified as follows. Consider the first call made by the main algorithm in
Figure 5. Every loop mentioned in Observation 2 must clearly be contained in
some component H_i of the entire transition graph of the k-FSA A.
(a) A transition between two different strongly connected components surely cannot
belong to any loop of this kind. The deletions in step 3 are
therefore warranted.
(b) Any transition τ that winds the clock tape j selected for the current
component H_i cannot belong to any loop of this kind either, because τ
cannot be traversed indefinitely often. These traversals would namely wind
the input tape j eventually onto either end-marker, because tape j is not
wound into the opposite direction by any other transition τ' in H_i. The
deletions in step 6 are therefore warranted as well.
This reasoning can then be applied in the subsequent recursive calls on the
reduced components H_i as well, because we can then assume inductively that
the loops broken during the earlier calls could not have been the ones mentioned
in Observation 2.
Formalizing this reasoning shows that the algorithm in Figure 5 is indeed
correct as follows.
Theorem 4 Let A be a (p+r)-FSA, and let the algorithm in Figure 5 return 1
on A and {1,…,p}. Then A satisfies {1,…,p} ⇝ {p+1,…,p+r} with the limitation
function

    W(m_1,…,m_p) = g_A(m_1,…,m_p) − 1,  where  g_A(m_1,…,m_p) = |Q_A| · ∏_{i=1}^{p} (m_i + 2)

counts the combinations of a state with head positions on the tapes 1,…,p.
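As a small numeric illustration, the bound of Theorem 4 as stated above is a one-liner; the helper below is hypothetical and mirrors only that formula.

```python
from math import prod

def theorem4_bound(num_states, ms):
    """W(m_1,...,m_p) = |Q_A| * prod(m_i + 2) - 1 from Theorem 4."""
    return num_states * prod(m + 2 for m in ms) - 1
```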
PROOF. Let us assume that C is an arbitrary computation of A on some input
z = ⟨u_1,…,u_p, w_1,…,w_r⟩. We begin by proving the following
two claims about this C, which correspond to Observations 1 and 2.

Claim 5 If C is accepting, then for every tape p+1 ≤ p+j ≤ p+r the
computation C takes some transition which requires ']' on tape p+j.

Let otherwise p+h be a tape which violates this Claim 5. C traces a path through
G_A from s_A into some state in F_A along which no transition requires ']' on
tape p+h. Then step 4 of the algorithm in Figure 3 sets
b ← 0 when processing i = p+h, which violates our assumption that the algorithm in
Figure 5 returns 1, thus proving this Claim 5.
Claim 6 No tape p+h, 1 ≤ h ≤ r, moves into direction +1 more than
l = g_A(|u_1|,…,|u_p|) times during the computation C.

Assume to the contrary that some tape p+h violates this Claim 6. Post a fence between
two adjacent configurations C_g and C_{g+1} in C whenever tape p+h moves
into direction +1. By our contrary assumption, at least l + 1 of these fences
are posted. Consider on the other hand two configurations C_x and C_y of C
to have the same color if and only if they share the same state and the same
head positions for tapes 1,…,p. At most l of these colors are available,
because each head position n_i ranges over the m_i + 2 squares of tape i,
where m_i = |u_i|. Therefore C
must contain two configurations C_x and C_y which have the same color but are
separated by an intervening fence. Consider then the sequence of transitions
which transform C_x into C_y as a path L within G_A. This L forms a closed
cycle, and the tape heads 1,…,p are on the same squares both before and
after traversing L, because C_x and C_y share a common color. Let us then see
which of the steps 3 or 6 of the algorithm in Figure 4 will first delete some
transition that belongs to L. It cannot be step 3, because all of L belongs
initially to the same maximal strongly connected component. But it cannot
be step 6 either, because if L ever moves a tape j ∈ X into some direction ±1,
it must also move tape j into the opposite direction ∓1 as well, in order to
return its head onto the same square both before and after L. Hence L persists
untouched to the very end of the recursion, and on step 9 the presence
of the transition of L that crosses the fence between C_x and C_y yields d = 0,
which subsequently violates our assumption that the algorithm in Figure 5
returns 1, thus proving this Claim 6.
Claims 5 and 6 are combined into a proof of the theorem as follows. Assume
that z ∈ L(A); that is, A has some accepting computation C on input z. It
suffices to show that |w_h| ≤ l − 1 for every 1 ≤ h ≤ r. Tape head p+h
must cross every border between two adjacent tape squares from left to right,
because otherwise C would not meet Claim 5. Claim 6 states in turn that C
performs at most l crossings of this kind. This means that tape p+h
contains at most l + 1 squares, of which the first and the last are reserved for
the end-markers, leaving at most l − 1 squares for the characters of the input
string w_h. □
Example 7 Consider the 2-FSA A_rev in Figure 1. The algorithm in Figure 5
can detect that it satisfies {1} ⇝ {2} as follows. The algorithm in Figure 3
returns 1, because every path into the final state IV must contain the transition
III -[,] / 0,0→ IV. Evaluating the algorithm in Figure 4 proceeds in turn as
follows. First, all transitions from one state into another are deleted in step 3,
leaving only the loops around states II and III. This is depicted in Figure 7,
where the components themselves are dotted, and the transitions between them
(and thus deleted in step 3) are dashed. These loops are in turn deleted in
step 6 when processing the corresponding components, and therefore this function
eventually returns 1 as well. However, Theorem 4 provides a rather imprecise
limitation function compared to the one given in Example 3.

Figure 7. The division of the 2-FSA in Figure 1 into components.
On the other hand, the algorithm in Figure 5 fails to detect {2} ⇝ {1}, which
was detected in Example 3: looping in state II advances tape 1 without moving
tape 2, and therefore seems dangerous to the algorithm in Figure 4. Intuitively,
A_rev first guesses nondeterministically some string, and only later verifies its
guess against the input. In Example 3, crossing sequences were examined to
see that this later checking in state III indeed reduces the acceptable outputs
to only finitely many (here just one).
Essentially the same limitation function as in Theorem 4 suffices whenever all
of the output tapes p+q+1,…,p+q+r to be limited are unidirectional,
even if the dependency {1,…,p+q} ⇝ {p+q+1,…,p+q+r} cannot be verified by the
algorithm in Figure 5 [9, Theorem 2.1]. This is natural, because the algorithm
in Figure 5 ignored the effects of moving any output tape p+q+1,…,p+q+r
into direction -1.

Theorem 8 Let p+1,…,p+q be all the bidirectional tapes in the (p+q+r)-
FSA A, and let A satisfy {1,…,p+q} ⇝ {p+q+1,…,p+q+r}. Then

    W(m_1,…,m_{p+q}) = (|Σ| + 2)^r · g_A(m_1,…,m_{p+q})

is a corresponding limitation function, where g_A is as in Theorem 4, now taken
over all the p+q input tapes.
PROOF. Consider the proof of Theorem 4, and assume further in Claim 6
that all the output tapes p+q+1,…,p+q+r are made locally consistent as in
Definition 1; this new assumption introduces the factor (|Σ|+2)^r into W. With
this modification, the original fencing-coloring construction shows that if some
accepting computation C on input z advances some output tape p+q+h more
than W(|u_1|,…,|u_{p+q}|) times, then the path of transitions
taken by this C can be partitioned into three sub-paths K, L and M, where L begins
in a configuration C_x and ends in a configuration C_y which share the same color,
and L contains a transition τ that crosses some fence between C_x and C_y.
However, now A must also accept all the pumped inputs

    ⟨u_1,…,u_{p+q}, w_(1,K) w_(1,L)^t w_(1,M), …, w_(r,K) w_(r,L)^t w_(r,M)⟩,

where t ∈ N, and each w_(h,J) denotes the string of characters in those squares
of output tape p+q+h on which the head lands during the sub-path J (']' excluded).
This is in effect an application of Eq. (2) to the output tapes p+q+1,…,p+q+r.
The presence of τ within L shows that w_(h,L) ≠ ε, and hence φ fails by
Observation 2, thereby proving this modified Claim 6.
Claim 5 continues to hold, as reasoned in Observation 1, and the theorem
follows again as before. □
Turning now to assessing when the algorithm in Figure 5 does detect finiteness
dependencies, we see that it is successful at least when all tapes are unidirectional.

Theorem 9 Let A be a non-redundant (k+l)-FSA with all tapes unidirectional
and locally consistent, and let the algorithm in Figure 5 return 0 on A
and {1,…,k}. Then A does not satisfy {1,…,k} ⇝ {k+1,…,k+l}.

PROOF. The non-redundancy of A and the unidirectionality and local consistency
of all its tapes imply by Eq. (2) that for every path P in G_A we
can always find an accepting computation C on some input z traversing P.
Letting P then be any subgraph of G_A which caused the algorithm in Figure
5 to return 0 yields some C whose existence violates the dependency along the
lines of Theorem 8. □
2.2 Two Variants of the Limitation Algorithm
This section explores two possible directions into which the algorithm given
in Section 2.1 could be developed further. They both alter the non-recursive
step 9 of the algorithm in Figure 4, which tests some strongly connected
component H_i of the transition graph G_A of A. Moreover, this H_i is known
not to be shrinkable further by the algorithm.
The first direction enlarges the set of FSAs A and finiteness dependencies φ
that can be verified to hold by relaxing this test as follows. Suppose H_i contained
some output tape j ∈ {1,…,k} \ X that is wound into the reverse
direction -1 but not into the forward direction +1. Then this tape j can again
be used as a clock for shrinking H_i further, similarly to steps 5-7, because the
head on tape j cannot travel backwards forever, but must stop at the latest once
the left end-marker '[' is reached. Algorithmically this direction leads to replacing
steps 5-10 of the algorithm in Figure 4 with the steps given in Figure 8.

if some tape j ∈ X winds into a direction ±1 in H_i but not into ∓1 then
  delete from H_i all transitions that wind this tape j (b.);
  d ← looping(H_i, X)
else if some tape j ∈ {1,…,k} \ X winds into direction -1 in H_i but not
into direction +1 then
  delete from H_i all transitions that wind this tape j;
  d ← looping(H_i, X)
else
  d ← (no tape in {1,…,k} \ X winds into direction +1 in H_i)
end if

Figure 8. The enlarging additions to the algorithm in Figure 4.
The other direction into which the algorithm given in Section 2.1 can be developed
is to constrain further the set of FSAs A and finiteness dependencies φ
that can be verified to hold, by restricting the test performed on step 9 of the
algorithm in Figure 4 to require that the component H_i being tested must not
have any transitions left. Call the correspondingly modified limitation algorithm
of Section 2.1 fastidious; it thus requires that all the transitions of A
are deleted in order to verify φ.
Example 10 The calculations in Example 7 show that the 2-FSA A_rev in
Figure 1 does actually satisfy the finiteness dependency {1} ⇝ {2} even
fastidiously.
One advantage of fastidious verification is more efficient simulation of A, as
will be explained in Section 3.1. The rest of this section explains another
advantage, namely that it enables a straightforward construction of better explicit
limitation functions than the generic one provided by Theorem 4. This construction
is similar in spirit to van Gelder's analysis of logic programs with
systems of equations [26], except that recurrences are used instead.

Let therefore A be a (k+l)-FSA satisfying the finiteness dependency
{1,…,k} ⇝ {k+1,…,k+l}. A suitable limitation function
would evidently be

    W(m_1,…,m_k) = max_{1 ≤ j ≤ l} f^{s_A}_j(0,…,0, 0) − 1,    (3)

if each auxiliary function f^p_j(n_1,…,n_k, h) gives an upper
limit to the character position on output tape k+j where the right end-marker
']' is encountered. Here A is assumed to be in state p ∈ Q_A, each of its input
tapes i on character n_i of an unspecified input string
with length m_i ∈ N, and its designated output tape k+j on character h of
some unspecified output string.
These auxiliary functions can in turn be obtained by labeling the transition
graph G_A of A with suitable expressions as follows.
. The expression for a state p ∈ Q_A is the maximum of the expressions for
those transitions τ that leave from p:

    f^p_j(n_1,…,n_k, h) = max { f^τ_j(n_1,…,n_k, h) : τ leaves from p }.    (4)

Graphically speaking, one can consider each node p of graph G_A to become
labeled with the operator 'max' applied to the arrows that exit from p.
. The expression for a transition τ = p -c_1,…,c_{k+l} / d_1,…,d_k,e_1,…,e_l→ q is the
expression for the state that τ enters, adjusted with the effects of τ on the
tapes in question:

    f^τ_j(n_1,…,n_k, h) = f^q_j(n_1+d_1,…,n_k+d_k, h+e_j) if every n_i is consistent
    with its restriction c_i, and f^τ_j(n_1,…,n_k, h) = 1 otherwise;    (5)

moreover f^τ_j(n_1,…,n_k, h) = h when τ requires ']' on the designated output
tape k+j. Here the "otherwise" branch of Eq. (5) denotes the case when the transition
cannot apply by virtue of some input tape head position n_i violating its
corresponding restriction c_i (that is, n_i ≠ 0 for c_i = '[', n_i ≠ m_i + 1 for
c_i = ']', or n_i ∉ {1,…,m_i} for c_i ∈ Σ). Then the value 1 is warranted, because it is
the earliest possible position in which ']' can possibly appear.
Graphically speaking, this shows how to construct the expressions for the
arrows maximized by node p of graph G_A in Eq. (4) above.
Fastidiousness guarantees that this labeling does indeed yield a function: otherwise
the expression for some f^p_j(n_1,…,n_k, h) refers (indirectly)
back to itself with no change to its arguments n_1,…,n_k. This, however,
implies a cycle C in G_A from p back to itself for which none of the input
tapes 1,…,k moves in exactly one direction. This in turn means that C
could not be deleted by the fastidious limitation algorithm after all.
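Since fastidiousness rules out any cycle that leaves the arguments unchanged, the recurrences of Eqs. (4) and (5) can even be evaluated numerically by memoization for concrete input lengths. The following Python sketch is ours: it reuses the tuple encoding of transitions from the earlier sketches, treats the character restrictions purely positionally as in the "otherwise" branch of Eq. (5), and is guaranteed to terminate only when the fastidious condition actually holds.

```python
from functools import lru_cache

def label_evaluator(trans, finals, m, j, k):
    """f(p, ns, h): upper limit for the position of ']' on output tape k+j,
    per Eqs. (4) and (5); m lists the input lengths m_1,...,m_k."""
    def fits(c, n, mi):               # purely positional character restriction
        if c == '[':
            return n == 0
        if c == ']':
            return n == mi + 1
        return 1 <= n <= mi           # an ordinary character of the alphabet

    @lru_cache(maxsize=None)
    def f(p, ns, h):
        if p in finals:               # no transitions leave a final state
            return h
        best = 1                      # the 'otherwise' value of Eq. (5)
        for (p1, chars, dirs, q) in trans:
            if p1 != p or not all(fits(c, n, mi)
                                  for c, n, mi in zip(chars, ns, m)):
                continue
            best = max(best,          # Eq. (4): maximize over leaving transitions
                       f(q, tuple(n + d for n, d in zip(ns, dirs[:k])),
                         h + dirs[k + j]))
        return best

    return f
```

Then W(m_1,…,m_k) of Eq. (3) would be obtained, under these assumptions, as the maximum over j of f(s_A, (0,…,0), 0) − 1.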
Accordingly, if a fastidious version of the algorithm in Figure 8 is used instead,
the labeling must then include all of h_1,…,h_l instead of just h, because then
also the output tapes k+1,…,k+l may have been used in deleting transitions
from the (k+l)-FSA A in question, and thus these output tape head positions
might be the ones that cannot repeat as arguments for some expression.
On the other hand, dropping the fastidiousness restriction does not altogether
invalidate this approach either; it just makes it more difficult to provide an
explicit counterpart for Eq. (4). It is namely now possible that the expression
for some f^p_j(n_1,…,n_k, h) does indeed refer back to itself without
changing its arguments n_1,…,n_k via the expressions f^τ_j of Eq. (5) for the
transitions τ that still remain in the component containing state p even after
the successful execution of the algorithm in Figure 4. However, these expressions
are also guaranteed not to increase h, for otherwise the execution of
the algorithm in Figure 4 would have been unsuccessful. Thus a function for f^p_j
does still exist, even though giving an explicit expression for it is difficult.
A similar labeling technique suffices even for the crossing sequence construction
mentioned in Example 3 instead of a fastidious algorithm from Section 2.1,
provided that the labeling is performed relative to the resulting crossing sequence
automaton instead of the original [17, Chapter 5.2].
Example 11 The labeling to generate Eq. (3) for Example 10 proceeds as
follows. Applying Eq. (5) to the transitions leaving state III leads to

    f^τ_1(n_1, h) = h for the transition τ = III -[,] / 0,0→ IV
    (applicable only when n_1 = 0, and 1 otherwise), and
    f^τ_1(n_1, h) = f^{III}_1(n_1 − 1, h + 1) for the two looping transitions
    τ = III -a,a / -1,+1→ III and τ = III -b,b / -1,+1→ III
    (applicable only when 1 ≤ n_1 ≤ m_1, and 1 otherwise),

which appear in Eq. (4) for state III, namely

    f^{III}_1(n_1, h) = max { h, f^{III}_1(n_1 − 1, h + 1) } = n_1 + h,

where the last simplification solves the recurrence found above for f^{III}_1.
Continuing similarly for state II leads eventually to the tight limitation function
W(m) = m mentioned in Example 3.

Figure 9. A bidirectional loop that eventually ends.
A different possibility for improving on the limitation algorithm given in Section
2.1 would be to take into account not only the directions but also the
total amount of tape movement. For instance, the current algorithms will not
break a loop like the one in Figure 9, because the input tape 1 in question moves in
both directions, even though the overall net effect of these movements is +1,
or to move one square forward, and therefore the loop cannot execute indefinitely.
Calculating such net effects has recently been studied in [18], but the
resulting algorithms differ significantly from the approach presented here.
3 Evaluation of the Limited Answers
After inferring that the given (k+l)-FSA A does indeed satisfy the given
finiteness dependency φ = {1,…,k} ⇝ {k+1,…,k+l}, we want to
generate for given input strings u_1,…,u_k the (finite) set of outputs

    V = { ⟨v_1,…,v_l⟩ : ⟨u_1,…,u_k, v_1,…,v_l⟩ ∈ L(A) },

or to solve Problem 2. This problem is known to be difficult in
the general case: let B be a 2-FSA with a unidirectional input tape 1 and
a bidirectional output tape 2, and ask if a given input u can produce any
output v. This problem is equivalent to asking whether B, considered as a checking
stack automaton, accepts u [20, Theorem 5.1], which is known to be either
PSPACE- or NP-complete, depending on whether B is a part of the problem instance or
not [6, Problem AL5]. However, the additional information φ provides certain
optimization possibilities.
A straightforward way to obtain an evaluation algorithm is to convert the
output tapes from read-only into write-once, and to perform these writing operations
concurrently with the simulation of the nondeterministic control. Figure
10 shows the resulting algorithm, where the simulations of all the possible
computations are performed in a depth-first order using a stack S. The algorithm
maintains for each 1 ≤ j ≤ l an extensible character array W_j[0 … L_j]
which holds the contents of the tape squares that the output head k+j has
already examined during the computation C of A currently being simulated.

procedure simulate( A: (k+l)-FSA; u_1,…,u_k: input strings in Σ* );
1:  for all 1 ≤ j ≤ l do
2:    W_j[0] ← '['; L_j ← 0
3:  end for
4:  initialize stack S to contain ⟨0; L_1,…,L_l⟩
5:  while S is nonempty do
6:    let ⟨t; L_1,…,L_l⟩ be the top element in S, and let q be the state entered
      by the transition τ_t that this element refers to
7:    if q ∈ F_A then
8:      output(v_1,…,v_l), where each v_j = W_j[1 … L_j]
9:      pop off the top element from S, and increment t in the new top element
10:   else if t > |T_A| then
11:     (every transition has already been tried in this configuration)
12:     retract the output squares that were fixed when this element was pushed
13:     pop off the top element from S
14:     increment t in the new top element of S
15:   else if τ_t = p -c_1,…,c_{k+l} / d_1,…,d_k,e_1,…,e_l→ q' leaves the current
      state and is consistent with the input tapes and the write-once arrays
      W_1,…,W_l, on which it may fix new squares, then
16:     for all 1 ≤ j ≤ l do
17:       update W_j and L_j with the square possibly fixed on output tape k+j
18:     end for
19:     advance the simulated configuration by τ_t
20:     push ⟨0; L_1,…,L_l⟩ onto S
21:   else
22:     increment t in the top element of S
23:   end if
24: end while

Figure 10. An algorithm for using acceptors as transducers.
Figure 11 shows the indices during one simulation of the 2-FSA A_rev from
Figure 1, where the input tape 1 contains the string ab whose reversal is being
generated onto the output tape 2. In Figure 10, T_A is enumerated as
τ_1,…,τ_|T_A|, and τ_0 is a new starting pseudo-transition into s_A with
directions 0,…,0. We also assume as in Section 1.1 that no final state
in F_A has any outgoing transitions.
Note that φ alone does not guarantee that the algorithm in Figure 10 halts; it
just guarantees that only finitely many different outputs are ever generated.
Consider namely the situation in Figure 12, where the 2-FSA A_rev in Figure 1
is being used as a transducer in the opposite direction to Figure 11: input is
read from tape 2 and written onto tape 1. As explained in Example 7, A_rev
is now guessing nondeterministically a possible output for later verification
against the input. But how long guesses should A_rev be allowed to make?

Figure 11. Simulating the 2-FSA in Figure 1 as a transducer.

Figure 12. Generating an indefinitely long output with the 2-FSA in Figure 1.
This question can be answered by adding the extra conditions

    L_j ≤ W(|u_1|,…,|u_k|) + 1    (6)

for each 1 ≤ j ≤ l into branch 15 of the algorithm in Figure 10, where W is
a limitation function corresponding to φ. This addition is warranted, because
if during the currently simulated computation C some L_j violates Eq. (6), then
more than W(|u_1|,…,|u_k|) characters from Σ have been output onto tape k+j.
In this case C must be eventually rejecting and can hence be discarded at
once without further ado.
Now that the L_j have been bounded by W, the stack S will always contain only
finitely many different configurations C of the transducer being simulated on
input ⟨u_1,…,u_k⟩. (These transducer configurations C can be defined in a straightforward
way by extending the acceptor configurations defined in Section 1.1 with
write-once output tapes.) Although stack S represents these configurations C
only implicitly, they can be reconstructed as in branch 10 of the algorithm in
Figure 10. However, some of these configurations C can still repeat, because
the transducer being simulated can also loop on the already known parts of its
tapes without generating new output. Fortunately this looping can be detected
and eliminated simply by testing in branch 15 of the algorithm in Figure 10
that the new configuration C_new being pushed onto stack S does not yet occur
in stack S. This is a standard way to avoid repetition during a depth-first
search [23, Chapter 3.6]. We have also experimented with comparing C_new
against all the configurations C encountered so far in the entire search conducted
by the algorithm in Figure 10 on the current input ⟨u_1,…,u_k⟩, but this proved
to be extremely inefficient in practice.
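The halting variant just described can be rendered compactly in Python. The sketch below is ours, not a transcription of Figure 10: it realizes each write-once output tape as a pair (content, closed), reuses bracketed() from the earlier sketch, enforces the Eq. (6) bound, and prunes any configuration that repeats along the current search path. It assumes, as in Section 1.1, that final states have no outgoing transitions.

```python
def transduce(trans, start, finals, us, l, bound):
    """Enumerate the outputs <v_1,...,v_l> accepted together with inputs us."""
    k, results = len(us), set()

    def probe(tape, h, c):
        """Read c at square h of a write-once tape, fixing the square if open;
        returns the possibly extended tape, or None if c cannot be there."""
        s, closed = tape
        if h == 0:
            return tape if c == '[' else None
        if h <= len(s):
            return tape if s[h - 1] == c else None
        if h == len(s) + 1:
            if closed:
                return tape if c == ']' else None
            if c == ']':
                return (s, True)
            if c != '[' and len(s) < bound:          # the Eq. (6) length bound
                return (s + c, False)
        return None

    def run(p, ns, hs, ws, path):
        conf = (p, ns, hs, ws)
        if conf in path:                             # repeating configuration
            return
        if p in finals:
            results.add(tuple(s for s, _ in ws))
            return
        for (p1, chars, dirs, q) in trans:
            if p1 != p or not all(bracketed(u, n) == c
                                  for u, n, c in zip(us, ns, chars)):
                continue
            new_ws = []
            for j in range(l):
                t = probe(ws[j], hs[j], chars[k + j])
                if t is None:
                    break
                new_ws.append(t)
            else:
                run(q, tuple(n + d for n, d in zip(ns, dirs[:k])),
                       tuple(h + d for h, d in zip(hs, dirs[k:])),
                       tuple(new_ws), path | {conf})

    run(start, (0,) * k, (0,) * l, (('', False),) * l, frozenset())
    return results
```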
Now we have solved Problem 2 by developing a halting variant of the evaluation
algorithm in Figure 10. However, this solution suffers from two drawbacks.

Drawback 1 The value of a limitation function is needed in Eq. (6) to estimate
- and hopefully tightly - the depth at which ultimately rejecting output-generating
computations can be pruned.

This could be termed the "compile-time" drawback: the limitation function
must be formed when the acceptor is proposed as a possible transducer, while
its value is required before each invocation of the simulation algorithm in
Figure 10.
Drawback 2 The whole stack S must be scanned against repeating configurations
C when pushing each new configuration C_new in the algorithm in
Figure 10.

This could in turn be termed the "run-time" drawback, because it adds loop
checking overhead to the execution of the simulation algorithm in Figure 10.
Fortunately both of these drawbacks can be alleviated by considering how φ
was inferred to hold, as discussed in the remainder of this section.
3.1 When the Limitation Algorithm Succeeds
Consider first the case where φ was inferred to hold by having the algorithm
in Figure 5 return 1 on A and φ. Claim 6 in the proof of Theorem 4 shows
that every computation C of A is "self-limiting" in the sense that no L_j can
grow indefinitely. Thus Eq. (6) is not needed after all, thereby alleviating
Drawback 1.
This Claim 6 alleviates also Drawback 2. Two occurrences C_x and C_y of the same
configuration during C have the same color by definition. The proof of the
claim shows that C_x and C_y can only arise by traversing a closed loop L
which is not deleted during the algorithm in Figure 4. We therefore modify
this algorithm to mark in A the transitions it considers deleted. Then the
algorithm in Figure 10 can stop scanning its stack S as soon as the most
recently marked transition is seen. This holds even when the marking has been
performed by the enlarged algorithm in Figure 8.
This reasoning shows another benefit of the fastidious variant of the algorithm
in Figure 4: then every transition gets marked, and therefore scanning the
stack S is no longer required at all. That is, the algorithm in Figure 10 suffices
unmodified in this case, and all run-time loop checking overhead has been
eliminated.
Example 12 Apply this modified marking algorithm in Figure 4 to the
2-FSA A_rev in Figure 1 and X = {1}: every transition gets marked, because
A_rev satisfies {1} ⇝ {2} even fastidiously by Example 10. Then the algorithm
in Figure 10 and A_rev can generate the reversal of any given string in linear
time with respect to its length. In other words, choosing this evaluation
strategy leads to an optimal way to perform string reversals.
Note finally that this marking technique can also speed up the simulation of
those m-FSAs B which are still used as acceptors and not as transducers: just
compute the marking given by the modified algorithm in Figure 4 with B and
{1,…,m} (which yields 1), and use the resulting stack scanning optimization
strategy during the simulation of B on any given input ⟨u_1,…,u_m⟩. Again
the marks identify transitions under which it is not necessary to look when
scanning the stack for repeating configurations during the simulation.
3.2 When All the Outputs are Unidirectional
Another strategy related to the one developed in Section 3.1 works when all
the output tapes of A are unidirectional and the finiteness dependency φ still
holds, but this fact can no longer be inferred by the algorithm developed in
Section 2.1.
In this case the proof of Theorem 8 shows that a halting but still correct variant
of the simulation algorithm in Figure 10 can be obtained by adding into its
branch 15 the extra condition that configurations C_x and C_y of the same color
- in the sense of that proof - may not repeat within any computation C: if the
path L of transitions from C_x into C_y advances any of the unidirectional output
tapes, then this C must be rejecting, because it would
violate φ via Observation 2, while otherwise taking L during C was unnecessary,
because then C_x and C_y are the same configuration. This reasoning provides
the loop checking discipline which guarantees the halting of the simulation
algorithm in this case.
Furthermore, this simulation is also amenable to the stack scanning optimization
technique developed in Section 3.1: a variant of the algorithm in Figure 4
which merely attempts to mark every transition it possibly can - instead of
trying to test for φ and failing - identifies some of those cycles L that can cause
some color to repeat. These marks can then again be used for limiting stack
scanning during simulation.
Conclusions
We studied the problem of using a given nondeterministic two-way multi-tape
acceptor as a transducer by supplying inputs onto only some of its tapes, and
asking it to generate the rest. We developed a family of algorithms for ensuring
that this transduction does always yield finite answers, and another family of
algorithms for actually computing these answers when they are guaranteed to
exist. In addition, these two families of algorithms provided a way to execute
and optimize the simulation of nondeterministic two-way multi-tape acceptors
by restricting the amount of work that must be performed during run-time
loop checking.
These algorithms have been implemented in the prototype string database
management system being developed at the Department of Computer Science
of the University of Helsinki [8,12,13].
--R
Foundations of Databases.
Datalog and transducers.
Foundation for Object/Relational Databases - The Third Manifesto
PROXIMAL: a database system for the e
Computers and Intractability: A Guide to the Theory of NP-Completeness
Regular sequence operations and their use in database queries.
AQL: An alignment based query language for querying string databases.
Safety, translation and evaluation of Alignment Calculus.
Reasoning about strings in databases.
Reasoning about strings in databases.
A declarative programming system for manipulating strings.
Implementing a declarative string query language with string restructuring.
Introduction to Automata Theory, Languages, and Computation.
A framework for testing safety and effective computability.
Finding paths with the right cost.
Absolutely parallel grammars and two-way finite-state transducers
Safety of recursive Horn clauses with infinite relations (extended abstract).
Supporting lists in a data model (a timely approach).
Artificial Intelligence: a Modern Approach.
SEQ: A model for sequence databases.
The AQUA approach to querying lists and trees in object-oriented databases
Deriving constraints among argument sizes in logic programs.
On the valuedness of finite transducers.
On the lengths of values in a finite transducer.
Temporal logic can be more expressive.
transducers;finiteness dependencies;multi-tape automata
504552 | A new lower bound for the list update problem in the partial cost model. | The optimal competitive ratio for a randomized online list update algorithm is known to be at least 1.5 and at most 1.6, but the remaining gap is not yet closed. We present a new lower bound of 1.50084 for the partial cost model. The construction is based on game trees with incomplete information, which seem to be generally useful for the competitive analysis of online algorithms. 2001 Elsevier Science B.V. | Introduction
The list update problem is a classical online problem in the area of self-organizing data
structures [4]. Requests to items in an unsorted linear list must be served while maintaining
the list so that access costs remain small. We assume the partial cost model where
accessing the i-th item in the list incurs a cost of i − 1 units. This is simpler to analyze than
the original full cost model [14] where that cost is i. After an item has been requested, it
may be moved free of charge closer to the front of the list. This is called a free exchange.
Any other exchange of two consecutive items in the list incurs cost one and is called
paid exchange.
An online algorithm must serve the sequence σ of requests one item at a time, without
knowledge of future requests. An optimum offline algorithm knows the entire sequence σ
in advance and can serve it with minimum cost OFF(σ). If the online algorithm serves σ
with cost ON(σ), then it is called c-competitive if for a suitable constant b,

    ON(σ) ≤ c · OFF(σ) + b.    (1)
for all request sequences oe. The competitive ratio c in this inequality is the standard
yardstick for measuring the performance of the online algorithm. The well-known move-
to-front rule MTF , for example, which moves each item to the front of the list after it has
been requested, is 2-competitive [14, 15]. This is also the best possible competitiveness
for any deterministic online algorithm for the list update problem [10].
1 Institute for Theoretical Computer Science, ETH Zürich, 8092 Zürich, Switzerland. Email:
ambuehl@inf.ethz.ch, gaertner@inf.ethz.ch
2 Mathematics Department, London School of Economics, London WC2A 2AE, Great Britain. Email:
stengel@maths.lse.ac.uk. Support by a Heisenberg grant from the Deutsche Forschungsgemeinschaft and
the hospitality of the ETH Zürich for this research are gratefully acknowledged.
Randomized algorithms can perform better on average [9]. Such an algorithm is
called c-competitive if

    E[ON(σ)] ≤ c · OFF(σ) + b,    (2)

where the expectation is taken over the randomized choices of the online algorithm. Furthermore,
we call the algorithm strictly c-competitive if (2) holds with b = 0.
The best randomized list update algorithm known to date is 1.6-competitive. This
algorithm COMB [2] serves the request sequence with probability 4/5 using the algorithm
BIT [14], which alternately moves a requested item to the front or leaves it in place. With
probability 1/5, COMB treats the request sequence using a deterministic TIMESTAMP
algorithm [1], where a requested item x is moved in front of the first item in the list that
has been requested at most once since the last request to x.
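As an aside, BIT is short enough to state in full; the following Python rendering (identifiers ours) serves a request sequence in the partial cost model.

```python
import random

def serve_with_bit(items, requests):
    """Serve 'requests' on the list 'items' with BIT; return the total cost."""
    bit = {x: random.randint(0, 1) for x in items}   # independent fair coin flips
    lst, cost = list(items), 0
    for x in requests:
        i = lst.index(x)
        cost += i                     # partial cost: items in front of x
        bit[x] ^= 1                   # flip the bit of x on every access
        if bit[x] == 1:               # ... and move to front on alternate accesses
            lst.insert(0, lst.pop(i))
    return cost
```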
Randomization is useful only against the oblivious adversary [5] that generates request
sequences without observing the randomized choices of the online algorithm. If the
adversary can observe those choices, it can generate requests as if the algorithm was deter-
ministic, which is then at best 2-competitive. We therefore consider only the interesting
situation of the oblivious adversary. Lower bounds for the competitive ratio can be proved
using Yao's theorem [18]: If there is a probability distribution on request sequences so
that the resulting expected competitive ratio for any deterministic online algorithm is d or
higher, then every deterministic or randomized online algorithm has competitive ratio d
or higher [8]. Teia [16] described a simple distribution on request sequences that, adapted
to the partial cost model, shows a lower bound of 1.5. The optimal competitive ratio
for the list update problem is therefore between 1.5 and 1.6, but the true value is as yet
unknown.
For lists with up to four items, it is possible to construct an online list update
algorithm that is 1.5-competitive [3] and therefore optimal. In this paper, we show a
lower bound that is greater than 1.5 when the list has at least five items. We will prove
this bound for the standard assumption that algorithms may use paid exchanges. One
can also prove a lower bound above 1.5 for the variant of the list update problem where
only free exchanges are allowed. For that purpose, we have to modify and extend our
method in certain ways, as mentioned at the end of this paper.
Our construction uses a game tree where alternately the adversary generates a request
and the online algorithm serves it. The adversary is not informed about the action
of the online algorithm, so the game tree has imperfect information [12]. We consider
a finite tree where after some requests, the ratio of online versus optimal offline cost is
the payoff to the adversary. This defines a zero-sum game, which we solve by linear pro-
gramming. For a game tree that is sufficiently deep, and restricted to a suitable subset
of requests so that the tree is not too large in order to stay solvable, this game has a
value of more than 1.50084. This shows that any strictly c-competitive online algorithm
fulfills c ≥ 1.50084. In order to derive from this a new lower bound for the competitive
ratio c according to (1) with a nonzero constant b, one has to generate arbitrarily long
request sequences. This can be achieved by composing the game trees repetitively, as we
will show.
A drawback is our assumption of the partial instead of the full cost model. In the
latter model, where a request to the i-th item in the list incurs cost i, the known lower
bound is 1.5 − 5/(n+5) for a list with n items. This result by Teia [16] yields a lower bound
for the competitive ratio much below 1.5 when the list is short. In fact, the algorithm
COMB [2] is 1.5-competitive when n < 9. To prove a lower bound above 1.5 for the full
cost model we would have to extend our construction to longer lists. Unfortunately, a
straightforward extension cannot compensate for the reduction of the competitive ratio
by 5/(n+5) (or any term proportional to 1/n) when considering the full instead of the
partial cost model, so this case remains open. Nevertheless, we think a result for the
partial cost model is still interesting since that model is more canonical when one looks
at the analysis, and it is still close to the original problem formulation.
2. Pairwise analysis and partial orders
The analysis of a list update algorithm is greatly simplified by observing separately the
relative movement of any pair of items in the list. Let oe be a sequence of requests.
Consider any deterministic algorithm A that processes oe. For any two items x and y in
the list, let A xy (oe) be the number of times where y is requested and is behind x in the
list, or vice versa. Then it is easy to show [6, 9, 2] that
A xy (oe) ;
where L is the set of items in the list. In that way, A xy (oe) represents the cost of the online
algorithm projected to the unordered pair fx; yg of items.
Let σ_xy be the request sequence σ with all items other than x or y deleted. Many
list update algorithms, like MTF, BIT, and TIMESTAMP, are projective in the sense
that at any time the relative order of two items x and y in the list depends only on the
projected request sequence σ_xy and the initial order of x and y, which we denote by [xy]
if x precedes y. (In general, we list items between square brackets to denote their current
order in the list maintained by the algorithm.)
For the optimal offline algorithm OFF, the projected cost OFF_xy(σ) is clearly at
least as high as the cost of serving σ_xy optimally on the two-element list consisting of x
and y. The latter cost is easy to determine since, for example, it is always optimal to
move an item to the front at the first of two or more successive requests. In fact, the
item must be moved to the front at the first of three or more successive requests. On the
other hand, it is usually not optimal to move an item that is requested only once. Hence,
for any two items x and y where x precedes y, an online algorithm serving a request to
y can either leave y behind x or move y in front of x, which, either way, is a "mistake"
depending on whether y will be requested again before x or not.
Based on this observation, Teia [16] has constructed a lower bound of 1.5 for the
competitive ratio. The requests are generated in runs which are repeated indefinitely. At
the start of each run, the list currently maintained by the offline algorithm has a particular
order; this list is traversed from front to back, requesting each item with equal probability
either once or three times. If an item is requested three times, then
it is moved to the front at the first request, otherwise left in place, which is an optimal
offline treatment. This results in a new offline list, which determines the next run.
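One run of this distribution is straightforward to generate; the sketch below (identifiers ours) also maintains the offline list under the optimal treatment just described.

```python
import random

def teia_run(offline_list):
    """Return the requests of one run and the resulting offline list."""
    requests, new_list = [], list(offline_list)
    for x in offline_list:                  # traverse the old list front to back
        if random.random() < 0.5:
            requests.append(x)              # single request: OFF leaves x in place
        else:
            requests += [x, x, x]           # triple request: OFF moves x to front
            new_list.remove(x)
            new_list.insert(0, x)
    return requests, new_list
```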
The following table (3) lists the resulting costs for the possible actions of the online
algorithm, projected on the items x and y. In that table, WAIT refers to an online algorithm
that moves the requested item only at the second request in succession (if it is not moved
then, the online costs are even higher), and MTF moves the item to the front at the first
request. In (3), item x is assumed to precede y in the lists maintained by both the offline and
the online algorithm. The four request sequences each have probability 1/4. For each of the
four possible combinations of WAIT and MTF, the column ON denotes the online cost
and I_after denotes the number of inversions in the online list after the requests have been
served. An inversion is a transposition of two items relative to their position in the offline
list.

                          ON with [xy], I_before = 0
                     y WAIT                  y MTF
  σ_xy     OFF   x WAIT     x MTF      x WAIT     x MTF
                 ON I_after ON I_after ON I_after ON I_after
  x y       1     1    0     1    0     1    1     1    1
  x yyy     1     2    0     2    0     1    0     1    0      (3)
  xxx y     1     1    0     1    0     1    1     1    1
  xxx yyy   1     2    0     2    0     1    0     1    0
Without inversions, the MTF algorithm, for example, would incur the same cost as the
optimal offline cost. However, an inversion increases the online cost by a full unit in the
next run, where [xy] is the order for the offline algorithm but [yx] is the order of x and y
in the list used by the online algorithm. The following table shows these online costs
when the algorithm starts with such an inversion, denoted by I_before = 1.

                          ON with inversion [yx], I_before = 1
                     y WAIT                  y MTF
  σ_xy     OFF   x WAIT     x MTF      x WAIT     x MTF
                 ON I_after ON I_after ON I_after ON I_after
  x y       1     1    1     2    0     1    1     2    1
  x yyy     1     1    0     3    0     1    0     2    0      (4)
  xxx y     1     3    0     2    0     3    1     2    1
  xxx yyy   1     4    0     3    0     3    0     2    0

Tables (3) and (4) list all possible online actions, except for those that perform even worse
(leaving a triply requested item in place, for example). Note that it does not matter whether
the online algorithm conditions its action on the presence of inversions or not.
Let T be the distribution on request sequences generated by the described method of
Teia. Then the expected online costs together with the change in the number of inversions
fulfill the inequality

    E[ON_xy(T)] + E[ΔI] ≥ (3/2) · E[OFF_xy(T)],  where ΔI = I_after − I_before.    (5)

This follows from (3) and (4) by considering the projected sequences and telescoping
the sum for the inversion counts from one run to the next (where I_before for that run is
equal to I_after for the previous run and cancels). Inequality (5) shows that the number
of inversions can serve as a potential function [7]. The variation ΔI = I_after − I_before of
this potential function is bounded, so that (5) implies that any online algorithm has
expected competitive ratio at least 1.5 under the distribution T on request sequences.
We will extend Teia's method in our lower bound construction.
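The entries of tables (3) and (4) and the equality in (5) can be cross-checked mechanically. The sketch below (identifiers ours) replays the four projected runs, each with probability 1/4, for every WAIT/MTF combination; note that OFF_xy is 1 in every run.

```python
import itertools

def run_pair(inverted, x_mtf, y_mtf):
    """Average online cost and I_after over the four projected runs of one
    Teia run; the online list starts as [y x] if inverted, else [x y]."""
    total_on = total_inv = 0.0
    for rx, ry in itertools.product((1, 3), repeat=2):   # x then y, once/thrice
        on_list = ['y', 'x'] if inverted else ['x', 'y']
        off_list = ['x', 'y']
        on = 0
        for item, reps, mtf in (('x', rx, x_mtf), ('y', ry, y_mtf)):
            for r in range(reps):
                if on_list[0] != item:
                    on += 1                              # partial access cost
                    if mtf or r == 1:                    # MTF now, WAIT at 2nd
                        on_list = [item] + [z for z in on_list if z != item]
            if reps == 3:                                # optimal offline move
                off_list = [item] + [z for z in off_list if z != item]
        total_on += on
        total_inv += (on_list != off_list)
    return total_on / 4, total_inv / 4

# every WAIT/MTF combination meets inequality (5) with equality (OFF = 1 per run)
for inv in (0, 1):
    for xm in (False, True):
        for ym in (False, True):
            on, i_after = run_pair(inv, xm, ym)
            assert abs(on + i_after - inv - 1.5) < 1e-9
```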
Using partial orders, one can construct a 1.5-competitive list update algorithm for
lists with up to four items [3]. The partial order is initially equal to the linear order of
the items in the list. After each request, the partial order is modified as follows, where
x‖y means that x and y are incomparable:

  relation       relation after a request to
  before         z ∉ {x, y}    x         y
  x < y          x < y         x < y     x ‖ y
  x ‖ y          x ‖ y         x < y     y < x
  y < x          y < x         x ‖ y     y < x

That is, a request affects only the requested item y in relation to the remaining items.
Then y is in front of all items x except if x < y held before, which is changed to x‖y. The
initial order in the list and the request sequence determine the resulting partial order.
One can generate an arbitrary partial order in this way [3].
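The update rule and the positions p(x) defined next translate directly into code; the relation encoding ('<', '>', '|') below is our own convention, with rel[(x, y)] = '<' meaning x < y.

```python
def request(rel, items, y):
    """Update the partial order after a request to y: y moves in front of every
    other item x, except that a previous x < y becomes x || y."""
    for x in items:
        if x != y and rel[(x, y)] == '<':
            rel[(x, y)], rel[(y, x)] = '|', '|'
        elif x != y:
            rel[(x, y)], rel[(y, x)] = '>', '<'

def position(rel, items, x):
    """p(x): items surely in front of x count 1, incomparable ones count 1/2."""
    return sum(1.0 if rel[(y, x)] == '<' else 0.5 if rel[(y, x)] == '|' else 0.0
               for y in items if y != x)
```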
The partial order defines a position p(x) = |{y : y < x}| + (1/2)·|{y : y ‖ x}| for each
item x. If the online algorithm can maintain a distribution on lists so that the expected cost
of accessing an item x is equal to p(x), then this algorithm is 1.5-competitive [3].
One can show that then x is with probability one behind all items y with y < x, and
precedes with probability 1/2 those items y where x‖y. Incomparable elements reflect the
possibility of a "mistake" of not transposing these items, which should have probability
1/2. For lists with up to four items, one can maintain such a distribution using two
lists only. That is, the partial order is represented as the intersection of two lists, where
each list is updated by moving the requested item suitably to the front, using only free
exchanges. The algorithm works by choosing one of these lists at the beginning with
probability 1/2 as the actual list and serving it so as to maintain the partial order (with
the aid of the separately stored second list).
The partial order approach is very natural for the projection on pairs and when the
online algorithm can only use free exchanges. A lower bound above 1.5 must exploit a
failure of this algorithm. This is already possible for lists with five items, despite the fact
that all five-element partial orders are two-dimensional (representable as the intersection
of two linear orders). Namely, let the items be integers and let the initial list be [12345],
and consider the request sequences

    σ_1 = 4 2 5 4   and   σ_2 = 4 2 5 3.    (6)

After the first request to 4, the partial order states 4‖1, 4‖2, 4‖3, and 4 < 5, and otherwise
1 < 2 < 3 < 5. Using a free exchange, 4 can only be moved forward and has to precede
1, 2, and 3 each with probability 1/2. This is achieved uniquely with the uniform distribution
on the two lists [12345] and [41235] (this, as well as the following, holds even though
distributions on more than two lists are allowed). The next request to 2 induces 2 < 4, so
2 must be moved in front of 4 in the list [41235], where 2 already passes 1, which yields
the unique uniform distribution on [12345] and [24135]. The next request to 5 entails that
5 is incomparable with all other items. It can be handled deterministically in exactly two
ways (or by a random choice between these two ways): Either 5 is moved to the front in
[24135], yielding the two lists [12345] and [52413] with equal probability, or 5 is moved to
the front in [12345], yielding the two lists [51234] and [24135] with equal probability. If
the two lists are [12345] and [52413], the algorithm must disagree with the partial order
after the request to 4 as in oe 1 , since then 4 must precede both 1 and 5 in both lists (so 4
is moved to the front in both lists) but then incorrectly passes 2 where only 2k4 should
hold. Similarly, for the two lists [51234] and [24135] the request to 3 as in oe 2 moves 3
in front of 5 and 4 in both lists, so that it passes 1, violating 1k3. Thus, either oe 1 or oe 2
in (6) causes the poset-based algorithm to fail, which otherwise achieves a competitive
ratio of 1.5. These sequences will be used with certain probabilities in our lower bound
construction.
3. Game trees with imperfect information
Competitive analysis can be phrased as a zero-sum game between two players, the adversary
and the online algorithm (or online player). In order to deal with finite games, we
assume a finite set S of request sequences σ (of a given bounded length, for example),
which represent the pure strategies of the adversary. These can be mixed by randomization.
The online player has a finite number N of possible ways of deterministically
serving these request sequences. These deterministic online algorithms can also be chosen
randomly by suitable probabilities p_1, …, p_N. In this context of finitely many
request sequences, an arbitrary constant b in (2) is not reasonable, so we look at strict
competitiveness. The randomized online algorithm is strictly c-competitive if for all σ
in S,

    Σ_{j=1..N} p_j · ON_j(σ) ≤ c · OFF(σ),   (7)

where ON_j(σ) is the cost incurred by the jth online algorithm and OFF(σ) is the optimal
offline cost for serving σ. We can disregard the trivial sequences σ with OFF(σ) = 0, which
consist only of requests to the first item in the list. In this case (7) is equivalent to

    Σ_{j=1..N} p_j · ON_j(σ) / OFF(σ) ≤ c.   (8)
The terms ON_j(σ)/OFF(σ) in (8), for 1 ≤ j ≤ N and σ ∈ S, can be treated as a payoff to
the adversary in a zero-sum game matrix with rows σ and columns j. Correspondingly, a
lower bound d for the strict competitive ratio is an expected competitive ratio [8] resulting
from a distribution on request sequences. This distribution is a mixed strategy of the
adversary with probabilities q_σ for σ in S so that for all online strategies j,

    Σ_{σ ∈ S} q_σ · ON_j(σ) / OFF(σ) ≥ d.   (9)

The minimax theorem for zero-sum games [18] asserts that there are mixed strategies for
both players and reals c and d so that (8) and (9) hold with c = d. This number is the "value"
of the game and the optimal strict competitive ratio for the chosen finite approximation
of the list update problem. Note that it depends on the admitted length of request
sequences. Due to the complicated implicit definition and large size of the game matrix,
we only know bounds c and d in (8) and (9) that hold irrespective of the length of the
request sequences, where d < c.
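For illustration (our own sketch, not the sequence form method of [17, 11]), the value c = d of a small zero-sum game with payoff matrix A — rows for the adversary's sequences σ, columns for the online algorithms j — can be computed by linear programming; the use of scipy here is an assumption about available tooling:

import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value and optimal mixed strategy of the row player (the adversary)
    for the zero-sum game A; payoffs are shifted to be positive first,
    which does not change the optimal strategies."""
    A = np.asarray(A, dtype=float)
    shift = max(0.0, 1.0 - A.min())
    B = A + shift
    m, n = B.shape
    # min 1'x  subject to  B'x >= 1, x >= 0;  the game value is 1/sum(x).
    res = linprog(np.ones(m), A_ub=-B.T, b_ub=-np.ones(n), bounds=[(0, None)] * m)
    v = 1.0 / res.x.sum()
    return v - shift, res.x * v

value, q = game_value([[1.0, 2.0], [2.0, 1.0]])    # value 1.5, q = (1/2, 1/2)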
The number of request sequences is exponential in the length of the sequences. The
online player has an even larger number of strategies since that player's actions are conditional
on the observed requests. This is best described by a game tree. At each nonterminal
node of the tree, a player makes a move corresponding to an outgoing edge. The
game starts at the root of the tree where the adversary chooses the first request. Then,
the online player moves with actions corresponding to the possible reorderings of the list
after the request. There are n! actions corresponding to all possible reorderings. (Later,
we will see that most of them need not be considered.) The players continue to move
alternatingly until the last request and the reaction by the online player. Each leaf of
the tree defines a sequence σ and an online cost ON(σ) (depending on the online actions
leading to that leaf), with payoff ON(σ)/OFF(σ) to the adversary.
The restricted information of the adversary in this game tree is modeled by information
sets [12]. Here, an information set is a set of nodes where the adversary is to move
and which are preceded by the same previous moves of the adversary himself. Hence,
the nodes in the set differ only by the preceding moves of the online player, which the
adversary cannot observe. An action of the adversary is assigned to each information set
(rather than to an individual node) and is by definition the same action for every node
in that set. On the other hand, the online player is fully informed about past requests, so
his information sets are singletons. Figure 1 shows the initial part of the game tree for a
list with three items for the first and second request by the adversary, and the first online
response, here restricted to free exchanges only.
Figure 1. Game tree with information sets.
A pure strategy in a game tree assigns a move to every information set of a player,
except for those that are unreachable due to an earlier choice of that player. Here,
the online player has information sets (as in Figure 1) where each combination of moves
defines a different strategy. This induces an exponential growth of the number of strategies
in the size of the tree. The strategic approach using a game matrix as in (8) and
becomes therefore computationally intractable even if the game tree is still of moderate
size. Instead, we have used a recent method [17, 11] which makes it possible to solve a game tree
via a "sequence form" game matrix and a corresponding linear program that has the same
size as the game tree.
Using game trees, a first approach to finding a randomized strategy for the adversary
is the following. Consider a list with five items, the minimum number where a competitive
ratio above 1.5 is possible. Fix a maximum length m of request sequences, and generate
the game tree for requests up to that length. At each leaf, the payoff to the adversary is
the quotient of online and offline cost for serving that sequence. Then convert the game
tree to a linear program, and compute optimal strategies with an LP solver (we used
CPLEX).
However, this straightforward method does not lead to a strict competitiveness
above 1.5, for two reasons. First, "mistakes" of an algorithm, like in the last column
in (3), manifest themselves only later as actual costs, so there is little hope for an improved
lower bound using short request sequences. Secondly, even if only short sequences
are considered, the online player has n! responses to every move of the adversary, so that
the game tree grows so fast that the LP becomes computationally infeasible already for
very small values of m.
The first problem is overcome by adding the number of inversions of the online
list, denoted by I_after in (3) and (4) above, to the payoff at each leaf. This yields a
strict competitive ratio greater than 1.5 for rather short sequences. The inversions are
converted into actual costs by attaching a "gadget" to each leaf of the game tree that
generates requests similar to Teia's lower bound construction. The next section describes
the details.
The second problem, the extremely rapid growth of the game tree, is avoided as
follows. First, we limit the possible moves of the online player by allowing only paid
exchanges of a special form, so-called subset transfers [13]. A subset transfer chooses
some items in front of the requested item x and puts them in the same order directly
behind x (e.g. [12345x67] → [13x24567]). Afterwards, the adversary's strategy computed
against this "weak" online player is tested against all deterministic strategies of the online
player, which can be done quickly by dynamic programming. Then the lower bound still
holds, that is, the "strong" online player who may use arbitrary paid exchanges cannot
profit from its additional power.
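For intuition, the optimal offline cost itself can be recomputed by a dynamic program over list states, similar in spirit to [13]. The following sketch is our own illustration in the partial cost model, using paid exchanges only (free exchanges are omitted for brevity, so the result is an upper bound on the true optimum), and is not the code actually used here:

from itertools import permutations

def kendall_tau(a, b):
    """Number of adjacent transpositions needed to turn list a into list b."""
    pos = {x: i for i, x in enumerate(a)}
    seq = [pos[x] for x in b]
    return sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
               if seq[i] > seq[j])

def optimal_offline_cost(initial, sigma):
    """Offline cost: before each request the list may be rearranged at a
    cost of 1 per paid exchange; the access to position p then costs p - 1."""
    states = {tuple(initial): 0}
    for r in sigma:
        nxt_states = {}
        for lst, cost in states.items():
            for nxt in permutations(lst):
                c = cost + kendall_tau(lst, nxt) + nxt.index(r)
                if c < nxt_states.get(nxt, float('inf')):
                    nxt_states[nxt] = c
        states = nxt_states
    return min(states.values())

optimal_offline_cost([1, 2, 3, 4, 5], [4, 2, 5, 3])   # cf. the offline costs in (10)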
4. The game tree gadgets
We compose a game tree from two types of trees or "gadgets". The first gadget called
FLUP (for "finite list update problem") has a small, irregular structure. The second
gadget called IC (for "inversion converter") is regularly structured. Both gadgets come
with a randomized strategy for the adversary, which has been computed by linear programming
for FLUP. An instance of IC is appended to each leaf of FLUP. The resulting
tree with the specified strategy of the adversary defines a one-player decision problem for
the online player that has an expected strict competitive ratio strictly greater than 1.5,
about 1.50084, for the simplest version of FLUP that we found; larger versions of FLUP
give higher lower bounds.
Both gadgets assume a particular state of the offline list, which is a parameter
that determines the adversary strategy. Furthermore, at the root of FLUP (which is
the beginning of the entire game), it is assumed that both online and offline list are
in the same state, say [12345]. Then the adversary strategy for FLUP generates only
the request sequences 4, 425, 4253, and 4254 with positive probability, which are the
sequences in (6) or a prefix thereof. After the responses of the online player to one of
these request sequences, the FLUP tree terminates in a leaf with a particular status of
the online list and of the offline list, where the latter is also chosen by the adversary,
independently of the online list. For the request sequence 4, that offline list is [41235],
that is, the offline algorithm has moved 4 to the front. If the FLUP game terminates after
the request sequence 425, the adversary makes an additional internal choice, unobserved
by the online player, between the offline lists [51234] and [52134]. In the first case, the
offline player brought 5 to the front but left 4 and 2 in their place, in the second, 2 was
also moved to the front. Similar choices are performed between the offline lists for the
request sequences 4253 and 4254.
requests | offline list | probability | OFF | MTF | WAIT   (10)
The specific probabilities for these choices of the adversary in FLUP are shown in (10).
The last three columns denote the cost for the offline algorithm and, as an example,
the two online algorithms MTF and WAIT , where WAIT moves an item to the front
at the second request. The FLUP tree starts with 4 as the first request, followed by
the possible responses of the online player. Next, the adversary exits with probability
396/1184, without a request, to the leaf with offline list [41235], and with complementary
probability requests item 2, which is followed by the online move, and so on.
Each leaf of the FLUP tree is the root of an IC gadget which generates requests
(similar to the runs in Teia's construction, see below), depending on the offline list. The
number of inversions of the online list relative to this offline list is denoted by I in (10).
The purpose of the IC gadget is to convert these inversions into actual costs. Any request
sequence generated by the IC gadget can be treated with the same offline cost v.
Thereby, the online algorithm makes mistakes relative to the offline algorithm,
so that the additional online cost in IC is at least 1.5·v + I.
Let ON be the online cost incurred inside FLUP. Then FLUP can be represented as
a game tree where the IC gadget at each leaf is replaced by the payoff D to the adversary,

    D = (ON + I + 1.5·v) / (OFF + v).   (11)

Using these payoffs, the probabilities in (10) have been computed by linear programming.
One can show that any online strategy, as represented in the FLUP tree, has an expected
strict competitive ratio strictly greater than 1.5, about 1.50084. Two optimal online
strategies are MTF and WAIT, where the values of ON + I as used in (11) are also
shown in (10). Here, WAIT moves only item 4 to the front at the end of the request
sequence 4254.
The well-defined behavior of the IC gadget makes it possible to replace it by a single payoff D
as in (11). Furthermore, the online player knows that the FLUP gadget has been left and
the IC gadget has been entered, since IC starts with a request to the first item in the
offline list, like 4 when that list is [41235] as in the first row of (10). In this context, we
make a certain assumption about the internal choice of the adversary between different
offline lists. Namely, at the start of the IC gadgets for the offline lists [12345] and [13524]
which follow the request sequence 4253, the first request is 1 and then the online player
cannot yet tell which IC gadget has been entered. Strictly speaking, the two gadgets have
to be bridged by appropriate information sets for the online player. However, we assume
instead that the internal choice of the adversary between the two lists is revealed to the
online player at the beginning of IC, which is implicit in replacing IC by a single payoff.
This is allowed since it merely weakens the position of the adversary: Any online strategy
without this extra information can also be used when the online player is informed about
the adversary's internal choice, so then the online payoff cannot be worse.
The offline list assigned to a leaf of the FLUP gadget is part of an optimal offline
treatment (computed similar to [13]) for the entire request sequence. However, that list
may even be part of a suboptimal offline treatment, which suffices for showing a lower
bound since it merely increases the denominator in (9). Some of the offline costs in (10) can
only be realized with paid exchanges by the offline algorithm. For example, the requests
4253 are served with cost 10 yielding the offline list [23451] by initial paid exchanges that
move 1 to the end of the list. With free exchanges, this can only be achieved by moving
every requested item in front of 1, which would result in higher costs.
In the remainder of this section, we describe the IC gadget. Its purpose is to convert
the inversions at the end of the FLUP game to real costs while maintaining the lower
bound of at least 1.5. At the same time, these inversions are destroyed so that both the
online list and the offline list are in the same order after serving the IC.
The IC extends the construction by Teia [16] described in Section 2 above. Let T k
be the sequence that requests the first k items of the current offline list in ascending
order, requesting each item with probability 1/2 either once or three times. Assume that
the offline algorithm treats T k by moving an item that is requested three times to the
front at the first request, leaving any other item in place, which is optimal. The triply
requested items, in reverse order, are then the first items of the new offline list, followed
by the remaining items in the order they had before. Then T n is a run as used in Teia's
construction for a list with n items. The random request sequence generated there can be
written as T_n^w, that is, a w-fold repetition of T_n, where w goes to infinity. Note that the
offline list and hence the order of the requests changes from one run T_n to the next, so T_n^2,
for example, is not a repetition of two identical sequences. The optimal offline treatment
of T_k costs

    OFF(T_k) = k(k − 1)/2

units.
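As a small illustration (our own code, not part of the original construction), the following sketch generates T_k together with the offline treatment just described; the offline cost always comes out as k(k − 1)/2 in the partial cost model:

import random

def serve_T_k(offline, k):
    """Generate T_k for the current offline list and serve it offline:
    an item requested three times moves to the front at its first request
    (a free exchange), so its two further requests cost 0."""
    requests, cost, to_front = [], 0, []
    for i, item in enumerate(offline[:k]):
        times = random.choice([1, 3])
        requests += [item] * times
        cost += i                      # exactly i items precede the first access
        if times == 3:
            to_front.append(item)
    new_offline = list(reversed(to_front)) + [x for x in offline if x not in to_front]
    return requests, cost, new_offline   # cost == k*(k-1)//2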
The difference between our construction and Teia's is the use of only a prefix of the
elements in the offline list. We show

    E[ON(T_k)] ≥ 1.5 · OFF(T_k) + E[I_before] − E[I_after],   (12)

which for k = n has already been proved above, see (5). To see (12) also for k < n, we use
projection on pairs and consider the case of only two items. If none of the two items occurs
in T_k, both sides of (12) are zero, because, by definition, projection on pairs ignores items
that are not projected. If only one item occurs in T_k, only the first one in the offline list
was requested, so OFF(T_k) = 0 and ON(T_k) ≥ I_before − I_after, because the online
algorithm incurs cost at least one if there is an inversion. This shows (12).
As in (5) above, (12) can be extended to concatenations T of sequences T_k. We let
IC be the randomly generated sequence defined by such a concatenation of the T_k,
which by the preceding considerations fulfills

    E[ON(IC)] ≥ 1.5 · OFF(IC) + E[I_before] − E[I_after].   (13)

If E[I_after] = 0 in (13), that is, there are no inversions left after serving IC, then inversions
would indeed have been converted to actual costs. Otherwise, suppose that after serving
IC, there is an inversion between two items x and y, say, with x in front of y in the final
offline list. Then by the definition of IC, item x was requested at least three more times
after the last request to y. So the online player could have saved a cost unit by moving x in
front of y in his list after the second request to x. To summarize, the sequence IC produces
an additional online cost unit for every inversion that holds between the online and the
offline list at the end of IC. Then, however, we can assume without loss of generality that
both lists are in the same state after IC. Namely, if they are not, the online player could as
well have served IC as intended (leaving no inversions) and invested the saved cost units
in creating the inversions at the beginning of the next FLUP gadget, where their costs
are taken into account. Thus, indeed, (13) holds with E[I_after] = 0, and inversions have
become actual costs as stated in (11). The offline costs there are OFF + v.
Since the online and offline list are identical at the end of the IC, a new FLUP
game can be started. This generates request sequences of arbitrary length. In that way,
we obtain a lower bound above 1.5 for the competitive ratio c in (2) for any additive
constant b.
In the above construction, the value of the lower bound does not depend on whether
the online player may use paid exchanges or not, but the adversary's strategy does use
paid exchanges. So it seems that the online player cannot gain additional power from paid
exchanges. This raises the conjecture that by restricting both players to free exchanges
only, the list update problem might still have an optimal competitive ratio of 1.5. However,
this is false. There is a randomized adversary strategy where the offline algorithm uses only
free exchanges which cannot be served better than with a competitive ratio of 1.5 + 1/5048.
Because of the length of the sequences used in the corresponding FLUP game, this result
is more difficult to obtain. First of all, the sequences used in that game are not found by
brute force any more, but by dynamic game tree search with alpha-beta pruning in an
approximate game. In that approximate game, the online player is restricted to a small
set of random moves, similar to the poset algorithm. Secondly, the above argument about
the order of the elements in the online list after leaving the IC gadget no longer holds.
This can be resolved by a further elaboration of our method. The details are beyond the
scope of this paper.
Extending our result to the full cost model requires a systematic treatment of lists
of arbitrary length n. This is easy for the IC gadget but obviously difficult for the FLUP
gadget. We hope to clarify the connection of FLUP with the sequences in (6) that beat
the partial order approach to make progress in this direction.
--R
Improved randomized on-line algorithms for the list update problem
A combined BIT and TIME-STAMP algorithm for the list update problem
List update posets.
A survey of self-organizing data structures
On the power of randomization in on-line algorithms
Amortized analyses of self-organizing sequential search heuristics
Online Computation and Competitive Analysis.
An optimal online algorithm for metrical task systems.
Two results on the list update problem
Fast algorithms for finding randomized strategies in game trees.
Extensive games and the problem of information.
Optimum off-line algorithms for the list update problem
Randomized competitive algorithms for the list update problem
Amortized efficiency of list update and paging rules
A lower bound for randomized list update algorithms
Efficient computation of behavior strategies.
Probabilistic computations: Towards a unified measure of complexity
| list-update;analysis of algorithms;competitive analysis;on-line algorithms |
504560 | Online request server matching. | In the following paper an alternative online variant of the matching problem in bipartite graphs is presented. It is triggered by a scheduling problem. There, a task is unknown up to its disclosure. However, when a task is revealed, it is not necessary to take a decision on the service of that particular task. On the contrary, an online scheduler has to decide on how to use the current resources. Therefore, the problem is called online request server matching (ORSM). It differs substantially from the online bipartite matching problem of Karp et al. (Proc. 22nd Annual ACM Symp. on Theory of Computing, Baltimore, Maryland, May 14-16, 1990, ACM Press, New York, 1990). Hence, the analysis of an optimal, deterministic online algorithm for the ORSM problem results in a smaller competitive ratio of 1.5. An extension to a weighted bipartite matching problem is also introduced, and results of its investigation are presented. Additional concepts for the ORSM model (e.g. lookahead, parallel resources, deadlines) are studied. All of these modifications are realized by restrictions on the input structure. Decreased competitive ratios are presented for some of these modified models. Copyright 2001 Elsevier Science B.V. | Introduction
In this report we study a model which was developed for a rather simple scheduling problem.
Consider a single resource and a discrete time model. The resource is available for one unit
in every time step. In the following this resource is called server. Every time step a task
can occur that has a demand of one unit of the server. This task is called request. Such
a request includes a set of time steps which specifies acceptable times for serving. These
times must not be situated in the past and neither need to be consecutive nor include the
time step of the arrival of the request. It is obvious that we can model this problem by a
bipartite graph (R
E). The two disjoint partitions represent the server resource
at every time step (S) and the set of requests (R). Whenever a request r 2 R can be
served at time j, there is an edge fr; sg between request vertex r and the vertex s 2 S
which represents the j-th time step. Then the problem to decide when a request should
be served is nothing else than to construct a matching in G.
Our scheduling problem is online, i. e. the requests appear through time while the current
usage of the server has to be determined. These decisions have to be made without knowledge
of further requests and cannot be taken back. The definition of our problem implies
that in general it is impossible to serve all requests.¹ Additionally, the lack of information
about the future prevents an optimal solution to be found which maximizes the number of
successful requests.
This online matching problem in a bipartite graph is called online request server matching
(ORSM).
We make use of competitive analysis to investigate the described problem. Here the worst
case factor between the quality of an optimal solution and a solution calculated by an
online algorithm on the same input is determined. This factor is called competitive ratio.
For our matching problem the size of a maximum matching in G is the value of the optimal
solution. It is well-known how to calculate it in O(√|V| · |E|) time (see [ET75] or any
comprehensive textbook on algorithms).
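For completeness, such a maximum matching can be computed offline with standard tooling; the following sketch (our own illustration; the use of networkx is an assumption about available libraries, not part of this report) encodes a small bipartite instance directly:

import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

# Requests r1, r2 and server slots; an edge (r, s_j) means "r may be served at j".
G = nx.Graph()
G.add_edges_from([("r1", "s2"), ("r1", "s3"), ("r2", "s2"), ("r2", "s4")])
matching = hopcroft_karp_matching(G, top_nodes={"r1", "r2"})
print(len(matching) // 2)   # size of a maximum matching (here 2)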
In this report we present an optimal online algorithm which constructs a matching of at
least 2/3 of the maximal size. We also investigate a weighted variant of this online matching
problem. Then all edges have a weight and the objective is the construction of a matching
of maximal weight. For this problem our analysis shows a lower bound of φ = (√5 + 1)/2
and an upper bound of 2 for the competitive ratio.
The material of this report is organized as follows. The next subsection presents a short
introduction to competitive analysis. Thereafter, an overview of related work is given.
Section 2 defines our model formally. It will be compared precisely with two models that
have been studied in literature. Additionally, a few definitions and notations are given in
the end. The simple, unweighted variant of our model is analysed in Sect. 3. It includes
a general lower bound for the competitive ratio, a deterministic online algorithm, and a
matching upper bound. Section 4 investigates the weighted version of this problem. The
proof of a general lower bound is followed by the presentation of an algorithm. Its tight
analysis establishes a gap to the previous lower bound. The section ends with some remarks
and a suggestion for a more sophisticated online algorithm. This report is completed by a
description of a few open problems.
¹ Imagine two requests to exactly the same, single server resource.
1.1 Competitive Analysis
Algorithms are often based on the principle of the following sequence: input, computation,
and output. After reading the complete input, which is a description of a finite problem,
it computes a solution and presents it at the end. Such a behaviour is called offline.
In real world applications like e. g. embedded systems we find different requirements.
Here, parts of the input appear by and by for an indefinite period of time and decisions
of the solution have to be made immediately without knowing the future. These types
of problems, including their solving algorithms, are called online. Different methods were
suggested to analyse such online algorithms.
The most popular methods of the past, which are still in use, assume a known input
distribution (a stochastic model for the input). Of course, the expressiveness of results of
these studies are highly dependent on the correct choice of the distribution.
In contrast 1984 Daniel D. Sleator and Robert E. Tarjan introduced a new method for
analysing online algorithms (journal version is [ST85]). It is a method of worst case analysis
which avoids these difficulties and has been becoming more and more popular in the last
decade.
The basic idea is a comparison of the quality of the solution computed by the online
algorithm and the quality of an optimal, offline solution for the same input.
Let A be an online algorithm and let OPT be an optimal offline algorithm for a payoff
maximization problem.² Then, for an input sequence σ we denote with perf_A(σ) the
performance of the algorithm A, and perf_OPT(σ) is the performance of OPT, respectively.
A is called c-competitive if there is a constant α such that for all input sequences σ

    c · perf_A(σ) + α ≥ perf_OPT(σ).

The constant α has to be independent of the input and can compensate irregularities right
at the beginning. Then the infimum over all values c such that A is c-competitive is called
the competitive ratio of A. Whenever the constant α equals zero, we can alternatively define the
competitive ratio as

    sup_σ  perf_OPT(σ) / perf_A(σ).
² In this report we do not consider problems which have to minimize costs. The definitions would slightly
differ to ensure that a competitive ratio is always at least 1.
We immediately realize that this analysis is independent of a stochastic model for the inputs
and gives performance guarantees. On the other hand this kind of analysis is sometimes
unrealistically pessimistic.
Indeed, an online algorithm approximates an optimal solution while working under the
restriction of incomplete knowledge. So the distinguished name competitive ratio for the
approximation factor seems to be adequate.
An excellent, introductory article on competitive analysis is [Kar92]. A comprehensive
introduction is the textbook by Allan Borodin and Ran El-Yaniv [BEY98] which includes
an extensive bibliography.
Two essential techniques to prove competitive ratios have been established. A description
of these methods illustrates the character of this type of analysis.
Lower bounds for the competitive ratio are typically shown by an adversary strategy.
We can interpret competitive analysis as a game between an online algorithm A and an
omniscient adversary. The adversary creates the input sequence oe with knowledge of A.
So it can in advance calculate every decision of A on oe, perf A (oe), and perf OPT (oe). Due
to the unlimited computational power the malicious adversary is able to construct a worst
case input sequence.
When we derive general lower bounds for the competitive ratio of a problem we slightly
change our view. Then a strategy is given which tells the adversary how to construct
the next part of the input sequence dependent on the decisions an online algorithm was
able to make. The performance analysis of these sequences gives a lower bound for the
competitive ratio when the strategy is able to generate infinite inputs. Otherwise the loss
of performance of an online algorithm could be compensated by the constant ff in the
above definition.
To prove upper bounds for an online problem the performance of a designed algorithm A
is investigated. Sometimes, ad hoc arguments can be applied to show how A can bound
the competitive ratio. However, it is more common to apply a potential function argument
for an analysis of amortized performance. Hence, every pair of a state of A and a state of
OPT is mapped to a real value by a potential function Φ. Whenever it can be shown that
for every step i with input σ_i it holds that

    c · Δperf_A(σ_i) + ΔΦ ≥ Δperf_OPT(σ_i),

and Φ is bounded above, then A is c-competitive. It was shown that there exists such
a potential function for every online algorithm (see the description in [IK96], page 534).
Nevertheless, variants of potential functions and combinations with additional
arguments are nowadays commonly used in proofs of the performance of online algorithms.
Several extensions of the competitive analysis were suggested. A major influence was the
seminal work [BDBK+94], which introduced randomized online algorithms and adapted
methods for their analysis. However, in this report we limit our study to deterministic
online algorithms.
1.2 Previous Work
We introduced our online problem in terms of a scheduling problem. However, there is a
vast literature on the subject of online scheduling and related problems like load balancing
and routing. We will not review these works here. The reader may consult one of the
surveys on online scheduling instead. The publications on online matching problems are more relevant
to the studies of this report. In the following, we discuss these papers.
The first article about an online version of a bipartite matching problem is by Richard M.
Karp, Umesh V. Vazirani and Vijay V. Vazirani [KVV90]. The partition U of the graph
G = (U ∪ V, E) is known in advance, and the vertices of V, including their edges, arrive
through time. Whenever a vertex v ∈ V is revealed, an online algorithm has to decide which
edge incident to v is added to the online matching M . The objective is the maximization
of the size of M . For deterministic online algorithms the competitive ratio is exactly 2.
An adversary can present an input with a vertex v1 being adjacent to two vertices u and u′.
After the decision of the online algorithm, the adversary presents a vertex v2
which is adjacent to the previously matched vertex u or u′, only. The online algorithm is
not able to match both v1 and v2. However, there is such an offline solution. The infinite
repetition of this strategy results in the lower bound of 2. The greedy algorithm, which
adds the first possible edge of an incoming vertex to the matching M, achieves this
competitive ratio.
maximal matching (every edge of the graph is either itself a matching edge or is adjacent
to one) and every maximal matching has at least half the size of an optimal, maximum
matching.
The key contribution of the discussed paper is the analysis of a randomized online algorithm
for the described problem.
Ming-Yang Kao and Stephen R. Tate investigated the above model with blocked inputs
[KT91]. When an input occurs, k vertices are revealed "in a block" instead of one at a
time. Let n be the number of vertices in the partition V of the bipartite graph. Of
course, with k = 1 the problem is the same as above, and for k = n we have the offline
version of the matching problem. For deterministic online algorithms, no improvements
are possible as long as k < n. For randomized algorithms the result of [KVV90] cannot
be improved apart from low order terms as long as k is small compared to n.
The online b-matching was analysed by Bala Kalyanasundaram and Kirk Pruhs in [KP95].
In this problem the vertices of the partition V can be incident to at most b edges out of
the matching. Again, the size of the matching has to be maximized. The authors give an
optimal deterministic algorithm; its competitive ratio tends to e/(e − 1) ≈ 1.58 as b grows.
A weighted variant of the online bipartite matching was studied by the same authors
in [KP93]. When allowing an arbitrary weight function for the edges, no competitive
algorithm can exist. To see this consider that an online algorithm has chosen a matching
edge of weight one. As a next step an adjacent edge of arbitrary weight is revealed. Then
no online algorithm can bound the competitive ratio.
Hence, the model was restricted to a weighted, complete bipartite graph with partitions of
equal size, whose weights form a metric space (in particular, the triangle inequality is
fulfilled). When asked for a matching of maximal weight, the greedy algorithm is optimal
and 3-competitive.
On the other hand in the minimum weighted matching problem, a perfect matching of
minimal weight has to be determined in a bipartite graph defined as above. Then an
optimal and (2n − 1)-competitive algorithm is shown. The same result was independently
discovered by Samir Khuller, Stephen G. Mitchell and Vijay V. Vazirani in [KMV94] 3 . The
last article also contains a study of an online version of the stable marriage problem.
In [ANR95] Yossi Azar, Joseph Naor and Raphael Rom studied a different model based on
bipartite graphs. They called it online assignment. The partition U has a fixed size and
vertices out of V are adjacent to a subset of U. For each v ∈ V one of its weighted edges
must be selected immediately after its arrival with the objective to minimize the maximal
weight of all selected edges incident to a vertex in U . Indeed, this is a load balancing
problem. For deterministic online algorithms a lower bound of ⌈log2(n + 1)⌉ and an upper
bound of ⌈log2 n⌉ + 1 are shown for the competitive ratio.
The technical report by Ethan Bernstein and Sridhar Rajagopalan [BR93] is of major
importance for our following studies. An online matching problem in general graphs, called
the roommates problem, has been introduced. The underlying graph G = (V, E, w) is undirected, simple,
and weighted. An unweighted version of this model has also been investigated.
The input sequence consists of the vertices of V . Whenever a vertex is revealed, all of its
adjacent vertices and the weighted edges in between become known. We want to emphasize
that this process includes adjacent vertices never seen before in the input sequence. Then,
an edge of the current vertex to a non-matched vertex (that has been revealed previously)
can be added to the online matching. An edge to a non-revealed and incompletely known
vertex can be selected later when this adjacent vertex is the current one in the input
sequence.
For the roommates problem the unweighted model is interpreted as follows: People arrive
one at a time to a hotel where a conference takes place. The hotel consists of double
rooms, only. Every person gives a list of possible roommates independently of whether
they have arrived yet. The model assumes that these lists are symmetric, i. e., every
potential roommate will accept to share this room. The hotel manager has to decide in
which room the person will stay. The objective is to minimize the allocated rooms, i. e. to
maximize the matching in the implicitly defined graph.
In the weighted version the aim is an online construction of a weighted matching of maximal
size in G.
In the paper a tight analysis of the unweighted model is given; the competitive
ratio is 1.5. For the weighted model a lower bound of 3 is proven. A suggested online
algorithm is shown to be 4-competitive.
³ Both articles ([KP93] and [KMV94]) were previously published at conferences in 1991.
At the end of the next section we will revisit the roommates problem and we will compare
it to our model. Indeed, our model is a special case of the roommates problem. For our
investigations we were able to adapt proofs taken from the discussed paper.
2 The Model
In the beginning of the introduction we described an online matching problem. Now we
will present a formal definition for the online request server matching problem (ORSM).
The underlying structure of the problem is a bipartite graph G := (R ∪ S, E). Both
partitions R and S are totally ordered. We denote the vertices by r_1, r_2, … and s_1, s_2, …,
and the indices indicate the position within the order.
We interpret this order as a discrete time model. The vertices of partition S represent a
simple resource called server, which is available for one unit each time step. Partition R is
interpreted as a set of tasks. Such a task has a demand of one server unit to be completed.
We call them requests and every time step one of them might occur. An edge {r_i, s_j}
between a request vertex r_i and a server vertex s_j means that the request can be served in
time step j. The set of edges E ⊆ R × S is constructed with a restriction:

    {r_i, s_j} ∈ E  ⟹  j ≥ i.   (1)
This means that a request that occurs at time step i must not specify a possible service
time in the past. Without this restriction the modelled scheduling problem does not make
sense and no competitive online algorithm would be possible.
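Restriction (1) is easy to state programmatically; a minimal sketch (our own illustration, with an assumed pair encoding of the edges):

def valid_orsm_instance(edges):
    """edges: set of pairs (i, j) meaning request r_i may be served at time j.
    Restriction (1): no request may name a service time in the past."""
    return all(j >= i for (i, j) in edges)

assert valid_orsm_instance({(1, 2), (1, 3), (2, 2), (2, 4)})
assert not valid_orsm_instance({(3, 1)})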
Now we have to specify how this model works online: When the system starts the partition
R is completely unknown.⁴ In the beginning of a time step i the request r_i is revealed as
input, i.e. the vertex r_i and all of its edges are inserted into the previously known part of
G. If no request appears, vertex r i is isolated.
After this input, an algorithm has to decide how to use the server in the current time step
i. Indeed, it can add an edge incident to s i to the online matching M . It is worth noting
that due to the restriction (1) all edges incident to s i are known when this decision has
to be made. The online algorithm has the objective to maximize the cardinality of the
matching M , i. e. to serve as much requests as it can.
Up to now the graph G was unweighted. We also study the weighted variant. Then
the graph is defined as G := (R
w). The weight function defines a
positive real value for every edge. The objective is the construction of a weighted matching
of maximal overall weight. Otherwise, the problem and its online fashion is completely the
same as in the ORSM problem. This version is named online request server weighted
matching problem or in short wORSM problem.
⁴ When we take a close look, this is not quite true. We know that in every time step i a new vertex r_i is
inserted but its set of incident edges is unknown before. For our convenience we will interpret the input
process in the introduced way.
2.1 Our Model in Comparison with Models out of Literature
At first we want to make a clear distinction between the ORSM model and the online
bipartite matching problem in [KVV90]. In the later one the vertices from the unknown
partition and their edges appear in the same way as in the ORSM problem. However, in the
online bipartite matching problem no order on the vertices of the already known partition
is given and no restrictions on the set of edges. After a new vertex v is inserted the online
algorithm has to decide which edge incident to v should be added to the matching. In
contrast, in the ORSM model, one asks for an edge incident to the current server vertex to
add into the matching. This server vertex is situated in the known partition of the bipartite
graph. The restriction (1) of the set of edges guarantees that all edges of this server vertex
are known at that time. Due to the specified input process (revealing a request vertex and
all of its edges), all adjacent edges are known as well. There is therefore the advantage to
have some extra knowledge of the graph structure whenever a decision has to be taken.
That is also the reason why it is possible to achieve a better competitive ratio for the
ORSM problem. Additionally, in the weighted version, a restriction on a metric space, or
the other restrictions described in Sect. 1.2 are not necessary.
Again, we want to emphasize the difference: In the online bipartite matching problem
decisions about adding an edge to the online matching are made with respect to a set of
edges incident to a just revealed vertex. In the ORSM problem such a decision is made
with respect to a set of edges incident to a vertex of the other (known) partition.
In the previous section it was claimed that the ORSM problem is a special case of the
roommates problem. Now we are able to give a transformation of an instance of the wORSM
problem with its bipartite graph G = (R ∪ S, E, w) to an instance of the roommates
problem with the underlying graph G_R = (V_R, E_R, w_R) and its total order ≺ on the set V_R.
This transformation defines

    V_R := R ∪ S,  E_R := E,  w_R := w,

and the order ≺ on V_R is defined by the use of the orders on R and S such that

    r_1 ≺ s_1 ≺ r_2 ≺ s_2 ≺ r_3 ≺ s_3 ≺ …
Whenever a vertex r i is inserted, the roommates problem on GR is not able to add any edge
to M because no adjacent s-vertices are revealed (remember restriction (1) on E). Then
vertex s i is inserted. All of its incident edges and adjacent vertices respectively, are known
at that time and so every edge of s i can be selected as a matching edge in the roommates
problem. No edge of an unrevealed vertex was given simultaneously. That means s i will
never be a candidate for the matching again. We conclude that the roommates problem
can simulate the wORSM problem using the given transformation.
Nevertheless, the two models are not able to simulate each other in both directions. Hence, we have to
prove the lower bound of the competitive ratio for our more restrictive model. Additionally,
we present an online algorithm and an analysis applying simplified arguments which are
tailor-made to the ORSM problem. In the weighted model, the increase in knowledge of
the graph structure, compared to the roommates problem, results in a lower value for the
competitive ratio.
Before starting our investigations a few definitions and notations are presented.
2.2 Definitions and Notations
Although we assume that the reader is familiar with basic concepts in graph theory, and
although we already used some graph theoretical notations, a few standard definitions are
included in the following list.
Definition 2 (weighted graph)
G := (V, E, w) is called a weighted graph iff
V is a finite set of vertices,
E ⊆ {{u, v} : u, v ∈ V, u ≠ v} is a set of edges, and
w : E → R+ is a weight function.
When using the unit weight function w ≡ 1, an unweighted graph G := (V, E) is
derived. As you can see, the weight function is omitted in the notation. This conversion
from a weighted to an unweighted graph applies to the next definitions of graphs and
matchings.
Bipartite graphs consist of two disjoint sets of vertices without edges inside the sets:
Definition 3 (bipartite weighted graph)
G := (U ∪ V, E, w) is called a bipartite weighted graph iff
U and V are finite, disjoint sets of vertices,
E ⊆ {{u, v} : u ∈ U, v ∈ V} is a set of edges, and
w : E → R+ is a weight function.
Definition 4 (vertex induced subgraph)
Let G := (V, E, w) be a weighted graph and V_S ⊆ V. Then

    G|_{V_S} := (V_S, E ∩ {{u, v} : u, v ∈ V_S}, w)

is the subgraph of G induced by the vertex set V_S.
Definition 5 (matching)
Let G := (V, E, w) be a weighted graph. M ⊆ E is called a matching in G iff no two
distinct edges of M share a vertex.
Definition 6 (weight of a matching, |M|)
Let G := (V, E, w) be a weighted graph and let M be a matching in G. Then

    |M| := Σ_{{u,v} ∈ M} w({u, v})

is called the weight of matching M.
It is obvious that |M| counts the number of edges in M when it is applied to an unweighted
graph. In that case |M| is called the cardinality or size of the matching M.
Definition 7 (matched vertex, v ∈ M)
Let G := (V, E) be a graph and let M ⊆ E be a matching. For a vertex v ∈ V we write
v ∈ M iff there exists an edge e ∈ M with v ∈ e, i.e. iff v is matched in M.
Definition 8 (maximum weighted matching, M(G))
Let G := (V, E, w) be a weighted graph. Then M(G) denotes a maximum weighted matching
of G, i.e. a matching of maximal weight |M(G)|.
We will frequently denote a specially defined or calculated maximum matching by M(·).
Definition 9 (symmetric difference, ⊕)
Let A and B be two sets. Then A ⊕ B denotes the symmetric difference:

    A ⊕ B := (A \ B) ∪ (B \ A).
To illustrate structures of graphs and matchings, we use a graphical notation. It is fully
interchangeable with the set theoretical representation. Vertices of the two sets in bipartite
graphs are depicted by small, filled circles and squares, respectively. The circles
represent vertices from the request partition R, and the squares represent vertices out of the
server partition S. Additionally, the label of such a vertex is written right next to its
symbol. When we sketch an instance of the wORSM problem, the vertices of the request
partition are drawn in their order along a horizontal line from left to right. The server
vertices are drawn in the same way at a distance below. Indeed, a request vertex is situated
precisely on top of the server vertex of the same time step.
Edges are depicted by a line between two vertices. Of course, in bipartite graphs such a
line has a circle and a square on its ends. To mark the edges of a matching, they are
symbolized by a double line. Whenever edges cannot be selected for the matching anymore,
due to previous decisions made by the online algorithm, they are depicted in grey.
3 The Online Request Server Matching Problem
This section starts with a general lower bound for the competitive ratio of the ORSM
problem for deterministic online algorithms. Then the optimal 1.5-competitive algorithm
LMM is presented and analysed in the next subsections.
3.1 The Lower Bound
By applying the standard argument of an adversary strategy, we will show the following
general lower bound:
Theorem 1
Every deterministic online algorithm A for the ORSM problem has a competitive ratio of
at least 1.5.
Proof. The adversary strategy starts with the following input structure: request r_1,
revealed at time 1, has the edges {r_1, s_2} and {r_1, s_3}; request r_2, revealed at time 2,
has the edges {r_2, s_2} and {r_2, s_4}.

Figure 1: Situation at time 2.
A can react to this input at time 2 in three different ways:
Case 1: A puts the edge {r_1, s_2} to the online matching M_A. In the next step the adversary
presents the edge {r_3, s_4}.

Figure 2: Situation at time 3.

A is not able to use the serving interval s_3. Therefore, |M_A| ≤ 2, whereas the optimal
solution gives |M_OPT| = 3.
Case 2: A puts the edge {r_2, s_2} to the online matching M_A. In the next step the adversary
presents the edge {r_3, s_3}.

Figure 3: Situation at time 3.

A cannot use s_4. Again |M_A| ≤ 2, and the optimal matching results in |M_OPT| = 3.
Case 3: A decides not to match s 2 .
The adversary will present the input of Case 1 (the input of Case 2 would work as
well), and |M_A| ≤ 2 while |M_OPT| = 3.

Figure 4: Situation at time 3.
This strategy can be infinitely repeated every four time steps, and this fact shows the ratio
3/2.   □ Theorem 1
3.2 The Algorithm LMM
At a time step i the graph G representing the input of an online algorithm is known
up to request vertex r_i. More precisely, we know the subgraph of G induced by the set
{r_1, …, r_i} ∪ S. Due to the irreversible decisions of the former time steps,
all previous server vertices s_1, …, s_{i−1} and the request vertices r_k already matched in
M_{i−1} cannot be rearranged anymore (M_{i−1} is the online matching up to time i − 1).
We thus have a vertex induced bipartite subgraph of G:

    B_i := G|_{R_i ∪ S_i}  with  R_i := {r_j : j ≤ i, r_j ∉ M_{i−1}}  and  S_i := {s_j : j ≥ i}.
Our online algorithm is called 'Local Maximum Matching' (LMM) because it constructs a
maximum matching on every local subgraph B i (denoted by M(B i )). The exact function
of LMM follows:
1:  M_LMM := ∅; M(B_0) := ∅; i := 0
2:  loop
3:    i := i + 1
4:    read input of time i and build up B_i
5:    construct a maximum matching M(B_i) on B_i:
      start with all matching edges of M(B_{i−1}) which are edges in B_i
6:    look for an augmenting path which starts at vertex r_i and do the
      augmentation when found⁵
7:    if s_i ∈ M(B_i) then
8:      add the matching edge of s_i to the online matching M_LMM
9:    else if s_i is not isolated in B_i then   {all neighbours of s_i are matched in M(B_i)}
10:     add an arbitrary edge {s_i, r} of B_i to the matching M_LMM and delete the
        matching edge of r in M(B_i)
11:   end if
12: end loop
Line 10 of this algorithm is essential and prefers a matching which includes the current
vertex s i . Now we will analyse the performance of LMM.
⁵ Due to the maximum cardinality of M(B_{i−1}), an augmenting path must have r_i on one end.
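A compact reimplementation sketch of LMM may clarify lines 5-10; this is our own illustration (the function name, the pair encoding of edges, and the horizon parameter T are assumptions, not the report's code):

from collections import defaultdict

def lmm(edges, T):
    """Sketch of LMM. edges: pairs (i, j) with j >= i, meaning request r_i
    may be served at time j. Returns the online matching {time j: request i}."""
    adj = defaultdict(set)                  # request -> admissible servers
    for i, j in edges:
        adj[i].add(j)
    match_r, match_s = {}, {}               # local matching M(B_i)
    online = {}                             # fixed edges of M_LMM
    for i in range(1, T + 1):
        def augment(r, seen):               # Kuhn-style augmenting path search
            for s in sorted(adj[r]):
                if s < i or s in online or s in seen:
                    continue                # passed or already fixed servers left B_i
                seen.add(s)
                if s not in match_s or augment(match_s[s], seen):
                    match_r[r], match_s[s] = s, r
                    return True
            return False
        if i in adj:                        # lines 5-6: extend M(B_i) from r_i
            augment(i, set())
        if i in match_s:                    # lines 7-8
            online[i] = match_s[i]
        else:                               # lines 9-10
            free = [r for r in adj
                    if i in adj[r] and r <= i and r not in online.values()]
            if free:
                r = free[0]
                if r in match_r:            # delete r's matching edge in M(B_i)
                    del match_s[match_r[r]], match_r[r]
                online[i] = r
    return online

# Example resembling the proof of Theorem 1, with r_3 servable at s_4:
lmm({(1, 2), (1, 3), (2, 2), (2, 4), (3, 4)}, T=4)   # all three requests served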
3.3 The Upper Bound
At first, two invariants of LMM will be formalized in lemmata.
Lemma 2
After a request vertex r_i has been matched in a local graph B_j for the first time, it is
matched in all following maximum matchings M(B_k), j ≤ k, up
to the time step where its current matching
edge is added to M_LMM, and so r_i ∈ M_LMM
holds in the end.
Proof. In line 5 of LMM at time j, the matching edge of a previously matched request
vertex r is copied to M(B_j), and an augmentation in line 6 can
change r's matching edge, but r ∈ M(B_j) still holds.
In line 10 the matching edge of a vertex r in M(B_i) is deleted,
but after this step r ∈ M_LMM holds.   □ Lemma 2
Lemma 3
If s_i is not isolated in B_i, then s_i ∈ M_LMM
holds.
The lemma follows directly from lines 7 to 11 of LMM. Applying these two invariants, we
can prove the upper bound now.
Theorem 4
LMM is 1.5-competitive.
Proof. We will show that no online matching M_LMM can be increased by augmenting
paths of length one or three. Therefore, shortest augmenting paths for M_LMM must have a
length of five.
This fact immediately results in the theorem, because by performing the augmentation,
two matching edges out of M_LMM become three edges in M_OPT. Longer augmenting paths have
a lower ratio, and matching edges outside such paths decrease the overall ratio as well.
Let us fix an arbitrary input graph G_I and a maximum matching M_OPT := M(G_I). Now we
can compare M_OPT with the matching M_LMM constructed by LMM on G_I. The symmetric
difference M_OPT ⊕ M_LMM defines a set of disjoint augmenting paths for M_LMM (for more
details see [Edm65]). The augmentation of all of these paths transforms M_LMM into M_OPT.
By contradiction, we will prove the non-existence of paths of length one and three in this
set.
Figure 5: Structure of augmenting paths of length one and three.
Case 1: augmenting path of length one, {s_i, r_a}:
Vertex r_a was never matched, because r_a ∉ M_LMM in the end
(reverse application of Lemma 2).
Hence the edge {s_i, r_a} is in B_i, and this fact contradicts Lemma 3.
Case 2: augmenting path of length three, {s_i, r_a}, {r_a, s_j}, {s_j, r_b}, and i < j:
r_a was matched at time j only, which implies that the edge {s_i, r_a} was in B_i. Then
s_i ∈ M_LMM holds by Lemma 3, a contradiction.
Case 3: augmenting path of length three and i > j:
At time j, the request vertices r_a, r_b were not yet matched in M_LMM, and so the whole path
{{s_i, r_a}, {r_a, s_j}, {s_j, r_b}} is in B_j.
The case s_i ∉ M(B_j)
contradicts the optimality of M(B_j), because the path is an
augmenting one. Therefore, at time j, s_i ∈ M(B_j)
must hold, i.e. there exists a
request vertex r_c with {r_c, s_i} ∈ M(B_j). Later, at time k (j < k < i), line 10 of LMM
deletes the matching edge {r_c, s_i} and adds {s_k, r_c} to M_LMM.
Due to the definition of the ORSM problem, both edges are known
at time j. Now the above argument about s_i can be recursively applied to s_k, and,
due to the finite structure of B_i, this fact contradicts the existence of an augmenting
path of length three in M_LMM.
□ Theorem 4
4 The Online Request Server Weighted Matching
Problem
As in Section 3, we present a general lower bound for the wORSM problem first. Then
the algorithm wLMM is given and analysed. Unfortunately this algorithm cannot achieve
the lower bound and so we suggest the algorithm PHI at the end of this section.
4.1 The Lower Bound
Let φ := (√5 + 1)/2 ≈ 1.618034 be the golden ratio.
Theorem 5
Every deterministic online algorithm A for the wORSM problem has a competitive ratio of
at least φ.
Proof.
The adversary strategy starts with the input edges {r_1, s_1} and {r_1, s_2}, with weights
w({r_1, s_1}) = 1 and w({r_1, s_2}) = φ, as you can see in Fig. 6.

Figure 6: Situation at time 1.

A can react to this input in two ways:
Case 1: A adds edge {r_1, s_1} to the weighted online matching M_A. Then the adversary
does not present any new edge incident to s_2. So |M_A| = 1 holds, whereas |M_OPT| = φ.
Case 2: A does not change the online matching M_A. Then the adversary presents the edge
{r_2, s_2} with weight w({r_2, s_2}) = φ.

Figure 7: Situation at time 2.

Now A is able to construct a matching with weight |M_A| ≤ φ only, whereas it holds
|M_OPT| = 1 + φ = φ². The ratio of these two values is

    (1 + φ)/φ = φ.

Every two time steps the adversary can repeat this strategy up to infinity, and this fact
shows the lower bound of φ = (√5 + 1)/2 for the competitive ratio.   □ Theorem 5
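A quick numerical check (our own illustration of the two cases as reconstructed above):

phi = (5 ** 0.5 + 1) / 2
# Case 1: A takes {r1, s1} (weight 1) while OPT takes {r1, s2} (weight phi).
# Case 2: A gets at most phi at s2 while OPT gets 1 + phi = phi**2.
assert abs((1 + phi) / phi - phi) < 1e-12    # both cases force the ratio phi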
4.2 The Algorithm wLMM
The algorithm wLMM works very similarly to LMM. Of course, wLMM determines a maximum
weighted matching on the local bipartite graph B i . Furthermore, the algorithm works
without the special preference of the vertex s i . Later, on page 30, we will explain why a
special treatment of s i , in the way LMM does, cannot increase the performance of wLMM.
Our investigation will indicate the problems arising from this fact. A formal description of
wLMM follows:
1:  M_wLMM := ∅; i := 0
2:  loop
3:    i := i + 1
4:    read input of time step i and build up B_i
5:    construct a maximum weighted matching M(B_i)
6:    if s_i ∈ M(B_i) then
7:      add the matching edge of s_i to the online weighted matching M_wLMM
8:    end if
9:  end loop
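A reimplementation sketch of wLMM (our own illustration; the use of networkx's max_weight_matching and the dict encoding of weighted edges are assumptions, not the report's code):

import networkx as nx

def wlmm(weighted_edges, T):
    """Sketch of wLMM. weighted_edges: dict {(i, j): w} with j >= i.
    Recomputes a maximum weighted matching of B_i in every step (line 5)."""
    online, removed_r = {}, set()
    for i in range(1, T + 1):
        B = nx.Graph()
        for (r, s), w in weighted_edges.items():
            if r <= i and r not in removed_r and s >= i:
                B.add_edge(("r", r), ("s", s), weight=w)
        M = nx.max_weight_matching(B)              # set of unordered pairs
        partner = {u: v for u, v in M} | {v: u for u, v in M}
        if ("s", i) in partner:                    # lines 6-7
            r = partner[("s", i)][1]
            online[i] = r
            removed_r.add(r)
    return online

# Example from the proof of Theorem 6 (with eps = 0.5):
wlmm({(1, 1): 1.0, (1, 2): 1.5, (2, 2): 1.5}, T=2)   # online weight 1.5, OPT 2.5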
4.3 The Performance of wLMM
In this section we present a tight analysis of wLMM being 2-competitive. At first we exhibit
a lower bound for wLMM which establishes the fact that wLMM is not able to achieve the
lower bound of Theorem 5.
The Lower Bound
Theorem 6
The online algorithm wLMM has a competitive ratio of at least 2.
Proof. An adversary presents the input w({r_1, s_1}) = 1, w({r_1, s_2}) = 1 + ε,
and in the next time step w({r_2, s_2}) = 1 + ε.

Figure 8: Input structure that is used by the adversary.

The algorithm wLMM determines an online matching consisting of a single edge incident to s_2, which has a
weight of |M_wLMM| = 1 + ε. The optimal solution is M_OPT = {{r_1, s_1}, {r_2, s_2}} with
weight |M_OPT| = 2 + ε. Consequently,

    |M_OPT| / |M_wLMM| = (2 + ε)/(1 + ε),

and the limit for ε → 0 shows the lower bound of 2.   □ Theorem 6
The Upper Bound
Firstly, we repeat a few definitions and give new ones which will be used extensively in the
following proofs. Most of the notations are similar and comparable to [BR93], whereas one
different notation is introduced. By the use of this notation, we aim to make the formulae
more accessible.
Definitions and Notations
G = (R ∪ S, E, w) is the bipartite, weighted graph of the wORSM problem with the
weight function w : E → R+.
M_i is the online matching which is calculated up to step i by wLMM.
M(B) is a maximum weighted matching of a bipartite graph B.
m(B) := |M(B)| is the overall weight of a maximum
weighted matching of B.
B_i := G|_{R_i ∪ S_i} is the local bipartite graph of step i, which consists of all known and not yet
matched request vertices R_i ⊆ R with R_i = {r_j : j ≤ i, r_j ∉ M_{i−1}}, the non-passed
server vertices S_i = {s_j : j ≥ i}, and all edges of G between these two sets.
Both notations above can be combined. We use indexed r's and s's to denote vertices of
R and S, so the partition where these vertices come from is unambiguous. Additionally,
we employ a list of inserted or deleted vertices as a subscript when necessary. A typical usage of this
notation is B_{i,−s_j},
which describes the vertex induced subgraph G|_{(R_i ∪ S_i) \ {s_j}}.
Let M(B_i) be a maximum weighted matching of B_i and let M(B_{i,−s})
be such a matching
of B_i after removing the vertex s. The symmetric difference of these two sets results in
a path P := M(B_i) ⊕ M(B_{i,−s}).
It is an augmenting path⁶ in B_i with respect to the
matching
M(B_{i,−s}). We will use notation P for the set of edges as defined above and
⁶ This path can be empty.
also for the corresponding graph, whose edge set is the defined set of edges; the
set V_P gathers all the incident vertices. With this definition, we define the value of the
corresponding server vertex as

    β_i(s) := m(B_i) − m(B_{i,−s}).

It is obvious that β_i(s) corresponds to the path P := M(B_i) ⊕ M(B_{i,−s}) as above.
Sometimes we call β_i(s) the weight of path P, because this value is
added to the overall matching when augmenting P.
Next we define the global potential function

    Φ_i := Σ_{s ∈ S_i} β_i(s).
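Under our reading of the garbled definition (the sum over all server values is an assumption), β and Φ can be computed directly; a sketch using networkx, again an assumption about tooling:

import networkx as nx

def m(B):
    """Weight of a maximum weighted matching of B."""
    return sum(B[u][v]["weight"] for u, v in nx.max_weight_matching(B))

def beta(B, s):
    """beta(s) = m(B) - m(B - s), the value of server vertex s."""
    return m(B) - m(nx.restricted_view(B, [s], []))

def phi(B, servers):
    """Global potential: the sum of all server values (assumed form)."""
    return sum(beta(B, s) for s in servers)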
At last, the following abbreviated notations will be used for an expression f:
f|_v is the value of f in the step in which vertex v is processed, and
Δf|_v is the difference of the value of expression f before and after processing vertex v.
Now some important properties of the weighted augmenting paths which correspond to β(v)
are listed. In this context we can relax the restriction to bipartite graphs. All above-mentioned
definitions apply to simple, undirected, weighted graphs.
Lemma 7
Let G = (V, E, w) be a simple, undirected, weighted graph and v ∈ V. Then it holds

    β(v) = m(G) − m(G_{−v}) ≥ 0.

Proof. G_{−v} is a subgraph of G, and both M(G) and M(G_{−v}) are optimal. So it
holds m(G) ≥ m(G_{−v}).   □ Lemma 7
Lemma 8
(I) Every subpath P_b which arises from P by deleting an even number of edges from the
start vertex v, and which starts itself with vertex b, is optimally matched by the matching
of P, i.e. M(P_b) = M(P) ∩ P_b.
(II) Every subpath P_c which arises from P by deleting an odd number of edges from the
start vertex v, and which starts itself with vertex c, is optimally matched by the matching
of P_{−v}, i.e. M(P_c) = M(P_{−v}) ∩ P_c.
Figure 9: Sketch of the paths described in Lemma 8 including their optimal matchings.
Proof. We prove statement (I) by contradiction. Assume M(P) ∩ P_b is not optimal, i.e.
m(P_b) > |M(P) ∩ P_b|.
The matching edge of b lies in P_b: we have {b, c} ∈ M(P) and {b, c} ∈ P_b, because {v, a} ∈ M(P)
and b is at a distance of an even number of edges away from v. Path P can be divided into two subpaths
at vertex b, and it holds

    m(P) = |M(P) ∩ (P \ P_b)| + |M(P) ∩ P_b|.

Then the above assumption implies

    |M(P) ∩ (P \ P_b)| + m(P_b) > m(P),

which contradicts the optimality of M(P).
In the same way statement (II) is proven. The line of argumentation is about P_{−v} and
M(P_{−v}) instead. Notice that in M(P_{−v}) the matching and non-matching edges are exchanged
in comparison to M(P), by definition of P.   □ Lemma 8
Lemma 9
Let the path P and the subpaths P_b and P_c be defined as in Lemma 8. The value β(v) can be
expressed from vertex b or c onwards by β(b) or β(c) in the following way:

    (I)  β(v) = m(P \ P_b) − m((P \ P_b)_{−v}) + β(b),
    (II) β(v) = m(P \ P_c) − m((P \ P_c)_{−v}) − β(c).

Proof.
(I): Matching M(P) can be divided into M(P \ P_b) and M(P_b) by Lemma 8. In the same
way, M(P_{−v}) can be divided into M((P \ P_b)_{−v}) and M(P_{b,−b}), which yields (I).
(II): Matching M(P) can be divided into M(P \ P_c) and M(P_{c,−c}) by Lemma 8.
In the same way, M(P_{−v}) can be divided into M((P \ P_c)_{−v}) and M(P_c). Hence
it holds

    β(v) = m(P \ P_c) + m(P_{c,−c}) − m((P \ P_c)_{−v}) − m(P_c)
         = m(P \ P_c) − m((P \ P_c)_{−v}) − β(c).
Lemma 10
Let u be the vertex adjacent to v on P. Then β(v) ≤ w({v, u}).
Proof. The lemma follows from Lemma 9 part (II), where the subpath P_c starts with
vertex u. Then (P \ P_u)_{−v} is the empty graph with m((P \ P_u)_{−v}) = 0,
and the application of Lemma 7 (β(u) ≥ 0) completes the proof.   □ Lemma 10
Lemma 11
Let G = (V, E) be a simple, undirected, weighted graph, v ∈ V, and let β(v) correspond to a path P of even length. When a manipulation of G increases the length of P, the value of β(v) will never decrease, i.e. Δβ(v) ≥ 0.
Proof. Suppose b to be the last vertex of path P and P_b to be the extension of P. We can use Lemma 9 part (I) because P has an even length. We then obtain an expression of the new value of β(v) in terms of β(b), and by applying Lemma 7 (β(b) ≥ 0) the claim follows. □ (Lemma 11)
Lemma 12
Let G = (V, E) be a simple, undirected, weighted graph, v ∈ V, and let β(v) correspond to a path P. When a manipulation on G decreases the length of P such that β⁺(v) corresponds to a path of odd length, then the value of β(v) will never decrease, i.e. Δβ(v) ≥ 0.
Proof. Suppose c to be the last vertex of the reduced path (corresponding to β⁺(v)) and P_c^{!c} to be the shortening itself. We can use Lemma 9 part (II) because P⁺ has an odd length, and we obtain an expression of β⁺(v) in terms of β(c); applying Lemma 7 completes the proof. □ (Lemma 12)
Now we are able to give the two key lemmata which are needed to prove the upper bound. The statements of these lemmata are the same as in [BR93], and they are named identically there. The line of argumentation in our proof follows E. Bernstein's and S. Rajagopalan's proof, too.
The Key Lemmata and the Proof
Lemma 13 (Stability Lemma)
The value of a server vertex never decreases, i.e.
Δβ(s) ≥ 0 for all s ∈ S.
Proof. When wLMM is running, the following changes on the local graph B can happen:
Case 1 'matching': A matching edge is added to the online matching. This edge, including both of its vertices, is removed from B.
Case 2 'non-matching': The current server vertex s_i is not matched (s_i ∉ M(B_i)) and is removed from B. This can happen when the weights of the edges incident to s_i are too small.
Case 3 'input': A new request vertex r_i is added to B.
We will show that in all three cases the value β(s) of an arbitrary server vertex s ∈ S_i will not decrease. Let Q be the path corresponding to β(s). Whenever Q is not affected by one of the above cases, it holds that Δβ(s) = 0. Henceforth, it is sufficient to assume a modification in the structure of Q.
Case 1: Path Q is shortened by removing one of its matching edges. This removal includes all adjacent non-matching edges. Hence, the shortened path Q has a matching edge on both of its ends and therefore it is of odd length. An application of Lemma 12 gives Δβ(s) ≥ 0.
Case 2: Path Q is shortened by removing its last vertex s_i (all other vertices of Q are matched in M(B_i)). Again, the removal of s_i removes all incident edges, and Q has two matching edges on its ends. So it is of odd length, and by Lemma 12 it holds that Δβ(s) ≥ 0.
Case 3: At first we focus on the possibility that the new request vertex is not matched. However, such a situation does not change any path Q and therefore it holds that Δβ(s) = 0. The reason is that r_i could only be the last, non-matched vertex of Q; then Q would have an even length. By definition, Q starts with a server vertex s ∈ S_i, which implies that all request vertices are situated at an odd distance to s. This fact implies a contradiction.
Henceforth, we assume that r_i is matched to a server vertex s_j in the enlarged graph B^{/r_i}; that is, an augmenting path P was augmented, which is described exactly by M(B^{/r_i}) △ M(B). We define the weight of path P just as the value of a server vertex⁷: β′(r_i) := m(B^{/r_i}) − m(B).
Now we have to distinguish whether s_j was matched in M(B) before.
Case 3.a: s_j ∉ M(B). No path Q of odd length could become extended, because these paths have two matched end vertices; this contradicts the assumption of this case. If Q is of even length and has been extended, then Lemma 11 can be applied, and this results in Δβ(s) ≥ 0.
Case 3.b: s_j ∈ M(B). The following statements are implied by the preconditions of this case:
• P is an augmenting path of length > 1, because s_j was matched before.
• There exists a vertex v which is the first common vertex of the paths P and Q; at that point, the paths meet for the first time with respect to their start vertices.
From these facts we can conclude:
• Vertex v is matched consistently in P and Q; otherwise it would be matched to a vertex of Q which is not in P and to another vertex of P, which is a contradiction.
⁷ Note: the definition of β(s) was made for server vertices s ∈ S only
Figure 10: Situation just before P is augmented.
• From vertex v onwards, paths P and Q are identical. Lemma 9 part (I) gives the reason for this statement⁸, since β(v) is the same for both paths.
Lemma 9 part (I) allows us to express β(s) by β(v). Due to the fact that no matching edge is changed between s and v on path Q (see also Fig. 10), it holds that Δβ(s) = Δβ(v). Hence, it is sufficient to investigate Δβ(v). We can observe that v is matched to the farther request vertex with respect to r_i before path P is augmented. This augmentation exchanges all matching and non-matching edges on P. Then, v is matched to the request vertex that is closer to r_i, and the path corresponding to β⁺(v) has the 'opposite direction', i.e. it starts at vertex v and ends at vertex u, where u is situated between v and r_i.
This vertex u has to be a request vertex, u ∈ R_i. Otherwise it would be matched in M(P) outside the path described by β⁺(v), which is a contradiction to the definition of β⁺(v). Vertex u and r_i can be identical.
⁸ Note: A very careful inspection shows the possibility that this statement does not necessarily hold. On the one hand, β(v) is an optimized value and it is equal in P and Q. On the other hand, there can exist two different paths P_tail and Q_tail which start at vertex v and have the same value β(v). Without loss of generality, select the maximum matchings which define P such that Q_tail is the part of P from v onward. It is obvious that these matchings exist and that we get proper definitions for P, β(s_j), and β′(r_i).
The definition of Δβ(v) gives Δβ(v) = β⁺(v) − β(v). Substituting this claim in equation (2), and by Lemma 7 β′(u) ≥ 0, which completes the proof. Lemma 7 can be applied here because it holds for the value β(·) of a vertex in a general, simple, undirected graph.
Proof of the claim:
On the left-hand side, the vertex v is removed out of path P in both terms. Hence, P is divided into two subpaths P₁ and P₂, and r_i is situated in P₁. The maximum weighted matching of P₂ is the same in M(P^{!v}) and M(P^{!r_i,v}), and it has no influence on the difference m(P^{!v}) − m(P^{!r_i,v}).
Let P_r be the subpath between r_i and u, and let P_u be the subpath of P from u onward. By definition of β′(r_i) and Lemma 8:
To establish the statement about m(P_r), it is sufficient to have a look at P₁. When P was augmented (from M(P^{!r_i}) to M(P)), matching and non-matching edges were exchanged. After the removal of vertex v, path P₁ is augmented, and again all matching and non-matching edges are exchanged in the subpath between v and u which is described by β⁺(v). In this subpath we have the same situation as before, when r_i had been inserted. Then the difference between M(P₁) and M(P₁^{!v}) can only be found in P_r.
From the above equation, we get the claim.
Lemma 14 (Pay-Off Lemma)
During a run of wLMM the following holds:
Before proving the lemma we would like to present an interpretation of the formulae. Statement (I) establishes the fact that the potential function Φ increases by at least the value of β(s) when the server vertex s is processed. Statement (II) ensures the choice of the heaviest augmenting path when a request vertex r is inserted and matched. The left-hand side of this inequality is the weight of the selected augmenting path, whereas the right-hand side describes the weight of all possible augmenting paths that start in vertex r (see Lemma 9 part (II)).
Proof. For simplicity of notation the index i is omitted.
Case 1: 'matching', s ∈ M(B). Substituting the corresponding equations into the definition of ΔΦ|_s and applying Lemma 10, we obtain ΔΦ|_s ≥ β(s).
Case 2: 'non-matching', s ∉ M(B). Substituting the corresponding equations into the definition of ΔΦ|_s, we obtain together ΔΦ|_s ≥ β(s).
(II) (proof by contradiction)
Let r ∈ R and let m(B^{/r}) be the overall weight of an optimal matching of B including vertex r.
Assumption: there exists s ∈ S_i for which the inequality of statement (II) fails.
The matching M is not changed by the insertion of r, and we have an expression for ΔΦ|_r. Substituting the assumption into this equation yields a term which is the weight of a matching in graph B^{/r}, because both vertices r and s are not in B^{!s}. This contradicts the assumed optimality of m(B^{/r}), and the lemma is proven. □ (Lemma 14)
Theorem 15
The deterministic online algorithm wLMM for the wORSM problem is 2-competitive.
Proof. For all request vertices r ∈ R we get inequality (4) from Lemma 14 part (II), while the definition of the wORSM problem ensures that r is processed before s, which gives (5). From Lemma 14 part (I) we get ΔΦ|_s ≥ β(s) for all server vertices s ∈ S and, by the use of Lemma 13, since s is processed after r, inequality (6) follows.
The sum of the inequalities (4), (5) and (6) bounds ΔΦ from below for an arbitrary matching M_G of G. Hence, we get 2·Φ_final ≥ |M_OPT|, and from the definition of Φ, and the fact that B_final = ∅, it follows that 2·|M_wLMM| ≥ |M_OPT|, which shows the 2-competitiveness of wLMM. □ (Theorem 15)
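The behaviour analysed in Cases 1-3 of the Stability Lemma can be summarized in a short Python sketch of one wLMM step. This is our reconstruction, not the authors' code; it assumes the networkx library, and the representation of the request input is illustrative.

import networkx as nx

def wlmm_step(B, online_matching, s_i, new_requests):
    """One wLMM step for the current server s_i, following Cases 1-3 above."""
    for r, weights in new_requests:                  # Case 3: 'input'
        for s, w in weights.items():
            B.add_edge(r, s, weight=w)
    M = nx.max_weight_matching(B)                    # local optimum M(B_i)
    edge = next(((u, v) for u, v in M if s_i in (u, v)), None)
    if edge is not None:                             # Case 1: 'matching'
        online_matching.append((edge, B[edge[0]][edge[1]]["weight"]))
        B.remove_nodes_from(edge)                    # edge and both vertices leave B
    elif s_i in B:                                   # Case 2: 'non-matching'
        B.remove_node(s_i)                           # s_i is passed and discarded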
A Comment on the wLMM-Algorithm
Why does wLMM not prefer the current server vertex s_i in the online matching, like LMM does? The answer to that question is a very short one: it does not help anyway.
Suppose an algorithm wLMM′ which prefers the current server vertex in the way LMM does. Furthermore, assume an input graph G such that M_wLMM ≠ M_wLMM′. Let s_i be the first vertex where wLMM′ makes a different decision than wLMM. By decreasing all weights of edges incident to s_i by ε, wLMM′ will behave like wLMM on s_i. With the help of this trick, we can construct an input graph G′ on which wLMM calculates the same matching as wLMM′ does on input G. The difference in the weights of the resulting online matchings |M_wLMM| and |M_wLMM′| is small and disappears for ε → 0.
Possibly, the next section shows a way out.
4.4 The Algorithm PHI
The algorithm wLMM implements a special greedy strategy: from its current point of view, it takes the maximal additional weight for the online matching. Theorem 6 shows a situation where this strategy is not very clever. With regard to Theorem 5 (construction of the general lower bound) and Theorem 6, we suppose that there is an algorithmic improvement.
Using the current vertex s_i in the online matching M, and therefore removing a vertex r out of B_i simultaneously, may be more valuable than the originated loss in M(B_i). This observation leads us to suggest another online algorithm for the wORSM problem. It works similarly to wLMM but, after computing the local maximum weighted matching M(B_i), it checks the vertex s_i. Whenever s_i ∉ M(B_i) holds, the weights of all edges of B_i incident to s_i will be increased by the factor φ = (√5 + 1)/2. This new local bipartite graph is called B_i^φ. Now the algorithm determines M(B_i^φ) and, if s_i is matched there, its matching edge is added to the online matching M. The new online algorithm is called PHI and a formal description follows:
1: M_PHI := ∅
2: loop
3: i := i + 1
4: read input of time i and build up B_i
5: construct a maximum weighted matching M(B_i)
6: if s_i ∉ M(B_i) then
7: construct B_i^φ {increase the weight of every edge of B_i incident to s_i by factor φ}
8: calculate the maximum weighted matching M(B_i^φ)
9: end if
10: if s_i is matched then
11: add the matching edge of s_i to the online weighted matching M_PHI
12: end if
13: end loop
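A hedged Python sketch of one PHI step, following the numbered description above. This is our reconstruction, not the authors' implementation; it assumes the networkx library.

import math
import networkx as nx

PHI = (1 + math.sqrt(5)) / 2    # the golden ratio phi

def phi_step(B, online_matching, s_i):
    """One iteration of the PHI loop for the current server vertex s_i."""
    M = nx.max_weight_matching(B)                        # line 5
    if not any(s_i in e for e in M):                     # line 6
        B_phi = B.copy()                                 # line 7
        for r in list(B_phi.neighbors(s_i)):
            B_phi[s_i][r]["weight"] *= PHI
        M = nx.max_weight_matching(B_phi)                # line 8
    edge = next(((u, v) for u, v in M if s_i in (u, v)), None)
    if edge is not None:                                 # lines 10-12
        online_matching.append((edge, B[edge[0]][edge[1]]["weight"]))
        B.remove_nodes_from(edge)
    else:
        B.remove_node(s_i)                               # s_i stays unmatched

Note that the matched edge is recorded with its original weight from B, not the scaled weight of B_phi; the scaling only biases the choice of the local matching towards s_i.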
A problem still unsolved is the performance analysis of PHI. It seems as if a modification of the technique that is used in the proof of Theorem 15 does not work. Note that whenever s_i ∈ M(B_i), the algorithm does not need to calculate B_i^φ and M(B_i^φ) and to test s_i again. In our opinion, these facts (the necessity of a modified algorithm like PHI and a new technique to analyse it) also explain the gap in the analysis of the roommates problem in [BR93].
It is worth noting that the algorithm LMM is a special implementation of PHI for unit edge weights: the increasing of the unit weights of the s_i-edges by a factor of φ is equivalent to the simple preference of an edge incident to s_i at time step i.
5 Open Problems
At the end of the last subsection we presented the most urgent research task: to close the gap between the lower and upper bound of the wORSM problem (Theorem 5 and Theorem 15). If we achieve this aim, the roommates problem (weighted version) of [BR93] should be revisited.
A completely different research task deals with the ORSM problem (unweighted version). Here, the performance on inputs with strong, pre-determined structures is of interest. Finally, we would like to model a set of parallel, homogeneous resources and requests with deadlines. First results have already been proven. It turned out that, depending on the concrete model:
• lower competitive ratios are achievable,
• the algorithm LMM is not optimal anymore,
• different arguments to prove upper bounds are necessary.
These investigations are not completed by now, but they will be presented in a later publication
--R
The competitiveness of on-line assignments
On the power of randomization in on-line algorithms
Online Computation and Competitive Analysis.
The roommates problem: Online matching on general graphs.
Network flow and testing graph connectivity.
Online computation.
Online weighted matching.
An optimal deterministic algorithm for online b-matching
Online matching with blocked input.
An optimal algorithm for on-line bipartite matching
Amortized efficiency of list update and paging rules.
--TR
An optimal algorithm for on-line bipartite matching
Online computation
Online computation and competitive analysis
Simple competitive request scheduling strategies
Online Scheduling of Continuous Media Streams
On-line Network Optimization Problems
On-line Scheduling
The Roommates Problem | competitive analysis;online scheduling;online bipartite matching |
504566 | Reductions for non-clausal theorem proving. | This paper presents the TAS methodology as a new framework for generating non-clausal Automated Theorem Provers. We present a complete description of the ATP for Classical Propositional Logic, named TAS-D, but the ideas, which make use of implicants and implicates, can be extended in a natural manner to first-order logic and non-classical logics. The method is based on the application of a number of reduction strategies on subformulas, in a rewrite-system style, in order to reduce the complexity of the formula as much as possible before branching. Specifically, we introduce the concept of complete reduction, and extensions of the pure literal rule and of the collapsibility theorems; these strategies allow us to limit the size of the search space. In addition, TAS-D is a syntactical countermodel construction. As an example of the power of TAS-D we study a class of formulas which has linear proofs (in the number of branchings) when either resolution or dissolution with factoring is applied. When applying our method to these formulas we get proofs without branching. In addition, some experimental results are reported. Copyright 2001 Elsevier Science B.V. | Introduction
Much research in automated theorem proving has been focused on developing satisfiability testers for sets of clauses. However, experience has pointed out a number of disadvantages of this approach: it is not natural to specify a real-world problem in clause form, the translation into clause form is not easy to handle and, although there are a number of efficient translation methods, models usually are not preserved under the translation. In addition, clausal methods are not easy to extend to non-classical logics, partially because no standard clause form can be defined in this wider setting.
Non-clausal theorem proving research has been mainly focused on either tableaux methods or matrix-based methods; also some ideas based on the data structure of BDDs have been
Partially supported by CICYT project number TIC97-0579-C02-02.
TAS stands for Transformaciones de Árboles Sintácticos, the Spanish translation of Syntactic Tree Transformations
used in this context. Recently, path dissolution [6] has been introduced as a generalisation of the analytic tableaux, allowing tableaux deductions to be substantially speeded up.
The central point for the efficiency of any satisfiability tester is the control over the branching, and our approach focuses on reducing the formula as much as possible before actually branching. Specifically, we introduce the concept of complete reduction, and extensions of the pure literal rule and of the collapsibility theorems. On the other hand, another interesting point in the design of ATPs is the capability of building models provided the input formula is satisfiable.
A non-clausal algorithm for satisfiability testing in the classical propositional calculus, named TAS-D, is described. The input to the algorithm need not be in conjunctive normal form or any normal form. The output is either "Unsatisfiable", or "Satisfiable", and in the latter case a model of the formula is also given.
To determine satisfiability for a given formula, firstly we reduce the size of the formula
by applying satisfiability-preserving transformations, then choose a variable to branch and
recursively repeat the process on each generated task. This feature allows:
. to obtain useful information from the original structure of the formula.
. to make clearer proofs.
. to extend the method to non-classical logics which do not have a widely accepted normal
form.
Although our intention in this paper is to introduce the required metatheory, TAS-D is currently being tested and we are obtaining very promising results. In our opinion, the results of these tests allow us to consider the TAS framework as a reliable approach to Automated Theorem Proving. TAS ideas are widely applicable, because they apply to different types of logics; flexible, because they provide a uniform way to prove soundness and completeness; and, in addition, easily adaptable, because switching to a different logic is possible without having to redesign the whole prover. In fact, it has already been extended to Classical First Order Logic [4], Temporal Logic [5] and Multiple-Valued Logic [1, 2].
The structure of the paper is as follows:
1. Firstly, the necessary definitions and theorems which support the reduction strategy are
introduced in Section 2.
2. Later, the algorithm TAS-D is described in Section 3.
3. Finally, a comparative example is included in Section 4, which shows a class of formulas
which has linear proofs (in the number of branchings) when either resolution or dissolution
with factoring is applied [3, 7]. When applying TAS-D to these formulas we get
proofs without branching.
1.1 Overview of TAS-D
TAS-D is a satisfiability tester for classical propositional logic; therefore it can be used as a
refutational ATP method and, like tableaux methods, it is a syntactical model construction.
The reduction strategies are the main novelty of our method with respect to other non-clausal
ATPs; like these methods, TAS-D is based on the disjunctive normal form. Its power
is based not only on the intrinsically parallel design of the involved transformations, but
also on the fact that these transformations are not just applied one after the other, but
guided by some syntax-directed criteria, described in Sections 2.2 and 2.4, whose complexity
is polynomial. These criteria allow us:
1. To detect subformulas which are either valid, or unsatisfiable, or equivalent to literals.
2. To detect literals ℓ such that it is possible to obtain an equisatisfiable formula in which ℓ appears at most once. Therefore, we can decrease the size of the problem as much as possible before branching.
By checking these criteria we give ourselves the opportunity to reduce the size of the problem
while creating just one subproblem; in addition, such reductions do not contribute to exponential
growth. However, the most important feature of the reductions is that they enable
the exponential growth rate to be limited.
As an ATP, TAS-D is sound and complete and, furthermore, as a model building method,
it generates countermodels in a natural manner.
1.2 Preliminary Concepts and Definitions
Throughout the rest of the paper, we will work with a classical propositional language with connectives {¬, ∧, ∨} and their standard semantics; V denotes the set of propositional variables; L denotes the set of literals; if ℓ is a literal then ℓ̄ is its opposite literal; we will also use the usual notions of clause, cube, implicant, implicate and negation normal form (nnf). S ⊑ A denotes that S is a subformula of A, and S ⊏ A denotes that S is a proper subformula of A.
An assignment I is an application from the set of propositional variables V to {0, 1}; the domain of an assignment can be uniquely extended, preserving the standard semantics, to the whole language. A formula A is said to be satisfiable if there exists an assignment I such that I(A) = 1; in this case I is said to be a model for A. Formulas A and B are said to be equisatisfiable if A is satisfiable iff B is satisfiable; formulas A and B are said to be equivalent, denoted A ≡ B, if I(A) = I(B) for every assignment I; ⊨ denotes the logical consequence; finally, the symbols ⊤ and ⊥ mean truth and falsity.
The transformation of a formula into nnf is linear (by repeated application of the De Morgan rules, the double negation rule and the equivalence A → B ≡ ¬A ∨ B), so in the following we will only consider formulas in nnf. In addition, by using the associative laws we can consider the connectives ∧ and ∨ to have flexible arity, and expressions like A₁ ∧ ⋯ ∧ Aₙ or A₁ ∨ ⋯ ∨ Aₙ to be well-formed formulas.
We will use the standard notion of tree and address of a node in a tree. An address η in the syntactic tree of a formula A will mean, when no confusion arises, the subformula of A corresponding to the node of address η; ε will denote the address of the root node. Similarly, when we say a subformula B of A we mean an occurrence of B in A; if B ⊑ A, η_B denotes the address of the node corresponding to B.
If Σ = {ℓ₁, ℓ₂, …, ℓₙ} is a set of literals, then Σ̄ = {ℓ̄₁, ℓ̄₂, …, ℓ̄ₙ}.
If Σ is a set of literals in A, then the expression A[Σ] denotes the formula obtained after substituting in A, for all ℓ ∈ Σ, every occurrence of ℓ by ⊤, and of ℓ̄ by ⊥.
If B and C are formulas and B ⊑ A, then A[B/C] denotes the result of substituting in A any occurrence of B by C. If {ℓ₁, …, ℓₙ} is a set of literals in A and the Cᵢ are formulas, then the expression A[ℓ₁/C₁, …, ℓₙ/Cₙ] denotes the formula obtained after substituting in A, for all i, every occurrence of ℓᵢ by Cᵢ.
If η is an address in A and C is a nnf, then the expression A[η/C] is the formula obtained after substituting in A the subtree rooted at η by C.
Adding Information to the Tree: Δ-lists and Δ̂-sets
The idea underlying the reduction strategy is the use of the information given by partial assignments (extensively used in Quine's method [8]) just for unitary assignments but, as we will show, in a powerful manner.
We associate to each nnf A two lists² of literals, denoted Δ₀(A) and Δ₁(A) (the associated Δ-lists of A), and two sets, denoted Δ̂₀(A) and Δ̂₁(A), whose elements are obtained out of the associated Δ-lists of the subformulas of A.
The Δ-lists and the Δ̂-sets are the key tools of our method to reduce the size of the formula being analysed for satisfiability.
2.1 The Δ-lists
In a nutshell, Δ₀(A) and Δ₁(A) are, respectively, lists of implicates and implicants of A. The purpose of these lists is two-fold: firstly, to transform the formula A into an equivalent and smaller-sized one (Section 2.2), and secondly, by means of the Δ̂_b-sets (Sections 2.3 and 2.4), to get an equisatisfiable and smaller-sized one. Their formal definition is the following:
Definition 1: Given an nnf A, Δ₀(A) and Δ₁(A) are recursively defined as follows: for a literal ℓ, Δ₀(ℓ) = Δ₁(ℓ) = [ℓ]; for a conjunction, Δ₀(A₁ ∧ ⋯ ∧ Aₙ) = ⋃ᵢ Δ₀(Aᵢ) and Δ₁(A₁ ∧ ⋯ ∧ Aₙ) = ⋂ᵢ Δ₁(Aᵢ); dually, for a disjunction, Δ₀(A₁ ∨ ⋯ ∨ Aₙ) = ⋂ᵢ Δ₀(Aᵢ) and Δ₁(A₁ ∨ ⋯ ∨ Aₙ) = ⋃ᵢ Δ₁(Aᵢ).
In addition, elements in a Δ₀-list are considered to be conjunctively connected and elements in a Δ₁-list are considered to be disjunctively connected, so that some simplifications are applied. Namely, if {ℓ, ℓ̄} ⊆ Δ₀(A), then Δ₀(A) is simplified to ⊥, and if {ℓ, ℓ̄} ⊆ Δ₁(A), then Δ₁(A) is simplified to ⊤.
We use lists in lexicographic order just to facilitate the presentation of the examples. The reader can
interpret them as sets.
The intuition behind the definition is easy to explain: since in Δ₀(A₁ ∧ ⋯ ∧ Aₙ) we intend to calculate implicates (for it is Δ₀), and since the union of the implicates of each conjunct is a set of implicates of the conjunction, we use ⋃, and so on.
Example 1:
2.
3. ps # st
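The following Python sketch (our illustration, not part of the paper) computes the Δ-lists of Definition 1. Nnfs are encoded as nested tuples, sets stand in for the paper's lexicographically ordered lists, and the markers "TRUE"/"FALSE" are our stand-ins for ⊤ and ⊥.

def neg(lit):
    """Opposite literal: p <-> -p."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def delta(A, b):
    """Delta_b(A): b = 0 yields implicates, b = 1 yields implicants (Theorem 1)."""
    if isinstance(A, str):                       # a literal
        return {A}
    op, args = A[0], A[1:]
    parts = [delta(X, b) for X in args]
    # union at a conjunction for Delta_0 and at a disjunction for Delta_1;
    # intersection in the two remaining cases
    if (op == "and") == (b == 0):
        out = set().union(*parts)
    else:
        out = set.intersection(*parts)
    if any(neg(l) in out for l in out):          # {l, -l}: simplify the list
        return {"FALSE"} if b == 0 else {"TRUE"}
    return out

A = ("and", ("or", "p", "q"), ("or", "p", "-q"), "-r")
print(sorted(delta(A, 0)), sorted(delta(A, 1)))  # ['-r'] []: one unit implicate, no unit implicant

Note that the Δ-lists are purely syntactic: p is a semantic implicate of the example formula above, but the intersection at the disjunctions does not detect it, which is exactly why the lists are cheap to compute.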
2.2 Information in the Δ-lists
In this section we study the information contained in the Δ-lists of a given formula. Our first theorem states that elements of Δ₀(A) are implicates of A and elements of Δ₁(A) are implicants of A; it follows easily by structural induction from the definition of the Δ_b-lists.
Theorem 1 Let A be a nnf and ℓ be a literal in A; then:
1. If ℓ ∈ Δ₀(A), then A ⊨ ℓ and, equivalently, A ≡ ℓ ∧ A.
2. If ℓ ∈ Δ₁(A), then ℓ ⊨ A and, equivalently, A ≡ ℓ ∨ A.
structure of the #-lists:
Corollary 1 For every nnf A we have one and only one of the following possibilities:
. There is b # {0, 1} such that # b
. #, and then A #.
The following corollary states a condition on the Δ₁-lists which directly implies the satisfiability of a formula.
Corollary 2 If Δ₁(A) ≠ nil, then A is satisfiable, and if ℓ ∈ Δ₁(A), then any assignment I such that I(ℓ) = 1 is a model for A.
On the other hand, the following result states conditions on the Δ-lists assuring the validity or unsatisfiability of a formula.
Corollary 3 Let A be a nnf, then
1. If
in which a conjunct A i 0
is a clause such that # 1
A #.
2. If A = W n
in which a disjunct A i 0
is a cube such that # 0
A #.
Proof:
1. Let
using the results of Theorem 1, if A i 0
Therefore:
_
_
2. It is similar to the previous one.
Definition 2: If A is a nnf, to Δ-label A means to associate to each node η in A the ordered pair (Δ₀(η), Δ₁(η)).
Let us now name those formulas whose Δ-lists allow us to determine either their validity or their (un)satisfiability.
Definition 3: A nnf A is said to be
. finalizable if one of the following conditions holds:
1.
2.
This definition will be applicable to the current formula when it is detected to be (un)satisfiable. The following three definitions refer to subformulas of the current formula which are detected to be either valid, or unsatisfiable, or equivalent to a literal.
. # 1 -conclusive if one of the following conditions holds:
1.
2.
and a disjunct A i 0
is a cube such that # 0
. # 0 -conclusive if one of the following conditions holds:
1.
2.
and a conjunct A i 0
is a clause such that # 1
. #-simple if A is not a literal and #
Figure 1: The tree T_A (Δ-labelled syntactic tree of A).
The previous results state the amount of information in the Δ-lists which is enough to detect (un)satisfiability; when all these results are applied to a given formula, the resulting one is said to be Δ-restricted, and its formal definition is the following:
Definition 4: Let A be an nnf; then it is said that A is Δ-restricted if it satisfies the following conditions:
. it is not finalizable,
. it has no subtree which is either Δ₀-conclusive, or Δ₁-conclusive, or ℓ-simple,
. it has neither ⊤ nor ⊥ leaves. 3
From the previous results we can state that if A is a nnf, then by repeatedly applying the following sequence of steps we get a Δ-restricted formula:
1. Δ-label A.
2. Substitute subformulas B ⊑ A by either ⊤ (if B is Δ₁-conclusive), or ⊥ (if B is Δ₀-conclusive), or a literal ℓ (if B is ℓ-simple).
3. Simplify the logical constants (⊤ or ⊥), as soon as they are introduced, by using the 0-1-laws.
4. Check A for (un)satisfiability (namely, check whether A is finalizable).
Example 2: Given the formula
a linear transformation allows us to get a nnf which is equivalent to its negation, A, depicted in Fig. 1 (for readability reasons, leaves are not labelled in the figures).
When Δ-labelling A, the method finds that node 6 (the right-most branch) is s-simple. The s-simple subtree is substituted by s and then the formula B in Fig. 2 is obtained.
New applications of the Δ-lists to get information (up to equivalence) about a formula A are given by the following theorem and its corollary.
3 Although the input formula is supposed not to contain occurrences of logical constants, they can be
introduced by the reductions as we will see.
Figure 2: The tree T_B.
Theorem 2 Let A be a nnf and ℓ be a literal in A; then:
1. If ℓ ∈ Δ₀(A), then A ≡ ℓ ∧ A[{ℓ}].
2. If ℓ ∈ Δ₁(A), then A ≡ ℓ ∨ A[{ℓ̄}].
Proof: 1. Let I be an assignment; we have to prove that
. If Theorem 1 (item 1) since # 0 (A), we have that A # A,
therefore Now the result is obvious.
. If
The second item is proved similarly.
As an immediate consequence of the previous theorem, the following satisfiability-preserving result can be stated, which will be used later:
Corollary 4 Let A be a nnf. If Σ ⊆ Δ₀(A), then A and A[Σ] are equisatisfiable. Furthermore, if I is a model of A[Σ], then the extension I′ of I such that I′(ℓ) = 1 for all ℓ ∈ Σ is a model of A.
The following theorem allows us to substitute a whole subformula C of A (not just literals as in Theorem 2) by a logical constant.
Theorem 3 Let A be a nnf, C # A then:
1. If
2. If
3. If # 0 (A) and # 0 (C), then A # A[C/#]
4. If # 0 (A) and # 1 (C), then A # A[C/#]
Proof:
1. By Theorem 1, we have A # A and C # C. Let I be an interpretation, then
The rest of the items are proved similarly.
2.3 The Δ̂-sets
In the previous section, the information in the Δ-lists has been used locally; that is, the information in Δ_b(η) has been used to reduce the node η, by using Theorem 1. In this section, the purpose of defining a new structure, the Δ̂-sets, is to allow the globalisation of the information, in that the information in Δ_b(η) can be refined by the information in its ancestors.
Given a Δ-restricted nnf A, we define the sets Δ̂₀(A) and Δ̂₁(A), whose elements are pairs (Σ, η) where Σ is a filtered Δ_b-list associated to a subformula B of A, and η is the address of B in A. In Section 2.4 we will see how to transform the formula A into an equisatisfiable and smaller-sized one by using these sets.
The definition of the Δ̂-sets is based on the Filter operator, which filters the information in the Δ-lists according to Theorems 2 and 3. Specifically, some literals in the Δ-lists can allow us to substitute a subformula by either ⊤ or ⊥ as a consequence of Theorem 3; on the other hand, when this theorem is not applicable, it is still possible to delete those literals which are dominated, as an application of Theorem 2. In fact, the dominated literals will not be deleted, but framed, because they will be used in the extension of the mixed collapsibility theorem.
. Filter(# 0 (B)) is:
1. #, if there is a literal
A.
This is a consequence of Theorem 3, items 1 and 3.
2. The result of framing any literal
A.
This is a consequence of Theorem 2, items 1 and 2 resp.
. Filter(# 1 (B)) is
1. #, if there is a literal
A.
This is a consequence of Theorem 3, items 2 and 4.
2. The result of framing any literal
A.
This is a consequence of Theorem 2, items 2 and 1 resp.
Definition 5: Let A be a Δ-restricted formula. For b ∈ {0, 1}, the set Δ̂_b(A) is recursively defined as follows:
. If A is a literal ℓ, then Δ̂_b(A) = {(ℓ, ε)}.
. Otherwise, Δ̂_b(A) = {(Filter(Δ_b(B)), η_B) : B a subformula of A and Δ_b(B) ≠ nil}.
In the following example we present a step-by-step calculation of the Δ̂-sets.
Example 3: Consider the formula A whose Δ-labelled tree appears below.
(Δ-labelled syntactic tree of A; each node carries the pair (Δ₀, Δ₁).)
For this tree, we have
c
Literal p in nodes 3111, 3112 and 311 is framed because of its occurrence in # 0
literal -
s in node 3112 is framed because of the occurrence of s in # 1
On the other hand, the
c
for A is the following:
c
q-s, 22122),
q-r, 221), ( -
qrs, 222), ( -
q, 22), (-p-q, 2), (s, 3)}
Node 211 is substituted by # because of the occurrences of -
q in # 1 (2) and q in # 1 (211); and
node 22121 is substituted by # because of the occurrences of p in # 1 (2) and -
Finally, the occurrences of -
are framed because of the occurrence of - p in # 1 (2), and the
occurrences of -
q are framed because of the occurrence of - q in # 1 (2).
Δ̂-sets and meaning-preserving results
In this section we study the information which can be extracted from the
#-sets. This is
stated in Theorem 4, and in its proof we will use the following facts about the
#-sets of a
given #-restricted nnf A:
. No element in
c
is (#), since A is a #-restricted nnf and cannot be finalizable.
. If (# c
# b (A), then # is not the address of a leaf of A (since
c
c
all literal #).
. If # 0 (A), then (# c
and the list # does not have framed literals. (Just
note that a literal # is framed in (#) from the information in the #-lists of its
ancestors).
The following theorem states that, as the #-labels, the
#-labels also allow substituting
subformulas in A by either # or #.
Theorem 4 Let A be a Δ-restricted nnf; then:
1. If (⊤, η) ∈ Δ̂_b(A), then A ≡ A[η/⊤].
2. If (⊥, η) ∈ Δ̂_b(A), then A ≡ A[η/⊥].
Proof:
1. Suppose (# c
let C be the subformula at address #. By the definition of
c
there exist a formula B such that C # B # A and a literal # satisfying
By Corollary 1, using that # 0 (C), that the address # cannot correspond to a leaf,
and that A is a #-restricted nnf (specifically, A does not have #-simple subformulas),
we get that # 1
Note that, clearly, it is enough to prove that B # B[C/#].
Firstly, we will prove that, under these hypotheses:
(a) If # 1 (B), then # 1 (B[C/#]).
(b) If # 0 (B), then # 0 (B[C/#]).
Proof of (a): By induction on the depth of # in B, denoted dB (#).
(i) If dB to commutativity and
associativity).
It cannot be the case that
(D), we would have # 1 (C), which contradicts the fact that # 1
Therefore, we must have consequently, B[C/# D. Now,
using again # 1 we have that # 1 (D)
and, therefore, # 1
(ii) Assume the result for dX us prove it for dB
then the result is obvious.
by the induction hypothesis we
have that # 1 (B 1 [C/# 1 (B[C/#]).
by the
induction hypothesis, # 1
- The cases C # B 2 are similar.
The proof of (b) is obtained by duality.
Finally, we prove B # B[C/#] by considering the two possibilities in (1) above:
. If # 0 (C) and # 1 (B), then
B[C/#] by Theorem 3
# B[C/#] by (a) and Theorem 1
. If # 0 (C) and # 0 (B), then
B[C/#] by Theorem 3
# B[C/#] by (b) and Theorem 1
2. The proof is similar.
Note that this theorem introduces a meaning-preserving transformation which allows substituting
a subformula by a constant. The information given by the #-lists substitutes subformulas
which are equivalent to either # 1 -conclusive) or # 0 -conclusive); however, under
the hypotheses of this theorem, it need not be true that # is equivalent to either # or #.
A be an nnf then it is said that A is restricted if it is #-restricted and
satisfies the following:
. There are not elements (#) in
c
. There are not elements (#) in
c
If A is a nnf, to label A means #-label and associate to the root of A the ordered pair
c
c
Note that given a #-restricted nnf, A, after calculating (
c
c
the (un)satisfiability of A or an equivalent and restricted nnf by means of the substitutions
determined by Theorem 4, and the 0-1-laws.
Δ̂-sets and satisfiability-preserving results
The following results will allow, by using the information in the
#-sets, to substitute a nnf A
by an equisatisfiable and smaller sized A # with no occurrences of some literals occurring in A.
A complete reduction theorem
To begin with, Corollary 4 can be stated in terms of the Δ̂-sets as follows:
Theorem 5 (Complete reduction) Let A be a nnf such that (Σ, ε) ∈ Δ̂₀(A); then A is satisfiable if and only if A[Σ] is satisfiable. Furthermore, if I is a model of A[Σ], then the extension I′ of I such that I′(ℓ) = 1 for all ℓ ∈ Σ is a model of A.
Note that this result allows us to eliminate all the occurrences of all the literals appearing in Σ; that is why it is named complete reduction. Its usefulness will be shown in the examples.
Generalised pure literal rule
The introduction of the Δ̂-sets allows a generalisation of the well-known pure literal rule for sets of clauses. Firstly, recall the standard definition and result for a formula in nnf:
Definition 7: Let ℓ be a literal occurring in a nnf A. Literal ℓ is said to be pure in A if ℓ̄ does not occur in A.
Lemma 1 Let A be a nnf and ℓ a pure literal in A; then A is satisfiable iff A[{ℓ}] is satisfiable. Furthermore, if I is a model of A[{ℓ}], then the extension I′ of I such that I′(ℓ) = 1 is a model of A.
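A sketch of the classical rule of Lemma 1, reusing neg() and simplify() from the previous sketches; the substitution of a pure literal by ⊤ is rendered by the string "TRUE".

def literals(A):
    """All literal occurrences in A."""
    return {A} if isinstance(A, str) else set().union(*(literals(X) for X in A[1:]))

def subst(A, lit, const):
    """Replace every occurrence of the literal lit by const."""
    if isinstance(A, str):
        return const if A == lit else A
    return (A[0], *(subst(X, lit, const) for X in A[1:]))

def pure_literal_step(A):
    """If some literal is pure, set it to TRUE and simplify (Lemma 1)."""
    occ = literals(A) - {"TRUE", "FALSE"}
    for lit in occ:
        if neg(lit) not in occ:            # the opposite of lit does not occur
            return simplify(subst(A, lit, "TRUE")), lit
    return A, None

print(pure_literal_step(("and", ("or", "p", "q"), ("or", "p", "-q"))))  # ('TRUE', 'p')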
Our Δ̂-sets allow us to generalise the definition of pure literal and, as a consequence, to get an extension of the lemma above.
Definition 8: Let A be a nnf. A literal ℓ is said to be Δ̂-pure in A if it satisfies the following conditions:
1. ℓ occurs in Δ̂₀(A) ∪ Δ̂₁(A).
2. All the occurrences of ℓ̄ in Δ̂₀(A) ∪ Δ̂₁(A) are framed.
The next theorem is a proper extension of Lemma 1 (for it can be applied even when ℓ and ℓ̄ occur in A).
Theorem 6 (Generalised pure literal rule) Let A be a nnf, let # be a
#-pure literal in
A and let B be the formula obtained from A by the following substitutions:
(i) If (# c
with #, then node # in A is substituted by #].
(ii) If (# c
with #, then node # in A is substituted by #.
Then A is satisfiable if and only if B is satisfiable. Furthermore, if I is a model of B, then
the extension I # of I such that I #) = 1 is a model of A.
Proof: By Theorem 2, and the definition of the
#-sets, we have that
(a) If (# c
in addition, we have #, then
A # A[#])]
(b) If (# c
and if, in addition, we have #, then
A # A[#])]
Therefore, if we consider the formula A # , obtained when applying the equivalences of items
(a) and (b), we get that literal # is pure in A # . Now, by an application of Lemma 1 to A # we
get the formula B, which completes the proof.
In the rest of the section we introduce the necessary definitions to extend the collapsibility
results introduced in [9].
Collapsibility theorems
Definition 9: Let A be a nnf and # 1 and # 2 literals in A. Literals # 1 and # 2 are 0-1-bound if
the following conditions are satisfied:
1. There are no occurrences 4 of either # 1 or # 2 in
c
have that # 1 # i# 2 #.
2. There are no occurrences of either # 1 or # 2 in
c
we have
that
From the definition of
#-sets we have that:
Remark 1: If # 1 and # 2 are 0-1-bound in A, then every leaf in A with a literal in {# 1
has an ancestor # in A which is maximal in the sense that its associated #-lists satisfy one
of the following conditions:
is an ancestor of #, then none of the literals # 1 , # 2 , # 1 , # 2 occur in
the #-lists associated to # .
is an ascendant of #, then none of the literals # 1 , # 2 , # 1 , # 2 occur
in the #-lists associated to # .
We will use the following notation in the proof of the collapsibility results, where # i are
literals, and b # {0, 1}:
Theorem 7 (Collapsibility) Let A be a nnf and let # 1 and # 2 be literals in A. If # 1 and # 2
are 0-1-bound, then A is satisfiable if and only if A[# 1 /# 1 /#] is satisfiable. Furthermore,
if I is a model of A[# 1 /# 1 /#], then any extension I # of I such that I # 1 is a model
of A.
Proof: The if part is immediate. For the only if part, let us suppose A is satisfiable.
Let I be a satisfying assignment for A. If there is nothing to prove; so, let us
consider prove that A is also satisfied by an assignment I # such that I # 1
From Remark 1, A can be considered as a formula in the language with the following set
of atoms
that is, in A every leaf is either a formula in S # or a literal # 1
Note that if S 1 then we have that I(S 1
Let I # be the assignment obtained from I by changing the values on # i as follows:
I
4 In this section, when we say an occurrence of #, we mean an unframed occurrence of #.
This assignment satisfies I # (S 1 coincides with I in
the rest of leaves. Therefore I # is a satisfying assignment for A with I # 1
This result is a generalisation of van Gelder's collapsibility lemma, which treats the case
in which all the occurrences of # 1 are bound as # 1 # 2 and # 1
# 2 can be represented by a single literal with # 1 # 2 , see [9] for the details. Our result
drops the requirement that all the occurrences in the defining subset of # 1 and # 2 have to be
children of a # node and the occurrences in the defining subset of {# 1 , # 2 } have to be children
of a # node.
Obviously, the previous result can be straightforwardly extended to the case of n literals
which can be collapsed into one.
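The collapsing substitution behind Theorem 7 and Corollary 5 amounts to renaming the 0-1-bound literals (and their opposites) to a single one. A sketch, reusing subst() and neg() from the sketches above and using van Gelder's canonical pattern as the example:

def collapse(A, lits, target):
    """Rename the 0-1-bound literals in lits (and their opposites) to target."""
    for l in lits:
        A = subst(subst(A, l, target), neg(l), neg(target))
    return A

B = ("or", ("and", "p", "q"), ("and", "-p", "-q"))     # van Gelder's pattern
print(collapse(B, ["p"], "q"))   # ('or', ('and', 'q', 'q'), ('and', '-q', '-q'))

The renamed formula has one propositional variable less, and a model of it extends to a model of the original formula by assigning the collapsed literals the value of the target literal.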
Definition 10: Let A be a nnf and let # 1 , . , # n be literals in A, literals # 1 , . , # n are
0-1-bound if the following conditions are satisfied:
1. In
c
there are no occurrences of # 1 , . , # n and if (# c
{1, . , n}, then we have that # i # i# j #.
2. In
c
there are no occurrences of # 1 , . , # n and if (# c
{1, . , n}, then we have that # i # i# j #.
Corollary 5 (Generalised collapsibility) Let A be a nnf, and let # 1 , . , # n be literals 0-
1-bound in A then A is satisfiable i# A[# 1 /# n-1 /# 1 /# n-1 /#] is satisfiable.
Furthermore, if I is a model of A[# 1 /# n-1 /# 1 /# n-1 /#], then any extension I #
of I such that I # j is a model of A.
Example 4: van Gelder's reduction lemmas cannot be applied to the formula in Example 3,
but it is collapsible in the sense of Theorem 7. We had the following
#-sets:
c
c
q-s, 22122),
q-r, 221), ( -
qrs, 222), ( -
q, 22), (-p-q, 2), (s, 3)}
Specifically, p and q are 0-1-bound.
In order to state the generalisation of mixed collapsibility we need the following definition:
Definition 11: Let A be a nnf, b # {0, 1} and let # 1 and # 2 be literals in A. Literal # 1 is
b-bounded to # 2 if the following conditions are satisfied
1. In
c
(A) there are no occurrences (neither framed nor unframed) of either # 1 or # 1 .
2. If (# c
(A), then we have that
. If # 1 #, then # 2 #.
. If # 1 # then # 2 #.
By this definition if # 1 is b-bound to # 2 in a formula A, then every leaf of A belonging to
has an ascendant # in S #
(A).
Theorem 8 (Mixed collapsibility) Let A be a nnf and # 1 , # 2 literals in A,
1. If # 1 is 0-bound to # 2 , and A # is the formula obtained from A by applying the following
substitutions
. If (# c
is substituted in A by # 1 /# 1 /#].
. If (# c
is substituted in A by # 1 /# 1 /#].
then A is satisfiable if and only if A # is satisfiable. In addition, if I is a satisfying
assignment of A # , then any extension I # of I such that I(# 1
assignment for A.
2. If # 1 is 1-bound to # 2 , and A # is the formula obtained from A by applying the following
substitutions
. If (# c
is substituted in A
by #.
then A is satisfiable if and only if A # is satisfiable. In addition, if I is a satisfying
assignment of A # , then any extension I # of I such that I(# 1
assignment for A.
Proof: 1. Note that A can be considered as a formula in the language with set of atoms
that is, in A every leaf is either a formula in S #
(A) or is a literal # 1 , # 1 }.
Let I be a satisfying assignment for A:
1. if we have, by Theorem 2, that
and for every leaf S in S # 1 # 2 ,0
(A) we have, again by Theorem 2,
2. If
(A) we have, by Theorem 2,
Consider the assignment I # , obtained from I by changing only its value on
obviously, we have I(S) # I # (S).
By monotonicity of boolean conjunction and disjunction, we have
I #
Conversely, let I be a satisfying assignment for A # and let I # any extension of I such that
I Theorem 2, for every leaf S in S # 1 # 2 ,0 (A) we have
and for every leaf S in S # 1 # 2 ,0 (A) we have
2. The proof is similar.
Example 5: Following with the formula in Example 2, for the formula B in figure 2, we had
c
c
therefore the first subtree can be pruned, obtaining the tree in Fig. 3.
Figure 3: The tree T_C.
Variables r and s can be deleted by Theorem 5 of complete reduction (for (rs, ε) ∈ Δ̂₀), storing the information needed to generate a model (if it exists) of the input formula.
In addition, p is 1-bound to t; therefore, by Theorem 8 of mixed collapsibility, (1) and (3) can be substituted and the information (p = t) is stored.
The resulting formula is q ∨ t, which is finalizable (for Δ₁(q ∨ t) ≠ nil). Specifically, it is satisfiable, and a model is any assignment with I(q) = 1 (Corollary 2).
We can deduce that the input formula in Example 2 is non-valid (for its negation is satisfiable); by collecting the stored information we get the following countermodel
two possibilities:
the information
Splitting a formula
We finish the section by introducing a satisfiability-preserving result which prevents a branching
when suitable hypotheses hold. The splitting, as we call it, results as a consequence of
the following well-known theorem.
Theorem 9 (Quine) A is satisfiable if and only if A[p/⊤] ∨ A[p/⊥] is satisfiable. Furthermore, if I is a model of A[p/⊤], the extension of I with the assignment I(p) = 1 is a model of A; similarly, if I is a model of A[p/⊥], the extension of I with the assignment I(p) = 0 is a model of A.
If no satisfiability-preserving reduction can be applied to a restricted conjunctive nnf, then we would have to branch. The following definition states a situation in which the formula does not have to be branched but split.
Definition 12: Let A = A₁ ∧ ⋯ ∧ Aₙ be a restricted nnf; A is said to be p-splittable if J_p ∩ J_p̄ = ∅, where J_p (resp. J_p̄) denotes the set of indices i such that p (resp. p̄) occurs in A_i.
Corollary 6 Let
be a restricted and p-splittable nnf. Then A is satisfiable if
and only if V i#Jp A i [p/# V i#J p
A i [p/#] is satisfiable. Furthermore, if I is a model of
A i [p/#], then the extension of I with the assignment is a model of A; similarly,
if I is a model of
then the extension of I with the assignment
model of A.
This result can be seen as a generalisation of the Davis-Putnam rule with the following
advantages:
. It is applicable to nnf, not only cnf.
. It can be shifted to non-classical logics.
. Its interactions with the reduction strategies turn out to be extremely efficient.
The advantage in the use of this transformation is that the problem is not branched but split into two subproblems where the occurrences of p are substituted by logical constants.
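Theorem 9 is the basis of the QBranch operator described in the next section. A minimal sketch of branching on a variable, reusing the helpers from the previous sketches (this is the naive last-resort rule, without the reductions that TAS-D applies first):

def satisfiable(A):
    """Naive satisfiability test by branching on a variable (Theorem 9)."""
    A = simplify(A)
    if A in ("TRUE", "FALSE"):
        return A == "TRUE"
    p = min(l.lstrip("-") for l in literals(A))        # pick a variable
    return satisfiable(subst(subst(A, p, "TRUE"), "-" + p, "FALSE")) or \
           satisfiable(subst(subst(A, p, "FALSE"), "-" + p, "TRUE"))

print(satisfiable(("and", ("or", "p", "q"), "-p", "-q")))   # False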
Now, we can describe the algorithm of the prover following the steps we have applied in
the previous examples.
3 The TAS-D algorithm
In this section we describe the algorithm TAS-D and prove its soundness and completeness. The flowchart of the algorithm appears in Figure 4; we have to keep in mind that:
. TAS-D determines the (un)satisfiability of the input formula. Therefore, it can be
viewed as a refutational ATP.
. The data flow of the algorithm is a pair (B, M) where B is a nnf, and M is a set of expressions of the form ℓ = v, in which ℓ is a literal not occurring in B.
. The elements in M define a partial interpretation for the input formula, which is used
by CollectInfo, if necessary. This interpretation is defined as follows:
In general, due to the second condition, M might define more than one interpretation,
depending on the choosing of I(# ).
The operators involved in the algorithm are described below; the soundness of each one follows from the results in the previous sections:
The initialisation stage: NNF
The user's input A is translated into nnf by the operator NNF; specifically, NNF(A) = (B, ∅), where B is a nnf which is equivalent to A.
Figure 4: Flowchart of the TAS-D algorithm. (The input A is passed to Update, which Δ-restricts and Δ̂-restricts the pair (B, M) and tests whether it is finalizable and whether the root is a disjunction; the tasks then flow through Parallel, Reduce (SPReduce, SubReduce), Split and QBranch, and back to Update, ending in either "Unsat." or "Satisfiable" together with a model.)
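To fix ideas, here is a high-level Python sketch of the flow of Figure 4. The names are ours, and pure-literal elimination is used as the only representative SPReduce rule, so this shows the structure of the loop rather than the actual prover; M collects pairs (literal, value) standing for the stored expressions.

def tas_d(A, M=()):
    """Skeleton of the TAS-D loop; returns None (unsatisfiable) or model info M."""
    B = simplify(A)                            # stands in for Update/labelling
    if B == "FALSE":
        return None                            # this task is unsatisfiable
    if B == "TRUE":
        return M
    if isinstance(B, str):                     # a single literal: finalizable
        return M + ((B, "TRUE"),)
    B2, lit = pure_literal_step(B)             # one representative SPReduce rule
    if lit is not None:
        return tas_d(B2, M + ((lit, "TRUE"),))
    p = min(l.lstrip("-") for l in literals(B))          # QBranch on p
    for val, cval in (("TRUE", "FALSE"), ("FALSE", "TRUE")):
        model = tas_d(subst(subst(B, p, val), "-" + p, cval), M + ((p, val),))
        if model is not None:
            return model                       # CollectInfo: a satisfiable task wins
    return None

print(tas_d(("and", ("or", "p", "q"), ("or", "-p", "q"))))   # (('q', 'TRUE'),)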
The Update module
The different stages in the algorithm transform subtrees of the input tree; in each transformation, the labels of the ascendant nodes (of the transformed node) are deleted. Update processes these trees by recalculating the missing labels and giving as output either a restricted nnf or a finalizable formula. From another point of view, this stage updates the formula in the sense that it prunes those subtrees that can be directly deduced to be equivalent either to ⊤, or ⊥, or a literal.
The Δ-restrict operator
The input of Δ-restrict is a pair (B, M), where B is a partially labelled formula, possibly with logical constants.
Given a nnf B we have that Δ-restrict(B, M) = (C, M), where C is the Δ-restricted formula obtained from B as indicated in Definition 4.
The Δ̂-restrict operator
The input of Δ̂-restrict is a pair (B, M), where B is a Δ-restricted formula. We have Δ̂-restrict(B, M) = (C, M), where C is the restricted formula obtained from B as indicated in Definition 6.
Parallelization
The input of Parallel is a pair (B, M), where B is a restricted formula and B = B₁ ∨ ⋯ ∨ Bₙ. We have
Parallel(B₁ ∨ ⋯ ∨ Bₙ, M) = ((B₁, M), …, (Bₙ, M)).
Since a disjunction is satisfiable iff some disjunct is satisfiable, each pair is independently passed to Reduce, the following module in the algorithm.
The input of Reduce is the labelled syntactic tree of a restricted nnf
. In this stage
we decrease, if possible, the size of B before branching, by using the information provided by
the
#-labels and the #-labels. Specifically,
. the
#-labels of the root node allow, using the SPReduce operator, to substitute B by
an equisatisfiable formula in which some propositional variables have been eliminated;
. the #-labels of a proper subtree X allow, using the SubReduce operator, to substitute
the subformula X by an equivalent formula in which the symbols in its #-lists occur
exactly once.
The SPReduce operator
A restricted nnf B is said to be SP-reducible if either it is completely reducible (i.e. there is an element (Σ, ε) ∈ Δ̂₀(B)), or it has Δ̂-pure literals, or it has a pair of 0-1-bound literals, or it has a literal b-bound to another literal; for these formulas we have SPReduce(B, M) = (C, M′), obtained by applying the following items:
1. If (Σ, ε) ∈ Δ̂₀(B), then C = B[Σ] by Theorem 5, and M′ extends M with ℓ = ⊤ for every ℓ ∈ Σ.
2. If ℓ is Δ̂-pure in B, then C is the formula obtained after applying in B the substitutions in Theorem 6, and M′ = M ∪ {ℓ = ⊤}.
3. If ℓ and ℓ′ are 0-1-bound, then C is the formula obtained after applying in B the substitutions in Theorem 7, and M′ = M ∪ {ℓ = ℓ′}.
4. If ℓ is b-bound to ℓ′, then C is the formula obtained after applying in B the substitutions in Theorem 8, and M′ = M ∪ {ℓ = ℓ′}.
The SubReduce operator
The input of SubReduce is a restricted, not SP-reducible nnf A; its effect can be described as an application of Theorem 2 up to associativity and commutativity. The formal definition needs some extra terminology, included below:
Definition 13: Let a such that is not SP-reducible, and consider
W .
The integers denoted by m(# j ), defined below, are associated to A:
where | - | denotes the cardinality of a finite set.
It is said that A is #-reducible if m(#) > 1 and
associated with A }
Let A be #-reducible and consider is defined as
follows, by application of Theorem 2
A is subreducible if it has a subformula B such that one of the following conditions holds:
.
.
. B is #-reducible for some literal #.
By Theorem 2 we have that the subreduction preserves meaning; therefore SubReduce(A, M) = (C, M), where C is obtained by traversing the tree A depth-first in order to find the first subtree B as indicated above, and:
1. Apply Theorem 2, if either of its hypotheses holds.
2. Substitute B by B_ℓ, otherwise.
The interest of using sub-reductions is that they can make further reductions possible. This use of reductions before branching is one of the main novelties of this method with respect to others; specifically, the unit clause rule of the Davis-Putnam procedure is a special case of SP-reduction; also [9] uses a weak version of our sub-reductions in his dominance lemma, but he only applies the substitutions to the first level of depth of each subformula.
The Split operator
The input of Split is a pair (B, M), where B = A₁ ∧ ⋯ ∧ Aₙ is a restricted and p-splittable nnf which is neither SP-reducible nor subreducible; Split generates the two tasks given by Corollary 6. These two tasks are treated independently by the Update process.
Branching: the QBranch operator
The input of QBranch is a pair (B, M), where B is a restricted nnf which is neither SP-reducible, nor splittable, nor sub-reducible. We have:
QBranch(B, M) = ((B[p/⊤], M), (B[p/⊥], M)).
These two tasks are treated independently by the Update process.
Our experimental tests show that the best results are obtained when choosing p as the propositional variable with the most occurrences in the formula being analysed (this information can be easily obtained from the Δ̂-sets).
Collecting partial results: CollectInfo
The CollectInfo operator collects the outputs of Update for each subproblem generated by either Parallel, or Split, or QBranch, and finishes the execution of the algorithm:
. If all the outputs of the subproblems are ⊥, then CollectInfo ends the algorithm with the output Unsatisfiable.
. If some of the subproblems outputs (⊤, M), then CollectInfo ends the algorithm with the output Satisfiable and a model, which is built from M.
. If some of the subproblems outputs (A, M) satisfying Δ₁(A) ≠ nil, then CollectInfo ends the algorithm with the output Satisfiable, and a model is built from M ∪ {ℓ = ⊤}, where ℓ is the first element of Δ₁(A).
3.1 Soundness and completeness of TAS-D
The termination of the algorithm just described is obvious, for each applied process reduces
the size and/or the number of propositional variables of the formula. Specifically, in the
worst case, in which no reduction can be applied, the only applicable process is QBranch
which decreases by one the number of propositional variables in the formula.
Now, we can prove the soundness and completeness of TAS-D.
Theorem 10 TAS-D(A)=Satisfiable if and only if A is satisfiable.
Proof: It su#ces to show that all the processes in the algorithm preserve satisfiability.
Process NNF clearly preserves the meaning, for it is the translation into nnf; all processes in
the modules Update and Reduce preserve either meaning or satisfiability, by the results in
Section 2. To finish the proof, one only has to keep in mind that the subproblem generating
processes (Parallel, Split, QBranch) are based in the following fact: a disjunction is satisfiable
if and only if a disjunct is satisfiable. So, the process CollectInfo preserves satisfiability
as well.
3.2 Some complete examples
Example 6: Consider the formula
The result of Update(NNF(¬A)) is (B, ∅), where
#({(pr, #)}, {( pq r , 1), ( pq, 2)} )
Now, as we have (pr, ε) ∈ Δ̂₀(B), a complete reduction can be applied wrt p and, as a consequence, we get Update(SPReduce(B, ∅)) = (⊥, M); then the output is "¬A is Unsatisfiable", therefore A is valid.
Example 7: Consider the formula
we have
#(ps, nil )
s q
# (nil, rs )
r s
with
c
c
the reduction module does not apply to this tree, that is, (B, ?) is the input of QBranch. We
apply QBranch wrt variable p and obtain,
The subproblem C 1 is studied below:
After #-restrict we get the following tree
# (nil, rs )
r s
as
For the second subproblem C 2 we have
#(qs, nil )
rs )
r s
for which
c
c
Δ̂-restrict's output is fed into SPReduce; the formula can be completely reduced, for (qs, ε) ∈ Δ̂₀; therefore, by applying the substitutions [q/⊤] and [s/⊤] and simplifying the logical constants, we get ⊥, which is Δ₀-conclusive. Therefore, Update(C₂) outputs ⊥.
As all the subproblems generated by QBranch output ⊥, CollectInfo produces the output "¬A is Unsatisfiable"; therefore A is valid.
Example 8: Let us study the satisfiability of the formula in Example 4:
# (nil, rs )
r s
#(nil, ps )
s q
#(ps, nil )
s
The
c
-sets for the previous formula are the following:
c
c
An application of
#-restrict substitutes (211) and (22121) by #, the result is an
equivalent formula B:
#(nil, rs )
r s
r s q
#(nil, qrs )
#(ps, nil )
s
c
c
Once again, the # in
c
allows to substitute (232) by #, obtaining the equivalent
formula C:
#(nil, rs )
r s
s
c
c
Therefore, Update(A, Now SPReduce can be applied, for literals p and q are
0-1-bound, we substitute all the occurrences of p by #, i.e. #-restrict(SPReduce(C,
(D,
# (nil, rs )
r s
# (nil, rs )
r s
s
c
c
After substituting (311) by # we get :
# (?,{(rs, 1), (qrs, 2), (qs, 3)} )
#(nil, rs )
r s
#(nil, qrs )
In this formula q is 1-bounded to s and, SPReduce substitutes branches at addresses 2 and 3
by #; then
As r ∨ s is finalizable, for its Δ₁-list is not nil, the stage CollectInfo ends the algorithm with output "A is Satisfiable" and the model determined by
any interpretation I such that is a model of A. Note
that I # defined as I # is also a model of A.
4 A comparative example
To put our method in connection with other existing approaches in the literature, we will study the collection {T_n} of clausal forms taken from [3]; we also use their notation for the propositional variables. Consider, for instance, T₃ below:
each clause contains atoms of the form p^i_σ, where σ is a string of +'s and −'s. The superscripts in each clause always form the sequence 1, 2, …, n. The subscript of each literal is exactly the sequence of signs of the preceding literals in its clause. When T_n is built from T_{n−1}, each new variable is added both positively and negatively. It is easy to see that T_n has 2^n propositional variables, 2^n clauses, each of which contains n literals.
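A short generator for the clause sets T_n as described above. This is our sketch: the variable naming p^k indexed by the string of preceding signs is only one possible rendering, and the k-th sign is taken as the literal's polarity.

from itertools import product

def T(n):
    """The 2^n clauses of T_n, one per sign string in {+,-}^n."""
    clauses = []
    for signs in product("+-", repeat=n):
        clause = [("p%d_%s" % (k + 1, "".join(signs[:k])), signs[k])
                  for k in range(n)]
        clauses.append(clause)
    return clauses

for c in T(2):
    print(" v ".join(("" if sgn == "+" else "-") + v for v, sgn in c))
# p1_ v p2_+   p1_ v -p2_+   -p1_ v p2_-   -p1_ v -p2_-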
In [3], Cook and Reckhow described the family {T_n : n ≥ 1} and showed that the study of its satisfiability is intractable for analytic tableaux but can be handled in linear time by resolution. In [7], Murray and Rosenthal showed that dissolution with factoring provides proofs for this class that are linear in the number of input clauses, |T_n|.
When we apply TAS-D to test the satisfiability of T_n we get that it is subreducible. For instance, the formula Reduce(T₃) can be expressed equivalently as the formula
Thus, Δ-restrict reduces the previous tree, for there are four Δ₀-conclusive subtrees (namely, the conjunctions of the form p³_σ ∧ p̄³_σ); simplifying the four ⊥ leaves, we get ⊥. Therefore, when using TAS-D we can detect the unsatisfiability of the formulas T_n with no branching at all.
Conclusions
We have presented a non-clausal satisfiability tester, named TAS-D, for Classical Propositional Logic. The main novelty of the method, in contrast to other approaches, is that the reductions applied on each formula are dynamically selected, and applied to subformulas as in a rewrite system, following syntax-directed criteria. Specifically, we have introduced extensions of the pure literal rule and of the collapsibility theorems. This fact increases the efficiency, for it decreases branching.
As an example of the power of TAS-D we have studied a class of formulas which has linear proofs (in the number of branchings) when either resolution or dissolution with factoring is applied; on the other hand, when applying our method to these formulas we get proofs without branching.
Acknowledgments
The authors would like to thank José Meseguer and Daniele Mundici for their valuable comments
on earlier drafts of this work.
--R
A reduction-based theorem prover for 3-valued logic
Reducing signed propositional logics.
The relative efficiency of propositional proof systems.
Reduction techniques for translating into clause form by using prime implicates.
Implicates and reduction techniques for temporal logics.
Dissolution: Making paths vanish.
On the relative merits of path dissolution and the method of analytic tableaux.
Methods of logic.
A satisfiability tester for non-clausal propositional calculus
--TR
A satisfiability tester for non-clausal propositional calculus
Dissolution
On the relative merits of path dissolution and the method of analytic tableaux
Implicates and Reduction Techniques for Temporal Logics
--CTR
Jun Ma , Wenjiang Li , Da Ruan , Yang Xu, Filter-based resolution principle for lattice-valued propositional logic LP(X), Information Sciences: an International Journal, v.177 n.4, p.1046-1062, February, 2007
P. Cordero , G. Gutiérrez , J. Martínez , I. P. de Guzmán, A New Algebraic Tool for Automatic Theorem Provers, Annals of Mathematics and Artificial Intelligence, v.42 n.4, p.369-398, December 2004 | SAT problem;prime implicates/implicants;non-clausal theorem proving |
504571 | A typed context calculus. | This paper develops a typed calculus for contexts, i.e., lambda terms with "holes". In addition to ordinary lambda terms, the calculus contains labeled holes, hole abstraction and context application for manipulating first-class contexts. The primary operation for contexts is hole-filling, which captures free variables. This operation conflicts with substitution of the lambda calculus, and a straightforward mixture of the two results in an inconsistent system. We solve this problem by defining a type system that precisely specifies the variable-capturing nature of contexts and that keeps track of bound variable renaming. These mechanisms enable us to define a reduction system that properly integrates β-reduction and hole-filling. The resulting calculus is Church-Rosser and the type system has the subject reduction property. We believe that the context calculus will serve as a basis for developing a programming language with advanced features that call for manipulation of open terms. Copyright 2001 Elsevier Science B.V. | Introduction
A context in the lambda calculus is a term with a "hole" in it. The operation
for contexts is to fill the hole of a context with a term. For the purpose of
explanation in this section, we write C[] for a context containing the hole
indicated by [], and write C[M] for the term obtained from C[] by filling its
hole with M. For example, if C[] ≡ (λx.[] + y) 3 then C[x + z] ≡ (λx.x + z + y) 3.
As seen from this simple example, the feature that distinguishes this operation
from substitution of the lambda calculus is that it captures free variables. In
the above example, the x in x + z becomes bound when it is filled in the context C[].
1 This is the authors' version of the article to appear in Theoretical Computer Science.
2 Current affiliation: Department of Information Science, University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan.
3 Atsushi Ohori's work was partly supported by the Japanese Ministry of Education Grant-in-Aid for Scientific Research on Priority Area no. 275: "Advanced databases," and by the Parallel and Distributed Processing Research Consortium, Japan.
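To make the contrast concrete, here is a minimal meta-level sketch in Python (our own term encoding, independent of the calculus developed in this paper): fill replaces the hole literally and therefore captures free variables, while subst renames binders to avoid capture.

# Terms as nested tuples:
#   ('var', x) | ('num', n) | ('lam', x, body) | ('app', f, a)
#   | ('plus', l, r) | ('hole',)

def fill(ctx, term):
    # C[M]: literal replacement of the hole; free variables of M that
    # clash with binders in C become captured.
    tag = ctx[0]
    if tag == 'hole':
        return term
    if tag in ('var', 'num'):
        return ctx
    if tag == 'lam':
        return ('lam', ctx[1], fill(ctx[2], term))
    return (tag, fill(ctx[1], term), fill(ctx[2], term))

_fresh = iter('v%d' % i for i in range(1000000))

def subst(m, x, n):
    # m[n/x]: capture-avoiding substitution; binders are renamed.
    tag = m[0]
    if tag == 'var':
        return n if m[1] == x else m
    if tag in ('num', 'hole'):
        return m
    if tag == 'lam':
        if m[1] == x:                       # x is shadowed below this binder
            return m
        z = next(_fresh)                    # always rename: crude but safe
        body = subst(m[2], m[1], ('var', z))
        return ('lam', z, subst(body, x, n))
    return (tag, subst(m[1], x, n), subst(m[2], x, n))

# C[] = (lambda x. [] + y) 3   and   M = x + z
C = ('app', ('lam', 'x', ('plus', ('hole',), ('var', 'y'))), ('num', 3))
M = ('plus', ('var', 'x'), ('var', 'z'))
print(fill(C, M))        # the x in M is now bound by the lambda: capture
print(subst(C, 'y', M))  # the binder is renamed instead: no capture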
One motivation behind using contexts in the theory of the lambda calculus is to
study properties of open terms. Since the behavior of an open term depends
on the bindings of its free variables, in order to analyze its behavior it is essential
to consider the possible contexts in which the open term occurs. Studies of
program analyses based on contexts, such as observational equivalence [13,11],
have yielded important results in the analysis of programming languages. In these and
most other usages, a context is a meta-level notion and its applicability to
programming languages has largely been limited to meta-level manipulation
of programs. We believe that if a programming language is extended with
first-class contexts, then the extended language will provide various advanced
features that call for manipulation of open terms. Let us briefly mention a few
of them.
Programming environment. In conventional programming environments, programs
must first be compiled into "object modules", and they must then be
linked together to form an executable program. Moreover, an executable program
must be a closed term. If a programming environment can be extended
with the ability to link various software components dynamically, then its
flexibility will increase significantly. Since the mechanism of contexts we are
advocating offers a way of performing linking at runtime, it would provide a
basis for developing such an environment in a theoretically sound way.
Distributed programming. In distributed programming, one often wants to send
a piece of code to a remote site and execute it there. As witnessed by recently
emerging Internet programming languages such as Java [4], this feature will
greatly enhance the expressive power of distributed programming. One naive
approach to sending a program is to pack all the necessary resources as a closure
and send the entire closure to the remote site. An obvious drawback of this
approach is inefficiency. Since in most cases communicating sites share common
resources such as standard runtime libraries, a better approach would be
to send an open term and to make the necessary bindings at the remote site.
A typed calculus with first-class contexts would provide a clean and type-safe
mechanism for manipulating open terms.
First-class modules. A program using a module can naturally be regarded as
an open term containing free variables whose values will be supplied by the
module. One way of modeling a module exporting a set of functions F1, . . . , Fn
through identifiers f1, . . . , fn would therefore be to regard it as a context that
captures the variables f1, . . . , fn and binds them to F1, . . . , Fn respectively. Using
(or "opening") a module then corresponds to filling the hole of the context
with the variables. This approach can provide a new foundation for flexible
module systems. In conventional languages with modules such as Modula-2
[18] and Standard ML [12], there is a rigid separation between the type system
for modules and that of terms, and the allowable operations on modules are rather
limited. A significant potential advantage of the "modules-as-contexts" approach
is that modules can be freely combined with any other constructions available
in the language, i.e. that modules are treated as first-class citizens. Needless to
say, an actual module system must account for various features such as type
abstraction, type sharing and separate compilation, and the above simple view
alone does not immediately provide a proper basis for module systems. We
nonetheless believe that, when properly refined with the various mechanisms for
module systems studied in the literature, the above approach will open up a new
possibility for flexible module systems. Indeed, a recent work by Wells and
Vestergaard [17] shows a connection between their module language and our
context calculus.
The general motivation for this study is to develop a programming language
with first-class contexts that can support those features in a clean way.
Despite those and other potentially promising features of contexts, a language
with first-class contexts has not been well investigated. Lee and Friedman
[10] proposed a calculus where contexts and lambda terms are two disjoint
classes of objects: contexts are regarded as "source code" and lambda terms
as "compiled code". This separation is done by assuming two disjoint variable
name spaces: one for lambda terms and one for contexts. As a consequence, in
their system, β-reduction and fill-reduction are two disjoint relations without
non-trivial interaction. Dami [2] also announced a system for dynamic binding
similar to that of Lee and Friedman. While these approaches would be useful
for representing source code as a data structure, they do not allow contexts
of the language itself to be treated as first-class values inside the language.
Kahrs [9] has developed a combinatory term rewriting system that is compatible
with contexts. However, contexts and hole-filling themselves are not
represented as terms within the system of terms. Talcott [16] developed an algebraic
system for manipulating binding structures. Her system includes suitable
mechanisms for manipulating contexts. In particular, it contains holes and
a hole-filling operation which commutes with substitution. However, this is a meta-level
system, and the issue of representing contexts and the associated hole-filling
operation inside the reduction system of the lambda calculus is not addressed.
One of the features of contexts is to bind variables through holes. In this
sense, contexts are closely related to environments. Abadi et al. [1] developed
the λσ-calculus of explicit substitutions. Their motivation is similar in spirit
to ours in that it internalizes a meta-level mechanism in the lambda calculus.
However, they did not address the problem of first-class treatment of substitutions.
In revising the present article, the authors noticed that Sato et al. [14]
recently developed an environment calculus where environments are first-class
values. In obtaining a confluent calculus, they also address the problem of
variable binding in the presence of first-class environments. Their solution to
this problem has some similarity to ours, although more general mechanisms
are needed for a calculus with first-class contexts. We shall comment on this
in some detail when we describe our approach in the next section.
The goal of this paper is to establish a type-theoretical basis for a programming
language with first-class contexts by developing a typed context calculus
where lambda terms are simply a special case of contexts. In particular, contexts
and lambda terms belong to the same syntactic category sharing the
same set of variables, and substitution and hole-filling are defined on the same
syntactic objects. This property is essential for achieving the various features explained
above. As observed in the literature [9,10], however, β-reduction and
fill-reduction for contexts do not mix well, and a (naive) integration of them
yields an inconsistent system. The development of a meaningful calculus containing
β-reduction and fill-reduction both acting on the same set of terms
constitutes a non-trivial technical challenge. Our main technical contribution
is to establish that such a calculus is possible. We prove that the calculus is
Church-Rosser and its type system has the subject reduction property.
To obtain a confluent calculus, we have to overcome various delicate problems
in dealing with variables, and to introduce several new mechanisms into the
lambda calculus. Before giving the technical development, in the next section
we explain the problems and outline our solution.
2 The Problem and Our Solution
It is not hard to extend the syntax of the (untyped) lambda calculus with
constructors for contexts. In conventional studies, holes in contexts are nameless.
However, since our goal is to develop a calculus with first-class contexts,
we should be able to consider a context containing other contexts. This requires
us to generalize contexts to contain multiple different holes, only one
of which is filled by each hole-filling operation. One way to define a uniform
syntax for those contexts is to introduce labeled holes [9]. We use upper case
letters X, Y, . . . for labeled holes. To incorporate operations for contexts as
terms in a lambda calculus, we introduce hole abstraction δX.M, which abstracts
the hole X in term M and creates a term that acts as a context whose
hole is X, and we introduce context application M1 M2, which denotes the
operation of filling the abstracted hole in M1 with the term M2. For example, the
context C[] ≡ (λx.[] + y) 3 is represented by the term δX.((λx.X + y) 3),
and the context application term (δX.((λx.X + y) 3)) (x + z)
denotes the term obtained by filling the hole in the context with x + z. We
call a subterm of the form (δX.M1) M2 a fill-redex, which contracts to the
term obtained from M1 by filling the X-hole in M1 with M2. Different from
the meta notation C[x + z], context application is a term constructor, which
allows us to exploit the features of first-class contexts by combining it with
lambda abstraction and lambda application. For example, we can write a term
like (λw. w (x + z)) (δX.((λx.X + y) 3)),
which is contracted to the above term.
The goal of this paper is to develop a type system and a reduction system
for the lambda calculus extended with the above three term constructors, i.e.,
labeled holes, hole abstraction and context application. The crucial step is the
development of a proper mechanism for integrating variable-capturing hole-filling
and capture-avoiding substitution in the lambda calculus. To see the
problem, consider a term where we use different typefaces (x, x and x) to
distinguish the different occurrences of the variable x to which we should pay
attention. The term has two β-redexes and one fill-redex. Our intention is that
the inner x should be captured by the λx when it is filled in the hole X, while
the outer x is free. Reducing the fill-redex first produces the intended result.
However, reducing either of the β-redexes before the fill-redex will result in a
different term. If we reduce the inner β-redex before the fill-redex then the
binding of the inner x will be lost. If we reduce the outer β-redex before the
fill-redex, then the outer x is unintentionally captured, with the final result
depending on the order of the fill-redex and the other β-redex.
To avoid these inconsistencies, we should redefine the scope of lambda binding
to reflect the behavior of terms of the form (δX.M1) M2. Suppose there is a
λx in M1 whose scope contains X. Since M2 is filled in X, the scope of the
λx also extends to M2. This property implies the following two requirements.
First, a β-redex containing a hole X cannot be contracted. Secondly, when
substituting a term containing x for a free variable in M2, the λx in M1
and the corresponding variables in M1 and M2 need to be renamed to avoid
unwanted capture. In the above example, we should not contract the inner β-redex
before hole-filling, and when we contract the outer β-redex before hole-filling,
we should rename x and x before performing β-substitution. The situation
becomes more subtle when we consider a term like (λw. ((λz. w (x + z)) . . .)) . . . .
Since w is a variable, simple inspection of the term w (x + z) no longer tells which
variables in x + z should be regarded as bound. However, variable capture will
still occur when a hole abstraction is substituted for w.
Our strategy to solve this problem is to define a type system that tells exactly
which variables should be considered bound, and to introduce a refined notion
of α-equivalence that reconciles hole-filling and β-substitution.
To tell which variables should be considered bound, we type a hole-abstracted
term δX.M with a context type of the form [Γ . τ1] ⇒ τ2,
where τ1 is the type of the abstracted hole, τ2 is the type of the term that will
be produced by filling the hole in the context with a term, and Γ
describes the set of variables being captured when they are filled in the hole
X. We call those variables interface variables. For example, the hole abstraction above
would be typed as [{x : int} . int] ⇒ int. However, if we simply list the set
of actual bound variables surrounding X in δX.M as interface variables in
its type, then we cannot rename those bound variables. Since β-substitution
can only be defined up to renaming of bound variables, this causes a problem
in extending substitution to hole-abstracted terms. For example, we cannot
rename the bound variable x in the term δX.((λx.X + y) 3). It should be noted
that the usual "bound variable convention" does not solve the problem. In the
lambda calculus, we can simply assume that "all bound variables are different
from the free variables" for each β-redex. This is only possible when we can
freely rename bound variables. As is well known in the theory of the lambda calculus,
the above condition is not preserved by substitution. Even if we start with a
term satisfying the bound variable condition, anomalous terms like the above
may appear during β-reduction.
To avoid this problem, we separate the actual bound variables in δX.M from the
corresponding interface variables, and refine hole-filling to an operation that
also performs variable renaming. For manipulation of binding structures, Talcott
[16] developed a technique to pair a hole with a substitution. We use this
approach and annotate a hole X with a variable renamer ρ, which renames
interface variables to the corresponding bound variables. We write X^ρ for the
hole X annotated with ρ. The above context can now be represented as the
typed term δX.((λa.X^{a/x} + y) 3),
where x is an interface variable and is renamed to a when it is filled in X. By
this separation, the bound variable a can be renamed without changing the type
of this term. This allows us to achieve a proper integration of hole-filling and
β-substitution with terms of the form δX.M. The semantics of hole-filling is
preserved by applying the renamer {a/x} to the term to be filled in X. For
example, we have the following reduction for the example before.
Yet another delicate problem arises when we consider the interaction between
substitution and a term of the form M N. This construct may bind some
variables in N. In order to determine those bound variables, we need to annotate
this construct with the set of variables in N that will be bound by forming
this term. Since this set must correspond to the set of interface variables of
the context term M, a naive attempt would be to annotate the constructor
M N with this set. A subterm of the example term might then be represented
as the term (δX.(λa.X^{a/x} . . .)) {x}(x + z). As we noted earlier,
the variable x must be treated as a bound variable. This implies that, when
combining β-substitution, this variable needs to be renamed. Unfortunately,
this is impossible for terms of the form w^{x}(x + z), since
we cannot rename the corresponding interface variables of the hole-abstracted
term that will be substituted later for w. So, again, we need to separate the
set of interface variables in the type of a hole-abstracted term from the set of
variables that will be captured when they are filled in the hole of the context.
To achieve this, we annotate the constructor for context application with a
renamer ρ and write M^ρ N. The renamer ρ renames the variables in N that are
to be bound by hole-filling to the corresponding interface variables of the hole-abstracted
term. Its effect is obtained by composing it with the renamer of
the hole. Now that the bound variables in N are independent of the corresponding
interface variables, we can perform bound variable renaming. The above
example can be correctly represented by a term with a bound variable b in place of x,
i.e. (δX.(λa.X^{a/x} . . .))^{x/b}(b + z).
In this term, both a and b are bound variables, which can be renamed without
changing the typing of the term. Again, the semantics of hole-filling is
preserved by applying the composition {a/x} ⋆ {x/b} (= {a/b}) of the renamers
{a/x} and {x/b} to the term to be filled in X. The following is an example of
reduction involving renamer applications.
Another slightly more general alternative to M^ρ N is to make a renamer a
term constructor [ρ.N] and introduce a new type constructor
[{x1 : τ1, . . . , xn : τn} . τ] for this constructor. We believe that this is also possible. In our
system, however, we shall not take this approach, since the only elimination
operation would be (the modified version of) hole-filling, and therefore
the additional flexibility is not essential in achieving our goal of first-class
treatment of contexts.
Based on the strategies outlined above, we have worked out the definition of
the type system of the calculus and its reduction system, and proved that the
type system has the subject reduction property and that the reduction system
is Church-Rosser.
In the work by Sato et al. [14], a type-theoretical approach similar to ours was
taken in order to identify the sets of free and bound variables. However, their
system does not fully address the problem of mixing such a construct with β-substitution.
Their calculus contains a term constructor combining e1 and e2 whose intuitive
meaning is to evaluate e2 under the bindings provided by the environment e1.
However, the reduction for nested applications of this construction is restricted
to variables, and does not act on general terms. Because of this restricted
treatment, the subtle problem of α-equivalence explained above does not arise
in their system.
The careful reader may have noticed that some aspects of contexts can already
be represented in the lambda calculus. If one can predetermine the exact order
of the variables exported by a context and imported by a term to be filled in the
context, then one can represent hole abstractions and context applications
simply by functionals: the hole becomes a function of the exported variables,
and hole-filling becomes ordinary application.
However, such an encoding eliminates the ability to bind variables through names,
and it therefore significantly reduces the benefits of first-class contexts we have
advocated in the introduction.
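For instance, under the assumption that the context exports exactly the variable x, the example C[] ≡ (λx.[] + y) 3 admits the following rendering with ordinary Python functions (the toy values for y and z are our own):

y = 10
C = lambda hole: (lambda x: hole(x) + y)(3)   # hole abstraction as a functional

z = 5
print(C(lambda x: x + z))                     # "fills" the hole with x + z; prints 18

Note that the filled term must be packaged as a function of the exported variable: binding is by position, not by name, which is precisely the loss described in this paragraph.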
The rest of the paper is organized as follows. In Section 3 we define the context
calculus. Section 4 defines the reduction system and proves the subject reduction
property and the Church-Rosser property of the calculus. Section 5 concludes
the paper with a discussion of further investigations. The Appendix contains
proofs of some of the lemmas.
3 The Calculus
We use the following notation for functions. The domain and the codomain
of a function f are written as dom(f) and cod(f) respectively. We sometimes
regard a function as a set of pairs and write ∅ for the empty function. Let f, g
be functions. We write f ; g for f ∪ g provided that dom(f) ∩ dom(g) = ∅. We
omit ";" if g is explicitly represented as a set, writing f{. . .} for f ; {. . .}. The
restriction of a function f to the domain D is written as f|D.
The set of types (ranged over by τ) of the calculus is given by a syntax
where b ranges over a given set of base types, and Γ ranges over variable type
assignments, each of which is a function from a finite set of variables to types.
We let x range over a countably infinite set of variables; we let X range over
a countably infinite set of labeled holes; and we let ρ range over variable
renamers, each of which is a function from a finite set of variables to variables,
denoted by {y1/x1, . . . , yn/xn}. Let ρ be such a renamer. To
avoid unnecessary complication, we assume that yi ∉ {x1, . . . , xn} whenever
yi ≠ xi (1 ≤ i ≤ n). That is, a renamer changes each name in
the domain of the renamer to a fresh name, if it is not an identity. A renamer ρ
is extended to the set of all variables by letting ρ(x) = x for x ∉ dom(ρ). In
what follows, we identify a renamer with its extension. However, we maintain
that the domain dom(ρ) of a renamer always means the domain of the
original finite function ρ. The composition ρ1 ⋆ ρ2 of two variable renamers ρ1
and ρ2 is the function with domain dom(ρ2) such that (ρ1 ⋆ ρ2)(x) = ρ1(ρ2(x)).
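A minimal sketch of finite renamers and their composition (our own dict-based encoding; a dict {'x': 'a'} stands for the renamer {a/x}, and identity outside the domain is implicit):

def compose(r1, r2):
    # (r1 * r2) has domain dom(r2) and maps v to r1(r2(v)).
    return {v: r1.get(r2[v], r2[v]) for v in r2}

# {a/x} * {x/b} = {a/b}, as in the example of Section 2.
assert compose({'x': 'a'}, {'b': 'x'}) == {'b': 'a'}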
The set of (unchecked) terms (ranged over by M) of the calculus is given by
the syntax of the lambda calculus extended with labeled holes X^ρ, hole
abstractions δX.M and context applications M1^ρ M2.
A term δX.M binds the hole X in M. The definitions of bound holes and free
holes are given similarly to the usual definitions of bound variables and free
variables in the ordinary lambda calculus. We write FH(M) for the set of
free holes in M. Since δX is the only binder for holes, this does not create
any of the subtle problems we have explained for variables in our calculus,
and therefore we can safely assume α-renaming of bound holes just as in α-congruence
in the ordinary lambda calculus. In what follows, we regard terms
as their α-equivalence classes induced by bound-hole renaming.
The set of free variables of M, denoted by FV(M), and that of bound variables
of M, denoted by BV(M), are given in Figure 1. These definitions correctly
model the effect of context application terms of the form M1^ρ M2, which binds
the variables in dom(ρ) in M2.
Fig. 1. The sets of free and bound variables
In addition to the sets of free and bound variables, we need to distinguish
three other classes of variables. Let M be a term containing a hole X^ρ. The
variables in cod(ρ), which we call free variable candidates, behave similarly
to free variables if they are not abstracted in M. The variables in the set
dom(ρ), which we call interface variable candidates, are the source of interface
variables. To see the last class, consider a term which contains M1^ρ M2. The
variables in cod(ρ), which we call exported variables, are used to match the
variables exported by the context M1 with bound variables in M2. The formal
definitions of the set FVC(M) of free variable candidates of M and the set
IVC(M) of interface variable candidates of M are given in Figure 2, and the
definition of the set EV(M) of exported variables of M is given in Figure 3.
Fig. 2. The sets of free variable candidates and interface variable candidates.
Fig. 3. The set of exported variables
We also define the set PFV(M) of potentially free variables of M.
We are now in the position to define the type system of the calculus. Since
a term may contain free holes as well as free variables, its type depends not
only on the types of variables but also on the types of free holes. A hole type is
determined by a triple ([Γ . τ]; ρ) consisting of the type τ of a term to be filled,
a type assignment Γ describing the set of interface variables and their types,
and a variable renamer ρ which is used to keep track of the correspondence
between bound variables and interface variables. While Γ describes the set
of all abstracted variables, ρ describes the set of free variable candidates to
be abstracted. There is an operation yielding the triple obtained from
([Γ . τ]; ρ) by abstracting a variable x; this operation is extended to type
assignments pointwise.
A hole type assignment is a finite function which assigns to
a hole a triple ([Γ . τ]; ρ) describing the type of the hole, and we call the
variables in dom(Γ) interface variables. We write Clos(Γ; ·) for the hole type
assignment obtained by abstracting the variables of Γ in every triple of a
given hole type assignment.
The type system of the calculus is defined as a proof system to derive a typing
judgment which indicates that a term M has type τ under a variable type assignment and a
hole type assignment. The set of typing rules is given in Figure 4.
Some explanations are in order.
Rule (hole). Since X is not surrounded by any λ at this moment, the associated
type assignment in the hole type assignment is empty, and the set of
free variable candidates of X is specified by ρ. They will be abstracted by
the rules (abs) and (fill).
Rule (abs). Lambda-abstracting x not only discharges x from the type hypotheses
for the term M, but also extends the set of interface variables
for each hole in M with a corresponding x′. The latter effect is represented by
the operation Clos({x : τ}; ·), which extends each Γ appearing in the hole type assignment.
Rule (fill). By forming the term M1^{x′1/x1, . . . , x′n/xn} M2, each xi in M2 becomes
bound, and the set of interface variables of each hole in M2 is extended
with it. This property is modeled by discharging each xi from the typing
judgment for M2 and abstracting it from the hole type assignment of M2. This rule is similar to the
one for a "closure", i.e., a term associated with an explicit substitution, in
the λσ-calculus [1].
Figure 5 shows an example of a typing derivation.
Fig. 4. The Type System
Fig. 5. Example of Typing Derivation
In our calculus, each free hole occurs linearly in a well-typed term. If multiple
occurrences of a hole were allowed, then they could have different interface
variables. This would considerably complicate the conceptual understanding
of contexts as well as the type system. The linearity condition is ensured by the
rule (hole), by the disjointness condition on hole type assignments in the rules (app) and (fill),
and by the property that there is no rule for adding redundant hypotheses to the hole type assignment.
The following lemma is easily shown by induction on the typing derivations:
for a well-typed term M, FH(M) coincides with the domain of its hole type assignment.
Moreover, each free hole appears exactly once in M.
The following standard properties also hold for this type system, and can be
easily shown by induction on the typing derivations.
Lemma
Lemma
Lemma 7 If X fx 0
=yng occurs in M , fz
ng \ fx 0
. ]; fy 0
. ]; fy 0
M 0 is obtained from M by substituting X fx 0
=yng
for
=wng .
4 The Reduction System
To define the reduction relation, we need to define the substitution and hole-filling
operations. In the ordinary lambda calculus, substitution can be defined modulo
α-congruence, which allows us to simply assume that unwanted variable
capture will not happen. In our calculus, since we have not yet obtained α-congruence,
we first need to define substitution as an operation on syntactic
terms (not on equivalence classes).
We write {M′/x}M for the term obtained by substituting M′ for every free
occurrence of x in M. The following lemma shows that substitution preserves
typing under a strong variable hygiene condition.
Lemma
The proof is deferred to the Appendix. As in the standard definition of substitution,
we have the following composition lemma:
y.
As we have explained earlier, hole-filling involves the application of the variable
renamer associated with the hole to the term being filled. To define hole-filling,
we extend a variable renamer to a function on terms as follows:
We have the following renaming lemma, whose proof is deferred to the Appendix.
Hole-filling is defined as a combination of variable renaming and substitution.
We write M[M′/X] for the term obtained from M by syntactically substituting
the term ρ(M′) for X in M, where ρ is the variable renamer associated with X.
Its definition is obtained by simply extending the following clauses according
to the structure of M.
From this definition and the property of typing, it is easily seen that hole-filling
behaves well with respect to typing judgments. The following lemma shows
that hole-filling preserves the typing.
Lemma 11
The proof is deferred to the appendix.
The following is the composition lemma for hole-filling, where IVC_X(M)
denotes the domain of the variable renamer attached to (i.e., on the shoulder of) the hole X in M.
Lemma 12
PROOF. If dom(ρ1) ∩ (PFV(M) \ dom(ρ2)) . . .
The notion of α-congruence in our calculus is now defined as the congruence
relation on the set of well-typed terms generated by the following two axioms:
=yng fy 1 =x
if each yi ∉ BV
The following lemma shows that α-renaming preserves typing; this is proved
by induction on the typing derivation of M using Lemma 10.
α-congruence allows us to rename bound variables whenever it is necessary.
In what follows, we assume the following variable convention for our calculus:
bound variables are all distinct, and the set of bound variables has no intersection
with the set of interface variable candidates, the set of potentially free variables,
and the set of exported variables.
Under this variable convention, the reduction axioms (β) and (fill) of our calculus
are given as follows:
In the axiom (β), the restriction FH(M2) = ∅ is needed to ensure that each
hole occurs linearly. The restriction FH(M1) = ∅ is needed to maintain
the bindings generated by λx for the holes in M1. Since in our calculus contexts
are represented not by terms with free holes but by hole-abstracted terms, this
does not restrict the first-class treatment of contexts.
The one-step reduction relation M → N is defined on the set of well-typed
terms as: M → M′ iff M is well typed and M′ is obtained by applying one
of the two reduction axioms to some subterm of M. We write M →* N for
the reflexive, transitive closure of →.
For this reduction, we have the following desired results.
Theorem 14 (Subject Reduction) If M is a well-typed term and M → N, then
N is well typed with the same type under the same type assignments.
Fig. 6. Definition of the Parallel Reduction
PROOF. This is a direct consequence of Lemmas 7, 8, 11 and 13. □
Theorem 15 (Confluence) For any well-typed term M, if M →* M1 and M →* M2,
then there is some M3 such that M1 →* M3 and M2 →* M3.
The proof uses the technique of parallel reduction due to Tait and
Martin-Löf. The parallel reduction relation of our calculus, written →→, is
given in Figure 6.
From this definition, it is easily seen that the transitive closure of the parallel
reduction coincides with the reduction relation of the calculus (→*). To prove
the theorem, it is therefore sufficient to prove the diamond property of →→. To
show this, we follow Takahashi [15] and prove the following stronger property.
Lemma 16 For any well-typed term M, there exists a term M* such that if
M →→ M′ then M′ →→ M*.
In the lemma above, M* denotes the term obtained from M by parallel reducing
all the possible redexes of M; its definition is given in Figure 7.
The proof of Lemma 16 is by induction on the typing derivation of M,
using the following lemmas:
Fig. 7. Definition of M*
Lemma
terms M;M 0 such that fx :
PROOF. We proceed by induction on the derivation of M →→ M′. Here we
only show the cases (βred) and (fillred).
Case (βred)
By the induction hypothesis,
.
Therefore by the rule (βred),
1 . The rest of
this case is by lemma 9.
Case (fillred)
By the induction hypothesis,
.
Therefore by the rule (fillred),
can assume x 62 dom() by the hygiene condition. Then,
Lemma
PROOF. We proceed by induction on the derivation of M →→ M′. We only
show the crucial case (fillred).
Case (fillred)
By the induction hypothesis,
.
2 =X] by
the rule (lred). Since fx; x 0 g\BV
Therefore ((fx 0 =xgM 0
for any terms M;M 0 such that
n .
PROOF. We proceed by induction on the derivation of M →→ M′. Here we
only show the cases (hole) and (fillred).
. By repeated application of lemma 18.
Case (fillred)
(Y 6 X).
Suppose
. By the
induction hypothesis, M 1 [M
Therefore by the rule (fillred),
The rest of this sub-case
is by (M 0
. By the induction hypothesis,
Therefore by the rule (fillred),
Therefore by lemma 12,
This completes the proof of Theorem 15.
Conclusions
We have developed a typed calculus for contexts. In this calculus, contexts and
lambda terms share the same set of variables and can be freely mixed (as long
as they type-check). This allows us to treat contexts truly as first-class values.
However, a straightforward mixture of β-reduction and fill-reduction results
in an inconsistent system. We have solved the problem by developing a type
system that precisely specifies the variable-capturing nature of contexts. The
resulting typed calculus enjoys the subject reduction property and the Church-Rosser
property. We believe that the typed context calculus presented here
will serve as a type-theoretical basis for developing a programming language
with advanced features for manipulation of open terms. There are a number
of interesting topics that merit further investigation. We briefly discuss some
of them below.
Integration with Explicit Substitution. In our calculus, β-contraction is restricted
to those redexes that do not contain free holes. While this does not
restrict the first-class treatment of contexts, removing this restriction would make
the reduction system slightly more general. As we have noted earlier, one reason
for this restriction is that if we contract a β-redex containing a free hole,
then the binding through the hole will be lost. One way of solving this problem
would be to integrate our calculus with the λσ-calculus of Abadi et al. [1], and
to generalize variable renamers to explicit substitutions. Dowek et al. [3] considered
a calculus containing holes and grafting, which roughly corresponds to
hole-filling, and developed a technique to mingle capture-avoiding substitution
with grafting by encoding them in a calculus of explicit substitutions using de
Bruijn notation. Although their calculus does not contain a term constructor
for context application, and their technique is therefore not directly applicable
to our calculus, we believe that it is possible to extend their technique to our
calculus by translating all the machinery we have developed for our calculus
into de Bruijn notation. However, such a translation would significantly decrease
the flexibility of access to exported variables by names. It should also be noted
that the notion of de Bruijn indexes presupposes α-equivalence on terms, and
therefore defining the context calculus using de Bruijn notation requires the
mechanisms (or something similar to those) for obtaining α-equivalence we
have developed in this paper.
Programming Languages with Contexts. Our motivation is to provide a basis
for developing a programming language with the feature of first-class contexts.
The context calculus we have worked out in this article guarantees that we
can have such a typed language with first-class contexts. In order to develop
an actual programming language, however, we need to develop a realistic evaluation
strategy for the calculus. Our preliminary investigation shows that the
usual call-by-value evaluation strategy using closures can be extended to our
calculus. A more challenging topic is to develop a polymorphic type system
and a type inference algorithm for our calculus, which will enable us to develop
an ML-style programming language with the feature of contexts we have
advocated. One crucial issue is the flexible treatment of context types. In the
current definition, the constructor M^{x1/x′1, . . . , xn/x′n} N is annotated with a fixed variable
renamer. This reduces the flexibility of the calculus. A better approach
would be to refine the type system so that a context whose set of interface
variables is compatible with (rather than exactly equal to) the required one can also be used.
One of the authors has recently developed an ML-style language with first-class
contexts [5] where an ML-style polymorphic type system, a call-by-value
operational semantics and a type inference algorithm are given.
Relationship with the formulae-as-types notion. It is intuitively clear that a context
represented as a term in our calculus has constructive meaning. An important
question is to characterize this intuition formally in the sense of the Curry-Howard
isomorphism [7]. This would lead us to a new form of proof normalization
corresponding to our fill-reduction. Since the context calculus is Church-Rosser,
it should be possible to develop a proof system that is conservative
over conventional intuitionistic logic and supports a proof normalization
process corresponding to fill-reduction. The authors recently noticed that there
is an intriguing similarity between the proof system of typings in the context
calculus and Joshi and Kulick's partial proof manipulation system [8], which is
used to represent linguistic information. Another relevant system is Herbelin's
lambda calculus isomorphic to a variant of sequent calculus, where proofs of
certain sequents are interpreted by applicative contexts [6]. These results suggest
interesting connections between the context calculus and proof systems.
Acknowledgements
The authors thank Pierre-Louis Curien, Laurent Dami, Yasuhiko Minamide,
Didier Rémy, Masahiko Sato and the anonymous referees for their careful reading
of a draft of this paper and numerous useful comments. The second author
also thanks Shinn-Der Lee and Dan Friedman for insightful discussions on
contexts.
--R
Explicit substitutions.
A lambda-calculus for dynamic binding.
The feel of Java.
A lambda-calculus structure isomorphic to sequent calculus structure.
The formulae-as-types notion of construction.
Partial proof trees as building blocks for a categorial grammar.
Context rewriting.
Enriching the Lambda Calculus with Contexts: Towards A Theory of Incremental Program Construction.
Fully abstract models of typed lambda-calculi.
LCF considered as a programming language.
Explicit Environments.
Parallel reductions in lambda-calculus.
A theory of binding structures and applications to rewriting.
Confluent Equational Reasoning for Linking with First-Class Primitive Modules.
Programming in Modula-2.
--TR
Programming in MODULA-2 (3rd corrected ed.)
A theory of binding structures and applications to rewriting
Parallel reductions in λ-calculus
Enriching the lambda calculus with contexts
A lambda-calculus for dynamic binding
The Definition of Standard ML
The Feel of Java
Explicit Environments
First-Class Contexts in ML
A Lambda-Calculus Structure Isomorphic to Gentzen-Style Sequent Calculus Structure
Context Rewriting
Higher-order Unification via Explicit Substitutions
--CTR
Brigitte Pientka, Functional Programming With Higher-order Abstract Syntax and Explicit Substitutions, Electronic Notes in Theoretical Computer Science (ENTCS), v.174 n.7, p.41-60, June, 2007
Roger Keays , Andry Rakotonirainy, Context-oriented programming, Proceedings of the 3rd ACM international workshop on Data engineering for wireless and mobile access, September 19-19, 2003, San Diego, CA, USA
Yosihiro Yuse , Atsushi Igarashi, A modal type system for multi-level generating extensions with persistent code, Proceedings of the 8th ACM SIGPLAN symposium on Principles and practice of declarative programming, July 10-12, 2006, Venice, Italy
Christian Urban , Andrew M. Pitts , Murdoch J. Gabbay, Nominal unification, Theoretical Computer Science, v.323 n.1-3, p.473-497, 14 September 2004
Murdoch J. Gabbay, A new calculus of contexts, Proceedings of the 7th ACM SIGPLAN international conference on Principles and practice of declarative programming, p.94-105, July 11-13, 2005, Lisbon, Portugal
Makoto Hamana, Term rewriting with variable binding: an initial algebra approach, Proceedings of the 5th ACM SIGPLAN international conference on Principles and practice of declaritive programming, p.148-159, August 27-29, 2003, Uppsala, Sweden
Steven E. Ganz , Amr Sabry , Walid Taha, Macros as multi-stage computations: type-safe, generative, binding macros in MacroML, ACM SIGPLAN Notices, v.36 n.10, October 2001
Gavin Bierman , Michael Hicks , Peter Sewell , Gareth Stoyle , Keith Wansbrough, Dynamic rebinding for marshalling and update, with destruct-time λ, ACM SIGPLAN Notices, v.38 n.9, p.99-110, September 2003
Makoto Hamana, An initial algebra approach to term rewriting systems with variable binders, Higher-Order and Symbolic Computation, v.19 n.2-3, p.231-262, September 2006 | alpha-renaming;lambda-calculus;type system;context |
504573 | Tractable disjunctions of linear constraints. | We study the problems of deciding consistency and performing variable elimination for disjunctions of linear inequalities and disequations with at most one inequality per disjunction. This new class of constraints extends the class of generalized linear constraints originally studied by Lassez and McAloon. We show that deciding consistency of a set of constraints in this class can be done in polynomial time. We also present a variable elimination algorithm which is similar to Fourier's algorithm for linear inequalities. Finally, we use these results to provide new temporal reasoning algorithms for the Ord-Horn subclass of Allen's interval formalism. We also show that there is no low level of local consistency that can guarantee global consistency for the Ord-Horn subclass. This property distinguishes the Ord-Horn subclass from the pointizable subclass (for which strong 5-consistency is sufficient to guarantee global consistency), and the continuous endpoint subclass (for which strong 3-consistency is sufficient to guarantee global consistency). Copyright 2001. Elsevier Science B.V. | Introduction
Linear constraints over the reals have recently been studied in depth by researchers in
constraint logic programming (CLP) and constraint databases (CDB) [JM94, KKR95,
Kou94c]. Two very important operations in CLP and CDB systems are deciding consistency
of a set of constraints, and performing variable elimination. Subclasses of linear
constraints over the reals have also been considered in temporal reasoning [DMP91, Kou92,
Kou94a, Kou95, NB95]. Important operations in temporal reasoning applications are (i)
deciding the consistency of a set of binary temporal constraints, (ii) performing
variable elimination, and (iii) computing the strongest feasible constraints between every
pair of variables.
Disjunctions of linear constraints over the reals are important in many applications
[JM94, DMP91, Kou92, Kou94a, Kou94b, Kou95, NB95]. The problem of deciding consistency
for an arbitrary set of disjunctions of linear constraints is NP-complete [Son85]. It is
therefore interesting to discover classes of disjunctions of linear constraints for which consistency
can be decided in PTIME. In [LM89a], Lassez and McAloon have studied the class
of generalized linear constraints, which includes linear inequalities and disjunctions of
linear inequations.¹ Among other things, they have shown that the consistency problem
for this class can be solved in PTIME.
* This is a longer version of a paper with the same title which appears in the Proceedings of the 2nd
International Conference on Principles and Practice of Constraint Programming (CP96), Cambridge, MA,
August 19-22, 1996. Lecture Notes in Computer Science, Vol. 1118, pages 297-307.
[Kou92, IvH93, Imb93, Imb94] have studied the problem of variable elimination for
generalized linear constraints. The basic algorithm for variable elimination was discovered
independently in [Kou92] and [Imb93], but [Kou92] used the result only in
the context of temporal constraints. The basic algorithm is essentially an extension of
Fourier's elimination algorithm [Sch86] to deal with disjunctions of inequations. If S is a
set of constraints, let |S| denote its cardinality. Let C = I ∪ Dn be a set of generalized linear
constraints, where I is a set of inequalities and Dn is a set of disjunctions of inequations.
If we eliminate m variables from C using the basic algorithm proposed by Koubarakis and
Imbert then the resulting set contains O(|I|^(2^m)) inequalities and O(|Dn| · |I|^(2^(m+1)))
disjunctions of inequations. Many of these constraints are redundant. Imbert has studied this
problem in more detail and presented more advanced algorithms that eliminate redundant
constraints [Imb93, Imb94].
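Since the basic algorithm extends Fourier's elimination, it may help to recall Fourier elimination for plain inequalities. The following is a minimal Python sketch (our own encoding of an inequality a·x ≤ b as a pair of a coefficient dict and a bound; no redundancy elimination is attempted):

from itertools import product

def eliminate(ineqs, x):
    # Fourier elimination of variable x from inequalities a.x <= b.
    pos = [(c, b) for c, b in ineqs if c.get(x, 0) > 0]
    neg = [(c, b) for c, b in ineqs if c.get(x, 0) < 0]
    out = [(c, b) for c, b in ineqs if c.get(x, 0) == 0]
    for (c1, b1), (c2, b2) in product(pos, neg):
        l1, l2 = c1[x], -c2[x]               # positive multipliers
        coeffs = {v: l2 * c1.get(v, 0) + l1 * c2.get(v, 0)
                  for v in set(c1) | set(c2) if v != x}
        out.append((coeffs, l2 * b1 + l1 * b2))
    return out

# Eliminate y from { x + y <= 4, -y <= 0, x <= 3 }  ==>  { x <= 3, x <= 4 }
print(eliminate([({'x': 1, 'y': 1}, 4), ({'y': -1}, 0), ({'x': 1}, 3)], 'y'))

Each eliminated variable can square the number of inequalities, which is the source of the doubly exponential bound quoted above.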
In this paper we go beyond the above work on generalized linear constraints. Our
contributions can be summarized as follows:
• We extend the class of generalized linear constraints to include disjunctions with an
unlimited number of inequations and at most one inequality per disjunction.
The resulting class will be called the class of Horn constraints since there seems to
be some analogy with Horn clauses. We show that deciding consistency can still be
done in PTIME for this class (Theorem 3.4). This result has also been obtained
independently by Jonsson and Bäckström [Jon96]. We also extend the basic variable
elimination algorithm of [Kou92, Imb93] to this new class of constraints.
• We study a special class of Horn constraints, called Ord-Horn constraints, originally
introduced in [NB95]. This class is important for temporal reasoning based on the
Ord-Horn class of interval relations expressible in Allen's formalism [All83, NB95].
Our results allow us to improve the best known algorithms for consistency checking
and computing the strongest feasible constraints for this class. This answers an open
problem of [NB95].
The paper is organized as follows. Section 2 introduces the basic concepts needed for
the developments of this paper. Section 3 presents the algorithm for deciding consistency.
Section 4 presents the algorithm for variable elimination. Section 5 presents our results
for the class of Ord-Horn constraints. Finally, Section 6 discusses future research.
2 Preliminaries
We consider the n-dimensional Euclidean space Rⁿ. A linear constraint over Rⁿ is an
expression a1x1 + · · · + anxn ∼ b, where a1, . . . , an, b are integers, x1, . . . , xn are variables
ranging over the real numbers, and ∼ is ≤, = or ≠. Depending on what ∼ is, we will
distinguish linear constraints into inequalities, equations and inequations.
¹ Some people prefer the term disequations [Imb94].
Let us now define the class of constraints that we will consider.
Definition 2.1 A Horn constraint is a disjunction d1 ∨ · · · ∨ dn where each di
is a weak linear inequality or a linear inequation, and the number of inequalities
among d1, . . . , dn does not exceed one. If there are no inequalities then a Horn constraint
will be called negative. Otherwise it will be called positive. Horn constraints of the form
d1 ∨ · · · ∨ dn with n ≥ 2 will be called disjunctive.
Example 2.1 The following are examples of Horn constraints:
The first and the third constraint are positive while the second and the fourth are negative.
The third and the fourth constraints are disjunctive.
According to the above definition, weak inequalities are positive Horn constraints.
Sometimes we will find it more convenient to consider inequalities separately from positive
disjunctive Horn constraints. If d is a positive disjunctive Horn constraint then d ≡ ¬(E ∧ i),
where E is a conjunction of equations and i is an inequality. We will often use this notation
for positive Horn constraints.
Notice that we do not need to introduce strict inequalities in the above definition. A
strict inequality like x1 < x2 can be equivalently written as the conjunction of the Horn
constraints x1 ≤ x2 and x1 ≠ x2. Similarly, the constraint x1 < x2 ∨ D, where D is
a disjunction of inequations, can be equivalently written as the conjunction of the following
constraints: x1 ≤ x2 ∨ D and x1 ≠ x2 ∨ D.
A similar observation is made in [NB95].
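Under an (assumed) encoding of disjunctions as lists of (op, expr) pairs read as "expr op 0", this rewriting is mechanical; remove_strict below is our own illustrative helper, not part of the paper:

def remove_strict(d):
    # Rewrite (e < 0) or D into the two Horn constraints
    # (e <= 0) or D  and  (e != 0) or D.
    strict = [(op, e) for op, e in d if op == '<']
    if not strict:
        return [d]
    rest = [(op, e) for op, e in d if op != '<']
    e = strict[0][1]
    return [[('<=', e)] + rest, [('!=', e)] + rest]

# x1 < x2, i.e. (x1 - x2) < 0, becomes x1 - x2 <= 0 and x1 - x2 != 0.
print(remove_strict([('<', 'x1 - x2')]))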
Negative Horn constraints have been considered before in [LM89a, LM89b, Kou92,
IvH93, Imb93, Imb94, Kou95]. Nebel and Bürckert have studied the class of Ord-Horn
constraints in the context of qualitative interval reasoning [NB95]. Ord-Horn constraints
form a proper subclass of Horn constraints, and will be considered in detail in Section 5.
Horn constraints have also been studied by Jonsson and Bäckström [Jon96] who discovered
independently the result discussed in Section 3.
We will now present some standard definitions.
Definition 2.2 Let C be a set of constraints in variables x1, . . . , xn. The solution set of
C, denoted by Sol(C), is:
Sol(C) = {(r1, . . . , rn) ∈ Rⁿ : (r1, . . . , rn) satisfies every constraint of C}.
Each member of Sol(C) is called a solution of C. A set of constraints is called consistent
if its solution set is non-empty. We will use the same notation, Sol(·), for the solution set
of a single constraint.
Remark 2.1 In the rest of the paper we will usually consider one or more sets of constraints
C1, . . . , Cm in variables x1, . . . , xn. In this case we will always regard Sol(Ci) as
a subset of Rⁿ even though Ci might contain less than n variables.
We will also use the alternative notation Sol*(·). If C is a set of constraints, Sol*(C)
will always be regarded as a subset of Rᵏ where k is the number of variables of C (independently
of any other constraint set considered at the same time). This notation will come in
handy in Section 4 where we study variable elimination.
Definition 2.3 Let C1 and C2 be sets of constraints in the same set of variables. C1 will
be called equivalent to C2 if Sol(C1) = Sol(C2). A constraint c logically follows from a set
of constraints C, denoted by C ⊨ c, iff every solution of C satisfies c.
We will now present some concepts of convex geometry [Sch86, Gru67] that will enable
us to study the geometric aspects of the constraints considered. We will take our definitions
from [LM89a]. If V is a subspace of the n-dimensional Euclidean space Rⁿ and p a vector
in Rⁿ then the translation V + p = {v + p : v ∈ V} is called an affine space. The intersection of all affine
spaces that contain a set S is an affine space, called the affine closure of S and denoted
by Aff(S). If e is a linear equation then the solution set of e is called a hyperplane.
In R³ the hyperplanes are the planes. In R² the hyperplanes are the straight lines. A
hyperplane is an affine space and every affine space is the intersection of a finite number
of hyperplanes. If E is a set of equalities then Sol(E) is an affine space. If i is a linear
inequality then the solution set of i is called a half-space. If I is a set of inequalities then
Sol(I) is the intersection of a finite number of half-spaces, and is called a polyhedral set.
A set S ⊆ Rⁿ is called convex if the line segment joining any pair of points in S is
included in S. Affine subspaces of Rⁿ are convex. Half-spaces are convex. Also, polyhedral
sets are convex.
If d is a negative Horn constraint then the solution set of d is Rⁿ \ Sol(¬d).
The constraint ¬d is a conjunction of equations, thus Sol(¬d) is an affine space. If ¬d is
inconsistent then d is equivalent to true. In the rest of the paper
we will ignore negative Horn constraints that are equivalent to true.
If d is a positive disjunctive Horn constraint of the form ¬(E ∧ i) then Sol(d) =
Rⁿ \ Sol(¬d). The constraint ¬d is a conjunction E ∧ i where E is a conjunction of
equations and i is a strict inequality. If E ≡ true then d is essentially a weak inequality.
If ¬d is inconsistent then its corresponding Horn
constraint d is equivalent to true. If ¬d is consistent
and Sol(i) ⊇ Sol(E) then d ≡ ¬E, so d is actually a negative Horn constraint.
If ¬d is consistent and Sol(i) ⊉ Sol(E)
then its solution set will be called a half affine space. In R³ the half affine spaces are
half-lines or half-planes. In the rest of the
paper we will ignore positive disjunctive Horn constraints equivalent to a weak inequality,
a negative Horn constraint or true.
3 Deciding Consistency
[LM89a] showed that negative Horn constraints can be treated independently of one another
for the purposes of deciding consistency. The following is one of their basic results.
Theorem 3.1 Let C = I ∪ Dn be a set of constraints where I is a set of linear inequalities
and Dn is a set of negative Horn constraints. Then C is consistent if and only if I is
consistent, and for each d ∈ Dn the set I ∪ {d} is consistent.
Whether a set of inequalities is consistent or not can be decided in PTIME using
Khachiyan's linear programming algorithm [Sch86]. We can also detect in PTIME whether
I ∪ {d} is consistent by simply running Khachiyan's algorithm 2n times to decide whether
I implies every equality e in the conjunction of n equalities ¬d. In other words, deciding
consistency in the presence of negative Horn constraints can be done in PTIME.²
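As an illustration of how the implication test can be realized, here is a sketch using scipy.optimize.linprog (our own helper name implies_equality; linprog is of course not Khachiyan's algorithm, but any LP routine plays the same role here). Assuming the system Ax ≤ b is consistent, it implies the equality a·x = c iff the minimum and the maximum of a·x over its solution set both equal c:

import numpy as np
from scipy.optimize import linprog

def implies_equality(A, b, a, c, tol=1e-9):
    # Does Ax <= b imply a.x == c?  Assumes Ax <= b is consistent.
    n = len(a)
    lo = linprog(np.asarray(a), A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    hi = linprog(-np.asarray(a), A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    return (lo.success and hi.success
            and abs(lo.fun - c) < tol and abs(-hi.fun - c) < tol)

# I = { x <= 1, -x <= -1 } forces x = 1, so I implies the equality x = 1;
# hence I together with the negative Horn constraint x != 1 is inconsistent.
print(implies_equality(np.array([[1.0], [-1.0]]), np.array([1.0, -1.0]),
                       [1.0], 1.0))   # True

For a negative Horn constraint d, the set I ∪ {d} is then inconsistent exactly when this test succeeds for every equality of ¬d.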
Is it possible to extend this result to the case of positive disjunctive Horn constraints?
In what follows, we will answer this question affirmatively. Let us start by pointing out
that the independence property of negative Horn constraints does not carry over to positive
ones.
Example 3.1 One can construct a set of inequalities I and positive disjunctive Horn
constraints d1 and d2 such that the constraint sets I ∪ {d1} and I ∪ {d2}
are consistent, but the set I ∪ {d1, d2} is inconsistent.
Fortunately, there is still enough structure available in our problem which we can
exploit to come up with a PTIME consistency-checking algorithm. Let C = I ∪ Dp ∪ Dn be
a set of constraints where I is a set of inequalities, Dp is a set of positive disjunctive Horn
constraints, and Dn is a set of negative Horn constraints. Intuitively, the solution set of
C is empty only if the polyhedral set defined by I is covered by the affine spaces and half
affine spaces defined by the Horn constraints.
The algorithm Consistency shown in Figure 1 proceeds as follows. Initially, we check
whether I is consistent. If this is the case, then we proceed to examine whether Sol(I)
can be covered by Sol(¬d1) ∪ · · · ∪ Sol(¬dm) for the Horn constraints d1, . . . , dm of
Dp ∪ Dn. To verify this, we make successive passes
over Dp ∪ Dn. In each pass, we carry out two checks. The first check discovers whether
there is any positive Horn constraint d ≡ ¬(E ∧ i) such that Sol(I) is included in the
affine space defined by E. If this is the case then d is discarded and I is updated to reflect
the part possibly "cut off" by d. The resulting solution set Sol(I) is still a polyhedral set.
An inconsistency can arise if Sol(I) is reduced to ∅ by successive "cuts". In each pass
we also check whether there is an affine space (represented by the negation of a negative
Horn constraint) which covers Sol(I); in this case there is an inconsistency as well. The
algorithm stops when there are no more affine spaces or half affine spaces that pass the
two checks. In this case C is consistent.
Let us now prove the correctness of algorithm Consistency. First, we will need a few
technical lemmas. The first two lemmas show that the sets resulting from successive "cuts"
inflicted on Sol(I) by positive Horn constraints passing the first check of the algorithm
are indeed polyhedral. The lemmas also give a way to compute the inequalities defining
these sets.
2 The exact algorithm that Lassez and McAloon give in [LM89a] is different but this is not significant
for the purposes of this paper.
Algorithm Consistency
Input: A set of constraints C = I ∪ Dp ∪ Dn
Output: "consistent" if C is consistent. Otherwise "inconsistent".
If I is inconsistent then return "inconsistent"
Repeat
Done ← true
For each d ∈ Dp ∪ Dn do
If d ∈ Dp, d ≡ ¬(E ∧ i) and Sol(I) ⊆ Sol(E) then
I ← I ∪ {¬i}
If I is inconsistent then return "inconsistent"
Done ← false
Remove d from Dp
Else If d ∈ Dn and Sol(I) ⊆ Sol(¬d) then
Return "inconsistent"
End If
End For
Until Done
Return "consistent"
Figure 1: Deciding consistency of a set of Horn constraints
Lemma 3.1 Let I be a set of inequalities and :(E - i) be a positive disjunctive Horn
constraint such that Sol(I) ' Sol(E). Then Sol(I - :(E -
The other direction of the proof is trivial.
Lemma 3.2 Let I be a set of inequalities and d k j be a set of
positive disjunctive Horn constraints such that Sol(I) '
l
Then
Proof: The proof is by induction on m. The base case 3.1.
For the inductive step, let us assume that the lemma holds for
Then
using the inductive hypothesis.
The assumptions of this lemma and Lemma 3.1 imply that
Thus
The following lemmas show that if there are Horn constraints that do not pass the two
checks of algorithm Consistency then the affine spaces or half affine spaces corresponding
to their negations cannot cover the polyhedral set defined by the inequalities.
Lemma 3.3 Let S be a convex set of dimension d and suppose that S are convex
sets of dimension d
Proof: See Lemma 2 of [LM89a].
Lemma 3.4 Let I be a consistent set of inequalities and d k j
be a set of Horn constraints such that Sol(I) 6'
Proof: The proof is very similar to the proof of Theorem 1 of [LM89a].
This means that
Aff(Sol(I)) is an affine space of strictly lower dimension than Aff(Sol(I)). Then
is of strictly lower dimension than Sol(I) since the dimension of Sol(I) is
equal to that of Aff(Sol(I)). Thus from Lemma 3.3, Sol(I) 6' S m
now
that
We can now conclude that Sol(I) 6'
The following theorems demonstrate that the algorithm Consistency is correct and
can be implemented in PTIME.
Theorem 3.2 If algorithm Consistency returns "inconsistent" then its input C is inconsistent
Proof: If the algorithm returns "inconsistent" in its first line then I , and therefore C,
is inconsistent.
If the algorithm returns "inconsistent" in the third if-statement then there are positive
Horn constraints d k j such that the assumptions of
Lemma 3:2 hold for I and d Therefore
Consequently,
If the algorithm returns "inconsistent" in the fourth if-statement then there are positive
Horn constraints d negative constraint dm+1 2 D n such that the
assumptions of Lemma 2 hold for I and d
But then
Theorem 3.3 If algorithm Consistency returns "inconsistent" then its input C is inconsistent
Proof: If the algorithm returns "consistent" then I is consistent. Let d
the positive Horn constraints removed from D p [ D n by the algorithm, and
be the remaining Horn constraints. Then
Notice that Sol(I -
otherwise the algorithm outputs "inconsistent"
in Step 2. Also, Sol(I -
otherwise the
algorithm would have removed d k from D p [ D n .
any loss of generality we can also assume that
for all this does not hold for constraint d k , this constraint can
be discarded without changing Sol(C)). From Lemma 3.4 we can now conclude that
Theorem 3.4 The algorithm Consistency can be implemented in PTIME.
Proof: It is not difficult to see that the algorithm can be implemented in PTIME. The consistency
of I can be checked in PTIME using Kachian's algorithm for linear programming
[Sch86]. The test Sol(I) ' Sol(E) can be verified by checking whether every equation e in
the conjunction E is implied by I . This can be done in PTIME using Kachian's algorithm
2n times where n is the number of equations in E. In a similar way one can implement
the test Sol(I) ' Sol(:d) in PTIME when d is a negative Horn constraint.
We have just showed that the consistency of a set of Horn constraints can be determined
in PTIME. This is an important result with potential applications in any CLP or CDB
system dealing with linear constraints [JM94, KKR95, Kou94c]. We will now turn our
attention to the problem of eliminating one or more variables from a given set of Horn
constraints.
Algorithm VarElimination
Input: A set of Horn constraints C in variables X , and a variable
to be eliminated from C.
Output: A set of Horn constraints C 0 in variables X n fxg such that
Projection Xnfxg (Sol (C)).
Rewrite each constraint containing x as x - U - OE or L - x - OE or x 6= A - OE
where OE is a disjunction of inequations and x does not appear in OE.
For each pair of positive Horn constraints x - U - OE 1 and L - x - OE 2 do
End For
For each pair of positive Horn constraints x - U - OE 1 and L - x - OE 2 do
For each negative Horn constraint x 6= A - OE do
Add A 6= L - A 6= U - OE - OE 1 - OE 2 to C 0
End For
End For
Add each constraint not containing x to C 0
Return C 0
Figure
2: A variable elimination algorithm
4 Variable Elimination
In this section we study the problem of variable elimination for sets of Horn constraints.
The algorithm VarElimination, shown in Figure 2, eliminates a given variable x from a
set of Horn constraints C. More variables can be eliminated by successive applications of
VarElimination. This algorithm does not consider inequalities separately from positive
disjunctive Horn constraints (as algorithm Consistency did in Section 3).
The algorithm VarElimination is similar to the one studied in [Kou92, Imb93] for
the case of negative Horn constraints.
Theorem 4.1 The algorithm VarElimination is correct.
Proof: Let the variables of C be g. If
is an
element of Sol (C) then it can be easily seen that it is also an element of Sol (C 0 ).
Conversely, take
consider the set C(x; x 0
If this set is simplified by removing constraints equivalent to true, disjunctions equivalent
to false, and redundant constraints then
Let us now assume (by contradiction) that there is no value x 0 2 R n such that
(C). This can happen only under the following cases:
1. come from positive Horn constraints
otherwise these constraints would have been discarded from C(x; x 0
during
its simplification. But because l -
then l 0 - u 0 . Contradiction!
2. l
reasoning similar to the above, we can show that this
case is also impossible.
Finally, we can conclude that there exists a value x 0 2 R such that
be a set of Horn constraints. Eliminating m variables from C with
repeated applications of the above algorithm will result in a set with O((jI
positive Horn constraints and O(jD n j (jI negative Horn constraints. Many
of these contraints will be redundant; it is therefore important to extend this work with
efficient redundancy elimination algorithms that can be used together with VarElimina-
tion.
This section concludes our study of the basic reasoning problems concerning Horn
constraints. We will now turn our attention to a suclass of Horn constraints with important
applications to temporal reasoning.
5 Reasoning with Ord-Horn Constraints
This section studies a special class of Horn constraints, called Ord-Horn constraints, originally
introduced in [NB95]. This class is important in interval based temporal reasoning
[All83] as we will immediately show below.
Definition 5.1 An Ord-Horn constraint is a Horn constraint d each
is an inequality x - y or an inequation x 6= y and x and y are variables
ranging over the real numbers.
Example 5.1 The following are examples of Ord-Horn constraints:
The first and the last constraint are positive while the second and the third are negative.
In [All83], Allen introduced a calculus for reasoning about intervals in time. An interval
is an element of the following set I:
If i is an interval variable, will denote the endpoints of i. Allen's calculus is
based on thirteen mutually exclusive binary relations which can capture all the possible
ways two time intervals can be related. These basic relations are
and their inverses (equals is its own inverse). Figure 3 gives a pictorial representation of
these relations. For reasons of brevity, we will use the symbols b; m; o; d; s; f and e to refer
Basic Interval Symbol Pictorial Endpoint
Relation Representation Constraints
during j d iiiiiiiii
includes i di
Figure
3: The thirteen basic relations of Allen
to the basic relations in Allen's formalism. The inverse of each relation will be denoted
by the name of the relation with the suffix i (for example, the inverse of b will be denoted
by bi).
Allen's calculus has received a lot of attention and has been the formalism of choice
for representing qualitative interval information. Whenever the interval information to
be represented is indefinite, a disjunction of some of the thirteen basic relations can be
used to represent what is known. There are 2 13 such disjunctions representing qualitative
relations between two intervals. Each one of these relations will be denoted by the set of
its constituent basic relations e.g., fb; bi; d; mg. The empty relation will be denoted by ;,
and the universal relation will be denoted by ?. The set of all 2 13 relations expressible in
Allen's formalism will be denoted by A [NB95].
The following definition will be useful below.
Definition 5.2 Let S be a subset of A, i and j be variables representing intervals, and
An S-constraint is any expression of the form i R j.
Example 5.2 If interval i denotes the time that John reads his morning newspaper and
denotes the time that he has breakfast, and we know that John never reads a newspaper
while he is eating, then the A-constraint
characterizes i and j according to the information given.
Definition 5.3 Let C be a set of S-constraints. The solution set of C is:
Unfortunately, all interesting reasoning problems associated with Allen's interval calculus
are NP-hard [VKvB89] therefore it is interesting to consider subsets of Allen's for-
malism, in the form of subsets of A, that have better computational properties. 3 Three
such subsets of A have received more attention:
ffl The set C which consists of all relations R 2 A which satisfy the following condition.
If i and j are intervals, i R j can be equivalently expressed as a conjunction of
inequalities are endpoints of i and j.
The set C is called the continuous endpoint subclass of A [VKvB89].
ffl The set P which consists of all interval relations R 2 A which satisfy the following
condition. If i and j are intervals, i R j can be equivalently expressed as a conjunction
of inequalities are endpoints
of i and j.
The set C is called the pointisable subclass of A [VKvB89, vBC90, vB92]. Because
ae P the pointisable subclass is more expressive than the continuous endpoint
subclass.
ffl The set H which consists of all interval relations R 2 A which satisfy the following
condition. If i and j are intervals, i R j can be equivalently expressed as a conjunction
of Ord-Horn constraints on the endpoints of i and j. The disjunctive Ord-Horn
constraints involved in this equivalence are not arbitrary. There are at most three
of them, and each one consists of two disjuncts of the form
The set H was introduced by Nebel and B-urckert and named the Ord-Horn sub-class
[NB95]. Because P ae H the Ord-Horn subclass is more expressive than the
pointisable subclass. It consists of 868 relations i.e., it covers more than 10% of A.
Example 5.3 The following are P-constraints:
Their equivalent endpoint constraints are:
second P-constraint is also a C-constraint while the first one is not. For an enumeration
of C and P , see [vBC90].
Example 5.4 The A-constraint i fb; big j is not an H-constraint. The constraint
is an H-constraint which is not a P-constraint. Its equivalent endpoint constraints are:
enumeration of H together with several related C programs has been provided by
Nebel and Burckert. See [NB95] for details.
3 At the expense of being less
The following reasoning problems have been studied for the above subclasses [VKvB89,
ffl Given a set C of S-constraints, decide whether C is consistent.
ffl Given a set C of S-constraints, determine the strongest feasible relation between
each pair of interval variables i and j. The strongest feasible relation between two
interval variables i and j is the smallest set R such that C j. This is the same
as computing the minimal network corresponding to the given set of constraints. 4
In this section we will show how our results can be used to improve the best known
algorithms for the above reasoning problems in the case of the Ord-Horn subclass. We
start with two theorems from [NB95].
Theorem 5.1 Let C be a set of H-constraints. Deciding whether C is consistent can be
done in O(n 3 is the number of variables in C.
Theorem 5.2 Let C be a set of H-constraints. Computing the feasible relations between
all pairs of variables can be done in O(n 5 is the number of variables in C.
We will now use the results of Section 3 to improve the complexity bounds of the above
theorems.
Theorem 5.3 Let C be a set of H-constraints. Let n be the number of variables in C,
and h be the number of constraints (i R such that R 2 H n C. Deciding whether C
is consistent can be done in O(max(n 2 ; hn)) time.
Proof: First we translate C into a set of Ord-Horn constraints C 0 . Since
this translation can be achieved in O(n 2 ) time. Let C I is a set of
inequalities, D p is a set of positive disjunctive Horn constraints and D n a set of negative
Horn constraints. The translation of Nebel and B-urckert shows that C 0 contains
point variables and jD p [
We will use algorithm Consistency from Figure 1 to decide C 0 . In this case Consistency
can be made to work in O(max(n 1
Checking
the consistency of I can be done in O(n 1
constructing a directed graph G
corresponding to I and examining its strongly connected components [vB92]. Now notice
that the statement If-Else-End-If in algorithm Consistency is executed O(jD p [ D n
times. Each execution of this statement takes O(n 1 ) time. Let us see why. If the examined
constraint d is in D p , the test Sol(I) ' Sol(E) amounts to checking whether the
single inequality E is implied by I . This can be done in O(n 1 examining the
strongly connected components of G. Similarly, if d is in D n , the test Sol(I) ' Sol(:d)
can be done in O(n 1 ) time. Therefore deciding whether C 0 is consistent can be done in
deciding whether C is consistent can be done in
Theorem 5.4 Let C be a set of H-constraints. Let n be the number of variables in C,
and h be the number of constraints (i R such that R 2 H n C. Computing the
feasible relations between all pairs of variables can be done in O(max(n 4 ; hn 3
4 We will not define minimal networks formally here. The interested reader can consult [vB92] (or many
other temporal reasoning papers).
Proof: As in [NB95], we will consider all O(n 2 ) pairs of variables in turn. For each pair we
check whether each of the thirteen basic relations is consistent with the given constraints.
The basic relations that satisfy this criterion form the strongest feasible relation between
the pair. Using the algorithm of Theorem 5.3, each check can be done in O(max(n 2 ; hn))
time. The bound of the theorem follows immediately.
In the worst case the parameter h, as specified in the above theorems, can be O(n 2 ).
However in practical applications, we expect h to be significantly less than O(n 2 ) thus the
above theorems are real improvements over [NB95].
6 Future Research
In future research we would like to study more advanced variable elimination algorithms for
Horn constraints. The results of [Imb93, Imb94] that apply to negative Horn constraints
only, should be a good starting point in this direction.
Another interesting problem, which occupies us currently, is to extend the results of
[Kou95] to the pointizable subclass P and the Ord-Horn subclass H of A. [Kou95] studied
the problem of enforcing global consistency for sets of quantitative temporal constraints
over the real numbers. In a globally consistent constraint set all interesting constraints
are explicitly represented and the projection of the solution set on any subset of the
variables can be computed by simply collecting the constraints involving these variables.
An important consequence of this property is that a solution can be found by backtrack-
Enforcing global consistency can take an exponential amount of time
in the worst case [Fre78, Coo90]. As a result it is very important to identify cases in which
local consistency, which presumably can be enforced in polynomial time, implies global
consistency [Dec92].
The class of temporal constraints considered in [Kou95] includes equalities of the form
inequalities of the form x \Gamma y - r and inequations of the form x \Gamma y 6= r where
x; y are variables ranging over the real numbers and r is a real constant. [Kou95] shows
that strong 5-consistency is necessary and sufficient for achieving global consistency for
this class of constraints. It also gives an algorithm which achieves global consistency in
is the number of variables and H is the number of inequations. The
details of this algorithm demonstrate that there are situations where it is impossible to
enforce global consistency without introducing disjunctions of inequations e.g.,
The results of [Kou95] can provide the basis for efficient global consistency algorithms for
the pointizable subclass P . The open question is whether one can use the results of this
paper and [Kou95] to find efficient algorithms for global consistency for the ORD-Horn
subclass.
--R
Maintaining Knowledge about Temporal Intervals.
An optimal k-consistency algorithm
From local to global consistency.
Temporal Constraint Networks.
Synthesizing Constraint Expressions.
A Sufficient Condition For Backtrack-Free Search
Convex Polytopes.
On the Handling of Disequations in CLP over Linear Rational Arithmetic.
Constraint Logic Programming: A Survey.
Constraint Query Languages.
Dense Time and Temporal Constraints with 6
Complexity Results for First-Order Theories of Temporal Con- straints
Database Models for Infinite and Indefinite Temporal Infor- mation
Foundations of Indefinite Constraint Databases.
From Local to Global Consistency in Temporal Constraint Networks.
A Canonical Form for Generalized Linear Constraints.
A Canonical Form for Generalized Linear Costraints.
On binary constraint problems.
Bernhard Nebel and Hans-J-urgen B-urckert
Theory of Integer and Linear Programming.
Real Addition and the Polynomial Time Hierarchy.
Reasoning About Qualitative Temporal Information.
Exact and Approximate Reasoning about Temporal Relations.
Constraint Propagation Algorithms for Temporal Reasoning: A Revised Report.
--TR
Theory of linear and integer programming
An optimal <italic>k</>-consistency algorithm
Constraint propagation algorithms for temporal reasoning: a revised report
Exact and approximate reasoning about temporal relations
Temporal constraint networks
From local to global consistency
A canonical form for generalized linear constraints
Reasoning about qualitative temporal information
Variable elimination for generalized linear constraints
On the handling of disequations in CLP over linear rational arithmetic
On binary constraint problems
Database models for infinite and indefinite temporal information
Redundancy, variable elimination and linear disequations
Reasoning about temporal relations
Constraint query languages
From local to global consistency in temporal constraint networks
The complexity of query evaluation in indefinite temporal constraint databases
A unifying approach to temporal constraint reasoning
A Sufficient Condition for Backtrack-Free Search
Maintaining knowledge about temporal intervals
Synthesizing constraint expressions
Foundations of Indefinite Constraint Databases
From Local to Global Consistency in Temporal Constraint Networks
--CTR
Mathias Broxvall, A method for metric temporal reasoning, Eighteenth national conference on Artificial intelligence, p.513-518, July 28-August 01, 2002, Edmonton, Alberta, Canada
Peter Jonsson , Andrei Krokhin, Complexity classification in qualitative temporal constraint reasoning, Artificial Intelligence, v.160 n.1, p.35-51, December 2004
Mathias Broxvall , Peter Jonsson, Point algebras for temporal reasoning: algorithms and complexity, Artificial Intelligence, v.149 n.2, p.179-220, October
Andrei Krokhin , Peter Jeavons , Peter Jonsson, Reasoning about temporal relations: The tractable subalgebras of Allen's interval algebra, Journal of the ACM (JACM), v.50 n.5, p.591-640, September
Manolis Koubarakis, Querying Temporal Constraint Networks: A Unifying Approach, Applied Intelligence, v.17 n.3, p.297-311, November-December 2002 | global consistency;ORD-horn constraints;variable elimination;linear constraints;interval algebra |
504577 | Loop checks for logic programs with functions. | Two complete loop checking mechanisms have been presented in the literature for logic programs with functions: OS-check and EVA-check. OS-check is computationally efficient but quite unreliable in that it often misidentifies infinite loops, whereas EVA-check is reliable for a majority of cases but quite expensive. In this paper, we develop a series of new complete loop checking mechanisms, called VAF-checks. The key technique we introduce is the notion of expanded variants, which captures a key structural characteristic of in finite loops. We show that our approach is superior to both OS-check and EVA-check in that it is as efficient as OS-check and as reliable as EVA-check. Copyright 2001 Elsevier Science B.V. | Introduction
The recursive nature of logic programs leads to possibilities of running into innite loops with top-down
query evaluation. By an innite loop we refer to any innite SLD-derivation. An illustrative
example is the evaluation of the goal p(a) against the logic program
which leads to the innite loop
Another very representative logic program is
Currently on leave at Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
T6G 2H1. Email: ydshen@cs.ualberta.ca. Fax: (780) 492-1071.
against which evaluating the query p(g(a)) generates the innite loop
Loop checking is a long recognized problem in logic programming. 1 Although many loop
checking mechanisms have been proposed during the last decade (e.g. [1, 2, 6, 7, 9, 12, 14, 17, 19, 20,
22, 23]), a majority of them (e.g. [1, 2, 6, 7, 9, 12, 19, 20, 22, 23]) are suitable only for function-free
logic programs because they determine innite loops by checking if there are variant goals/subgoals
in SLD-derivations. Variant goals/subgoals are the same goals/subgoals up to variable renaming.
Hence, an innite loop like L 2 can not be detected because no variant goals/subgoals occur in the
derivation.
An important fact is that for function-free logic programs, innite loops can be completely
avoided by appealing to tabling techniques [4, 5, 18, 21, 23, 24]. However, innite loops with
functions remain unresolved even in tabling systems [13].
To our best knowledge, among all existing loop checking mechanisms only two can deal with
innite loops like L 2 . One is called OS-check (for OverSize loop check) [14] and the other EVA-check
(for Extended Variant Atoms loop check) [17].
OS-check, rst introduced by Sahlin [14, 15] and further formalized by Bol [3], determines
innite loops based on two parameters: a depth bound d and a size function size. Informally, OS-
check says that an SLD-derivation may go into an innite loop if it generates an OverSized subgoal.
A subgoal A is said to be OverSized if it has d ancestor subgoals in the SLD-derivation that have
the same predicate symbol as A and whose size is smaller than or equal to A. For example, if we
choose is an innite loop.
It is proved that OS-check is complete in the sense that it cuts all innite loops. However,
because it merely takes the number of repeated predicate symbols and the size of subgoals as its
decision parameters, without referring to the informative internal structure of the subgoals, the
underlying decision is fairly unreliable; i.e. many non-loop derivations may be pruned unless the
depth bound d is set su-ciently large.
EVA-check, proposed by Shen [17], determines innite loops based on a depth bound d and
generalized variants. Informally, EVA-check says that an SLD-derivation may go into an innite
loop if it generates a subgoal A 0 that is a generalized variant of all its d ancestor subgoals. A
subgoal A 0 is said to be a generalized variant of a subgoal A if it is the same as A up to variable
renaming except for some arguments whose size increases from A via a set of recursive clauses.
Recursive clauses are of the form like C 21 in P 2 , one distinct property of which is that repeatedly
applying them may lead to recursive increase in size of some subgoals.
1 There are two dierent topics on termination of logic programs. One is termination analysis (see [8] for a detailed
survey), and the other is loop checking (see [1, 23]). In this paper, we study loop checking.
Recursive increase in term size is a key feature of innite loops with functions. That is, any
innite loops with innitely large subgoals are generated by repeatedly applying a set of recursive
clauses. Due to this fact, EVA-check is complete and much more reliable than OS-check in the
sense that it is less likely to mis-identify innite loops [17].
OS-check has the obvious advantage of simplicity, but it is unreliable. In contrast, EVA-check
is reliable in a majority of cases, but it is computationally expensive. The main cost of EVA-check
comes from the computation of recursive clauses. On the one hand, given a logic program we need
to determine which clauses in it are recursive clauses. On the other hand, for any subgoals A and
A 0 in an SLD-derivation, in order to determine if A 0 is a generalized variant of A, we need to check
if A 0 is derived from A by applying some set of recursive clauses. Our observation shows that both
processes are time-consuming.
In this paper, we continue to explore complete loop checking mechanisms, which have proved
quite useful as stopping criteria for partial deduction in logic programming [11] (see [3] for the
relation between stopping criteria for partial deduction and loop checking). On the one hand, unlike
OS-check, we will fully employ the structural characteristics of innite loops to design reliable loop
checking mechanisms. On the other hand, instead of relying on the expensive recursive clauses,
we extract structural information on innite loops directly from individual subgoals. We will
introduce a new concept expanded variants, which captures a key structural characteristic of
certain subgoals in an innite loop. Informally, a subgoal A 0 is an expanded variant of a subgoal A
if it is a variant of A except for some terms (i.e. variables or constants or functions) in A each of
which grows in A 0 into a function containing the term.
The notion of expanded variants provides a very useful tool by which a series of complete loop
checking mechanisms can be dened. In this paper, we develop four such VAF-checks (for Variant
Atoms loop checks for logic programs with Functions) V AF 1 4 (d), where d is a depth bound.
loops based on expanded variants. V AF 2 (d) enhances V AF 1 (d) by
taking into account one (innitely) repeated clause. V AF 3 (d) enhances V AF 2 (d) with a constraint
of a set of (innitely) repeated clauses. And V AF 4 (d) enhances V AF 3 (d) with a constraint of
recursive clauses. The reliability increases from V AF 1 (d) to V AF 4 (d), but the computational
overhead increases, too. By balancing between the two key factors, we choose V AF 2 (d) as the best
for practical applications. V AF 2 (d) has the same complexity as OS-check, but is far more reliable
than OS-check. When d 2, V AF 2 (d) is reliable for a vast majority of logic programs. Moreover,
while no less reliable than EVA-check, V AF 2 (d) is much more e-cient than EVA-check (because
like OS-check it does not compute recursive clauses).
The plan of this paper is as follows. In Section 2, we review basic concepts concerning loop
checking. In Section 3, we introduce expanded variants and examine their properties. In Section
4, we dene four VAF-checks and prove their completeness. In Section 5, we make a comparison
of the VAF-checks with OS-check and EVA-check.
Preliminaries
In this section, we review some basic concepts concerning loop checking. We assume familiarity
with the basic concepts of logic programming, as presented in [10]. Here and throughout, by a
logic program we always mean a positive logic program. Variables begin with a capital letter,
and predicate symbols, function symbols and constants with a lower case letter. Let A be an
atom/function. The size of A, denoted jAj, is the count of function symbols, variables and constants
in A. We use rel(A) to refer to the predicate/function symbol of A, and use A[i] to refer to the
i-th argument of A, A[i][j] to refer to the j-th argument of the i-th argument, and A[i]:::[k] to refer
to the k-th argument of . of the i-th argument. For example, let
Denition 2.1 By a variant of an SLD-derivation (resp. a goal, subgoal, atom or function) D we
mean a derivation (resp. a goal, subgoal, atom or function) D 0 that is the same as D up to variable
renaming.
Denition 2.2 ([1, 3]) Let P be a logic program, G 0 a top goal and S a computation rule.
1. Let L be a set of SLD-derivations of P [ fG 0 g under S. Dene
that is a proper subderivation of Dg.
L is subderivation free if
2. A (simple) loop check is a computable set L of nite SLD-derivations such that L is closed
under variants and is subderivation free.
Observe that a loop check L formally denes a certain type of innite loops generated from
under S; i.e. an SLD-derivation G 0 is said to step into an innite
loop at G k if G 0 ) is in L. Therefore, whenever such an innite loop is detected, we
should cut it immediately below G k . This leads to the following denition.
Denition 2.3 Let T be the SLD-tree of P [ fG 0 g under S and L a loop check. Let
the SLD-derivation from the top goal G 0 to G 0 is in Lg. By applying L to T we obtain a new
SLD-tree which consists of T with all the nodes (goals) in CUT pruned. By pruning a node
from an SLD-tree we mean removing all its descendants.
In order to justify a loop check, Bol et al. introduced the following criteria.
Denition 2.4 ([1]) Let S be a computation rule. A loop check L is weakly sound if the following
condition holds: for every logic program P , top goal G 0 and SLD-tree T of P [ fG 0 g under S, if T
contains a successful branch, then contains a successful branch. A loop check L is complete if
every innite SLD-derivation is pruned by L. (Put another way, a loop check L is complete if for
any logic program P and top goal G 0
An ideal loop check would be both weakly sound and complete. Unfortunately, since logic
programs have the full power of the recursive theory, there is no loop check that is both weakly
sound and complete even for function-free logic programs [1]. As mentioned in the Introduction,
in this paper we explore complete loop checking mechanisms. So in order to compare dierent
complete loop checks, we introduce the following concept.
Denition 2.5 A complete loop check L 1 is said to be more reliable 2 than a complete loop check
logic program P and top goal G 0 , the successful SLD-derivations in TL1 are not less
than those in TL2 , and not vice versa.
It is proved that EVA-check is more reliable than OS-check [17]. In the Introduction, we
mentioned a notion of ancestor subgoals.
Denition 2.6 ([17]) For each subgoal A in an SLD-tree, its ancestor list ALA is dened recursively
as follows:
1. If A is at the root, then fg.
2. Let Am be a node in the SLD-tree, with A 1 being selected to resolve against a
clause A 0
1 . So M has a child node
Let the ancestor list of each A i at M be ALA i . Then the ancestor list ALB i of each B i at
N is ALA1 [ fA 1 g and the ancestor list ALA j of each A j is ALA j .
Obviously, for any subgoals A and B, if A is in the ancestor list of B, i.e. A 2 ALB , the proof
of A requires the proof of B.
Denition 2.7 Let G i and G k be two nodes in an SLD-derivation and A and B the selected
subgoals in G i and G k , respectively. We say A is an ancestor subgoal of B, denoted A ANC B, if
The following result shows that the ancestor relation ANC is transitive.
Theorem 2.1 If A 1 ANC A 2 and A 2 ANC A 3 , then A 1 ANC A 3 .
Proof. By the denition of ancestor lists, for any subgoal A if A 2 ALA 0 , then ALA ALA 0 . So
ALA 3
. Thus A 1 2 ALA 2
ALA3
implies A 1 2 ALA 3
. That is, A 1 ANC A 3 . 2
In [17], it is phrased as more sound.
With no loss in generality, in the sequel we assume the leftmost computation rule. So the
selected subgoal at each node is the leftmost subgoal. For convenience, for any node (goal) G i ,
unless otherwise specied we use A i to refer to the leftmost subgoal of G i .
3 Expanded Variants
To design a complete and reliable loop check, we rst need to determine what principal characteristics
that an innite loop possesses. Consider the innite loop L 2 (see the Introduction) again.
We notice that for any i 0, the subgoal p(f(::f(f(g(a)))::)) at the (i + 1)-th node G i+1 is a
variant of the subgoal p(f(::f(g(a))::)) at the i-th node G i except for the function g(a) at G i that
grows into a function f(g(a)) at G i+1 . However, If we replace g(a) with a constant a in L 2 , then
p(f(::f(f(a))::)) at G i+1 is a variant of p(f(::f(a)::)) at G i except for the constant a at G i that
grows into a function f(a) at G i+1 . Furthermore, If we replace g(a) with a variable X in L 2 , then
p(f(::f(f(X))::)) at G i+1 is a variant of p(f(::f(X)::)) at G i except for the variable X at G i that
grows into a function f(X) at G i+1 .
As another example, consider the program
Let the top goal G Z). Then we will get an innite loop L 3 as depicted in Fig.1. Observe
that for any i > 0, the subgoal at G 2(i+1) is a variant of that at G 2i except that the variable Y at
G 2i grows into f(a; Y ) at G 2(i+1) .
Fig.1 The innite loop L 3 .
These observations reveal a key structural characteristic of some subgoals in an innite loop
with functions, which can be formalized as follows.
Denition 3.1 Let A and A 0 be two atoms/functions. A 0 is said to be an expanded variant of A,
denoted A 0 wEV A, if A 0 is a variant of A except that there may be some terms at certain positions
in A each A[i]:::[k] of which grows in A 0 into a function A 0 Such terms
like A[i]:::[k] in A are then called growing terms w.r.t. A 0 .
The following result is immediate.
Theorem 3.1 If A is a variant of B, then A wEV B.
Example 3.1 At each of the following lines, A 0 is an expanded variant of A because it is a variant
of A except for the growing terms GT .
However, at the following lines no A 0 is an expanded variant of A.
c) /*c and b are not uniable
is not in f(X)
to the case that p(X; X) is not a variant of p(Y; X)
In the above example, p(X; f(X)) is an expanded variant of p(X; X). It might be doubtful
how that would happen in an innite loop. Here is an example.
Example 3.2 Let P be a logic program and G
a top goal. We have the following innite loop:
Clearly, for any i 0, the subgoal A i+1 at G i+1 is the subgoal A i at G i with the second X growing
to f(X). That is, A i+1 is a variant of A i except for A i+1
Any expanded variant has the following properties.
3 This example is suggested by an anonymous referee.
Theorem 3.2 Let A 0 wEV A.
(1) jAj jA 0 j.
(2) For any i, ., k, jA[i]:::[k]j jA 0 [i]:::[k]j.
(3) When are variants.
When jAj 6= jA 0 j, there exists i such that jA[i]j < jA 0 [i]j.
Proof: (1) and (2) are immediate from Denition 3.1. By (2), when
That is, there is no growing term in A, so by Denition 3.1 A 0 is a variant
of A. This proves (3). Finally, (4) is immediate from (2). 2
These properties are useful for the computation of expanded variants. That is, if jA 0 j < jAj,
we conclude A 0 is not an expanded variant of A. Otherwise, if we determine if both are
variants. Otherwise, we proceed to their arguments (recursively) to nd growing terms and check
if they are variants except for the growing terms.
The relation \variant of" dened in Denition 2.1 yields an equivalent relation; it is re
exive
(i.e., A is a variant of itself), symmetric (i.e., A being a variant of B implies B is a variant of
A), and transitive (i.e., if A is a variant of B and B is a variant of C, then A is a variant of C).
However, the relation wEV is not an equivalent relation.
Theorem 3.3 (1) A wEV A.
(2) A wEV B does not imply B wEV A.
(3) A wEV B and B wEV C does not imply A wEV C.
does not imply B wEV C.
Proof. (1) Straightforward by Theorem 3.1. (2) Here is a counter-example: p(f(X)) wEV p(X),
but p(X) 6w EV p(f(X)). (3) Immediate by letting
and Immediate by letting
Although wEV is not transitive, the sizes of a set of expanded variants are transitively decreas-
ing. The following result is immediate from Theorem 3.2.
Corollary 3.4 If A wEV B and B wEV C, then jAj jCj.
The concept of expanded variants provides a basis for designing loop checking mechanisms for
logic programs with functions. This claim is supported by the following theorem.
Theorem 3.5 Let be an innite SLD-derivation with
innitely large subgoals. Then there are innitely many goals G such that for any j 1,
Proof. Since D is innite, by the justication given by Bol [3] (page 40) D has an innite
subderivation D 0 of the form
where for any j 1, A 0
. Since any logic program has only a nite number of clauses,
there must be a set of clauses in the program that are invoked an innite number of times in D 0 .
be the set of all dierent clauses that are used an innite number of times in
must have an innite subderivation D 00 of the form
where for any j 1, A 00
and
any logic program has only a
nite number of predicate/function/constant symbols and D contains innitely large subgoals, there
must be an innite sequence of A 00
s in such that for any j 1, A i j ANC A i j+1
and A i j is a variant of A i j+1 except for a few terms in A i j+1 whose size increases. Note that such
an innite increase in term size in D 00 must result from some clauses in S that cause some terms
I to grow into functions of the form f(:::I:::) each cycle S is applied. This means that A i j is a
variant of A i j+1 except for some terms I that grow in A i j+1 into f(:::I:::), i.e., A i j+1 wEV A i j with
VAF-Checks
Based on expanded variants, we can dene a series of loop checking mechanisms for logic programs
with functions. In this section, we present four representative VAF-checks and prove their
completeness.
Denition 4.1 Let P be a logic program, G 0 a top goal, and d 1 a depth bound. Dene
in which there are up to d goals
that satisfy the following conditions:
(1) For each j d, A i j ANC A i j+1 and A i j+1 wEV A i j .
(2) For any j d, jA or for any j d, jA
Theorem 4.1 (1) V AF 1 (d) is a (simple) loop check. (2) V AF 1 (d) is complete w.r.t. the leftmost
computation rule.
Proof. (1) Straightforward from Denition 2.2.
(2) Let be an innite SLD-derivation. Since P has
only a nite number of clauses, there must be a set of clauses in P that are invoked an innite
4 Note that (1) the order of clauses in fC j 1
is not necessarily the same as that in S, say fC
may contain duplicated clauses, say fC
C1g.
number of times during the derivation. Let be the set of all distinct clauses that
are applied an innite number of times in D. Then, by the proof of Theorem 3.5 D has an innite
sub-derivation of the form
where for any j 1, fC
distinguish between two cases.
(i) There is no subgoal in D whose size is innitely large. Because any logic program has only a
nite number of predicate symbols, function symbols and constants, there must be innitely
many atoms in T that are variants. Let fB 1 be the rst in T that
are variants. Then, by Theorem 3.1, for each 1 j d B j+1 wEV B j with jB j+1
the conditions of V AF 1 (d) are satised, which leads to the derivation D being pruned at the
node with the leftmost subgoal B d+1 .
(ii) There is a subgoal in D with innitely large size. Then by Theorem 3.5, there must be innitely
many atoms in T that are expanded variants with growing terms. Let fB 1
be the rst in T such that for each 1 j d, B j+1 wEV B j with jB j+1 j > jB j j.
Again, the conditions of V AF 1 (d) are satised, so that the derivation D will be pruned. 2
is complete for any d 1, taking d ! 1 leads to the following immediate
corollary to Theorem 4.1.
Corollary 4.2 For any innite SLD-derivation, there is an innite sub-derivation of the form
such that all A i j s satisfy the two conditions of V AF 1 (d) (d !1).
Observe that V AF 1 (d) identies innite loops only based on expanded variants of selected
subgoals. More reliable loop checks can be built by taking into account the clauses selected to
generate those expanded variants.
Denition 4.2 Let P be a logic program, G 0 a top goal, and d 1 a depth bound. Dene
in which there are up to d goals
that satisfy the following conditions:
(1) For each j d, A i j ANC A i j+1 and A i j+1 wEV A i j .
(2) For any j d, jA or for any j d, jA
(3) For all j d, the clause selected to resolve with A i j s is the same. g)
Theorem 4.3 (1) V AF 2 (d) is a (simple) loop check. (2) V AF 2 (d) is complete w.r.t. the leftmost
computation rule.
Proof. (1) Straightforward.
(2) By Corollary 4.2, for any innite SLD-derivation D, there is an innite sub-derivation in
D of the form
such that all A 0
s satisfy the rst two conditions of V AF 2 (d). Since any logic program has only
a nite number of clauses, there must be a clause C k that resolves with innitely many A 0
s in
the sub-derivation. Let A d be the rst d A 0
s that resolve with C k . The third condition of
(d) is then satised, so we conclude the proof. 2
Again, taking d !1 leads to the following immediate corollary to Theorem 4.3.
Corollary 4.4 For any innite SLD-derivation, there is an innite sub-derivation of the form
such that all A i j s satisfy the two conditions of V AF 1 (d) (d !1).
(d) is a special case of V AF 1 (d), any SLD-derivation pruned by V AF 2 (d) must be
pruned by V AF 1 (d), but the converse is not true. As an example, consider the SLD-derivation
It will be cut by V AF 1 (2) but not by V AF 2 (2) because condition (3) is not satised. This leads to
the following.
Theorem 4.5 V AF 2 (d) is more reliable than V AF 1 (d).
considers only the repetition of one clause in an innite SLD-derivation. More constrained
loop checks can be developed by considering the repetition of a set of clauses.
Denition 4.3 Let P be a logic program, G 0 a top goal, and d 1 a depth bound. Dene
in which there are up to d goals
that satisfy the following conditions:
(1) For each j d, A i j ANC A i j+1 and A i j+1 wEV A i j .
(2) For any j d, jA or for any j d, jA
(3) For all j d, the clause selected to resolve with A i j s is the same.
(4) For all j d, the set S of clauses used to derive A i j+1 from A i j is the same. g)
Theorem 4.6 (1) V AF 3 (d) is a (simple) loop check. (2) V AF 3 (d) is complete w.r.t. the leftmost
computation rule.
Proof. (1) Straightforward.
(2) By Corollary 4.4, for any innite SLD-derivation D, there is an innite sub-derivation in
D of the form
such that all A 0
s satisfy the rst two conditions of V AF 3 (d). Obviously, the third condition of
is satised as well. Since any logic program has only a nite number of clauses, there
must be an innite sequence, A 0
l 1
l j
; :::, of A 0
in the sub-derivation such that the set S of
clauses used to derive A 0
l j+1
from A 0
l j
is the same. Let A be the rst
s.
The fourth condition of V AF 3 (d) is then satised. 2
Taking d !1 leads to the following immediate corollary to Theorem 4.6.
Corollary 4.7 For any innite SLD-derivation, there is an innite sub-derivation of the form
such that all A i j s satisfy the three conditions of V AF 2 (d) (d ! 1) and that for any j 1,
g.
Obviously, any SLD-derivation pruned by V AF 3 (d) must be pruned by V AF 2 (d). But the
converse is not true. Consider the SLD-derivation
It will be cut by V AF 2 (2) but not by V AF 3 (2) because condition (4) is not satised. This leads to
the following.
Theorem 4.8 V AF 3 (d) is more reliable than V AF 2 (d).
Before introducing another more constrained loop check, we recall a concept of recursive
clauses, which was introduced in [16].
Denition 4.4 ([16]) A set of clauses, fR are called recursive clauses if they are of the
form (or similar forms)
where for any 0 < i < m, q i (:::X i 1 :::) in R i 1 is uniable with q i (:::X i :::) in R i with an mgu
containing in Rm is uniable with q 0 (:::X 0 :::) in R 0 with an mgu
containing f(:::X m :::)=X 0 . Put another way, fR is a set of recursive clauses if starting
from the head of R 0 ( replacing X 0 with X) applying them successively leads to an inference chain
of the form
such that the last atom q 0 (:::f (:::X:::):::) is uniable with the head of R 0 with an mgu containing
Example 4.1 The sets of clauses, fC 11 g in P 1 , fC 21 g in P 2 , fC in P 3 , and fC 41 g in P 4 ,
are all recursive clauses.
Recursive clauses cause some subgoals to increase their size recursively; i.e., each cycle fR
is applied, the size of q 0 (:) increases by a constant. If fR can be repeatedly applied an
innite number of times, a subgoal q 0 (:) will be generated with innitely large size (note that not
any recursive clauses can be repeatedly applied). Since any logic program has only a nite number
of clauses, if there exist no recursive clauses in a program, there will be no innite SLD-derivations
with innitely large subgoals, because no subgoal can increase its size recursively. This means that
any innite SLD-derivation with innitely large subgoals is generated by repeatedly applying a
certain set of recursive clauses. This leads to the following.
Denition 4.5 Let P be a logic program, G 0 a top goal, and d 1 a depth bound. Dene
in which there are up to d goals
that satisfy the following conditions:
(1) For each j d, A i j ANC A i j+1 and A i j+1 wEV A i j .
(2) For any j d, jA or for any j d, jA
(3) For all j d, the clause selected to resolve with A i j s is the same.
(4) For all j d, the set S of clauses used to derive A i j+1 from A i j is the same.
(5) If for any j d jA contains recursive clauses that lead
to the size increase. g)
Theorem 4.9 (1) V AF 4 (d) is a (simple) loop check. (2) V AF 4 (d) is complete w.r.t. the leftmost
computation rule.
Proof. (1) Straightforward.
(2) By Corollary 4.7, for any innite SLD-derivation D, there is an innite sub-derivation E
in D of the form
such that all A 0
s satisfy the rst four conditions of V AF 4 (d) (d !1). Now assume that for any
j. Then E contains A 0
innitely large size. Such innitely increase in
term size in E must be generated by the repeated applications of some recursive clauses. This
means that there must be an innite sequence, A 0
l 1
l j
; :::, of A 0
s in E such that the clauses
used to derive A 0
l j+1
from A 0
l j
contain recursive clauses that lead to the size increase from A 0
l j
to
l j+1
. Let A be the rst
s. Then all A i j s satisfy the ve conditions of
When d !1, we obtain the following corollary to Theorem 4.9.
Corollary 4.10 For any innite SLD-derivation, there is an innite sub-derivation of the form
such that for any j 1, A g, and for
all or for all where the size increase results from the
application of a set of recursive clauses in fC k ; :::; C n j g.
is an enhancement of V AF 3 (d), any SLD-derivation pruned by V AF 4 (d) must
be pruned by V AF 3 (d). But the converse is not true. Consider the program that consists of the
clauses p(f(a)). The SLD-derivation
p(a) )C1 p(f(a)) )C2 2
will be cut by V AF 3 (1) but not by V AF 4 (1) because there are no recursive clauses in the program.
So we have the following result.
Theorem 4.11 V AF 4 (d) is more reliable than V AF 3 (d).
Example 4.2 Let us choose the depth bound d = 1. Then by applying any one of the four VAF-
checks, all the four illustrating innite loops introduced earlier, will be cut
at some node. That is, L 1 , L 2 and L 4 will be pruned at G 1 (the second node from the root), and
pruned at G 4 .
Example 4.3 Consider the following list-reversing program (borrowed from [3])
and the top goal G Z). Note that C 53 is a recursive clause.
Again, let us choose d = 1. After successively applying the clauses C 52 , C 53 and C 53 , we get the
following SLD-derivation:
It is easy to check that there is no expanded variant, so we continue to expand G 3 . We rst apply
C 51 to G 3 , generating a successful node 2; we then apply C 52 to G 3 , generating a node
As A 3 ANC A 5 and A 5 wEV A 3 with jA 5 are satised, which stop expanding
G 5 . We then apply C 53 to G 3 , generating a node
Obviously, A 3 ANC A 6 and A 6 wEV A 3 with jA 6 j > jA 3 j where the size increase of A 6 is via the
recursive clause C 53 , so V AF 1 4 (1) are satised again, which stop expanding G 6 . Since V AF 1 4 (1)
cut all innite branches while retaining the (shortest) successful SLD-derivation
they are weakly sound for g.
Observe that each condition of the above VAF-checks captures one characteristic of an innite
loop. Obviously, except (1) and (5), all the conditions (2) (4) make sense only when d > 1.
Because expanded variants capture a key structural characteristic of subgoals in innite loops, all
the VAF-checks with are weakly sound for a majority of representative logic programs (see
the above examples). However, considering the undecidable nature of the loop checking problem,
choosing d > 1 would be safer. 5 The following example, although quite articial, illustrates this
point.
Example 4.4 Consider the following logic program
p(f(a)). C 62
and the following successful SLD-derivation D for the top goal G
5 As mentioned by Bol [3], the question of which depth bound is optimal remains open. However, our experiments
show that V AF2(2) is weakly sound for a vast majority of logic programs.
Obviously, p(a) ANC p(f(a)), p(f(a)) wEV p(a), and C 61 is a recursive clause. If we choose
the derivation D will be pruned at G 1 by all the above four VAF-checks. That is, V AF 1 4 (1) are
not weakly sound for this program. Apparently, V AF 1 4 (2) are weakly sound.
Observe that from V AF 1 (d) to V AF 4 (d), the reliability increases, but the computational overhead
increases as well. Therefore, we need to consider a trade-o in choosing among these VAF-
checks. For practical applications, when d > 1 we suggest choosing the VAF-checks in the following
(d). The basic reasons for such a preference are
(i) our experience shows that V AF 2 (2) is weakly sound for a vast majority of logic programs, and
(ii) the check of condition (3) of V AF 2 (d) takes little time, whereas the check of recursive clauses
(condition (5) of V AF 4 (d)) is rather costly.
5 Comparison with OS-Check and EVA-Check
Because OS-check, EVA-check and V AF 1 4 (d) are complete loop checks, we make the comparison
based on the two key factors: reliability and computational overhead.
5.1 Comparison with OS-Check
We begin by recalling the formal denition of OS-check.
Denition 5.1 ([3, 14]) Let P be a logic program, G 0 a top goal, and d 1 a depth bound. Let
size be a size-function on atoms. Dene
in which there are up to d goals
such that for any 1 j d
There are three versions of OS-check, depending on how the size-function size is dened [14, 3].
In the rst version, atoms A and B, so condition (2) will always hold
and thus can be ignored. In the second version, atom A. And in the third
version, for any atoms A and B with the same arity n,
jA[i]j jB[i]j. Obviously, the third version is more reliable than the rst two versions so we can
focus on the third version for the comparison.
OS-check is complete [3], but is too weak in that it identies innite loops mainly based on the
size-function, regardless of what the internal structure of atoms is. Therefore, in order to increase
its reliability, we have to choose the depth bound d as large as possible. For example, in [14]
However, because the internal structure of atoms with functions may vary
drastically in dierent application programs, using only a large depth bound together with the
size-function as the loop checking criterion could in general be ineective/ine-cient. For example,
when applying OSC(10; size) to the programs would generate a lot of redundant
nodes. The following example further illustrates this fact.
Example 5.1 Consider the following logic program and top goal:
100). C 7;100
The successful SLD-derivation for is as follows:
| {z }
It is easy to see that OSC(d; size) is not weakly sound for this program unless we choose d 100.
In contrast, in our approach the common structural features of repeated subgoals in in-
nite loops are characterized by expanded variants. Based on expanded variants the VAF-checks
are weakly sound with small depth bounds (e.g. d 2) for a majority of logic
programs. For instance, V AF 1 4 (1) are weakly sound for P 7 in the above example, which shows a
dramatical dierence.
The above discussion is summarized in the following results
Theorem 5.1 Let size be the size-function of the third version of OS-check. For any atoms A and
B, A wEV B implies size(B) size(A).
Proof. Immediate from Theorem 3.2. 2
Theorem 5.2 For any 1 i 4, V AF i (d) is more reliable than OSC(d; size).
Proof. By Theorem 5.1 and Corollary 3.4, OSC(d; size) will be satised whenever condition (1)
of V AF i (d) holds. So any SLD-derivations pruned by V AF i (d) will be pruned by OSC(d; size)
as well. But the reverse is not true. As a counter-example, when d < 100, the SLD-derivation in
Example 5.1 will be pruned by OSC(d; size) but not by V AF i (d). 2
We now discuss computational overhead. First note that in both OS-check and the VAF-
checks, the ancestor checking, A i j ANC A i j+1 , is required. Moreover, for each ancestor subgoal
A i j of A k , in OSC(d; size) we compute
Although the computation of expanded variants is a little more expensive than
that of the size-function, both are processes of two strings (i.e. atoms). Since string processing
is far faster than ancestor checking (which needs to scan the goal-stack), we can assume that the
two kinds of string computations take constant time w.r.t. scanning the goal-stack. Under such an
assumption, the complexity of OSC(d; size) and V AF 1 2 (d) is the same (note that the check of
conditions (2) and (3) of the VAF-checks takes little time).
Since the check of condition (4) of the VAF-checks requires scanning the goal-stack, V AF 3 (d)
is more expensive than OSC(d; size). Furthermore, condition (5) of the VAF-checks, i.e. the
computation of recursive clauses, is quite expensive because on the one hand, given a logic program
we need to determine which clauses in it are recursive clauses, and on the other hand, for two
subgoals A i j and A i j+1 with jA in an SLD-derivation, we need to nd if the size
increase from A i j to A i j+1 results from some recursive clauses. This means that V AF 4 (d) could be
much more expensive than OSC(d; size).
The above discussion further suggests that V AF 2 (d) is the best choice (balanced between
reliability and overhead) among OSC(d; size) and V AF 1 4 (d).
5.2 Comparison with EVA-Check
We begin by reproducing the denition of EVA-check.
Denition 5.2 ([17]) Let P be a logic program, G 0 a top goal, and d 1 a depth bound. Dene
in which there are up to d goals
such that for any 1 j d
(2) A k is a generalized variant of A i j .g)
Here, a subgoal A 0 is said to be a generalized variant of a subgoal A if it is a variant of A except
that there may be some arguments whose size increases from A via a set of recursive clauses.
The following characterization of generalized variants is immediate from the above denition
and Denition 3.1.
Theorem 5.3 For any subgoals A 0 and A in an SLD-derivation, A 0 is a generalized variant of A
if and only if A 0 wEV A and if jA 0 j > jAj then the size increase is via a set of recursive clauses.
EV A(d) relies heavily on recursive clauses, so its complexity is similar to V AF 4 (d). Since
the computation of recursive clauses is too expensive, we will not choose EV A(d) in practical
applications unless it is more reliable than some V AF i (d). However, the following example shows
that EV A(d) can not be more reliable than any of the four VAF-checks.
Example 5.2 Consider the following logic program and top goal:
p(f(a)). C 83
A successful SLD-derivation for is as follows:
It can be easily seen that fC 81 ; C 82 g and fC 82 g are two sets of recursive clauses. Let us choose
2. Then A 2 is a generalized variant of both A 0 and A 1 , so EV A(2) will cut the derivation at
. However, this SLD-derivation will never be cut by any V AF i (2) because condition (2) of the
VAF-checks is not satised (i.e. we have jA
6 Conclusions
We have developed four VAF-checks for logic programs with functions based on the notion of
expanded variants. We observe that the key structural feature of innite loops is repetition (of
selected subgoals and clauses) and recursive increase (in term size). Repetition leads to variants
(because a logic program has only a nite number of clauses and predicate/function/constant
recursive increase introduces growing terms. The notion of expanded variants
exactly catches such a structural characteristic of certain subgoals in innite loops. Due to this,
the VAF-checks are much more reliable than OS-check and no less reliable than EVA-check even
with small depth bounds (see Examples 5.1 and 5.2). On the other hand, since the structural
information is extracted directly from individual subgoals, without appealing to recursive clauses,
the VAF-checks (except V AF 4 (d)) are much more e-cient than EVA-check.
In balancing between the reliability and computational overhead, we choose V AF 2 (d) as the
best one for practical applications. Although V AF 2 (2) is reliable for a vast majority of logic
programs, due to the undecidability of the loop checking problem, like any other complete loop
checks, V AF 2 (d) in general cannot be weakly sound for any xed d. The only way to deal with this
problem is by heuristically tuning the depth bound in practical applications. Methods of carrying
out such a heuristic tuning then present an interesting open problem for further study.
Acknowledgements
We thank the anonymous referees for their constructive comments, which have greatly improved the
presentation. The rst author is supported in part by Chinese National Natural Science Foundation
and Trans-Century Training Programme Foundation for the Talents by the Chinese Ministry of
Education.
--R
An analysis of loop checking mechanisms for logic programs
Towards more e-cient loop checks
Loop checking in partial deduction
Tabulated resolution for the Well-Founded semantics
Tabled evaluation with delaying for general logic programs
Eliminating unwanted loops in Prolog
A further note on loops in Prolog
Termination of logic programs: the never-ending story
Redundancy elimination and loop checks for logic pro- grams
Foundations of Logic Programming
Partial evaluation in logic programming
On eliminating loops in Prolog
The XSB Programmer's Manual (Version 1.8)
The mixtus approach to automatic partial evaluation of full Prolog
Mixtus: an automatic partial evaluator for full Prolog
Verifying local strati
An extended variant of atoms loop check for positive logic programs
Linear tabulated resolution for the well founded semantics
An abstract approach to some loop detection problems
OLD resolution with tabulation
the power of logic
Memoing for logic programs
--TR
Controlling recursive inference
OLD resolution with tabulation
Efficient loop detection in Prolog using the tortoise-and-hare technique
Foundations of logic programming; (2nd extended ed.)
Recursive query processing: the power of logic
An analysis of loop checking mechanisms for logic programs
Partial evaluation in logic programming
The Mixtus approach to automatic partial evaluation of full Prolog
Towards more efficient loop checks
Memoing for logic programs
Mixtus
Sound and complete partial deduction with unfolding based on well-founded measures
Redundancy elimination and loop checks for logic programs
Tabled evaluation with delaying for general logic programs
An extended variant of atoms loop check for positive logic programs
An abstract approach to some loop detection problems
Linear Tabulated Resolutions for the Well-Founded Semantics
--CTR
Yi-Dong Shen , Jia-Huai You , Li-Yan Yuan , Samuel S. P. Shen , Qiang Yang, A dynamic approach to characterizing termination of general logic programs, ACM Transactions on Computational Logic (TOCL), v.4 n.4, p.417-430, October
Etienne Payet , Fred Mesnard, Nontermination inference of logic programs, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.2, p.256-289, March 2006
Alexander Serebrenik , Danny De Schreye, Inference of termination conditions for numerical loops in Prolog, Theory and Practice of Logic Programming, v.4 n.5-6, p.719-751, September 2004 | logic programming;loop checking |
504580 | Partial correctness for probabilistic demonic programs. | Recent work in sequential program semantics has produced both an operational (He et al., Sci. Comput. Programming 28(2, and an axiomatic (Morgan et al., ACM Trans. Programming Languages Systems 18(3) (1996) 325-353; Seidel et al., Tech Report PRG-TR-6-96, Programming Research group, February 1996) treatment of total correctness for probabilistic demonic programs, extending Kozen's original work (J. Comput. System Sci. 22 (1981) 328-350; Kozen, Proc. 15th ACM Symp. on Theory of Computing, ACM, New York, 1983) by adding demonic nondeterminism. For practical applications (e.g. combining loop invariants with termination constraints) it is important to retain the traditional distinction between partial and total correctness. Jones (Monograph ECS-LFCS-90-105, Ph.D. Thesis, Edinburgh University, Edinburgh, UK, 1990) defines probabilistic partial correctness for probabilistic, but again not demonic programs. In this paper we combine all the above, giving an operational and axiomatic framework for both partial and total correctness of probabilistic and demonic sequential programs; among other things, that provides the theory to support our earlier---and practical---publication on probabilistic demonic loops (Morgan, in: Jifeng et al. (Eds.), Proc. BCS-FACS Seventh Refinement Workshop, Workshops in Computing, Springer, Berlin, 1996. Copyright 2001 Elsevier Science B.V. | Introduction
Deterministic computation over a state space S can be modelled as functions
of type S ! S, from initial to final states. A 'powerdomain' construction
extends that to nondeterminism, and although the traditional powerdomains
- Smyth, Hoare and Plotkin - differ in their treatment of non-termination,
they all agree that nondeterminism is 'demonic', resolved in some arbitrary
way.
The probabilistic powerdomain [7, 6] instead resolves nondeterminism according
to some specified distribution over final states: demonic choice is
removed, and replaced by probabilistic choice.
He et al. [4] do not remove demonic choice; rather they model demonic
and probabilistic nondeterminism jointly by combining a special case of the
probabilistic powerdomain, for imperative programs [9], with the Smyth (de-
monic) construction. Morgan et al. [13] then complement that with a programming
logic of 'greatest pre-expectations' (extending Kozen's work [10]),
resulting overall in a treatment of total correctness for probabilistic and demonic
sequential programs.
In this paper we extend the constructions of He and Morgan to partial
correctness also. One important application for both forms of correctness
is the justification of invariant/variant principles for probabilistic demonic
loops: although published [11], those principles are based on (only) postulated
connections between wp and wlp for probabilistic programs - here we
provide the theory for their proof.
Another application is in the abstraction from probabilistic to demonic
choice; we discuss both issues in the conclusion.
To model partial (Hoare-style) and total (Smyth-style) correctness within
the same framework, we choose the Egli-Milner construction for demonic
choice (rather than the Smyth); and its being based on distributions rather
than simple states introduces some novel considerations (in particular the
need for linear interpolation between distributions). For the logic we are
then able to formulate both greatest- and greatest liberal pre-expectations,
reflecting the same total/partial distinction as in standard programming logic
[2].
In Sec. 2 we construct the probabilistic Plotkin-style powerdomain, the
'convex' powerdomain (which is a subset of the general representation in
Abramsky and Jung [1]), and we show how to extract partial and total information
from it; in Sec. 3 we link that to greatest-precondition-based probabilistic
programming logic; and in Sec. 4 and Sec. 5 we specialise to both a
liberal and 'ordinary' logic for a sequential programming language with both
probabilistic and demonic nondeterminism.
An extensive discussion of examples, and the general treatment of loops,
is given elsewhere [11].
Throughout we use infix dot ':' for function application, associating to
the left so that f:x:y means (f(x))(y); and we write ': =' for 'is defined to
be equal to'. Quantifications (including set comprehensions) are written in
the order quantifier (or 'f' for sets), bound variable with optional type, range
and finally term - thus for example
is the set of the first ten squares.
2 A convex powerdomain of distributions
In program semantics, powerdomains are used to study nondeterminism, a
phenomenon arising when a program outputs an arbitrary result (drawn from
a set of possible results) rather than a single, determined function of its input.
Here we consider powerdomains over a domain of probability distributions
rather than of 'definite' single states.
There are several ways of ordering sets of distributions (each resulting
in a different powerdomain): the choice depends on criteria which can
be explained in terms of the desired treatment of programs' possible non-terminating
behaviour. The Smyth order 1 (Def. B2) treats non-termination
as the worst behaviour and thus the Smyth powerdomain models total cor-
rectness. Similarly the Hoare order (Def. B3) models partial correctness: non-termination
is treated as the best outcome in that order. The Plotkin power-
domain uses the Egli-Milner order (Def. B4) which combines both views, and
is useful when both partial and total correctness are to be modelled within
a single framework.
In general the Plotkin powerdomain is not decomposable into the Smyth
and Hoare powerdomains, but in some special cases it is: Abramsky and Jung
[1] show that one such case is when the underlying domain is !-continuous
(Def. B9). In this section we show how Abramsky and Jung's results apply
to an appropriately-defined powerdomain of distributions, thus providing a
single semantic space for both partial and total correctness of programs exhibiting
probabilistic and demonic nondeterminism.
1 For this and other facts and definitions from domain theory, we follow the conventions
set out in [1]. We summarise the details for this paper (often specialising them to our
particular application) in Appendix B.
We write S for the state space, and assume it is finite. The space of
probability distributions over S is defined as follows.
Definition 2.1 For state space S, the space of distributions 2 (S; v) over S
is defined
and for F; F 0 in S we define
show first that (S; v) is an !-continuous complete partial order.
Lemma 2.2 For S a finite state space, its distributions (S; v) form an !-
continuous complete partial order.
Proof: The completeness of (S; v) is trivial, given the completeness of
the interval [0; 1] under - over the reals.
To show that S is !-continuous we only need exhibit a countable basis
(Def. B8). Since S is finite, we use the set of distributions contained in
since any real is the least upper bound of the rationals way-below it (Def. B7).We now define a Plotkin-style powerdomain over S. For subset A of S
we write "A for its up-closure and #A for its down-closure (Def. B1). We
say that A is up-closed (Smyth-closed) if "A = A, that it is down-closed
(Hoare-closed) if A and that it is Egli-Milner-closed 3 if
A further closure condition is related to continuity.
These special distributions are more precisely called discrete sub-probability measures
[9]; they do not necessarily sum to 1, and the deficit gives the probability of nontermination.
The 'everywhere zero' distribution for example, that assigns zero probability to all states,
models nowhere-terminating behaviour. (An alternative though less convenient treatment
would assign probability 1 to some special state ?.)
3 This is often called convex closed ; but we will need that term for another purpose.
Definition 2.3 For subset A of S we define its limit closure lc:A to be the
smallest set containing A itself, together with tA 0 and uA 00 for all up-directed
(Def. B5) subsets A 0 and down-directed (Def. B6) subsets A 00 of A. 4 We say
that A is limit-closed if
Before defining our powerdomain we must introduce one further closure
condition, specific to distributions. For distributions F; F 0 in S and p in
[0; 1] we can form F p \Phi F 0 , the weighted average, defined pointwise over S
as p \Theta F + (1\Gammap) \Theta F 0 (with usual scalar multiplication and addition). For
sets of distributions we define p-averaging as follows.
Definition 2.4 For p in [0; 1] and subsets A; A 0 of S we define
We say that A is convex if A p \Phi
We now can define our powerdomain over S; it is a subset of the Plotkin
powerdomain.
Definition 2.5 The convex powerdomain (CS; vEM ) over the space of distributions
S comprises those subsets of S that are non-empty, Egli-Milner
closed, limit closed and convex. Its order vEM is the usual Egli-Milner order
(Def. B4). 2
The convex powerdomain is a subset of the Plotkin powerdomain because
it includes only the convex subsets of the latter. Our aim for this section is to
show that CS is itself limit complete, and that from its limits the Smyth and
Hoare limits can be extracted - for that is what makes it suitable for our
application of it to probabilistic program semantics. Thus we show that the
least upper bound of an Egli-Milner-directed subset of CS lies within
CS, and that it is a combination of the Hoare least upper bound (tH ) and
the Smyth least upper bound (t S ).
The next lemma is the specialisation of a general decomposition result of
Plotkin powerdomains to (S; v). We write Lens(S) (Def. B10) for the set of
non-empty, Egli-Milner closed, limit closed subsets of S. 5
4 Note that both tA and uA exist for all subsets A of S in particular, directed or not.
5 Thus CS comprises just the convex lenses of S.
Lemma 2.6 For any Egli-Milner-directed subset A of Lens(S) the limit
tEMA exists, and satisfies
(Insisting on limit closure after tH can be seen here as a continuity 6 condi-
tion.)
Proof: The decomposition will be a consequence of the isomorphim between
the abstract Plotkin powerdomain (Def. B12) and the space of lenses of
an !-continuous domain with the topological Egli-Milner ordering (Def. B13).
The isomorphism is given by Abramsky and Jung [1] and is reproduced here
in Thm. B.1. The decomposition (1) holds for abstract Plotkin powerdomains
in general [8].
To establish the isomorphism, note first that the closure conditions on
Lens(S) imply that vEM between lenses reduces to the topological Egli-
Milner ordering; then Lem. 2.2 ensures the conditions of Thm. B.1, namely
that (S; v) is an !-continuous complete partial order. 2
Since CS is a subset of Lens(S), we have in the following corollary our
closure under limits.
Corollary 2.7 For any Egli-Milner-directed subset A of CS the limit tEM
exists in CS, and satisfies
Proof: Given Lem. 2.6, we need only show that t S A " lc:(t H
if all the elements of A are, and that follows from these elementary facts: up-
closing preserves convexity (v-monotonicity of p \Phi); the intersection of convex
sets is convex; down-closing preserves convexity (similar to up-closing); the
union of a '-directed set of convex sets is convex; and limit-closing preserves
convexity (v-continuity of p \Phi). 2
6 Consider the Egli-Milner chain on sets of real intervals in [0; 1],
which has limit f1g (the limit point of the underlying series). The union of the down sets
is the half-closed interval [0; 1) however, and the intersection of the up sets is f1g. Failing
to limit-close the Hoare limit would produce an empty result.
Cor. 2.7 gives us our main result, that the Egli-Milner limit determines
the Smyth limit (in the Smyth ordering) and the Hoare limit (in the Hoare
ordering). We write respectively (Def. B14) for Smyth equivalence
and Hoare equivalence between elements of CS: our theorem below shows
in addition that the limits are indistinguishable relative to the appropriate
equivalences.
Theorem 2.8 For any vEM -directed subset A of CS, the following equivalences
hold:
Proof: This too is a property of abstract Plotkin powerdomains [8], and
so follows from the isomorphism (Thm. B.1) used in the proof of Thm. 2.7.This section has defined the convex powerdomain, whose use for modelling
probabilistic imperative programs now follows from the constructions
for the Smyth-style domain [4]: for example sequential composition is a generalised
functional composition; nondeterministic choice is union (then convex
closure); and probabilistic choice is weighted average as defined above. In
Sec. 4 we give further details.
For recursion one takes limits of chains, and here is the significance of
Thm. 2.8: we must be sure that taking the limit in the convex domain agrees
with the more specialised limit in the Smyth domain and with the more
specialised limit in the Hoare domain - for that is what allows us to use the
more general convex domain for either. It is known that the equivalence holds
for standard (nonprobabilistic) domains 7 ; Thm. 2.8 confirms the preservation
of the property when probability is included.
In the next section we link the convex powerdomain with its partial/total
probabilistic programming logic.
7 In fact the same general argument of this section establishes that, since the standard
approach takes a flat domain S for the state space, trivially !-continuous if S is finite, or
even countable.
3 Probabilistic programs and logic
In this section we use the convex powerdomain CS of Sec. 2 to construct a
model for both total and partial correctness of probabilistic demonic programs. 8
We begin with a review of methods that treat the two aspects separately.
Over standard (non-probabilistic) demonic programs, a popular model for
total correctness is S ! SS? , where S? is the flat domain extending state
space S with ? for non-termination, and S forms the Smyth powerdomain
over that; Dijkstra's weakest `ordinary' preconditions PS ! PS [2] support a
programming logic suitable for total correctness. For partial correctness one
can use S ! HS? (Hoare) for the model and weakest 'liberal' preconditions
for the logic. Finally, although partial and total correctness are available
simultaneously via S ! KS? (Plotkin), for r in S ! KS? and postcondition
Q in PS still it is more convenient to define separately
weakest precondition
weakest liberal precondition (2)
to give the total (wp) and partial (wlp) programming logics. Note that the
definitions (2) work together only over KS (the intersection of HS? and SS? )
- wp does not work over HS? and wlp does not work over SS? . (Nelson
[14] gives a nice treatment of the issues.)
For probabilistic programs, He et al. [4] propose S ! C S S for total
correctness, where C S S is like CS of the previous section, but based on
the Smyth order. Morgan et al. [13] provide a probabilistic 'greatest pre-
expectation' logic for that, where expectations are non-negative real-valued
functions over the state space (extending Kozen's treatment [10] for non-
programs).
To access total and partial correctness simultaneously, by analogy with
the standard case we simply replace He's Smyth-based C S by our Egli-Milner-
based C. Yet rather than define two forms of logic (as at (2) above) we
generalise as do Morgan and McIver [12] by allowing expectations to take
negative values: roughly speaking, for total correctness one uses non-negative
post-expectations and for partial correctness one uses non-positive.
Kozen [9] has modelled total correctness of probabilistic (but not demonic) programs,
and Jones [6] extended that to partial correctness. He [4] models total (but not partial)
correctness of probabilistic demonic programs.
We begin the details with the construction of the probabilistic, demonic
model of programs.
Definition 3.1 For (finite) state space S the space of probabilistic, demonic
programs (MS;vEM ) is given by
with the order induced pointwise from CS, so that for in MS we have
We occasionally use v S and vH over MS, analogously lifted from CS. 2
Thus our programs take initial states to sets of final distributions: the plurality
of the sets represents demonic nondeterminism; the distributions they
contain each represent probabilistic nondeterminism.
The next task is to investigate the dual representation of programs as
expectation transformers. We extend the expectations found in Morgan et
al. [13, 11], where the topic was total correctness (the Smyth order and
up-closed sets) and expectations were of type S ! [0; 1], by using [\Gamma1; 1]
instead: we write ES for S ! [\Gamma1; 1], and use lower-case Greek letters for
typical elements.
Expectation transformers T S are thus functions of type ES ! ES. We
R
F ff for the expected value of ff in ES averaged over distribution F
in S. As a special case of expectations, we interpret predicates as f0; 1g-
valued functions of the state space, and for predicate A holding at state s
we convenient. For a scalar c we write
c for the constant expectation evaluating to c over all of S. With those
conventions the predicates true and false correspond to the expectations 1
and 0 respectively. Finally, for relations between expectations we write
everywhere no more than
everywhere equal to
so that we generalise respectively implication, equivalence and reverse implication
on predicates.
Our logic is based on the 'extended greatest pre-expectation transformer',
defined as follows.
Definition 3.2 Let r be a program in MS, taking initial states in S to sets
of final distributions over S. Then the greatest pre-expectation at state s of
program r, with respect to post-expectation ff in ES, is defined: 9
Z
F
ff) :The effect of the definition is to consider all possible post-distributions F in
r:s, and then demonically to choose the one that gives the least (the 'worst')
expectation for ff: thus nondeterminism is demonic in that it minimises
the pre-expectation at each initial state, and Def. 3.2 is then the greatest
expectation everywhere no more than those pointwise minima.
For standard programs, if executing a program r from a state s is certain
to establish a postcondition A then that state is contained in the associated
weakest precondition; with our definition we would have ewp:r:A:s = 1. For
probabilistic programs, if the standard postcondition A is established with
only a probability at least p say, then the greatest pre-expectation on executing
r from s initially is at least p and we have Thus as a
special case, when A is a predicate we can interpret ewp:r:A:s as the greatest
assured probability that A holds after execution of r from s.
Now we discover the various refinement orders over T S that correspond
via ewp with orders over MS. First, we generalise the observation from standard
programming (eg. [14]) that the Smyth order on programs corresponds
to the implication order lifted to predicate transformers and that the Hoare
order similarly corresponds to (lifted) reverse implication. We use PS (typi-
cal element -) to denote the set of non-negative valued expectations and NS
(typical element -) for the non-positive valued expectations. They are both
subsets of ES.
Lemma 3.3 For
r
9 This reduces to Kozen's definition [9] for deterministic programs. There programs
are functions from initial state to distributions, so that the minimisation ranges over a
singleton set and is thus superfluous.
10 The apparent confusion between expectations and probabilities is deliberate and harm-
less: the probability of an event A over a distribution is equal to the expected value of
(the characteristic function of) A over that same distribution.
Proof: For in MS, any s in S and - in PS we reason as follows
r
implies "(r:s) ' ''(r 0 :s) definition v S
implies
R
R
F -)
R
R
For the deferred justification we appeal to the monotonicity of the arithmetic
over non-negative arguments without subtraction: r:s differs from
"(r:s) only by the addition of 'larger elements' according to Def. 2.1, and
so the minimum selection made in Def. 3.2 cannot be increased by making
the selection over the up-closure instead.
The result now follows by generalising on s, and a similar argument justifies
the second statement (but note the reversal W). 2
Lem. 3.3 is the key to defining the expectation-transformer equivalents
to the Smyth, Hoare and Egli-Milner orders where, as usual, the Egli-Milner
order is the intersection of the Smyth and Hoare orders.
Definition 3.4 For t; t 0 in T S we define
:That the Egli-Milner order between programs is preserved under ewp now
directly.
Corollary 3.5 For
Proof: Lem. 3.3 and Def. 3.4. 2
The corollary shows only that ewp is an order-preserving mapping between
(MS;vEM ) and (T S; vEM ). The next result of this section is to show
that it is also an injection, and therefore that programs can be modelled
equivalently either as relations or as expectation transformers. To establish
that ewp is an embedding we prove the converse to Cor. 3.5.
Lemma 3.6 For
Proof: Suppose for a contradiction that ewp:r vEM ewp:r 0 but r 6v EM r 0 ,
for some in MS. Without loss of generality assume r 6v S r 0 , so that for
some distribution F and state s we have both
F 62 "(r:s) (3)
From (3), with the aid of the separating hyperplane lemma A.1, we have for
some expectation - in PS that
Z
F
Z
and thus that
R
ewp:r:-:s. From (4) however we have ewp:r 0 :-:s - R
directly, giving together
Z
F
and contradicting the hypothesis (at the state s). 2
Since in the proof of Lem. 3.6 we have actually proved the converse to
Lem. 3.3, we can now state the correspondence between the relational model
and program logic for all three orders.
Theorem 3.7 The following equivalences hold for all
r
r vEM r 0 iff ewp:r vEM ewp:r 0 :We have shown that ewp injects MS into T S - but there are many vEM -
monotonic expectation transformers that are not ewp-images of MS. The
final result of this section identifies the exact sub-space of expectation transformers
that form the image of MS through ewp. We identify 'healthiness
in the style of Dijkstra [2] - for standard programs
11 The relevance of the healthiness conditions applied to the programming logic are
treated in Sec. 5.
- and of Morgan et al. [13] - for probabilistic programs - that identify
the (images of) programs of MS within it. The importance of that result
is that theorems proved within T S about healthy expectation transformers
correspond to theorems about programs in MS.
The first healthiness condition is a slight generalisation of the sublinearity
of Morgan [13]. To state it we define, for expectations ff; fi in ES and real
non-negative scalar c, the expectations ff+fi and cff, where (as for p-averaging
of distributions) we mean a pointwise lifting of standard addition and scalar
multiplication.
Definition 3.8 An expectation transformer, t: T S is sublinear iff for all ff; fi
in ES, and a; b; c non-negative reals,
note first that sublinearity is satisfied by all images of MS under
ewp.
Lemma 3.9 Any expectation transformer ewp:r, for r in MS, is sublinear.
Proof: Def. 3.2 and properties of arithmetic. (Morgan [13] gives a more
detailed proof.) 2
For total correctness (for the Smyth C S ), sublinearity tells the whole story
[13, Thm. 8.7]; in our more general CS however, there are sublinear elements
of MS that are not ewp-images: take S to be the two-element state space
fx; yg, and consider the result set
It is convex, but not Egli-Milner closed 12 ; its associated expectation transformer
formed by ewp is sublinear, but it is not the ewp-image of any element
of MS.
The characterisation of Egli-Milner closure is captured by a second healthiness
condition - 'partial linearity' - which states that t:ff depends only
on the pre-expectations of t applied to expectations in PS [ NS.
In fact its closure is fF which it is indistinguishable using
ewp for any ff in PS [ NS.
Definition 3.10 An expectation transformer, t in T S is said to be partially
linear if for all states s in S, and all expectations ff in ES, there are expectations
- in PS and - in NS such that
:Note that the implicit existential quantification in Def. 3.10 means there may
be many decompositions of ff as a sum -. 13
We complete the correspondence between healthy expectation transformers
and MS with the next theorem, which we state only. The proof is omitted
as it is overly technical and not necessary for the rest of the paper.
Theorem 3.11 An expectation transformer t in T S is both sublinear and
partially linear if and only if there is r in MS such that
4 Probabilistic Guarded Commands
In this section we illustrate the constructions of the previous two sections by
giving equivalent relational (MS) and expectation transformer (T S) semantics
for a simple sequential programming language that extends Dijkstra's
guarded commands [2] with a binary probabilistic choice operator p \Phi. 14
The relational semantics of Fig. 1 writes s for the point distribution 15
13 A perhaps more alluring healthiness condition would be that t:ff is determined by its
positive part (ff t 0) and its negative part (ff u 0); but
does not hold for general probabilistic programs, although it does in the restricted set of
standard programs and f0; 1; \Gamma1g-valued expectations [12].
14 He et al. [4] and Morgan et al. [13] give similar presentations of semantics, but for
total rather than partial correctness, and more detailed motivation can be found there.
Note that demonic choice is retained, not replaced.
15 For state s in S we define the point distribution s: S ! [0; 1] to be
As a special case, we write ? for (-s:0), the distribution that evaluates to 0 everywhere
on S.
R
MS !MS
For p \Phi, u and sequential composition, the Egli-Milner closure should be taken of
the right-hand side.
Figure
1: Probabilistic relational semantics
concentrated on a single state s, and u for the demonic combination of pro-
grams. The symbol p \Phi is used both for probabilistic combination of programs
and for the p\Gammaaveraging explained in Sec. 2 between sets of distributions.
Because they contain no demonic nondeterminism, the three primitive
commands all yield singleton result sets. Probabilistic choice p \Phi returns
the weighted average of the result sets of its operands, a singleton set if
its arguments are. 16 The result set of the demonic nondeterministic choice
between two programs is the convex closure of the union of the results of the
operands - the closure models operationally that a demon could resolve the
choice by flipping a coin of arbitrary bias.
In sequential composition, both r and r 0 can be considered as sets of
purely probabilistic programs 'compatible' with them: such programs, say
are thus of type S ! S, and we write r 3 f for compatibility, meaning
(8s: S \Delta f:s 2 r:s). Since the effect of two purely probabilistic programs in
sequence is just a single distribution over s 00 say, defined
Z
f:s
ds 0 (5)
by 'averaging' the effects of f 0 over the intermediate distribution produced
by f:s, for the general case we vary (5) over all f; f 0 to give
In fact p can depend on the state; but for simplicity we assume here that it's constant.
F is the v T -monotonic
function such that F
Figure
2: Probabilistic ewp semantics, where oe is in PS [NS and s is in S.
That simplifies to the definition in Fig. 1 if we regard f 0 in
R
parametrised
by s 00 , abbreviating (-s 0
R
ds 0 ).
In the ewp-semantics of Fig. 2 the post-expectation oe varies only over
PS [NS; 17 we use u also for the pointwise minimum between expectations.
Here probabilistic choice p \Phi returns the weighted average of the results of
its expectation operands; and u takes the pointwise minimum, reflecting the
demon's striving for the worst (least) result. Sequential composition becomes
simple functional composition (as for standard predicate transformers [2]).
In both cases recursion is dealt with by least fixed points in the appropriate
orders: Thm. 3.7 shows that the two orders correspond.
Our concern in the next section will be to recover total and partial semantics
separately from Fig. 2: we will show how to define probabilistic
wp and wlp that generalise Dijkstra's operational interpretations and that
satisfy probabilistic versions of the standard laws for combining partial and
total correctness.
5 Partial and total correctness
Athough ewp acts over all of ES, we can extract two more-specialised logics
from it: each acts conventionally over just the non-negative expectations PS.
17 Because an Egli-Milner-closed set is determined by its Smyth- and Hoare closures,
that is sufficient.
For a total correctness logic we merely restrict to PS directly, and use
the order V.
Definition 5.1 Let r be a program in MS; then the greatest pre-expectation
of program r with respect to post-expectation - in PS, associating 0 with
non-termination, is defined:
easily from sublinearity: if - is in PS then
so that wp:r:- is in PS also. Moreover Lem. 3.3 shows that this wp semantics
of programs corresponds to a relational model with the Smyth ordering [13]
- non-termination is the worst outcome in both semantics.
For partial correctness we define a probabilistic wlp, again we restrict to
the subspace (PS; V).
Definition 5.2 r be a program in MS; then the greatest liberal pre-
expectation of program r with respect to the post-expectation - in PS,
associating 1 with non-termination, is
easily from sublinearity of ewp:r that for - in NS,
3.2 we can readily show Def. 5.2 to be identical to
Z
F
which is a demonic generalisation of the probabilistic wlp defined only for nondemonic
programs by Jones [6]. Morgan [12] shows that (6) also generalises standard wlp [2].
and thus since lies in NS, so does ewp:r:(- \Gamma 1) from which we deduce
that wlp:r is a well-defined expectation transformer in PS ! PS. Also
Lem. 3.3 implies that the wlp semantics corresponds to a relational model
with the Hoare ordering - accordingly non-termination is the best outcome.
For example, let S be some finite portion of N , and for natural number N
for the assignment taking every initial state to the final state
N . The program
illustrates the difference between wp and wlp. Writing for the expectation
that evaluates to 1 when s is N and to 0 otherwise, we have
indicating that the greatest expectation of termination in state 0 is pq, for
all initial states.
The greatest expectation of either termination at 0 or nontermination is
found with wlp; we have
Thus the wp observation gives the greatest guaranteed probability 19 of termination
at 0 - and non-termination guarantees nothing. The wlp observation
on the other hand returns the probability that either 0 is reached or the program
fails to terminate - the usual interpretation for partial correctness.
19 This follows from the usual interpretation of the expected value of a f0; 1g-valued
random variable A with respect to a probability distribution: it is the chance that the
random outcome will establish the event "A evaluates to 1".
F is the v T -monotonic
function such that F
Figure
3: Probabilistic wlp semantics, where - is in PS and s is in S. The
greatest fixed point - is used for recursion.
Similarly the wlp semantics of a looping program is calculated as the
greatest fixed point 20 of a monotonic function over PS (rather than the
least, as for wp). It is easily checked that specialising Fig. 2 to wlp produces
only the changes shown in Fig. 3. (Note that PS [NS is closed under \Sigma1.)
An alternative view of wlp and wp often put forward is to regard them
as functions on programs which satisfy certain properties - for example
Nelson [14] defines (standard) wp and wlp axiomatically by enforcing both
the coupling law (at (7) below) and conjunctivity properties (both wp and
wlp induce conjunctive predicate transformers). Thus the efficacy of the
probabilistic definitions lies in the generalisation of those properties: Lem. 5.3
and Thm. 5.4 (also below) generalise respectively conjunctivity and coupling
(7), and as such they form the main results of this section.
We state the coupling here for standard programs [5]:
where r is a program and A a predicate and we are using (though only here)
the original meanings for wp and wlp [2], with V for 'implies at all states'.
Law (7) implies that wp:r and wlp:r agree on initial states from which
termination is guaranteed, and thus it underlies the practical treatment of
looping programs - to prove total correctness of an iteration the work is
divided between ensuring partial correctness (with a loop invariant), and
an independent termination argument (with a variant). The probabilistic
coupling Thm. 5.4 will allow a similar treatment for probabilistic looping
programs.
This follows from Def. 5.2 since the least fixed point of a v T -monotonic function
becomes specialised first to W on NS ! NS which order is then shifted to PS ! PS by
applying "1+".
We consider first the appropriate generalisation of conjunctivity: conjunction
of predicates is replaced by probabilistic conjunction [17] defined for
non-negative expectations - 0
where t is pointwise maximum between expectations.
Probabilistic conjunction reduces to ordinary conjunction when specialised
to predicates. 21 Its importance in probabilistic reasoning is that it subdis-
tributes through both wp and wlp images of programs - another consequence
of sublinearity. 22
Lemma 5.3 For r in MS and - 0 in PS,
Proof: Sublinearity (with monotonicity of ewp:r, and
Defs. 5.1, 5.2. 2
Next we deal with coupling - Thm. 5.4, generalising (7), is the main
result of this section.
Theorem 5.4 For r in MS and - 0 in PS,
Proof:
take only the extreme values 0 and 1, then
22 One might have guessed that u is the appropriate generalisation of - but u does
not (even sub-) distribute [6, 17].
Having established wp:r:(- 0
taking t0 on both sides: since on the left it has no effect, we achieve our
result. 2
As a corollary we recover the standard rule (generalised) for combining
partial and total correctness.
Corollary 5.5 For r in MS and - in PS,
Proof: Thm. 5.4 and that - j - & 1 for - in PS. 2
As a special case note that the wlp result implies the wp result at those
states from which termination occurs with probability 1 - where wp:r:1
because (&1) is the identity.
6 Conclusion
The technical contribution of this paper is to extend the Plotkin construc-
tion, with its Egli-Milner order, to act over probability distributions rather
than points, then showing that the decomposition [15] into respectively the
convex Hoare (partial correcness) and convex Smyth (total correctness) pow-
erdomains continues to hold - even under taking limits.
A key feature is the imposition of convexity (linear interpolation) between
distributions.
Jones [6] defines a partial correctness logic based on expectations, but
only for non-demonic programs, and she does not discuss the healthiness
conditions on which the applicability of such logics (as calculational tools)
depends. It was the realisation [13] that adding nondeterminism to Kozen's
model corresponds to a weakening of the additive property of his logic to
sublinearity that makes proofs such as in Thm. 5.4 and those in [11] reduce
to simple arithmetic arguments. The use of general expectations (thus superseding
purely non-negative expectations [13]) leads to an even simpler
presentation of sublinearity - the more useful of the two healthiness conditions
described here.
There are two immediate applications. The first is the discovery of proof
rules for loops in which partial and total correctness are separated: with wp
and wlp together it can be shown [11] that
I preserved by loop body
is sufficient for I V wp:(do G ! body od):(I u :G) provided I V T , where
T gives for each initial state the probability of the loop's termination.
The second application is abstraction. For some programs only termination
is probabilistic, and (partial) correctness can be established informally
by considering all probabilistic choices to be demonic: algorithms falling in
to that category typically use randomization as a method for searching a
large space of potential witnesses, and examples include finding perfect hash
functions and finding irreducible polynomials (Mehlhorn, Rabin [3]).
Suppose the loop do G ! body od contains probabilistic choices in body ,
and denote by body the result of replacing them all with demonic choice.
If (the fairness of) probability was essential for the loop's termination, the
resulting standard loop
do
though probability-free, would not terminate and so a wp analysis of it would
be useless.
With a theory of partial correctness however we can reason as follows:
for a standard loop invariant I,
G u I V wlp:body:I established by standard reasoning say
hence I V wlp:loop:(I u :G) standard loop rule
hence I V wlp:loop:(I u
where loop and loop are the two loops containing body and body respectively.
Thus from standard reasoning about body we reach probabilistic conlusions
about loop.
The last step relies only on loop v loop, which by wlp-monotonicity of
do od follows from body v body ; that in turn is guaranteed by (u) v ( p \Phi)
for all p. Note that probabilistic wlp is essential - the last step would not
hold if we had written wp:body .
One would conclude the argument by determining T j wp:loop:1 sepa-
rately, then having
I
from Thm. 5.4.
A Standard results from linear programming
Lemma A.1 The separating hyperplane lemma. Let C be a convex and
(limit-) closed subset of R N , and F a point in R N that does not lie in C.
Then there is a separating hyperplane ff with F on one side of it and and all
of C on the other.
Proof: See for example Trustrum [18]. 2
Our use in Lem. 3.6 is based on interpreting distribution F as a point
in R N , and an expectation ff as the collective normal of a family of parallel
hyperplanes. The integral
R
F ff then gives the constant term for the
ff-hyperplane that passes through the point F .
More generally, for a convex region C in R N , the minimum
Z
F
gives the constant term for the ff-hyperplane that touches C, with its normal
pointing into C.
Thus when specialised for the applications in this paper, the lemma implies
that if F 62 C for some closed convex set of distributions C, then there
is an expectation ff with
Z
F
Z
Moreover if C is up-closed (down-closed) then the range of ff above specialises
to PS (NS).
Facts and definitions from domain theory
We summarise here Abramsky and Jung's presentation [1] of facts from domain
theory, giving page numbers where appropriate. Assume (D; v) is a
complete partial order - i.e. that every directed (defined below) set has a
least upper bound.
1. up-, down-closure For subset A of D we define its up-closure "A to
be the set
Similarly we define its down-closure #A to be the set
2. Smyth order (p.97) The Smyth order v S on PD, for subsets A; A 0 of
D, is given by
A
3. Hoare order (p.97) The Hoare order vH on PD, for subsets A; A 0 of
D, is given by
A vH A 0 iff A ' #A
4. Egli-Milner order The Egli-Milner order vEM between subsets combines
the Smyth (Def. B2) and Hoare (Def. B3) order. For subsets
of D we define
A vEM A 0 iff "A ' A 0 and A ' #A
5. up-directed (Def. 2.1.8, p.10) A subset A of D is up-directed (or
simply directed) iff for any u; v in A there is a w also in A such that
for the least upper bound of A (if it
exists).
6. down-directed A subset A of D is down-directed iff for any u; v in A
there is a w also in A such that w v u and w v v. We write uA for
the greatest lower bound of A (if it exists).
7. way-below (Def. 2.2.1, p.15) The way-below relation - on D is defined
as follows: for u; v in D we say u up-directed subsets A
of D with v v tA there is some w in A with u v w. We also say u
approximates v iff u - v.
8. basis (Def. 2.2.3, p.16) A basis for D is a subset B such that every
element of D is the t-limit of the elements way below it in B.
9. !-continuity (Def. 2.2.6, p.17) D is !-continuous if it has a countable
basis.
10. lens (Def. 6.2.15, p.100) We define the lenses of D, Lens(D), to be the
set of non-empty, Egli-Milner closed, limit-closed subsets of D.
11. ideal (p.10) A subset I is an ideal if it is directed and down-closed.
12. Plotkin powerdomain (Theorem 6.2.3, p.95) The Plotkin powerdo-
main of D with basis B is given by the ideal (Def. B11) completion
of
where F(B) denotes the set of finite, nonempty subsets of B.
13. topological Egli-Milner order (Def. 6.2.16, p.101) Define the topological
Egli-Milner order v TEM on the set of Egli-Milner closed subsets,
Lens(D), of D as follows:
14. Smyth-, Hoare-equivalence Two subsets A; A 0 of D are Smyth equivalent
are Hoare-equivalent if
Theorem B.1 (Theorem 6.2.19, p.101) If D is an !-continuous complete
partial order, its Plotkin powerdomain is isomorphic to (Lens(D); v TEM ). 2
--R
Domain theory.
A Discipline of Programming.
Probabilistic models for the guarded command language.
Probabilistic nondeterminism.
A probabilistic powerdomain of evaluations.
Private communication.
Semantics of probabilistic programs.
A probabilistic PDL.
Proof rules for probabilistic loops.
Unifying wp and wlp.
Probabilistic predicate transformers.
A generalization of Dijkstra's calculus.
"Pisa Notes"
An introduction to probabilistic predicate transformers.
Linear Programming.
--TR
Parallel program design: a foundation
A generalization of Dijkstra''s calculus
Probabilistic non-determinism
Building on the unity experience
On randomization in sequential and distributed algorithms
Domain theory and integration
Domain theory
Probabilistic predicate transformers
PCF extended with real numbers
Modeling and verification of randomized distributed real-time systems
Unifying <italic>wp</italic> and <italic>wlp</italic>
Probabilistic models for the guarded command language
A Discipline of Programming
A probabilistic PDL
--CTR
Joe Hurd , Annabelle McIver , Carroll Morgan, Probabilistic guarded commands mechanized in HOL, Theoretical Computer Science, v.346 n.1, p.96-112, 23 November 2005
Yuxin Deng , Rob van Glabbeek , Matthew Hennessy , Carroll Morgan , Chenyi Zhang, Remarks on Testing Probabilistic Processes, Electronic Notes in Theoretical Computer Science (ENTCS), 172, p.359-397, April, 2007 | program logic;partial correctness;probability;verification |
504618 | Loss probability calculations and asymptotic analysis for finite buffer multiplexers. | In this paper, we propose an approximation for the loss probability, PL (x), in a finite buffer system with buffer size x. Our study is motivated by the case of a high-speed network where a large number of sources are expected to be multiplexed. Hence, by appealing to Central Limit Theorem type of arguments, we model the input process as a general Gaussian process. Our result is obtained by making a simple mapping from the tail probability in an infinite buffer system to the loss probability in a finite buffer system. We also provide a strong asymptotic relationship between our approximation and the actual loss probability for a fairly large class of Gaussian input processes. We derive some interesting asymptotic properties of our approximation and illustrate its effectiveness via a detailed numerical investigation. | Introduction
loss probability is an important QoS (Quality of
Service) measure in communication networks. While
the overflow probability, or the tail of the queue length dis-
tribution, in an infinite bu#er system has been extensively
studied [1], [2], [3], [4], [5], [6], [7], there have been relatively
few studies on the loss probability in finite bu#er
systems [8], [9], [10], [11].
In this paper, we propose a simple method to estimate
the loss probability PL (x) in a finite bu#er system from
the tail of the queue length distribution (or tail probabil-
of an infinite bu#er system. We estimate
PL (x) by making a simple mapping from P{Q > x}. Hence,
we consider both a finite bu#er queueing system and an infinite
bu#er queueing system. We model both systems by
a discrete-time fluid queue consisting of a server with constant
rate c and a fluid input #n . Both queues are fed with
the same input. Let -
Qn and Qn denote the queue length
in the finite queue and in the infinite queue at time n, re-
spectively. We assume that #n is stationary and ergodic
and that the system is stable, i.e., E{#n } < c. Under this
assumption, it has been shown that Qn converges to a stationary
and ergodic process [12]. It has also been shown
that -
Qn converges to a stationary process when the system
is a GI/GI/m/x type of queue [13], [14], and when the system
is a G/M/m/x type of queue [15]. Since proving the
convergence of -
Qn is not the focus of this paper, and more-
over, practical measurements of PL (x) and P{Q > x} are
based on "time averaging" assuming ergodicity (see (1) and
(2)), we assume that both -
Qn and Qn started at
and that they are ergodic and stationary. 1 The time index
H.S.Kim and N.B.Shro# are with the School of Electrical and Computer
Engineering, Purdue University, West Lafayette, Indiana
We refer the interested reader to our technical report [17], where
we have studied the relationship between finite and infinite bu#er
queues without assuming ergodicity of -
Qn and derived similar asymp-
n is often omitted to represent the stationary distribution,
The loss probability, PL (x), for a bu#er size x is defined
as the long-term ratio of the amount of fluid lost to the
amount of fluid fed. It is expressed as
and where the second equality
is due to the ergodicity assumption. The tail probability
(or tail of the queue length distribution, also sometimes
called the overflow probability) P{Q > x} is defined as the
amount of time the fluid in the infinite bu#er system spends
above level x divided by the total time. It is expressed as:
now on, when we write "loss probability" it will only be
in the context of a finite bu#er system, and when we write
"tail probability" it will only be in the context of an infinite
bu#er system. Note that since P{Q > x} is averaged by
time, and PL (x) is averaged by the input, in general there
is no relationship between these two quantities. However,
PL (x) is often approximated as:
This approximation usually provides an upper bound
(sometimes a very poor bound) to the loss probability,
although in general this cannot be proven, and in fact
counter-examples can easily be constructed. What we have
learned from simulation studies is that the curves PL (x)
versus x and P{Q > x} versus x exhibit a similar shape
(e.g., see Fig. 1), which motivates this work. Further, it
has been shown in [16] that for M/Subexponential/1 and
GI/Regularly-varying/1 with i.i.d. interarrival times and
i.i.d. service times, P{Q > x}/PL (x) converges to a con-
stant, as x #.
Hence, it seems reasonable that if we have a good estimate
of the tail probability P{Q > x} and a way to calculate
PL (a), the loss probability for some bu#er size a, then
totic results to Equation (22) in this paper. However, this involves
mathematical technicalities that take away from the main message
in this paper, i.e., developing a simple approximation for the loss
probability.
we can calculate the loss probability PL (x) as
PL (a)
In particular, we will choose a = 0 because this allows us
to compute the loss probability (PL (0)) quite easily. This
is the basic idea that drives this paper. In addition to developing
a methodology to calculate the loss probability,
we will also show that asymptotically the loss probability
and the tail probability curves are quite similar, and if they
diverge, they do so slowly, which is an interesting result by
itself.
For our study in this paper, we focus on the case when
the aggregate tra#c can be characterized by a stationary
Gaussian process. Recently, Gaussian processes have
received significant attention as good models for the arrival
process to a high-speed multiplexer [3], [18], [19], [20],
[21], [22], [23]. There are many reasons for this. Due to
the huge link capacity of high-speed networks, hundreds
or even thousands of network applications are likely to
be served by a network multiplexer. Also, when a large
number of sources are multiplexed, characterizing the input
process with traditional Markovian models results in
computational infeasibility problems [24] that are not encountered
for Gaussian processes. Finally, recent network
tra#c studies suggest that certain types of network traffic
may exhibit self-similar or more generally asymptotic
self-similar type of long-range dependence [25], [26], and
various Gaussian processes can be used to model this type
of behavior. Hence, our motivation to study the case when
the input process #n can be characterized by a Gaussian
process.
This paper is organized as follows. In Section II, we
review the maximum variance asymptotic (MVA) results
for the infinite bu#er queue, and then demonstrate how
to obtain similar results for the loss probability. Then,
we compare our approach to an approach based on the
many-sources asymptotics. In Section III, we validate our
result with several numerical examples, including those for
self-similar/long-range dependent tra#c. In Section IV, we
find the asymptotic relationship between the loss probability
and our approximation. In Section V, we describe the
applicability of our approximation for on-line tra#c mea-
surements. We finally state the conclusions in Section VI.
II. Maximum Variance Asymptotic (MVA)
Approximation for Loss
Remember that the first component in our development
of an approximation for PL (x) is to find a good estimate
of P{Q > x}. Fortunately, this part of the problem has
already been solved in [20], [21], [27]. By developing results
based on Extreme Value Theory, it has been found
that the Maximum Variance Asymptotic (MVA) approach
(first named in [20]) provides an accurate estimate of the
tail probability. We briefly review it here. As mentioned
before, we focus on the case when the aggregate tra#c can
be characterized by a Gaussian process, hence #n , the input
process to the queue is Gaussian. Let -
The queue length Qn (or workload) at time n in the
infinite bu#er system is expressed by Lindley's equation:
We define a stochastic process Xn as
We assume that #n is stationary and ergodic and that the
system is stable, i.e., E{#n } < c. Then, it has been shown
that the distribution of Qn converges to the steady state
distribution as n # and that the supremum distribution
of Xn is the steady state queue distribution [12]:
Let C # (l) be the autocovariance function of #n . Then, the
variance of Xn can be expressed in terms of C # (l). For
each x > 0, define the normalized variance # 2
x,n of Xn as
x,n :=
where # := c- #. Let m x be the reciprocal of the maximum
of # 2
x,n for given x, i.e.,
x,n
and we define n x to be the time n at which the normalized
variance
Although the estimate
2 called the Maximum Variance Asymptotic (MVA)
approximation has been theoretically shown to be only an
asymptotic upper bound, simulation studies in di#erent papers
have shown that it is an accurate approximation even
for small values of x [27], [18], [20], [28].
Now, for some a, we need to evaluate the ratio
PL (a)/P{Q > a} given in (4). As mentioned earlier, it
is easy to find PL (a) for a = 0, hence what we need to do
is to first estimate P{Q > 0} from the MVA result. For a
given x both n x and m x in the MVA approximation cannot
generally be obtained in a simple closed form, hence
search algorithms 2 are likely to be used to evaluate them.
may not be unique especially for a small value of
x. However, when 0, we can obtain them right away
as demonstrated in the following proposition.
Proposition 1: Let n x be the value of n at which # 2
x,n
attains its maximum # 2
. (10)
Simple local search algorithms starting at #x
(2-#
are good enough
to find nx within a small number of iterations.
Proof of Proposition 1:
To prove the proposition, it su#ces to show that
sup
. (11)
. (12)
, we have (11).
Now, we show how to calculate PL (0). Since #n is assumed
Gaussian, the mean and the variance provide su#-
cient information to calculate PL (0), i.e.,
c
As long as the number of input sources
is large enough for the aggregate tra#c to be characterized
as a Gaussian process, (13) gives an accurate estimate (ex-
act for a Gaussian input) and is often called the Gaussian
approximation [29]. Note that C #
in (10). From (4), (10), and (13), we have
where
exp (c - #) 2
c
We call this above approximation the MVA approximation
for loss. The MVA approach is based on the large bu#er
asymptotics and it also applies in the context of the many-
sources asymptotics [20], [28]. We next compare this approach
with an approximation based on the many-sources
asymptotics.
The many-sources asymptotics have been widely studied
and can be found in many papers on queueing analysis
using large-deviation techniques [5], [30], [31], [32]. Most
of the papers deal with the tail distribution rather than
the loss probability. In [9], the authors developed the first
result on the loss probability based on the many-sources
asymptotics. We call this the Likhanov-Mazumdar (L-M)
approximation for loss. Since the L-M result was obtained
for a fairly general class of arrival processes and is much
stronger than typical large-deviation types of results, we
feel that it is important to compare our result with the L-M
result.
Consider N i.i.d. sources, each with input rate λ^(i)_n
at time n. It is assumed that the moment
generating function of λ^(1)_n exists, and that the input rate
λ^(1)_n is bounded. The L-M approximation has the form (16)^3,
where N is the number of sources, NC is the link capacity,
NB is the buffer size, θ̄n is the value of θ at which the
supremum defining the rate function
I_n(C, B) = sup_θ { θ(Cn + B) − log Λn(θ) }
is attained (Λn(θ) being the moment generating function of
the input over n slots), and n̄ is the value of n that
maximizes I_n(C, B) for given C and B. The approximation
(16) is theoretically justified by the result that it
becomes exact as N → ∞.
Consider the numerical complexity of (16). Suppose that
we calculate (16) for given N, C, B, and λ^(1)_n. In general,
since there are no closed-form solutions for θ̄n and n̄, we
have to find them numerically. Two iteration loops are
nested: the inner loop iterates over θ to find θ̄n for given
n, and the outer loop iterates over n to find n̄. Hence, it
can take a long time to find a solution of (16) by numerical
iteration. However, the MVA approximation requires only
a one-dimensional iteration over n to find nx, at which the
minimum in the definition of mx is attained.
There is another problem in applying the L-M approximation
for control based on on-line measurements. When
the distribution of a source is not known beforehand, in the
L-M approach the moment generating function of a source
must be evaluated over the two-dimensional argument (θ,
n), whereas only the first two moments are evaluated over
the one argument, n, in the MVA approach (see Section V).
Note that one could avoid the above problems by making
a Gaussian approximation on the aggregate source
first, and then using the L-M approximation given by
(16). Specifically, if we assume that the input process is
Gaussian, we have a closed-form solution for θ̄n: with
m(n) := E{ Σ_{k=1}^{n} λ^(1)_k } and
v(n) := Var{ Σ_{k=1}^{n} λ^(1)_k }, we have
θ̄n = ( Cn + B − m(n) ) / v(n). (17)
Hence, for given C and B, both I_n(C, B) and σ²_{NB,n} (the normalized
variance of Xn) are expressed in terms of n, m(n),
and v(n), and we can avoid the two-dimensional evaluation of
the moment generating function.
^3 This expression is just a rewriting of equation (2.6) in [9].
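The following restatement is our own gloss, not an equation from [9]: assuming stationary Gaussian sources, so that the per-source log moment generating function over n slots is θm(n) + θ²v(n)/2, the rate function evaluates in closed form:

```latex
I_n(C,B) \;=\; \sup_{\theta}\Big\{\theta(Cn+B)-\theta\,m(n)-\tfrac{1}{2}\theta^{2}v(n)\Big\}
        \;=\; \frac{\big(Cn+B-m(n)\big)^{2}}{2\,v(n)},
\qquad
\bar\theta_n \;=\; \frac{Cn+B-m(n)}{v(n)},
\\[4pt]
N\,I_n(C,B) \;=\; \frac{\big(NCn+NB-N\,m(n)\big)^{2}}{2\,N\,v(n)} \;=\; \frac{1}{2\,\sigma^{2}_{NB,n}}.
```

Under this assumption the search over n in (16) and the search for the dominant time scale in the MVA approach are the same one-dimensional problem.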
The only problem is that the theoretical result, namely that
the L-M approximation in (16) becomes exact as the
number of sources N becomes large, has not been proven for unbounded
(e.g. Gaussian) inputs. Still, since making this
approximation reduces the complexity of the search space,
it is instructive to also investigate the performance
of such an approximation. In Section III, we numerically
investigate our MVA approximation for loss, the L-M
approximation, and some other approximations developed
in the literature.
III. Numerical Validation of the MVA
Approximation for Loss
In this section, we investigate the accuracy of the proposed
method by comparing our technique with simulation
results. In all our simulations we have obtained 95% confidence
intervals. However, in order not to clutter the figures, the
error bars are shown only when they are
larger than ±20% of the estimated probability. To improve
the reliability of the simulation, we use Importance
Sampling [33] whenever applicable.^4 We have attempted
to systematically study the MVA approximation for various
representative scenarios. For example, we begin our
investigation with Gaussian input processes. Here, we only
check the performance of our approximation (and do not compare
it with other approximations in the literature), since the other
approximations are not developed for Gaussian inputs. We
then consider non-Gaussian input sources and compare our
MVA approximation for loss with other approximations in
the literature. Specifically, we consider MMF sources, which
have been used as representative of voice traffic in many different
papers (e.g. [34], [35]), and we also consider JPEG and
MPEG video sources that have been used in other papers
in the literature (e.g. [20], [36]).
A. Gaussian Processes
We begin by considering the simple case when the input
is a Gaussian Autoregressive (AR) process, whose autocovariance
function decays geometrically in the lag (note that AR processes have
been used to model VBR video [22]). In Fig. 2 one can see
that the simulation and MVA-Loss result in a close match
over the entire range of buffers tested.
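As an illustration of the kind of comparison shown in Fig. 2, the following sketch simulates an infinite-buffer queue fed by a hypothetical AR(1) Gaussian input through Lindley's recursion and sets the empirical tail next to the MVA estimate. All parameter values are illustrative assumptions, and mva_mx is the helper sketched in Section II.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AR(1) Gaussian input with autocovariance C(l) = sigma2 * a**l
lam_bar, a, sigma2, c = 0.7, 0.9, 0.1, 1.0
C = lambda l: sigma2 * a ** l

T = 500_000
noise = rng.normal(0.0, np.sqrt(sigma2 * (1.0 - a * a)), T)
lam = np.empty(T)
lam[0] = lam_bar
for n in range(1, T):                      # AR(1) recursion
    lam[n] = lam_bar + a * (lam[n - 1] - lam_bar) + noise[n]

q = np.zeros(T)
for n in range(1, T):                      # Lindley's equation
    q[n] = max(q[n - 1] + lam[n] - c, 0.0)

for x in (5.0, 10.0, 20.0):
    m_x, _ = mva_mx(x, C, lam_bar, c)      # 1-D search sketched earlier
    print(x, (q > x).mean(), np.exp(-m_x / 2.0))   # empirical tail vs. MVA
```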
The next example, in Fig. 3, covers a scenario of multi-time
scale correlated traffic. Note that multiple-time scale
correlated traffic is expected to be generated in high-speed
networks because of the superposition of different types
of sources [37]. In this case, the autocovariance function
of the Gaussian input process is the weighted sum of three
different powers, i.e., Cλ(l) = Σ_{i=1}^{3} w_i a_i^{|l|}.
One can see from Fig. 3 that, because
of the multi-time scale correlated nature of the input, the
loss probability converges to its asymptotic decay rate only
at large buffer sizes. This observation is consistent with
observations made on the tail probability when fed with
multi-time scale correlated traffic [20]. Again, it can be
^4 For interested readers, the software used for the analysis and simulation
will be available upon request.
seen that the analytical result tracks the simulation results
quite closely.
The next example deals with a well known input process,
the fractional Brownian motion process, which is the
classical example of a self-similar process [23].^5 The results
are shown in Figs. 4 and 5, demonstrating the accuracy of
MVA-Loss even for self-similar sources. Due to the difficulty
in applying importance sampling techniques to obtain
loss probabilities for self-similar traffic, in Figs. 4 and 5 we
show probabilities only as low as 10^{-6}. In Fig. 4, the input
traffic is characterized by a single Hurst parameter. However,
even if the traffic itself is long-range dependent, due to
the heterogeneity of sources that high-speed networks will
carry, we expect that it will be difficult to characterize the
traffic by simply one parameter, such as the Hurst parameter.
Hence, we also run an experiment for a more realistic
scenario, i.e., the input process being the superposition of
fractional Brownian motion processes with different Hurst
parameters. The numerical result is shown in Fig. 5. One
can see from Figs. 4 and 5 that MVA-Loss works well for
self-similar sources.
B. Non-Gaussian Processes
In this section we compare the performance of our
MVA-Loss approximation with simulations and also with
other schemes in the literature. We call the Likhanov-Mazumdar
technique described earlier "L-M," or "L-M:Gaussian"
when further approximated by a Gaussian
process, the Chernoff dominant eigenvalue technique in
[38] "Chernoff-DE," the average/peak rate method in [39]
"Ave/Peak," the analytical technique developed in [24]
"Hybrid," and the well-known effective bandwidth scheme "Effective
BW" [40].
We now consider the practically important case of multiplexed
voice sources. The input MMF process, which has
widely been used to model voice traffic sources [34], [35],
is a two-state On-Off process; its state transition matrix and
input rate vector (0 cells/slot in the Off state) are chosen
for a 45 Mbps ATM link with a 10 msec
time slot and 53-byte ATM cells. In this example,
we assume that 2900 voice sources are multiplexed on a
45 Mbps ATM link with 10 msec time slot and 53 byte
ATM cell. As shown in Fig. 6, MVA-Loss obtains the
loss probability calculations accurately and better than the
other techniques.
^5 For computer simulations, since continuous-time Gaussian processes
cannot be simulated, one typically uses a discrete-time version.
In the case of fractional Brownian motion, the discrete-time version
is called fractional Gaussian noise and has autocovariance function
given by:
C(l) = (σ²/2) ( |l+1|^{2H} − 2|l|^{2H} + |l−1|^{2H} ),
where H is the Hurst parameter.
We next investigate the accuracy of our approximation
when the sources to the queue are generated from actual
MPEG video traces. The trace used to generate this simulation
result comes from an MPEG-encoded action movie
(007 series) which has been found to exhibit long-range
dependence [36]. In Fig. 7, 240 MPEG sources are multiplexed
and served at 3667 cells/slot (an OC-3 line), where
we assume 25 frames/sec and a 10 msec slot size. The loss
probability versus buffer size result in this case is shown in
Fig. 7. Again, it can be seen that the MVA-Loss approximation
tracks the simulation results quite closely.
C. Application to Admission Control
The final numerical result is to demonstrate the utility of
MVA-Loss as a tool for admission control. We assume that
a new flow is admitted to a multiplexer with buffer size x
if the estimated loss probability is less than the maximum tolerable
loss probability ε.
In this example, we consider multiplexed voice sources
on a 45 Mbps link (Fig. 8(a)) or multiplexed video sources
(Fig. 8(b)) for an admission control type of application.
The QoS parameter ε is set to 10^{-6}. For each voice source
in Fig. 8(a), we use the same MMF On-Off process that
was used for Fig. 6. For each video source, we use the
same MPEG trace that was used in Fig. 7 (with start times
randomly shifted). The admission policy using MVA-Loss
is then the following. Let λ̄ and v(n) be the mean and the
variance function of a single source, i.e., let λ̄ := E{λ^(1)_n}
and v(n) := Var{ Σ_{k=1}^{n} λ^(1)_k }. When (N − 1) sources are
currently serviced, a new source is admitted if
ρ e^{−mx/2} ≤ ε, (18)
where mx is evaluated for the aggregate of N sources (mean
Nλ̄ and variance function N v(n)) and ρ is defined as in (14).
In Fig. 8(a) and (b), we
provide a comparison of admissible regions using different
methods. It can be seen that the MVA-Loss curve most
closely approximates the simulation curve in both figures.
In Fig. 8(a), the L-M approximation performs as well, and
the Chernoff DE approximation does only slightly worse.
In Fig. 8(b), however, the Chernoff DE approximation
is found to be quite conservative. This is because,
for sources that are correlated at multiple time scales (such
as the MPEG video sources in Fig. 8(b) shown here), the
loss probability does not converge to its asymptotic decay
rate quickly (even if there exists an asymptotic decay rate),
and hence approximations such as the Chernoff DE scheme
(or the hybrid scheme shown earlier) perform quite poorly.
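A compact sketch of the admission test (18) for homogeneous sources follows. It is our illustration, not the authors' code, and it assumes that (13) has the standard Gaussian form E{(λ − c)^+}/E{λ}; helper names are ours.

```python
import math
import numpy as np

def mva_loss(N, lam1, v1, c, x, n_max=5000):
    """MVA-Loss estimate rho * exp(-m_x/2) for N homogeneous sources.

    lam1 : mean rate of one source; v1(n): variance function of one source.
    """
    mu = c - N * lam1
    if mu <= 0:
        return 1.0
    ns = np.arange(1, n_max + 1)
    m_x = np.min((x + mu * ns) ** 2 / (N * v1(ns)))
    # rho = PL(0) * exp(m_0/2), with m_0 = mu^2 / (N v1(1)) from (10)
    s = math.sqrt(N * v1(1))
    beta = mu / s
    phi = math.exp(-beta * beta / 2.0) / math.sqrt(2.0 * math.pi)
    Q = 0.5 * math.erfc(beta / math.sqrt(2.0))     # Gaussian tail
    pl0 = (s * phi - mu * Q) / (N * lam1)          # E{(lam - c)^+} / E{lam}
    return pl0 * math.exp(beta * beta / 2.0) * math.exp(-m_x / 2.0)

def admissible_sources(lam1, v1, c, x, eps=1e-6, n_cap=5000):
    """Largest N whose estimated loss stays below the QoS target eps."""
    N = 0
    while N < n_cap and mva_loss(N + 1, lam1, v1, c, x) < eps:
        N += 1
    return N
```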
Admission control by MVA-Loss can be extended to a
case where heterogeneous flows are multiplexed. The link
capacity is 622.02 Mbps (an OC-12 line), the buffer size x
is fixed to 20000 cells, and the QoS parameter ε is 10^{-6}.
In this system, the input sources are of two types: JPEG
video and voice. As a video source, we use a generic model
that captures the multiple-time scale correlation observed
in JPEG video traces. It is a superposition of an i.i.d.
Gaussian process and 3 two-state MMF processes with the
following parameters:
State transition matrices: [0.999 0.001; ...], [0.9999 0.0001; ...]
Input rate vectors [cells/slot]: ...
Mean of i.i.d. Gaussian: 82.42
Variance of i.i.d. Gaussian: 8.6336
The admission policy is then the following. Let λ̄1 and v1(n)
be the mean and the variance function of a single
voice source. Let λ̄2 and v2(n) be the mean and the variance
function of a single video source. When (N1 − 1) voice
and N2 video flows are currently serviced, a new voice flow
is admitted if the MVA-Loss estimate computed from the
aggregate mean N1λ̄1 + N2λ̄2 and aggregate variance
function N1 v1(n) + N2 v2(n) does not exceed ε. (19)
The boundary of the admissible region is obtained by finding
the maximal N1 satisfying (19) for each N2.
As one can see in Fig. 9, the admissible regions estimated
by simulations and via MVA-Loss are virtually indistinguishable.
In fact, the difference between the two curves is less
than 1% in terms of utilization.
IV. Asymptotic Properties of the MVA
Approximation for Loss
We now find a strong asymptotic relationship between
the loss probability and the tail probability. More specifically,
under some conditions (to be defined later in Theorem
5), we find that
PL(x) = e^{−mx/2 + O(log x)}, (20)
where f = O(g) means that lim sup |f/g| < ∞. Equation
(20) tells us that the divergence between the approximation
ρe^{−mx/2} given in (14) and the loss probability is slow if at
all (this may be easier to see if we rewrite (20) in the form
log PL(x) − log( ρe^{−mx/2} ) = O(log x)).
In [27] and [28], under a set of general conditions it has
been shown for the continuous-time case that
log P{Q > x} = −mx/2 + O(log x). (21)
We will obtain (20) by finding a relationship between PL(x)
and P{Q > x}, i.e.,
log P{Q > x} − log PL(x) = O(log x), (22)
under the set of conditions given in Theorem 5 (PL(x) will
be bounded from above and below by some expressions
in terms of P{Q > x}), and then by applying (21) and
some properties of mx. Note that finding the asymptotic
relationship (22) between P{Q > x} and PL(x) is by itself
a valuable and new contribution.
We first list a set of conditions for which (21) holds
in the discrete-time case; they are equivalent to the set
of conditions in [27] defined for the continuous-time
case. Let vn := Var{Xn}, φ(n) := log vn, and α :=
lim_{n→∞} φ(n)/log n (assuming that the limit exists).
The notation f(n) ~ g(n) as n → ∞ means that lim_{n→∞} f(n)/g(n) = 1.
The parameter α cannot be larger than 2 due to the stationarity
of λn, and α ∈ (0, 2) covers the majority of non-trivial
stationary Gaussian processes. The Hurst parameter H is
related to α by α = 2H. We now state the following results,
which are the discrete-time versions of the results in
[27], [28], [41]. The proofs for these results are identical to
those given in [27], [28], [41], with trivial modifications accounting
for the discrete-time version, and, hence, we omit
them here. These results are stated as Lemmas here, since
we will be using them to prove our main theorem.
Lemma 2: Under hypotheses (H1) and (H2), as x → ∞,
mx ~ K x^{2−α} for some constant K > 0.
Lemma 3: Under hypotheses (H1) and (H2),
log P{Q > x} = −mx/2 + O(log x).
It is easier for us to work with conditions on the autocovariance
function of the input process rather than conditions
(H1) and (H2). Hence, we first define a condition on the
autocovariance function Cλ(l) which guarantees (H1) and
(H2):
(C1) Σ_{l=−n}^{n} Cλ(l) ~ Sαn^{α−1} as n → ∞, for some S > 0.
Note that condition (C1) is quite general and is satisfied
not only by short-range dependent processes but also by
a large class of long-range dependent processes including
second-order self-similar and asymptotically self-similar processes
[42].
Lemma 4: If the autocovariance function Cλ(l) of λn satisfies
(C1), then (H1) and (H2) hold.
Proof of Lemma 4:
Let h(n) := Σ_{l=−n}^{n} Cλ(l). Note that h(n) ~ Sαn^{α−1}
and that v(n+1) − vn = h(n). First we show condition (H2).
Since both vn and n^α approach ∞, lim_{n→∞} vn/n^α should be
equal to lim_{n→∞} (v(n+1) − vn)/((n+1)^α − n^α), if it exists (this is the discrete
version of L'Hospital's rule). Hence,
lim_{n→∞} vn/n^α = lim_{n→∞} h(n)/(αn^{α−1}) = S,
where we used (n+1)^α − n^α = αn^{α−1} + o(n^{α−1}). Next we show that (H1) also follows
from (C1). Since h(n) ~ Sαn^{α−1}, it follows
that vn ~ Sn^α. Note that a function g(x) is o(x) if
lim_{x→∞} g(x)/x = 0. Now,
φ(n)/log n = (log vn)/log n → α
(by Taylor expansion), which establishes (H1).
The loss probability is closely related to the shape of the
sample path, or how long Qn stays in the overflow state.
Before we give an illustrative example, we provide some
notation. We define a cycle as an interval
between time instants when Qn becomes zero. We let S^x_n
denote the duration for which Qn stays above threshold x
in the cycle to which n belongs. Formally, let:
. Un := sup{k ≤ n : Qk = 0} (starting time of the
current cycle to which n belongs).
. Vn := inf{k > n : Qk = 0} (starting time of the
next cycle).
. Wn := Vn − Un (duration of the cycle to which n belongs).
. Zn := Vn − n (residual time to reach the end of the cycle).
. S^x_n := Σ_{k=Un}^{Vn} 1{Qk > x} (duration for which Qk > x in the
cycle containing n).
Note that if Qn > 0, Zn is equal to the elapsed time to
return to the empty-buffer (or zero) state. Since Qn is
stationary and ergodic, so are the above quantities. Hence, their expectations
are equal to time averages.
Consider two systems whose sample paths look like those
in Fig. 10. The sample paths are obtained when the input
is a deterministic three-state source which generates fluid
at a rate above c, at rate c, and at rate 0 in states 1, 2, and 3, respectively.
The duration of each state is the same, say b. Use the
superscripts (1) and (2) to represent values for the upper
and the lower sample path, and set b^(2) = 2b^(1).
Then, both cases have the same overflow probability.
Now, consider a time interval from 0 to 3b^(2). The amount
of fluid generated over that interval is clearly the same for
both cases. But the amount of loss in the upper case is
exactly twice that in the lower case; hence, the upper
case has the larger loss probability. We can infer from this
that the loss probability is closely related to the length of
S^x_n and the slope of the sample path. Since loss happens
only when Qn is greater than the buffer size x, we consider
the condition that Qn > x. Since it is difficult to know
the distribution of S^x_n, and since S^x_n is determined by the
sample path, we use a stochastic process defined as
Yn := Q0 + Σ_{k=1}^{n} (λk − c).
Here we have chosen 0 as the origin but, because of stationarity,
the distribution of Yn does not depend on the
origin. Note that if Q0 > 0, Yn will be identical to Qn until
the end of the cycle. We want to know the distribution of Yn
given Q0 > x. Since λn is Gaussian, the distribution of
Yn can be characterized by the mean and the variance of
Yn. However, since Q0 is the result of the entire history
up to time 0 and the future is correlated with the past, it
is difficult to find an explicit expression for the mean and
the variance of Yn given Q0 > x. Hence, we introduce
upper-bound types of conditions on the mean and the variance
of Yn as (26) and (27). For notational simplicity, let
P^x{·} := P{· | Q0 > x}, and let E^x{·} and Var^x{·} be the
expectation and the variance under P^x, respectively.
We now state our main theorem.
Theorem 5: Assume condition (C1). Further assume
that for any ε > 0, there exist x0, K, M, and β such that
E^x{Yn} ≤ x − (μ − ε)n, (26)
Var^x{Yn} ≤ K n^α, (27)
for all x ≥ x0 and n ≥ Mx^β. Then,
−∞ < lim inf_{x→∞} (log PL(x) + mx/2)/log x
≤ lim sup_{x→∞} (log PL(x) + mx/2)/log x < ∞. (28)
Though the conditions of Theorem 5 look somewhat
complex, they are expected to be satisfied by a large class
of Gaussian processes. If the input process is i.i.d., it can be easily checked
that E^x{Yn} and Var^x{Yn} grow linearly in n,
and (C1), (26), and (27) are satisfied with
α = 1 and β = 1. It has been shown that Gaussian
processes represented in the form of finite-order Autoregressive
Moving Average (ARMA) models satisfy (26) and (27)
[17]. Since the autocovariance function of a stable ARMA
process decays geometrically,
it satisfies (C1) with α = 1. So Theorem 5 is applicable to
Gaussian ARMA processes.
More generally, E{ Σ_{k=1}^{n} λk } = λ̄n and vn ~ Sn^α under (C1). Thus, for each x,
E^x{Yn} ~ −μn and Var^x{Yn} ~ Sn^α as n → ∞, and we can find
K(ε, x), M(ε, x), and β(ε, x) as small as
possible. If sup_x K(ε, x), sup_x M(ε, x), and sup_x β(ε, x) are
finite, then (26) and (27) hold. We conjecture that they are
all finite for a large class of stationary Gaussian processes,
and we are trying to show it.
Note that the rightmost inequality (the limsup part) in (28)
holds without conditions (26) and (27), and it agrees with
empirical observations that the tail probability curve provides
an upper bound to the loss probability curve.
Before we prove the theorem, we first define the derivative
of mx with respect to x, m'_x. Recall (9), or mx = (x + μnx)²/v(nx).
Since nx is an integer value, mx is differentiable
except for countably many x at which nx has a jump. Let
D := {x : mx is not differentiable}. Note that D has measure
zero, and that the left and right limits of m'_x
exist for all x ∉ D. For simplicity, abuse notation by setting
m'_x := lim_{z↓x} m'_z and m''_x := lim_{z↓x} m''_z for x ∈ D. The
reason we take the (right) limit is that we will find the similarity
relation (29) in Lemma 6, which is useful in proving
Theorem 5. In fact, we may take the left limit to have the
same asymptotic behavior. By defining m'_x in this
way, it directly follows from Lemma 2 that
a x^{1−α} ≤ m'_x ≤ b x^{1−α} for some constants a > 0 and b.
We now state three lemmas which are useful in proving
the theorem (their proofs are in the Appendix).
Lemma 6: Under hypotheses (H1) and (H2),
∫_x^∞ e^{−m_y/2} dy ~ (2x^{α−1}/K) e^{−mx/2} as x → ∞, (29)
where K is a constant.
Lemma 7: If P{Q > x} > 0 and E{Z|Q > x} < ∞ for
all x, then
P{Q > x} / (2 E{Z|Q > x}) ≤ λ̄ PL(x). (30)
Lemma 8: Under conditions (26) and (27), E{Z|Q > x} = O(x^β).
Now, we are ready to prove Theorem 5.
Proof of Theorem 5:
First of all, we find expressions in terms of P{Q > x}
which are greater than or less than λ̄PL(x). If P{Q > x} = 0
for some x, this would contradict the asymptotic
relation in Lemma 3. Hence, P{Q > x} > 0 for all x.
If E{Z|Q > x} = ∞ for some x, this would contradict the
asymptotic relation in Lemma 8. Hence, E{Z|Q > x} < ∞
for all x. Thus, by Lemma 7 we have (30). Now, since
P{Q > y} is nonincreasing in y, we also have (31).
By Lemma 4, (C1) implies (H1) and (H2). Hence, by
Lemma 3, we have (21). Equation (21) means that there
are x0, K1 and K2 such that (32) holds for all x ≥ x0.
Note that since E{Z|Q > x} = O(x^β) by Lemma 8,
we can choose K3 > 0 such that E{Z|Q > x} ≤ K3 x^β for
all x ≥ x0. Combining with (30) and (31), integrate all
sides of (32) to get (33), which bounds λ̄PL(x) between
integrals of the form ∫_x^∞ e^{−m_y/2} dy.
Since m'_x ~ K x^{1−α} with the constant K > 0, by Lemma 6
there exist x1 ≥ x0, K4 > 0 and K5 > 0 such that (34)
and (35) hold.
From (33), (34) and (35), take logs and rearrange to get
bounds of the form log K4 ≤ log PL(x) + mx/2 + O(log x).
Divide by log x and let x → ∞. Then the theorem follows.
V. Applications to On-line Measurements
In this section, we describe how to apply the MVA approach
for the estimation of the loss probability based on
on-line measurements. In many practical situations, the
characteristics of a flow may not be known beforehand or
representable by a simple set of parameters. Hence, when
we use a tool for the estimation of the loss probability,
parameter values such as the moment generating function
and the variance function should be evaluated from on-line
measurements. Then, the question is over what range
those parameters should be evaluated. If an estimation
tool needs, for example, the evaluation of the moment generating
function over the entire range of (θ, n), the tool may
not be useful. This is fortunately not the case for the MVA
approximation for loss.
Note that the MVA result has the form ρe^{−mx/2}. The
parameter mx is a function of c, λ̄, x, and v(n), where λ̄
and v(n) are the mean and the variance function of the input, i.e.,
λ̄ := E{λn} and v(n) := Var{ Σ_{k=1}^{n} λk }. Hence, by measuring
only the first two moments of the input we can estimate
the loss probability. Recall that mx = min_n (x + μn)²/v(n),
i.e., that v(n)/(x + μn)² is maximized at n = nx. This means that the
result only depends on the value of v(n) at n = nx. This
value of nx corresponds to the most likely time scale over
which loss occurs. This is called the dominant time scale
(DTS) in the literature [43], [20]. Thus, the DTS provides
us with a window over which to measure the variance function.
It appears at first, however, that this approach may
not work, because the DTS requires taking the maximum
of the normalized variance over all n, which means that we
would need to know v(n) for all n beforehand. Thus, we
are faced with a chicken-and-egg type of problem, i.e.,
which should we do first: measure the variance function
v(n) of the input, or estimate the measurement window
nx? Fortunately, this type of cycle has recently been broken,
and a bound on the DTS can in fact be found through
on-line measurements (see Theorem 1 and the algorithm
in [44]). Thus, since our approximation is dependent on
the DTS, we only need to estimate v(n) for values of n up
to a bound on the DTS (given in [44]), thereby making it
amenable to on-line measurements.
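A minimal sketch of such a measurement-based estimator follows. It is our illustration (not the algorithm of [44]): the window bound n_win is assumed to be supplied, e.g. from the DTS bound discussed above, and only the first two moments are estimated.

```python
import numpy as np

def online_mva_estimate(samples, c, x, n_win):
    """Estimate the MVA tail exponent from measured per-slot arrivals.

    samples : 1-D array of measured arrivals per slot
    n_win   : measurement window, e.g. a bound on the DTS
    """
    samples = np.asarray(samples, dtype=float)
    lam_bar = samples.mean()
    mu = c - lam_bar
    cum = np.cumsum(samples - lam_bar)
    m_x = np.inf
    for n in range(1, n_win + 1):
        inc = cum[n:] - cum[:-n]          # centered sums over windows of length n
        v_n = inc.var()                   # empirical v(n)
        m_x = min(m_x, (x + mu * n) ** 2 / v_n)
    return np.exp(-m_x / 2.0)             # tail estimate; scale by rho for loss
```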
VI. Concluding Remarks
We have proposed an approximation for the loss probability
in a finite queue by making a simple mapping from
the MVA estimate of the tail probability in the corresponding
infinite queue. We have shown via simulation results that
our approximation is accurate for different input processes
and a variety of buffer sizes and utilizations. Since the loss
probability is an important QoS measure of network traffic,
this approximation will be useful in admission control and
network design. Another feature of the approximation is
that it is given in a single-equation format and hence can
easily be implemented in real time. We have compared
our approximation to existing methods including the effective
bandwidth approximation, the Chernoff dominant
eigenvalue approximation, and the many-sources asymptotic
approximation of Likhanov and Mazumdar.
In this paper we have also studied the theoretical aspects of our
approximation. In particular, we provide a strong asymptotic
result that relates our approximation to the actual
loss probability. We show that if our approximation were
to diverge (with increasing buffer size) from the loss probability,
it would do so slowly. For future work we plan
on simplifying the conditions given in Theorem 5 and
extending the approximation result to a network of queues.
VII. Appendix
Proof of Lemma 6:
Let f(x) := mx/2. Hence, to prove the lemma, it
suffices to show that
∫_x^∞ e^{−f(y)} dy ~ (1/f'(x)) e^{−f(x)} as x → ∞. (36)
Recall D = {x : mx is not differentiable}. For x ∉ D,
(d/dy) e^{−f(y)} |_{y=x} = −f'(x) e^{−f(x)}. (37)
Since D has measure zero, ∫_{[x,∞)−D} (·) dy = ∫_{[x,∞)} (·) dy, and
we may assign any values to f'(x) and f''(x) for all x ∈ D.
Recall that we set f'(x) := lim_{z↓x} f'(z) and f''(x) := lim_{z↓x} f''(z) for x ∈ D.
Now let x be any value. Integrating both sides of (37)
from x to ∞, we have
e^{−f(x)} = ∫_x^∞ f'(x) e^{−f(y)} dy + ∫_x^∞ (f'(y) − f'(x)) e^{−f(y)} dy. (38)
Note that a x^{1−α} ≤ f'(x) ≤ b x^{1−α} for some constants
a > 0 and b, and that α ∈ (0, 2). Hence, for any ε ∈ (0, 1), we can find x0 such
that for all x ≥ x0 the second integral in (38) is at most
ε ∫_x^∞ f'(x) e^{−f(y)} dy, which means that
(1 − ε) f'(x) ∫_x^∞ e^{−f(y)} dy ≤ e^{−f(x)} ≤ (1 + ε) f'(x) ∫_x^∞ e^{−f(y)} dy,
and the result follows.
Proof of Lemma 7:
Recall the notation:
. Un := sup{k ≤ n : Qk = 0} (starting time of the
current cycle to which n belongs).
. Vn := inf{k > n : Qk = 0} (starting time of the
next cycle).
. Wn := Vn − Un (duration of the cycle to which n belongs).
. Zn := Vn − n (residual time to reach the end of the cycle).
. S^x_n := Σ_{k=Un}^{Vn} 1{Qk > x} (duration for which Qk > x in the
cycle containing n).
We need one more:
. R^x_n := Σ_{k=n}^{Vn} 1{Qk > x} (residual duration for which
Qk > x in the cycle containing n).
Since Qn is stationary and ergodic, so are the above.
Hence, their expectations are equal to time averages. Since
we are interested in the behavior of Qn after loss happens,
we consider the conditional expectations E{R^x_n | Qn > x} and
E{S^x_n | Qn > x}.
Clearly, E{R^x_n | Qn > x} ≤ E{Zn | Qn > x}. And it can also
be easily checked that 2E{R^x_n | Qn > x} ≥ E{S^x_n | Qn > x},
where the inequality is due to the fact that n is discrete.^6 Since
λ̄ < c, there are infinitely many cycles in a sample
path. Index the cycles in the following manner:
. A^(i) := the set of time slots belonging to the i-th cycle,
. S^(i)_x := the duration for which Qk > x in the i-th cycle.
Now, we prove the lemma in two steps.
Step 1) Derive a lower bound on λ̄PL(x) in terms of the cycle
quantities S^(i)_y.
The amount of loss in cycle i is greater than or equal
to the difference between the maximum value of the queue
level Qn in cycle i and the buffer size x of the finite buffer
queue, i.e.,
loss in cycle i ≥ max_{k∈A^(i)} Qk − x = ∫_x^∞ I(S^(i)_y > 0) dy.
Take the summation over i and divide by the total time
Σ_i |A^(i)|, where |A^(i)| denotes the number of elements
^6 Since n is discrete, for given n such that Qn > x, R^x_n and S^x_n take
(positive) integer values. If S^x_n is, for example, 2, R^x_n can be either 1
or 2, and its expectation is 1.5, which is greater than 2/2.
of A^(i). Then,
Σ_{i≤m} ∫_x^∞ I(S^(i)_y > 0) dy / Σ_{i≤m} |A^(i)|
= ∫_x^∞ [ Σ_{i≤m} I(S^(i)_y > 0) / Σ_{i≤m} |A^(i)| ] dy
≤ ∫_x^∞ sup_{l≥m} [ Σ_{i≤l} I(S^(i)_y > 0) / Σ_{i≤l} |A^(i)| ] dy.
Recalling (1) and (2), the left-hand side converges to a quantity
proportional to λ̄PL(x) as m → ∞, and for each y the ratio
under the supremum converges. Since all components are
nonnegative, by Fatou's Lemma, (44) becomes the bound (45)
relating λ̄PL(x) to ∫_x^∞ lim_{m→∞} [ Σ_{i≤m} I(S^(i)_y > 0) / Σ_{i≤m} |A^(i)| ] dy.
Step 2) For better understanding, we first show the following
comparison of limits. Note that all components are nonnegative.
Let am := (2m+1)/Σ_{i≤m}(·) denote the cycle average of interest,
a := lim sup_m am, and b the corresponding limit to be bounded.
For any ε > 0, we can choose
M such that |aM − a| ≤ ε and |b − bM| < ε. Then a
comparison of the two averages yields b ≤ a + 2ε.
Since ε is arbitrary, we have b ≤ a.
Now, we verify that (46) holds.
Construct a new sequence {T^(i)_x} by removing the zero-valued
elements of {S^(i)_x}. Then, as in (45),
the corresponding lim sup of cycle averages is bounded as in (47).
Note that S^(i)_x is bounded by the cycle length for all i
and x, which yields (48) and (49).
Combining (47), (48) and (49), we have (46).
At last, combining Steps 1 and 2 with
2E{R^x_n | Qn > x} ≥ E{S^x_n | Qn > x} and
E{R^x_n | Qn > x} ≤ E{Zn | Qn > x}, we obtain
λ̄PL(x) ≥ P{Q > x} / (2E{Z|Q > x}),
from which (30) follows.
Proof of Lemma 8:
Define:
. ψ(x, n) := P^x{Yn > 0},
. V := {(x, n) : x ≥ x1, n ≥ Mx^β}.
The proof will be done in two steps:
. 1) Find x1 > x0 such that ψ(x, n) ≤ n^{−2} for all (x, n) ∈ V.
. 2) Using 1), show that E{Z|Q0 > x} is O(x^β).
Step 1) Let ε be so small that −μ + ε < 0. Then, we
choose x0, M, and β satisfying (26) and (27). Let m(n) :=
E^x{Yn} and v(n) := Var^x{Yn}. Then, the moment generating
function of the Gaussian Yn is given by e^{θm(n) + θ²v(n)/2}.
From (26) and (27), m(n) ≤ x − (μ − ε)n and v(n) ≤ Kn^α
for all (x, n) ∈ V, so by the Chernoff bound, for any θ > 0,
ψ(x, n) ≤ e^{θm(n) + θ²v(n)/2} ≤ e^{θ(x − (μ−ε)n) + θ²Kn^α/2}. (52)
Optimizing over θ, the exponent has a negative leading term
of order n^{2−α}, dominating the remaining terms of order
n^{2β−(2−α)} for a suitable β. Since the coefficient of the
leading term is negative and its order is positive, the exponent
tends to −∞ uniformly on V, (53)
and since in (53) ε, K, and α are fixed constants,
there exists x2 such that ψ(x, n) ≤ n^{−2} for all (x, n) ∈ V
with x ≥ x2. Now, we choose x1 > max(x0, x2).
Step 2)
From the definition of Z, Z > n implies Yn > 0. Thus,
P^x{Z > n} ≤ ψ(x, n). Therefore, we have
E^x{Z} = Σ_{n≥0} P^x{Z > n}
≤ ⌈Mx^β⌉ + Σ_{n≥⌈Mx^β⌉} ψ(x, n)
≤ ⌈Mx^β⌉ + Σ_{n≥1} n^{−2},
where ⌈x⌉ denotes the smallest integer which is greater
than or equal to x. Since the last sum is finite and E^x{Z} is
nonnegative, E^x{Z} = O(x^β).
--R
"Stochastic Theory of a Data Handling System with Multiple Sources,"
"Asymptotics for steady-state tail probabilities in structured Markov queueing models,"
"An Approximation for Performance Evaluation of Stationary Single Server Queues,"
"Stability, Queue Length, and Delay of Deterministic and Stochastic Queueing Networks,"
"Large deviations and overflow probabilities for the general single server queue, with application,"
"Squeezing the Most Out of ATM,"
"Logarithmic asymptotics for steady-state tail probabilities in a single-server queue,"
"Loss Performance Analysis of an ATM Multiplexer loaded with High Speed ON-OFF Sources,"
"Cell-loss asymptotics in buffers fed with a large number of independent stationary sources,"
"Improved Loss Calculations at an ATM Multiplexer,"
"Investigation of Cell Scale and Burst Scale Effects on the Cell Loss Probability using Large Deviations,"
"The Stability of a Queue with Non-independent Inter-arrival and Service Times,"
The Single Server Queue
"Limits for Queues as the Waiting Room Grows,"
Stationary Stochastic Models
"A fluid queue with a finite buffer and subexponential input,"
"On the Asymptotic Relationship between the Overflow Probability in an Infinite Queue and the Loss Probability in a Finite Queue,"
"A Central Limit Theorem Based Approach to Analyze Queue Behavior in ATM Networks,"
"A New Method to Determine the Queue Length Distribution at an ATM Multiplexer,"
"A Central Limit Theorem Based Approach for Analyzing Queue Behavior in High-Speed Networks,"
"On the supremum distribution of integrated stationary Gaussian processes with negative linear drift,"
"Performance Models of Statistical Multiplexing in Packet Video Communication,"
"On the Use of Fractal Brownian Motion in the Theory of Connectionless Networks,"
"Improved Loss Calculations at an ATM Multiplexer,"
"Long-range dependence in variable-bit-rate video traffic,"
"On the Self-Similar Nature of Ethernet Traffic (Extended Version),"
"Queueing Analysis of High-Speed Multiplexers including Long-Range Dependent Arrival Processes,"
"Use of Supremum Distribution of Gaussian Processes in Queueing Analysis with Long-Range Dependence and Self-Similarity,"
"Multiplexing gains in bit stream multiplexors,"
"Large deviations, the shape of the loss curve, and economies of scale in large multiplexers,"
"Economies of scale in queues with sources having power-law large deviation scaling,"
"Large deviations approximation for fluid queues fed by a large number of on-off sources,"
"Effective bandwidth and fast simulation of ATM intree networks,"
"Models for Analysis of Packet Voice Communication Systems,"
"Characterizing Superposition Arrival Processes in Packet Multiplexer for Voice and Data,"
"Second Moment Resource Allocation in Multi-Service Networks,"
"Statistical Multiplexing of Multiple Time-scale Markov Streams,"
"Fundamental bounds and approximations for ATM multiplexers with applications to video teleconferencing,"
"Design of a real-time call admission controller for ATM networks,"
"Effective Bandwidth of General Markovian Traffic Sources and Admission Control of High Speed Networks,"
Queueing Analysis of High-Speed Networks with Gaussian Traffic Models
"Self-similar processes in communications networks,"
"On the Relevance of Time Scales in Performance Oriented Traffic Characterization,"
"The measurement-analytic framework for QoS estimation based on the dominant time scale,"
| maximum variance asymptotic;asymptotic relationship;queue length distribution;loss probability
504633 | QoS provisioning and tracking fluid policies in input queueing switches. | The concept of tracking fluid policies by packetized policies is extended to input queueing switches. It is considered that the speedup of the switch is one. One of the interesting applications of the tracking policy in TDMA satellite switches is elaborated. For the special case of 2 x 2 switches, it is shown that a tracking nonanticipative policy always exists. It is found that, in general, nonanticipative policies do not exist for switches with more than two input and output ports. For the general case of N x N switches, a heuristic tracking policy is provided. The heuristic algorithm is based on two notions: port tracking and critical links. These notions can be employed in the derivation of other heuristic tracking policies as well. Simulation results show the usefulness of the heuristic algorithm and the two basic concepts it relies on. | INTRODUCTION
One of the main issues in the design of integrated service
networks is to provide the service performance requirements
for a broad range of applications. Application requirements
are translated into quantitative network parameters. The most
common performance measures are packet loss probability,
delay, and jitter. The delay and jitter characteristics at each
switch of the network are determined by the scheduling algorithm
used in the switch and the incoming traffic pattern. On
the other hand, the network should also be capable of analyzing
the amount of resources that each particular application
requires. Based on this analysis a connection request is admitted
or rejected. It is therefore very important for the network
designer to understand the effect a scheduling policy has on
the connection performance and on the usage of network resources.
In many cases, it is easier to perform the analysis and design
of scheduling policies under the modeling assumption that the
traffic arrives and is treated as a fluid, i.e., the realistic case
where information is organized into packets is not taken into
account [3], [4], [11], [12], [6]. Under the fluid policy, we assume
that at every time instant arbitrary fractions of the link
capacity can be shared among different applications. Although
in most of the practical situations this is an idealistic assump-
tion, it enables us to analyze the effect of scheduling policy
on the network resources as well as the major performance pa-
rameters, and therefore to design the scheduling policies more
conveniently. One approach to the design of packetized policies
is to first find an appropriate fluid policy, and then to derive
a packetized policy that resembles or tracks the fluid policy
in a certain sense.
Existence of packetized tracking policies is a well established
fact in the single link case. In fact, several tracking policies
are suggested and their performance and efficiency are analyzed
[11], [12], [6], [5], [9]. However, the existence of such
policies in input queueing switches is still an open problem.
This is the main subject of this paper.
The research on scheduling N x N switches has mainly concentrated
on output queueing switches. In an N x N switch, it
is possible that all N inputs have packets for the same output
at the same time. In order to accommodate such a scenario in
an output queueing switch, the switch fabric should work N
times faster than the line rates. This might be acceptable for
moderate size switches working at moderate line rates, but as
the capacity of the lines as well as the switch sizes increase,
memories with sufficient bandwidth are not available and input
queueing is becoming a more attractive alternative.
One way to circumvent this problem is to have Combined
Input Output Queueing (CIOQ) switches with a limited speedup
that matches the output sequence of a purely output queueing
switch. In fact, it is shown in [2] that a speedup of 2 is
sufficient to resemble the output pattern of any output queueing
switch. However, the scheduling algorithm proposed to
do that is fairly complicated, and the arbiter still needs to
receive information from the input ports of the switch with
a speedup of N.
In this paper we consider an input queueing switch, where
every input and output port can service 1 packet per time unit
(all packets are considered to have equal size). In a fluid policy
model, at every time slot every input (output) port can be
connected to several output (input) ports; however, the total
service rate of any port should not exceed its capacity. Under
a packetized policy, every input (output) port can be connected
to at most one output (input) port at every time slot, i.e.,
there is no speedup in the switch fabric. Under these circumstances,
our objective is to find a packetized policy that tracks
a given fluid policy in an appropriate manner. For the special
case of 2 x 2 switches the existence of tracking policies is
proved and a non-anticipative tracking policy is provided. For
the general case a heuristic algorithm with good, but not perfect,
tracking properties is proposed. In fact, in the simulations
performed, less than 1 percent of the packets lost track of the fluid
policy when the utilization of the switch is around 92%.
Another interesting application of the tracking policies is
in the scheduling of TDMA Satellite Switches (TDMA-SS)
with multi-periodic messages. In this problem the objective
is to schedule a packet during every period of a connection
stream, and before the arrival of the next packet. Since it is not
usually possible to queue the packets in the satellite switches,
an input queueing model is more appropriate in this case. The
fluid policy that accomplishes the specified TDMA objective
is trivial. The original problem is then solved by specifying a
packetized policy that tracks that fluid policy.
The organization of the paper is as follows. In the next section,
we review the concepts of fluid and tracking policies
and provide the feasibility conditions for both cases. The
problem of scheduling multi-periodic messages in TDMA-SS
is explained and elaborated in Section III. We show that
this problem is essentially a special case of the posed input
queueing scheduling problem. In Section IV, we show that for
2 x 2 switches a tracking policy always exists, and we provide
a non-anticipative algorithm to find the tracking policy. In
this section, we also address the problem of providing a packetized
policy that satisfies pre-specified packet deadlines. In
Section V, some useful ideas regarding the design of heuristic
tracking policies are given. Based on these concepts a heuristic
algorithm is proposed. The heuristic algorithm
is applied to the scheduling of a multi-periodic TDMA-SS and
the simulation results are given.
II. FLUID AND PACKETIZED TRACKING POLICIES
We consider input queueing switches that serve fixed size
packets. Each input and output port has the capacity of serving
1 packet per time unit. Since queues exist only at the input
ports, the latter assumption implies that traffic of at most 1
packet per unit of time can be transferred from the input ports
to a given output port.
We assume that time is slotted and the length of each slot
is equal to the length of a packet. Slots are numbered 1, 2, ...
Slot k occupies the time interval [k − 1, k), where
k − 1 (k) is the beginning (end) of time slot k. Packets
arrive at the beginning of each time slot. Packets with origin
input port i and destination output port j are served FCFS.
Two broad classes of policies are considered, the fluid and
the packetized policies. During time slot k a fluid policy transfers
w_ij(k) units of information from input port i to
output port j. w_ij(k) is a nonnegative real number and is measured
in units of packets. Since at most one unit of work can
be transferred from a given input port to the output ports, and
since no queueing is permitted at the output ports, the w_ij(k)'s
must satisfy the following inequalities:
Σ_j w_ij(k) ≤ 1 for every input port i, and Σ_i w_ij(k) ≤ 1 for every output port j. (1)
A packetized policy is based on the assumption that during a
time slot an input port can transmit a single packet to any one
of the output ports. Therefore, for a packetized policy we have
that J_ij(k), the number of packets transmitted from port i to
port j during slot k, is either 0 (no packet transmission during
slot k) or 1 (single packet transmission during slot k). A
packetized policy is feasible if at every time slot k we have
Σ_j J_ij(k) ≤ 1 for every i, and Σ_i J_ij(k) ≤ 1 for every j. (2)
Note that the conditions in (2) imply that for any k, there can
be at most a single 1 in each column or row of the matrix
J(k) := [J_ij(k)]. That is, the matrix J(k) is a sub-permutation matrix.
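Conditions (1) and (2) can be checked mechanically; the following small sketch (illustrative helper names of ours) makes them concrete.

```python
import numpy as np

def is_subpermutation(J):
    """Condition (2): a 0/1 matrix with at most one 1 per row and column."""
    J = np.asarray(J)
    return (np.isin(J, (0, 1)).all()
            and (J.sum(axis=0) <= 1).all()
            and (J.sum(axis=1) <= 1).all())

def is_feasible_fluid(w):
    """Condition (1): nonnegative rates with row and column sums at most 1."""
    w = np.asarray(w, dtype=float)
    tol = 1e-12
    return ((w >= 0).all()
            and (w.sum(axis=0) <= 1 + tol).all()
            and (w.sum(axis=1) <= 1 + tol).all())
```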
Usually fluid policies cannot be applied directly in a net-work
since mixing of traffic belonging to different packets is
not allowed. However, they are considered in this paper, because
the performance analysis and the scheduling policy design
is often more convenient for fluid policies. An approach
to the design of packetized policies is to first design and analyze
a fluid policy, and then implement a packetized policy
that resembles in a certain sense the departure process of the
fluid policy. Such a packetized policy is called a tracking pol-
icy. More precisely, for our purposes, we use the following
definition:
Definition I: Given a fluid policy, we say that a packetized
policy is tracking if every packet departs under the packetized
policy at the latest by the end of the time slot at which
the same packet departs under the fluid policy.
A basic question is whether tracking policies exist for a given fluid
policy. This question is answered positively for the single link
case, where different sessions share a single link [11], [6].
In that case, perhaps the best known fluid policies are
GPS and rate controlled service disciplines. Several tracking
policies have been suggested for the single link case [11], [6], [9],
[1]. The same concepts of GPS and rate controlled schedulers
can be extended to multi-input, multi-output input queueing
switches. However, the existence of tracking policies for
these switches is still an open question.
Searching for a tracking policy can be converted to another
scheduling problem, the scheduling of packets with deadlines.
Suppose that a set of packets is given, and every packet has
two associated time stamps. The first time stamp is the eligibility
time, which is the time after which we can schedule the packet.
For instance, this can be the arrival time of the packet at the
switch. The second time stamp is the deadline of the packet.
The objective is to schedule every packet inside the time frame
between its eligibility time and its deadline. Obviously, if a packetized
scheduling policy satisfies all the deadlines induced by a fluid
policy, it is a tracking packetized policy by our definition.
In Section IV, we study the special case of 2 x 2 switches. We
prove that for 2 x 2 switches, for every
feasible fluid policy there exists a feasible tracking packetized policy.
In fact, our proof is constructive and provides an algorithm
to derive a tracking policy. We also show that a natural
extension of earliest deadline first scheduling can be used
to solve the problem of scheduling packets with deadlines. The
general case of an N x N input queueing switch is currently
under investigation. For the latter case, we provide in this paper
a heuristic algorithm which shows very good performance
in a number of simulation studies.
III. MULTI-PERIODIC TDMA SATELLITE SWITCHES
One of the potential applications of tracking policies is in
the scheduling of TDMA Satellite Switches (TDMA-SS). The
conventional method to do the scheduling is based on the
Inukai method [10]. This method is based on the assumption
that all messages have the same period. The scheduling
is done for a frame length equal to the period of the messages
and it is repeated periodically thereafter. Let L be equal
to the maximum number of packets that can be serviced by
an input/output port during one period. A set of messages is
schedulable if for every port the total number of packets that
should be serviced is not more than L. Inukai has provided a
scheduling algorithm for any set of schedulable messages.
The Inukai algorithm does not work appropriately when
messages have different periods. Let message m from input
port s_m to output port d_m have period p_m. To apply the Inukai
method the frame length should be set to the LCM of all message
periods, say L. For each message m, L/p_m unit-length
packets are scheduled in the frame. Each of these packets is
associated with one period of the original message. Then, we
can use the Inukai method to allocate these packets inside the
frame of length L. The problem is that there is no control over the
placement of packets inside the frame in the Inukai method. Thus, it
is possible that all packets attributed to a single periodic message
are placed next to each other. Such an assignment suffers
from high jitter. Moreover, the delay of a packet can be as large
as L, which can be very large.
Suppose that the objective is to schedule every packet in
the time frame of its period. Thus, every packet can tolerate
a delay up to its period. The question then arises whether it
is possible to provide a schedule under these constraints. A
necessary condition for schedulability is that the utilization of
every input port i and output port j should not be greater than
unity, i.e.,
Σ_{m : s_m = i} 1/p_m ≤ 1 and Σ_{m : d_m = j} 1/p_m ≤ 1. (3)
If one considers fluid policies, then it is easy to provide a
schedule provided that (3) is satisfied. Specifically, consider
the fluid policy that assigns the fixed service rate of 1/p_m to
every message m. Under this policy the switch starts servicing
every packet immediately after its arrival, and it takes p_m
time units to complete its service. This means that the target
deadlines are all met. Therefore, if we can provide
a packetized policy that tracks the fluid policy, then this packetized
policy will satisfy the delay constraints as well, and (3)
becomes the necessary and sufficient condition for the schedulability
of packetized policies under the specified constraints.
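Condition (3) is a simple per-port check; a small sketch (our illustration, with messages given as (s_m, d_m, p_m) triples):

```python
from collections import defaultdict

def satisfies_condition_3(messages):
    """Check the utilization condition (3) for multi-periodic messages.

    messages : iterable of (input_port, output_port, period) triples
    """
    u_in, u_out = defaultdict(float), defaultdict(float)
    for s, d, p in messages:
        u_in[s] += 1.0 / p      # utilization contributed to input port s
        u_out[d] += 1.0 / p     # utilization contributed to output port d
    return (all(u <= 1.0 for u in u_in.values())
            and all(u <= 1.0 for u in u_out.values()))

# Example: two messages sharing input port 0 with periods 2 and 3:
# satisfies_condition_3([(0, 1, 2), (0, 2, 3)])  ->  1/2 + 1/3 <= 1, True
```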
In [13] Philp and Liu conjectured that (3) is the necessary
and sufficient condition for schedulability under the specified
delay constraints. Giles and Hajek [8] have proved this conjecture
for a special case. In their model the messages are sorted
by their period, so that p_1 ≤ p_2 ≤ ... ≤ p_M.
Moreover, for every two subsequent messages we have
p_{m+1} = k p_m,
where k is an integer. Unfortunately, their algorithm does not
work well in the general case.
As we saw, the correctness of the conjecture is trivial for
fluid policies, but it has not been proved for packetized
policies. In the next section, we show the existence of tracking
policies for the special case of 2 x 2 switches. Thus, the
conjecture is proved for the special case of 2 x 2 switches.
IV.
In this section, we consider a 2 x 2 input queueing switch
and provide an algorithm for designing a packetized policy
that tracks a given fluid policy.
Let W_ij(k) := Σ_{l=1}^{k} w_ij(l) be the total amount of traffic
transferred under the fluid policy from input port i to output
port j by the end of slot k. Thus, the w_ij(l) satisfy
conditions (1).
In the following, we address the following problem.
Problem I: Find a sequence of sub-permutation matrices
J(1), J(2), ... such that for all k we have
⌊W_ij(k)⌋ ≤ I_ij(k) ≤ ⌈W_ij(k)⌉, (5)
where
I_ij(k) := Σ_{l=1}^{k} J_ij(l).
If a solution to Problem I can be found, then the packetized
policy that uses the sub-permutation matrix J(k) to schedule
packets during slot k tracks the given fluid policy. To
see this, notice first that, according to the leftmost inequality
in (5), by the end of slot k, if an (integer) number of packets
completes transmission between input port i and output port j
under the fluid policy, then at least the same number of packets
completes transmission between the same input and output
port under the packetized policy. This and the fact that packets
between ports i and j are served FCFS under both policies
imply that:
If a packet completes transmission in slot l under the fluid policy,
it will also complete transmission at the latest by the end
of slot l under the packetized policy.
We note next that a realizable packetized policy should serve
packets after their arrival to the switch. This is ensured by
the rightmost inequality in (5). To see this, notice that, according
to this inequality, the number of packets that complete
transmission between input i and output port j under the packetized
policy cannot exceed the number of packets that complete
transmission between i and j under the fluid policy by
more than 1. Moreover, if W_ij(k) is integer then necessarily
I_ij(k) = W_ij(k). This and the fact that packets are served
FCFS under both policies imply that:
A packet cannot complete transmission in slot k under the
packetized policy unless part of this packet has already been
transmitted up to the end of slot k by the fluid policy.
Since the fluid policy is feasible and never begins transmission
of a packet before its arrival, the same holds for the packetized
policy.
Note that if we solve Problem I, we in fact have a packetized
policy that not only tracks the finishing times of the packets
under the fluid policy, but also the times when the fluid policy
starts transmission of these packets.
Before we proceed we need another definition.
Definition II: An integer valued matrix I is called a u-neighbor
of a matrix W if
⌊Σ_i W_ij⌋ ≤ Σ_i I_ij ≤ ⌈Σ_i W_ij⌉ for all j, (7)
⌊Σ_j W_ij⌋ ≤ Σ_j I_ij ≤ ⌈Σ_j W_ij⌉ for all i. (8)
We now address a stricter version of Problem I.
Problem II: Find a sequence of sub-permutation matrices
J(1), J(2), ... such that for all k, I(k) is a u-neighbor of
W(k) and
⌊W_ij(k)⌋ ≤ I_ij(k) ≤ ⌈W_ij(k)⌉, (6)
where I_ij(k) := Σ_{l=1}^{k} J_ij(l).
We now proceed to provide a solution for Problem II. The
proof is by induction. At the beginning of slot 0 no traffic has
been processed under either of the policies (fluid and packetized),
and we set I(0) = W(0) = 0.
Assume now that we have found appropriate sub-permutation
matrices up to slot k. We show how to construct the sub-permutation
matrix for slot k + 1, J(k + 1), based on I(k) and
W(k + 1), so that the matrix I(k + 1) := I(k) + J(k + 1) is a
u-neighbor of W(k + 1). Set
J_ij(k + 1) := max( ⌊W_ij(k + 1)⌋ − I_ij(k), 0 ). (9)
First, J(k + 1) is a sub-permutation matrix. To see this, note that
0 ≤ J_ij(k + 1) ≤ 1. The second inequality holds because
I_ij(k) ≥ ⌊W_ij(k)⌋ and W_ij(k + 1) − W_ij(k) ≤ 1, so
⌊W_ij(k + 1)⌋ − I_ij(k) is at most 1. Next we have to show that there
can be no more than a single 1 in each column or row of
J(k + 1). To see this, note that if J_ij(k + 1) = 1 then
I_ij(k) = ⌊W_ij(k + 1)⌋ − 1.
Hence, if there were more than a single 1 in, say, column 1 of
J(k + 1), we would have
Σ_i I_i1(k) ≤ Σ_i W_i1(k + 1) − 2 ≤ Σ_i W_i1(k) − 1 < ⌊Σ_i W_i1(k)⌋,
since the column sums of w(k + 1) are at most 1.
But the last inequality contradicts the fact that, by (7),
Σ_i I_i1(k) ≥ ⌊Σ_i W_i1(k)⌋.
We now show that I(k + 1) satisfies (6). From (9) we have
that I_ij(k + 1) ≥ ⌊W_ij(k + 1)⌋. Also, if J_ij(k + 1) = 0,
then I_ij(k + 1) = I_ij(k) ≤ ⌈W_ij(k)⌉ ≤ ⌈W_ij(k + 1)⌉,
while if J_ij(k + 1) = 1, then I_ij(k + 1) = ⌊W_ij(k + 1)⌋ ≤ ⌈W_ij(k + 1)⌉.
It remains to prove (7) and (8) for the matrix I(k + 1). Consider,
say, column 1. We distinguish the following cases.
Case 1. Σ_i J_i1(k + 1) = 1. Then
Σ_i I_i1(k + 1) = Σ_i I_i1(k) + 1 ≥ ⌊Σ_i W_i1(k)⌋ + 1 ≥ ⌊Σ_i W_i1(k + 1)⌋.
The third relation above holds because of (7). The fourth relation
is correct since ⌊a⌋ + 1 ≥ ⌊a + b⌋ for b ≤ 1, and the
upper-bound part of (7) follows from Σ_i J_i1(k + 1) ≤ 1.
Case 2. Σ_i J_i1(k + 1) = 0. Then, by (7) at slot k,
Σ_i I_i1(k + 1) = Σ_i I_i1(k) ≤ ⌈Σ_i W_i1(k)⌉ ≤ ⌈Σ_i W_i1(k + 1)⌉. Since we also have
Σ_i I_i1(k) ≥ ⌊Σ_i W_i1(k)⌋, we conclude that the
difference ⌊Σ_i W_i1(k + 1)⌋ − Σ_i I_i1(k + 1) takes only the values 0
and 1. We now need to distinguish the following sub-cases.
Sub-case 2.1. ⌊Σ_i W_i1(k + 1)⌋ − Σ_i I_i1(k + 1) = 0. Then (7)
holds for slot k + 1 directly.
Sub-case 2.2. ⌊Σ_i W_i1(k + 1)⌋ − Σ_i I_i1(k + 1) = 1. If
W_i1(k + 1) is integer for some i, then I_i1(k + 1) = W_i1(k + 1)
by (6), and (7) follows by summing over i. It
remains to consider the case where W_i1(k + 1) is non-integer
for all i. We may then redefine J_i1(k + 1) = 1 for a suitable i and
still have a sub-permutation matrix. To see this, consider the
following two cases.
(a) In column 2 there is already a 1, say J_12(k + 1) = 1. Since
J(k + 1) is a sub-permutation matrix,
we know that J_22(k + 1) = 0. Therefore, setting J_21(k + 1) =
1 we still have a sub-permutation matrix J(k + 1).
(b) Column 2 of J(k + 1) contains no 1. Then we can set J_i1(k + 1) = 1 for one of the
rows i not yet used and still have a sub-permutation matrix J(k + 1).
In order to show that (6) still holds with the modified J(k + 1),
note that since I_i1(k) ≥ ⌊W_i1(k + 1)⌋ − 1 and W_i1(k + 1) is non-integer,
I_i1(k + 1) ≤ ⌈W_i1(k + 1)⌉,
and clearly I_i1(k + 1) ≥ ⌊W_i1(k + 1)⌋.
To show that (8) holds for the modified matrix,
note that we now have Σ_j J_ij(k + 1) = 1 for the affected row i, so we can apply the same argument
as in Case 1 above.
We may continue in this fashion and examine the rest of the
columns and rows, and update, if necessary, the matrix J(k + 1).
We eventually have that the matrix
I(k + 1) = I(k) + J(k + 1)
is a u-neighbor of W(k + 1).
According to the procedure above, we have the following
algorithm for creating the tracking policy, i.e., for generating
the sub-permutation matrix J(k + 1).
ALGORITHM
1. At the beginning of slot k + 1, create the matrix with elements
J_ij(k + 1) := max( ⌊W_ij(k + 1)⌋ − I_ij(k), 0 ).
2. If for some column or row, say column 1, it holds that
Σ_i I_i1(k + 1) = ⌊Σ_i W_i1(k + 1)⌋ − 1 and W_i1(k + 1) is
non-integer for all i,
then redefine J_i1(k + 1) = 1 for a suitable i so that the modified
matrix J(k + 1) is still a sub-permutation matrix.
Note that the policy obtained in this way is non-anticipative,
i.e., it does not depend on future arrivals and, therefore, can be
implemented on-line.
So far, we have shown that the tracking policy exists, and
a procedure to convert a fluid policy to a packetized tracking
policy has been given.
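The following sketch implements one slot of the construction as reconstructed above. It is our illustration under the stated assumptions (the floor-difference rule for step 1 and the column/row repair for step 2); it is not the authors' code.

```python
import math
import numpy as np

def next_subpermutation(I, W_new):
    """One slot of the 2x2 tracking construction (a sketch)."""
    J = np.maximum(np.floor(W_new).astype(int) - I, 0)       # step 1, rule (9)
    # step 2: repair a column (axis 0) or row (axis 1) whose integer service
    # lags the fluid service while all its W-entries are non-integer
    for axis in (0, 1):
        sums_I = (I + J).sum(axis=axis)
        sums_W = W_new.sum(axis=axis)
        for j in range(2):
            line_W = W_new[:, j] if axis == 0 else W_new[j, :]
            if (sums_I[j] < math.floor(sums_W[j] + 1e-12)
                    and all(abs(w - round(w)) > 1e-12 for w in line_W)):
                for i in range(2):                            # place the extra 1
                    J2 = J.copy()
                    if axis == 0:
                        J2[i, j] = 1
                    else:
                        J2[j, i] = 1
                    if ((J2.sum(axis=0) <= 1).all()
                            and (J2.sum(axis=1) <= 1).all()):
                        J = J2
                        break
    return J
```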
In the previous section, it was mentioned that, to obtain
a tracking policy, we can convert the problem to a deadline
scheduling problem and solve that problem. In the following,
a simple extension of the EDF algorithm for a 2 x 2 switch
is given. This policy, which we call EDF2, always comes
up with an admissible schedule if such a schedule exists. A
schedule is admissible if it satisfies all the deadlines.
A. Per-flow Tracking
So far, we have considered per-link tracking policies
and proved their existence for 2 x 2 switches. Basically, we
showed that for every fluid policy there exists a packetized
policy that tracks the aggregate traffic going from every input
port i to every output port j. In many circumstances this is not
quite enough.
The aggregate traffic between any pair of input/output nodes
consists of several distinct flows or sessions. Generally, to provide
per-flow QoS, the fluid policy should work at the granularity
of flows and guarantee the service rate given to every
flow individually. This should also be reflected in the packetized
tracking policy. More specifically, the corresponding
packetized policy should track the service given to every flow
under the fluid policy.
Let W^l_ij(k) and I^l_ij(k) be the total amount of flow l traffic
transferred from port i to port j up to time k under the fluid policy f
and the packetized policy p, respectively. Then, the packetized
policy p is a per-flow tracking policy of f if and only
if
⌊W^l_ij(k)⌋ ≤ I^l_ij(k) for all l, i, j, and k.
Obviously, this implies a stricter definition for tracking policies.
Recently, we were able to prove this notion of tracking
policy for 2 x 2 switches as well. The result is given without
proof in the following theorem.
Theorem 1: Consider any arbitrary fluid policy f for an input
queueing 2 x 2 switch. There exists a non-anticipative
packetized policy p that tracks the individual services given
to every flow under the fluid policy f.
B. QoS provisioning in 2 x 2 switches
An alternative approach to the QoS provisioning problem is
to view directly the requirements of the real-time traffic and to
attempt to satisfy them. A natural framework for this approach
is deadline satisfaction. Every packet presents upon arrival
the maximum waiting time that it may tolerate, and a deadline is
determined by which it should depart. The scheduler attempts
to satisfy as many deadlines as possible. For a single link it
is known that the Earliest Deadline First policy satisfies the
packet deadlines if those are satisfiable. We show here that the
same effect is achievable in 2 x 2 switches if the same notion
is generalized.
For each link (i, j) at every slot t, let D_ij(t) be the earliest
deadline among the backlogged packets. There are two service
configurations, {(1, 1), (2, 2)} and {(1, 2), (2, 1)}. Let
D^1 and D^2 be the sorted deadline vectors associated with
the two configurations, such that the first component is always
the minimum deadline, i.e., D^i_1 ≤ D^i_2.
We say that the deadline vector D^i is lexicographically smaller
than D^j if and only if
D^i_1 < D^j_1, or D^i_1 = D^j_1 and D^i_2 < D^j_2.
Definition III: The scheduling policy EDF2 is the policy that
at every time slot selects the configuration with the lexicographically
smallest deadline vector.
EDF2 is the natural extension of EDF to the 2 x 2 case.
Suppose that a sequence of packets with deadlines is given,
and it is known that the deadlines are satisfiable, i.e., there
is a feasible schedule that meets all the deadlines. Then the
scheduling policy EDF2 also satisfies the deadlines. The proof
of this claim is straightforward and is similar to the proof of the
same result for the EDF policy in the single link case.
The EDF2 policy can be applied to obtain a tracking pol-
icy. In that case, at every time slot k, deadline of packets are
set to the end of the time slot that they depart the switch under
the fluid policy. The departing times are calculated based
on the assumption that there is no future arrivals. Note that
the crucial information for scheduling is the relative order of
packets departure times not the departing times themselves. If
we assume that the future arrivals will not change the departure
order of the packets, then the departure orders obtained
under the no future arrival assumption will be correct, regardless
of future arrivals. Therefore, the EDF2 policy would be a
non-anticipative tracking policy.
As we will illustrate in the next section, the basic assumption
regarding the independence of future arrivals and the departure
order of backlogged packets is not a reasonable assumption
for the general case of N \Theta N switches.
V. HEURISTIC ALGORITHMS
Let F be a feasible fluid policy that at every time slot k
specifies the appropriate fluid scheduling matrix w(k). The
scheduling matrix w(k) is an N \Theta N matrix indicating the rate
of transmission from every input port to every output port, and
is a function of arrivals, (a k ) and back logged traffic (q k ) at
time k.
In general, the arrival process is non-anticipative, and therefore
the scheduling function is not known in advance. Under
these circumstances, it is impossible to track the fluid policy
perfectly for N ? 2. We will illustrate this by an example.
ffl Example: Consider a 4 \Theta 4 switch. Let
0:25 0:25 0:25 0:25
0:25 0:25 0:25 0:25
Without loss of generality, assume that the tracking policy selects
the following two sub-permutation matrices for the two
first time slots.
Now assume that the fluid policy, because of arrival of some
packets with higher priority, takes the following rate matrix for
the next time slot,
It is clear that the fluid policy finishes four packets on the links
(1,2), (1,4), (2,1), (2,3) in the third time slot, and none of them
is scheduled in the first two time slots by the packetized policy.
Thus, at the end of third time slot, the tracking policy will, at
least miss deadlines of 2 packets.
It is worth mentioning that one of the crucial assumptions
in the single link sharing case is that the departure order of the
packets present in the node is independent of future arrivals.
This is not a valid and justifiable assumption in the N \Theta N
case. The properties of non-anticipative tracking algorithms
for single link case is due to this assumption, so one should
not expect that similar results regarding the non-anticipative
tracking policies hold for N \Theta N switches. Recall that we
have already provided a non-anticipative tracking policy for
the 2 \Theta 2 case. The above example shows that result can not
be generalized to the N \Theta N case.
This fact motivates us to seek for heuristic algorithms with
good but not perfect tracking properties. We are not intending
to provide a complete efficient tracking policy here, but
we want to illustrate what can be the appropriate approach to
design the tracking policies. Basically, we introduce two main
concepts that could be employed in the design of tracking poli-
cies. We then illustrate the simulation results of a simple tracking
policy implemented based on these two concepts.
The two concepts that we elaborate on them in this section
are port based tracking and critical links.
A. Based Tracking
One way to implement a packetized tracking policy can be
based on the weighted matching in the bipartite graphs. In this
method, the weights of the links are the difference between the
fluid policy and the tracking policy total job done on that link.
We call these weights the tracking weights, t ij (k):
We add up all the positive weights of links associated with
every input and output port and assign this weight to the corresponding
vertex in the bipartite graph. We show the vertex
weights by v i and v j ,
If we had added up weight of all links, regardless of their
sign, the ones with negative weight would have lessen the total
weight of the vertex and this will reduce the chance of those
with positive weights (the lagging ones) to be scheduled.
Next, we have to select a criterion function of weights of
nodes in the matching, or perhaps, the ones left out of the
matching that we attempt to optimize. We propose two possible
candidates here.
Summation Criterion: One possible candidate is to maximize
the summation of all scheduled vertices weights. In this
way, at every step we select the set of the nodes with over-all
maximum weights, and therefore the overall lagging of the
tracking policy is minimized.
Prioritization Criterion: The other possible choice would
be a prioritization criterion. Nodes are prioritized based on
their weights, such that the nodes with greater weight have
higher priority. Any node is excluded from the matching if and
only if including it necessitates removing a node with higher
priority from the matching. In this approach the absolute lagging
value of a node is considered essential and the ones with
greater lagging weight are scheduled.
It is not hard to show that both of the above mentioned criteria
are equivalent. This is due to special structure of bipartite
graphs. In the next lemma we prove the equivalence of the two
criteria.
represents the optimal matching graph
based on the summation (priority) criterion. Then, M 1 is optimal
based on the priority (summation) criterion too.
proof: Suppose that M 1 represents the optimal matching
graph based on the summation criterion, but it is not optimal
based on the priority criterion. Let M 2 be the optimal matching
graph based on the priority criterion. Let i 1 be a node in
1 but not in M 2 : Consider the graph
graph consists of links that are in one and only one of the two
graphs M 1 and M 2 ). The maximum degree of a vertex in G is
two, therefore it consists of a union of distinct paths and loops.
should be the first vertex in a path in G, because its degree
is one. We focus on this path.
Two alternative cases are possible, number of nodes in the
path are either even or odd. If it is even the last node in the path
is also a member of M 1 , while all intermediate nodes belong
to both matching. Thus, if we replace the alternative set of
links in the path belonging to M 2 with those belonging to M 1 ,
two more vertices will be included to M 2 , and this contradicts
with the optimality assumption of M 2 .
If number of the vertices in the path is odd, the last vertex
is a member of M 2 only. The last node should not have less
weight than the first vertex i 1 , otherwise M 2 is not the optimal
graph based on the priority criterion. It should not have greater
weight, because this contradicts with the optimality of
Thus, its weight should be equal to i 1 's.
Therefore, for every node in M 1 there is a node in M 2 with
equal weight and vice versa. This means that both graphs are
optimal according to both criteria.
The above argument also provides an algorithm to derive the
optimal matching graph. The algorithm is based on exploring
the augmenting path similar to the case of maximum matching
algorithm. The only difference is that in the maximum matching
case the search results a better matching, whenever a free
node is detected. Here, the search results a better matching, if
or a node with smaller weight than the first node
of the augmenting path is detected. The following algorithm
finds the maximum weighted vertex matching.
Matching Algorithm:
1. Sort the nodes according to their weights.
2. Select the highest weight node not in the matching that is
not selected yet.
3. Search for an augmenting path started from the selected
node in step 2. Search ends successfully either if a free vertex
is detected (an augmenting path is found), or when a node
with smaller weight than the first node is found. Search ends
unsuccessfully if all possible paths are searched and none of
the above mentioned cases are occurred.
4. Repeat steps 2 and 3 until all nodes are selected.
Although bipartite matching algorithms have been extensively
used in scheduling of the switches, they are either based
on the maximum link (edge) weighted matching or maximum
matching algorithms. The problem with maximum link
weighted matching algorithms are their complexity, and the
problem with maximum matching algorithms are their poor
performance. The vertex based matching can be considered as
an intermediate solution.
We can enhance the performance of a scheduling algorithm
based on vertex maximum matching by selecting an appropriate
initial set of eligible links, which will be provided to the ar-
biter, and by modifying the weights of the nodes that are more
urgent to be scheduled. Basically, we consider a two stage
scheduling. At the first step a set of eligible links for scheduling
and weights of vertices are derived. In the next step, the
arbiter select the links in the matching based on the optimum
vertex weighted matching algorithm described above.
One way to select an eligible set of links in the tracking
problem is based on the notion of critical links. The critical
links are those that are urgent to be scheduled, and if they are
not scheduled the tracking policy will loose the track of the
fluid policy. If such a link is detected all non-critical links
sharing same input or output port with the critical link will be
excluded. The precise definition of critical nodes is given in
the next section.
B. Critical Ports and Links
A critical port is a port that should be scheduled in the next
time slot, in order not to miss a deadline in the future. As
an example suppose that we are at the beginning of k \Gamma th
time slot. Let say that there are two packets one from node i
to j 1 and the other from node i to j 2 both with deadline k
1. Note that if we do not schedule any of these packets, no
deadline will be missed in the k \Gamma th time slot. However,
we will definitely miss a deadline at the subsequent time slot,
1. We say that node i is a critical node, and links (i;
and are associated critical links. In general, sufficient
condition for a port to be critical at time k is to have at least
packets with deadlines less than or equal to k + p. Denote
that we are stating a sufficient condition. In other words, there
might be some critical ports that can not be detected well in
advance using this criterion.
In case of the tracking policy, the deadline of packets are
implicitly given, and are equal to the end of the time slot that
the packet departs the switch under fluid policy. We may not
know the deadlines well in advance, since the future rate of every
link under the fluid policy depends on the future arrivals,
which are non-anticipative. Nevertheless, we may have an
approximate deadline for every packet based on back-logged
traffic or the average arrival rate of the links (for instance,
based on a (oe; ae) flow model for ATM traffic). Also, in some
approaches, a constant rate is assigned to a session, regardless
of the arrival process, such as in multi-periodic messages in
TDMA-SS or rate controlled service disciplines. In the case
of networks if no packet from the assigned session is available
at time of scheduling, packets from other kind of traffic (for
instance, the best effort traffic) can use the available slot [14].
The next issue is to set an appropriate inspection horizon.
The inspection horizon is the number of time slots in future
that we inspect to detect the critical nodes. Based on our ex-
periments, we found that inspection horizon around five is ade-
quate. In fact, there is a trade-off involved here, increasing the
inspection horizon helps us in detecting more critical links, but
it increases the complexity of the algorithm as well.
After detecting a critical port, we know that we have to
schedule one of the critical links associated with that, otherwise
we will miss the deadlines. Thus, we remove all links associated
with critical port, which are not critical. In this way,
the chance for the scheduler to arbitrate one of the critical links
is increased. We also increase the weight of the critical ports
with a constant, so that their weight become greater than all
non-critical ports. Therefore, these nodes are prioritized by
the scheduler.
The critical nodes detecting algorithm can be described as
Critical Node Detecting Algorithm:
1. For every node i and j do steps 2 and 3;
2. Calculate number of Packets that should be sent in the next
time slots.
3. If the number of packets that should be sent in the next
l time steps is equal to l; the corresponding node is critical.
Moreover, the associated links that should at least send one
packet in the next l time slots are critical links.
The whole scheduling process of a switch can be divided
into two stages. In the first stage, weight of the ports are calculated
and the criticality of the ports are investigated. These
computation can be done in parallel for different ports of the
switch, and no interaction between them is necessary. In the
next stage the computed weights for the nodes and the eligible
links for every node are provided to the arbiter processor.
One of the main concerns in high speed switches is the volume
of information required to be exchanged between different
cards of the switch, namely the signaling scalability of the
algorithm. In the link weighted matching arbiters, weights associated
to every link should be sent to the arbiter. Thus, the
exchanging information is in the order of N 2 . In our approach
the weights of the nodes and the eligible links (one bit information
per link) should be sent to the arbiter, which is in the
order of N .
Finally we provide a simple scheduling algorithm based on
the algorithms described above.
Heuristic Scheduling Algorithm:
1. At every time slot k do the following steps.
2. Calculate weights of the nodes using (10),(11).
3. Insert all links with positive weights in eligible links set.
4. Check for critical nodes and their associated critical links.
5. Increase weight of every critical node, such that its weight
exceed all non-critical node's weights. Moreover, remove all
non-critical links of the critical nodes from the eligible links
set.
6. Pass weights of the nodes and the critical links set to the
matching algorithm. The result of the matching is the schedule
for time slot k.
This algorithm will be used in next section for scheduling
in a TDMA-SS, and the simulation results will be illustrated.
C. Simulation Results
In this section we provide some primary results regarding
the heuristic algorithm. The thorough investigation and analysis
of the heuristic algorithms are still continuing. How-
ever, the primary results appear promising. We have used the
heuristic algorithm for scheduling of a TDMA-SS with multi-
periodic messages. Suppose that messages are all periodic,
and the objective is to schedule every period of a message before
the next one arrives. Thus, if the period of message M i ; is
slots to schedule every packet of that mes-
sage. The fluid policy that assigns the constant rate of 1=p i to
message M i accomplishes this task.
The messages are generated randomly between input and
output pairs with equal probability. The period of the message
is selected uniformly in the range of f3; :::; 8g. The messages
are selected such that in every experiment the utility of every
node is in [0:90; 0:97]. In all experiments the average utility
I
utility l=0 l=1 l=2 late %
is around 0.92. The critical nodes are inspected based on the
constant rate, and the inspection horizon is set to N . We have
done the simulation for different switch sizes, and for every
switch size we generated 100 different message sets and investigate
the performance of the algorithm.
In our model there can be different messages from the same
input to the same output, but the scheduler works with the aggregate
rate of all these messages. This is important, because
the complexity of the arbiter does not depend on the number
of messages, which can be large. To schedule different messages
from the same input to output an EDF scheduler is maintained
in every input port. A packet is considered to be on time
(zero latency) if it is scheduled within a period interval after
its arrival. The latency of a packet, l is equal to k ? 0 if it is
scheduled k time units after its deadline. The result of the simulations
are given in the table below. As far as the scheduler is
concerned, it is working with constant rate traffics. Therefore,
the results of this simulation are also applicable in the case of
fixed rate scheduler for the network switches
The number of packets with different latencies are indicated
in the table. The percentage of late packets is also shown in
the last column. Every row correspond to a different switch
size. The number of packets missing their deadlines are less
than one percent, and the maximum latency is two time units
in all cases. We believe that in many of the applications this
is an acceptable performance. In many applications the excessive
delay imposed by the arbiter is tolerable. In fact, in
some of the applications we are allowed to miss some packets,
so we can neglect the packets that miss their deadline. This
in return aids us in on time scheduling of the other packets.
The other important issue is the size of the switch. Many of
the heuristic algorithms proposed degrades as the size of the
switch increases [13]. In our case, we did not observe any
such degradation.
VI.
AND CONCLUSION
In this paper the notion of fluid policies and tracking policies
are extended to the N \Theta N switches. These concepts
are both useful in the high speed networks, where they aid us
in providing guaranteed service to different applications, and
in TDMA-SS with multi-periodic messages. The existence of
tracking policy is proved for the special case of 2 \Theta 2 switches.
For the general case of N \Theta N switches a heuristic algorithm
is provided. The heuristic algorithm is mainly presented to
confirm the validity of two important notions and measures in
tracking policies, the port tracking and critical node concepts.
The existence of tracking policy for the N \Theta N is still an
open question. However, based on the results provided here
for the special cases and the heuristic algorithm, we think that
such a tracking policy always exist. However, we have shown
that it is impossible to have a perfect tracking policy when
the fluid policy is non-anticipative. This fact together with
the complexity issue justify the need for better and less complicated
heuristic tracking policies, with good but not perfect
performances.
--R
A calculus for network delay.
A calculus for network delay.
Analysis and simulation of a fair queueing algorithm.
Optimal multiplexing on single link: Delay and buffer requirements.
Efficient network QoS provisioning based on per node traffic shaping.
Scheduling multirate periodic traffic in a packet switch Shaping.
A generalized processor sharing approach to flow control in integrated services networks: The single node case.
A generalized processor sharing approach to flow control in integrated services networks: The multiple node case.
Scheduling real-time messages in packet-switched networks
--TR
Data networks
Analysis and simulation of a fair queueing algorithm
Introduction to algorithms
A generalized processor sharing approach to flow control in integrated services networks
A generalized processor sharing approach to flow control in integrated services networks
Efficient network QoS provisioning based on per node traffic shaping
EDD Algorithm Performance Guarantee for Periodic Hard-Real-Time Scheduling in Distributed Systems
Scheduling real-time messages in packet-switched networks
--CTR
Yong Lee , Jianyu Lou , Junzhou Luo , Xiaojun Shen, An efficient packet scheduling algorithm with deadline guarantees for input-queued switches, IEEE/ACM Transactions on Networking (TON), v.15 n.1, p.212-225, February 2007 | QoS provisioning;input-queued switching;scheduling |
504641 | performance over end-to-end rate control and stochastic available capacity. | Motivated by TCP over end-to-end ABR, we study the performance of adaptive window congestion control, when it operates over an explicit feedback rate-control mechanism, in a situation in which the bandwidth available to the elastic traffic is stochastically time varying. It is assumed that the sender and receiver of the adaptive window protocol are colocated with the rate-control endpoints. The objective of the study is to understand if the interaction of the rate-control loop and the window-control loop is beneficial for end-to-end throughput, and how the parameters of the problem (propagation delay, bottleneck buffers, and rate of variation of the available bottleneck bandwidth) affect the performance.The available bottleneck bandwidth is modeled as a two-state Markov chain. We develop an analysis that explicitly models the bottleneck buffers, the delayed explicit rate feedback, and TCP's adaptive window mechanism. The analysis, however, applies only when the variations in the available bandwidth occur over periods larger than the round-trip delay. For fast variations of the bottleneck bandwidth, we provide results from a simulation on a TCP testbed that uses Linux TCP code, and a simulation/emulation of the network model inside the Linux kernel.We find that, over end-to-end ABR, the performance of TCP improves significantly if the network bottleneck bandwidth variations are slow as compared to the round-trip propagation delay. Further, we find that TCP over ABR is relatively insensitive to bottleneck buffer size. These results are for a short-term average link capacity feedback at the ABR level (INSTCAP). We use the testbed to study EFFCAP feedback, which is motivated by the notion of the effective capacity of the bottleneck link. We find that EFFCAP feedback is adaptive to the rate of bandwidth variations at the bottleneck link, and thus yields good performance (as compared to INSTCAP) over a wide range of the rate of bottleneck bandwidth variation. Finally, we study if TCP over ABR, with EFFCAP feedback, provides throughput fairness even if the connections have different round-trip propagation delays. | Introduction
bottleneck link
xmitter
recvr
ABR
recvr
end-to-end ABR
source
ABR
Figure
1: The network under study has a large round trip delay and a single congested
link in the path of the connection. The ABR connection originates at the host and ends
at the destination node. We call this scenario "end-to-end" ABR.
The Available Bit Rate (ABR) service in Asynchronous Transfer Mode (ATM)
networks is primarily meant for transporting best-effort data traffic. Connections that
use the ABR service (so called ABR sessions) share the network bandwidth left over after
serving CBR and VBR traffic. This available bandwidth varies with the requirements of
the ongoing CBR/VBR sessions, hence the switches carrying ABR sessions implement
a rate-based feedback control for congestion avoidance. This control causes the ABR
sources to reduce or increase their cell transmission rates depending on the availability
of bandwidth in the network. As the ABR service does not guarantee end-to-end reliable
transport of data to the applications above it, an additional protocol is needed between
the application and the ATM layer to ensure reliable communication. In most deploy-
IP
host
IP
host
ATM
wide area network
router/switch router/switch
IP
IP
ABR
IP
IP
ABR
Figure
2: TCP/IP over edge-to-edge ATM/ABR, with the TCP connection split into one
edge-to-edge TCP over ABR connection, and two end-to-edge TCP over IP connections.
The edge switch/router regulates the flow of TCP acknowledgements back to the TCP
senders.
ments of ATM networks, the Internet's Transport Control Protocol (TCP) is used to
ensure end-to-end reliability for data applications.
TCP, however, has its own adaptive window based congestion control mechanism
that serves to slow down sources during network congestion. Hence, it is very important
to know whether the adaptive window control at the TCP level, and the rate-based
control at the ABR level interact beneficially from the point of view of application level
throughput. In this paper, we consider the situation in which the ATM network extends
upto the TCP endpoints; i.e., end-to-end ABR (as opposed to edge-to-edge ABR), see
Figure
1. Our results also apply to TCP over edge-to-edge ABR if the end-to-end TCP
connection comprises a tandem of an edge-to-edge TCP connection, and two edge-to-end
connections, with TCP spoofing being done at the edge-devices (see Figure 2).
Consider the hypothetical situation in which the control loop has zero delay. In
such a case, the ABR source of a session (i.e., the ATM network interface card (NIC)
at the source node) will follow the variations in the bandwidth of the bottleneck link
without delay. As a result, no loss will take place in the network. The TCP window will
grow, and once the TCP window size exceeds the window required to fill the round trip
pipe, the packets will be buffered in the source buffer. Hence, we can see that congestion
is effectively pushed to the network edge. As the source buffer would be much larger than
the maximum window size, the TCP window will remain fixed at the maximum window
size and congestion control will become a purely rate based one. If ABR service was not
used, however, TCP would increase its window, overshoot the required window size, and
then due to packet loss, would again reduce the window size. Hence, it is clear that, for
zero delay in the control loop, end-to-end ABR will definitely improve the throughput of
TCP.
When variations in the bottleneck bandwidth do occur, however, and there is
delay in the ABR control loop, it is not clear whether there will be any improvement.
In this paper, we study TCP over end-to-end ABR, and consider a TCP connection in a
network with large round-trip delay; the connection has a single bottleneck link with time
varying capacity. Current trends seem to indicate that at least in the near future, ATM
will remain only a WAN transport technology. Hence ATM services will only extend
to the network edge, whereas TCP will continue to be used as the end-to-end protocol,
with interLAN IP packets being transported over ATM virtual circuits. When designing
a wide-area intranet based on such IP over ATM technology one can effectively control
congestion in the backbone by transporting TCP/IP traffic over ABR virtual circuits. A
connection between hosts on two LANs would be split into a TCP over ABR wide-area
edge-to-edge connection, and two end-to-edge connections over each of the LANs;
see
Figure
2. The edge devices would then control the TCP end-points on their respective
LANs by regulating the flow of acknowledgements (ACKs) back to the senders. In this
framework, our results would apply to the edge-to-edge TCP over ABR connection.
Many simulation studies have been carried out to study the interaction between
the TCP and ATM/ABR control loops. In [7], the authors study the effect of running
large unidirectional file transfer applications on TCP over ABR. An important result
from their study is that cell loss ratio (CLR) is not a good indicator of TCP perfor-
mance. They also show that when maximum throughput is achieved, the TCP sources
are rate limited by ABR rather than window limited by TCP. Reference [8] reports a
study of the buffering requirements for zero cell loss for TCP over ABR. It is shown that
the buffer capacity required at the switch is proportional to the maximum round trip
time of all the VCs through the link, and is independent of the number of sources (or
VCs). The proportionality factor depends on the switch algorithm. In further work, in
[9], the authors introduce various patterns of VBR background traffic. The VBR background
traffic introduces variations in the ABR capacity and the TCP traffic introduces
variations in the ABR demand.
In [3], the authors study the effect of ATM/ABR control on the throughput and
fairness of running large unidirectional file transfer applications on TCP-Tahoe and TCP-Reno
(see [15]) with a single bottleneck link with a static service rate. The authors in
[12] study the performance of TCP over ATM with multiple connections, but with a
static bottleneck link. The paper reports a simulation study of the relative performances
of the ATM ABR and UBR service categories in transporting TCP/IP flows through an
edge-to-edge ATM (i.e., the host nodes are not ATM endpoints) network. Their summary
conclusion is that there does not seem to be strong evidence that for TCP/IP workloads
the greater complexity of ABR pays off in better TCP throughputs. Their results are,
however, for edge-to-edge ABR; they do not comment on end-to-end (i.e., the hosts have
an ATM NIC) ATM which is what we study in this paper.
All the studies above are primarily simulation studies, and analytical work on
TCP over ABR does not seem to exist in the literature. In this paper, we make the
following contributions:
(i) We develop an analytical model for a TCP connection over explicit rate ABR
when there is a single bottleneck link with time varying available capacity. In the
analytical model we assume that the explicit rate feedback is based on the short
term average available capacity; we think of this as instantaneous capacity feedback,
and we call the approach INSTCAP feedback.
(ii) We use a test-bed to validate the analytical results. This test-bed implements
a hybrid simulation comprising actual Linux TCP code, and a network emula-
tion/simulation implemented in the IP loopback code in the Linux kernel.
(iii) We then develop an explicit rate feedback that is based on a longer term history
of the bottleneck rate process. The computation is motivated from (though it
is not the same as) the well known concept of effective capacity (derived from
large deviations analysis of the bottleneck queue process). We call this EFFCAP
feedback. EFFCAP is more effective in preventing loss at the bottleneck buffers.
Since the resulting model is hard to analyse, the results for EFFCAP feedback are
all obtained from the hybrid simulator mentioned above. Our results show that
different types of bottleneck bandwidth feedbacks are needed for slowly varying
bottleneck bandwidth, rapidly varying bottleneck bandwidth and the intermediate
regime. EFFCAP feedback adapts itself to the rate of bandwidth variation. We then
develop guidelines for choosing two parameters that arise in the on-line calculations
of EFFCAP.
(iv) Finally, we study the performance of two TCP connections that pass through the
same bottleneck link, but have different round trip propagation delays. Our objective
here is to determine whether TCP over ABR is fairer than TCP alone, and
under what circumstances. In this study we only use EFFCAP feedback.
The paper is organized as follows. In Section 2, we describe the network model
under study. In Section 3 we develop the analysis of TCP over ABR with INSTCAP
Segmentation
Buffer
Rate Feedback
HOST COMPUTER
ABR
adaptive rate
server bottleneck link
Figure
3: The segmentation buffer of the system under study is in the host NIC card
and extends into the host's main memory. The rate feedback from the bottleneck link is
delayed by one round trip delay.
feedback, and of TCP alone. In Section 4, we develop the EFFCAP algorithm; TCP
over ABR with EFFCAP feedback is only amenable to simulation. In Section 5, we
present analysis results for INSTCAP feedback, and simulation results for INSTCAP
and EFFCAP. The performance of INSTCAP and EFFCAP feedbacks are compared. In
Section 6, we study the choice of two parameters that arise in EFFCAP feedback. In
Section 7 we provide simulation results for two TCP connections over ABR with EFFCAP
feedback. Finally, in Section 8, we summarise the observations from our work.
2 The Network Model
Consider a system consisting of a TCP connection between a source and destination node
connected by a network with a large propagation delay as shown in Figure 1. The TCP
congestion control does not implement fast-retransmit, and hence must time out for loss
recovery. We assume that only one link (called the bottleneck link) causes significant
queueing delays in this connection, the delays due to the other links being fixed (i.e.,
only fixed propagation delays are introduced due to the other links). A more detailed
model of this is shown in Figure 3. The TCP packets are converted into ATM cells and
are forwarded to the ABR segmentation buffer. This buffer is in the network interface
card (NIC) and extends into the main memory of the computer. Hence, we can look upon
this as an infinite buffer. The segmentation buffer server (also called the ABR source)
gets rate feedback from the network. The ABR source service rate adapts to this rate
feedback.
The bottleneck link buffer represents either an ABR output buffer in an ATM
switch (in case of TCP over ABR), or a router buffer (in case of TCP alone). The
network carries other traffic (CBR/VBR) which causes the bottleneck link capacity (as
seen by the connection of interest) to vary with time. The bottleneck link buffer is finite
which can result in packet loss due to buffer overflow when rate mismatch between the
source rate and the link service rate occurs. In our model, we will assume that a portion
of the link capacity is reserved for best-effort traffic, and hence is always available to the
TCP connection. In the ATM/ABR case such a reservation would be made by using the
Minimum Cell Rate (MCR) feature of ABR, and would be implemented by an appropriate
link scheduling mechanism. Thus when guaranteed service traffic is backlogged at this
link, then the TCP connection gets only the bandwidth reserved for best-effort traffic,
otherwise it gets the full bandwidth. Hence a two state model suffices for the available
link rate.
In the first part of our study, we assume that the ABR feedback is an instantaneous
rate feedback scheme; i.e., the bottleneck link periodically feeds back its short term
to the ABR source. This feedback reaches after one round trip
propagation delay. The ABR source adapts to this value and transmits the cells at this
rate.
3 TCP/ABR with Instantaneous Capacity Feedback
At time t, the cells in the ATM segmentation buffer at the source are transmitted at a time
dependent rate S \Gamma1
which depends on the ABR rate feedback (i.e., S t is the service time
of a packet at time t). The bottleneck has a finite buffer B max and has time dependent
service rate R \Gamma1
t packets=sec which is a function of an independent Markov chain. In our
analysis, we assume that there is a 2 state Markov chain modulating the channel. In each
state, the bottleneck link capacity is deterministic. If the buffer is full when a cell arrives
to it, the cell is dropped. In addition, we assume that all cells corresponding to that TCP
packet are dropped. This assumption allows us to work with full TCP packets only; it
is akin to the Partial Packet Discard proposed in [13]. If the packet is not lost, it gets
serviced at rate R \Gamma1
(assumed constant over the service time of the packet), and reaches
the destination after some deterministic delay. The destination ATM layer reassembles
the packet and delivers it to the TCP receiver. The TCP receiver responds with an ACK
TCP/ABR Transmitter Bottleneck Link Propagation Delay
Figure
4: Queueing model of TCP over end-to-end ABR
(acknowledgement) which, after some delay (propagation processing delay) reaches the
source. The TCP source responds by increasing the window size.
The TCP window evolution can be modeled in several ways (see [11], [10]). In
this study, we model the TCP window adjustments in the congestion avoidance phase
(for the original TCP algorithm as proposed in [4] by Van Jacobson) probabilistically
as follows: every time a non-duplicate ACK (an acknowledgement that requests for a
packet that has not been acknowledged earlier) arrives at the source, the window size W t
increases by one with probability 1
On the other hand, if a packet is lost at the bottleneck link buffer, the ACK
packets for any subsequently received packets continue to carry the sequence number of
the lost packet. Eventually, the source window becomes empty, timeout begins and at the
expiry of the timeout, the threshold window W th
t is set to half the maximum congestion
window achieved after the loss, and the next slow start begins.
3.1 Queueing Network Model
Figure
4 is a closed queueing network representation of the TCP over ABR session. We
model the TCP connection during the data transfer phase; hence the data packets are
assumed to be of fixed length. The buffer of the segmentation queue at the source host
is assumed to be infinite in size. There are as many packets in this buffer as the number
of untransmitted packets in the window. The service time at this buffer models the time
taken to transmit an entire TCP packet worth of ATM cells. Owing to the feedback
rate control, the service rate follows the rate of the bottleneck link. We assume that
the rate does not change during the transmission of the cells from a single TCP packet.
The service time (or equivalently, the service rate) follows the bottleneck link service rate
with a delay of \Delta units of time, \Delta being the round trip (fixed) propagation delay.
The bottleneck link is modeled as a finite buffer queue with deterministic packet
service time with the service time (or rate) Markov modulated by an independent Markov
chain on two states 0 and 1; the service rate is higher in state 0. The round trip propagation
delay \Delta is modeled by an infinite server queue with service time \Delta. Notice that
various propagation delays in the network (the source-bottleneck link delay, bottleneck
link-destination delay and the destination-source return path delay) have been lumped
into a single delay element (See Figure 4). This can be justified from the fact that even
if the source adapts itself to the change in link capacity earlier than one round trip time,
the effect of that change will be seen only after a round trip time at the bottleneck link.
With "packets" being read as "full TCP packets", let
A t be the number of packets in the segmentation buffer at the host at time t
t be the number of packets in the bottleneck link buffer at time t
D t be the number of packets in the propagation queue at time t
R t be the service time of a packet at the bottleneck link; R t 2 fr g. We take
. Thus, all times are normalized to the bottleneck link
packet service time at the higher service rate.
S t be the service time of a packet at the source link. S t follows R t with delay \Delta, the
round trip propagation delay, i.e., S g. Since the instantaneous
rate of the bottleneck link is fed back, we call this the instantaneous rate
feedback scheme. (Note that, in practice, the instantaneous rate is really the average
rate over a small window; that is how instantaneous rate feedback is modelled
in our simulations to be discussed later; we will call this feedback INSTCAP.)
3.2 Analysis of the Queueing Model
Consider the vector process
Slow start phase
no loss Window reaches w & ceases to grow
Coarse timeout occurs
log wLoss epoch
round trip propagation delay
Figure
5: The embedded process
This process is hard to analyze directly. Instead, we study an embedded process, which
with suitable approximations, turns out to be analytically tractable.
consider the embedded process
f ~
with ~
use the obvious notation ~
In the following analysis, we will make the following assumptions :
(i) We assume that the rate modulating Markov chain is embedded at the epochs
(ii) The source adapts immediately to the explicit rate feedback that it receives. This
is true for the actual ABR source behaviour (as specified by the ATM Forum [1]) if
the rate decreases. In the actual ABR source behaviour, an increase in the explicit
rate results in an exponential growth of the source rate and not a sudden jump.
We, however, assume that even an increase in the source rate takes place with a
sudden jump.
(iii) There is no loss in the slow start phase of TCP. In [11], the authors show that
loss will occur in the slow start phase if Bmax \Delta
1even if no rate change occurs in
the slow start phase. However, for the case of TCP over ABR, as the source and
bottleneck link rates match, no loss will occur in this phase as long as rate changes
do not occur during slow-start. Hence, this assumption is valid for the case of TCP
alone only if Bmax \Delta
Observe that packets in the propagation delay queue (see Figure 4) at t k will have
departed from the queue by t k+1 . This follows as the service time is deterministic, equal
to \Delta, and t \Delta. Further, any new packet arriving to the propagation delay
queue during still be present in that queue at t k+1 . On the other hand,
if loss occurs due to buffer overflow at the bottleneck link in (t k ; t k+1 ), we proceed as
follows. Figure 5 shows a packet loss epoch in the interval This is the first
loss since the last time that TCP went through a timeout and recovery. At this loss
epoch, there are packets in the bottleneck buffer, and some ACKs "in flight" back to the
transmitter. These ACKs and packets form an unbroken sequence, and hence will all
contribute to the window increase algorithm at the transmitter (we assume that there is
no ACK loss in the reverse path). The transmitter will continue transmitting until the
window is exhausted and then will start a coarse timer. We assume that this timeout will
occur in the interval (t k+2 ; t k+3 ) (see Figure 5), and that recovery starts at the embedded
epoch t k+3 . Thus, when the first loss (after recovery) occurs in an interval then, in our
model, it takes two more intervals to start recovery.
s). Note that, since no loss has occurred (since last
recovery) until t k , therefore, the TCP window at t k is a d. Now, given ~
assuming that
(i) packet transmissions do not straddle the embedded epochs, and
(ii) packets arrive back-to-back into the segmentation buffer during any interval (t
(This leads to a conservative estimate of TCP throughput. See the discussion following
Figure 8 below.)
we can find the probability that a loss occurs during (t k ; t k+1 ), and the distribution of
the TCP window at the time that timeout starts. Suppose this window is w, then the
congestion avoidance threshold in the next recovery cycle will be m := d we. It will
take approximately dlog 2 me round trip times (each of length \Delta) to reach the congestion
avoidance threshold. Assuming that no loss occurs during the slow start phase (this
is true if B max is not too small [11]), at k me, we can determine the
distribution of ~
. With the above description in mind, define
For k 1,
loss occurs
in (T
loss occurs
in (T
the loss window is w
and
. For a particular realization of X k , we will write
(a; b; d;
loss occurs during (T k ;
and
for k 0 (7)
Recalling the evolution of fT
We now proceed to analyze the evolution of fX k ; k 0g.
The bottleneck link modulating process, as mentioned earlier, is a two state
Markov chain embedded at taking values in fr g. Let p 01 be the
transition probabilities of the Markov chain. Notice that S
also a Discrete time Markov chain (DTMC). Let Q be the transition probability matrix
for (R k
g.
As explained above, given X
is
For particular can be determined using the probabilistic
model of window evolution during the congestion avoidance phase. Consider the evolution
of A k , the segmentation buffer queue process. If no loss occurs in (T k ; T k+1 ),
s
where N k is the increment in the TCP window in the interval, and is characterized as
follows: During (T k ; T k+1 ), for each ACK arriving at the source (say, at time t), the
window size increases by one with probability 1
. However, we further assume that
the window size increases by one with probability 1
(where
the probability does not change after every arrival but, instead, we use the window at
Then, with this assumption, due to d arrivals to the source queue, the window size
increases by the random amount N k . We see that for d ACKs, the maximum increase
in window size is d. Let us define ~
N k such that ~
a+b+d ). Then ,
d)). We can similarly get recursive relations for B k+1 and
Consider an example for explaining the evolution of X k to X k+1 . Let X
(a; b; d; 2; 1), i.e., the source service rate is twice that of the bottleneck link server, and
loss can take place. Further, d \Delta packets are in flight. These d packets arrive at the
source queue, increase the window size by N k , and hence, min(a+d+N k ; \Delta) packets are
transmitted into the bottleneck buffer (at most \Delta packets can be transmitted at rate 1
during an interval of length \Delta). If
loss will occur. For a given b and d, we can compute the range of N k for which
Equation 11 is satisfied. Suppose that loss occurs for N k x( 0). Then,
ix
Let us define
Prfwindow achieved is w loss occurs in (T k
We can compute this quantity in a manner similar to that outlined for the computation
of p(x).
When no loss occurs, U k is given by Equation 8. When loss occurs, given X
the next cycle begins after the recovery from loss which includes the
next slow start phase. Suppose that the window was 2m when loss occured. Then, the
next congestion avoidance phase will begin when the TCP window size in the slow start
phase after loss recovery reaches m. This will take dlog 2 me cycles. At the end of this
period, the state of various queues is given by m). The channel state
at the start of the next cycle can be described by the transition probability matrix of the
modulating Markov chain. Hence,
me with probability p(x):ff(x; 2m) (14)
and
From the above discussion, it is clear that given X k , the distribution of X k+1 can
be computed without any knowledge of its past history. Hence, fX k ; k 0g is a Markov
chain. Further, given T k and X k , the distribution of T k+1 can be computed without any
knowledge of its past history. Hence, the process f(X k ; T k ); k 0g is a Markov Renewal
Process (MRP) (See [17]). It is this MRP that is our model for TCP/ABR.
3.3 Computation of Throughput
Given the Markov Renewal Process f(X we associate with the kth cycle
that accounts for the successful transmission of packets. Let (x)
denote the stationary probability distribution of the Markov chain fX k ; k 0g. Denote
by fl TCP=ABR , the throughput of TCP over ABR. Then, by the Markov renewal-reward
theorem ([17]), we have
denotes the expectation w.r.t. the stationary distribution (x).
The distribution (x) is obtained from the transition probabilities in Section 3.2.
We have
x
is the expected reward in a cycle that begins with
B(x) and D(x) the values of A, B and D in the state x. Then, in an interval (T k
where no loss occurs, we take
Thus for lossless intervals the reward is the number of acknowledgements returned to the
source; note that this actually accounts for packets successfully received by the receiver
in previous intervals.
Loss occurs only if the ABR source is sending at the high rate and the link is
transmitting at the low rate. When loss occurs in (T k \Delta), we need to account
for the reward in the interval starting from T k until T k+1 when slow-start ends. Note
that at T k the congestion window is A(x) D(x). The first component of the
reward is D(x); all the B(x) buffered packets will result in ACKs, causing the left edge
of the TCP window to advance. Since the link rate is half the source rate, loss will
occur when 2(B packets enter the link buffer from the ABR source; these
packets succeed and cause the left edge of the window to further advance. Further, we
assume that the window grows by 1 in this process; hence, following the lost packet,
at most A(x) packets can be sent. Thus we bound the reward
before timeout occurs by D(x)
loss and timeout, the ensuing slow-start phase successfully
transfers some packets (as described earlier). Hence, an upper bound on the "reward"
when loss occurs is A(x)
the summation index w being over all window sizes. Actually, this is an optimistic reward
as some of the packets will be transmitted again in the next cycle even though they have
successfully reached receiver. We could also have a conservative accounting, where we
assume that if loss occurs, all the packets transmitted in that cycle are retransmitted in
future cycles. In the numerical results, we shall compare the throughputs with these two
bounds. It follows that
x
Similarly we have
x
where U(x) is the mean cycle length when x at the beginning of the cycle. From
the analysis in Section 3.2, it follows that
Hence,
x
3.4 TCP without ATM/ABR
Without the ABR rate control, the source host would transmit at the full rate of its
link; we assume that this link is much faster than the bottleneck link and model it as
Constant Rate
Arrival Process
Figure
server queue with time varying service capacity, being fed by a constant
rate source.
infinitely fast. The system model is then very similar to the previous case, the only
difference being that we have eliminated the segmentation buffer. The assumptions we
make in this analysis, however, lead to an optimistic estimate of the throughput. The
analysis is analogous to that provided above.
4 TCP/ABR with Effective Capacity Feedback
We now develop another kind of rate feedback. To motivate this approach, consider a
finite buffer single server queue with a stationary ergodic service process (see Figure 6).
Suppose that the ABR source sent packets at a constant rate. Then, we would like to
find that rate which maximizes TCP throughput. Hence, let the input process to this
queue be a constant rate deterministic arrival process. Given the buffer size B max and a
desired Quality of Service (QoS) (say a cell loss probability ffl), we would like to know
the maximum rate of the arrival process such that the QoS guarantee is met.
We look at a discrete time approach to this problem (see [16]); in practice, the
discrete time approach is adequate as the rate feedback is only updated at multiples of
some basic measurement interval. Consider a slotted time queueing model where we can
service C i packets in slot i and the buffer can hold B max packets. fC i g is a stationary and
ergodic process; let EC be the mean of the process and C min be the minimum number of
packets that can be served per slot. A constant number of packets (denoted by fl) arrive
in each slot. We would like to find fl max such that the desired QoS (cell loss probability
ffl) is achieved. In [16], the following asymptotic condition is considered. If X is a
random variable that represents the stationary queue length, then, with
lim
log
i.e., for large B max the loss probability is better then e \GammaffiB max . It is shown that this
All logarithms are taken to the base e
performance objective is met if
lim
log Ee \Gammaffi
For the desired QoS we need
. Let us denote the expression on the right hand
side of Equation 25 as \Gamma eff . Then, \Gamma eff can be called the effective capacity of the server.
which is what we intuitively
expect. For all other values of ffl, \Gamma eff 2 (C min ; EC).
Let us apply this effective capacity approach to our problem. Let the ABR source
(see
Figure
adapt to the effective bandwidth of the bottleneck link server. In our anal-
ysis, we have assumed a Markov modulated bottleneck link capacity, changes occurring
at most once every \Delta units of time, \Delta being the round trip propagation delay. Hence,
we have a discrete time model with fl being the number of packet arrivals to the bottle-neck
link in \Delta units of time and C i being the number of packets served in that interval.
We will compute the effective capacity of the bottleneck link server using Equation 25.
However, before we can do this, we still need to determine the desired QOS, i.e, ffl or
equivalently, ffi.
To find ffi, we conduct the following experiment. We let the ABR source transmit
at some constant rate, say ; For a given Markov modulating process, we
find that which maximizes TCP throughput. We will assume that this is the effective
capacity of the bottleneck link. Now, using Equation 25, we can find the smallest ffi that
results in an effective capacity of this . If the value of ffi so obtained turns out to be
consistent for a wide range of Markov modulating processes, then we will use this value
of ffi as the QoS requirement for TCP over ABR.
The above discrete time queueing model for TCP over ABR can be analyzed in
a manner analogous to that in Section 3.2. We find from the analysis that for several
sets of parameters, the value of ffi which maximizes TCP throughput is consistently very
large (about 60-70). This is as expected since TCP performance is very sensitive to loss.
4.1 Algorithm for Effective Capacity Computation
In practice, we do not know a priori the statistics of the modulating process. Hence,
we need an on-line method of computing the effective bandwidth. In this section, we
develop an algorithm for computing the effective capacity of a time varying bottleneck
link carrying TCP traffic. The idea is based on Equation 25, and the observation at the
end of the previous section that ffi is very large.
Averages
time
Figure
7: Schematic of the windows used in the computation of the effective capacity
bsed rate feedback.
We take the measurement interval to be s time units; s is also the update interval
of the rate feedback. We shall approximate the expression for effective bandwidth in
Equation 25 by replacing n !1 by a large finite M .
log Ee \Gammaffi
What we now have is an effective capacity computation performed over Ms units of time.
We will assume that the process is ergodic and stationary. Hence, we approximate the
expectation by the average of N sets of samples, each set taken over Ms units of time.
Note that since the process is stationary and ergodic, the N intervals need not be disjoint
for the following argument to work. Then, denoting C ij as the ith link capacity value
in the jth block of M intervals (j 2 f1; Ng), we have
logN
e \Gammaffi
log 1
log
e \Gammaffi
As motivated above, we now take ffi to be large. This yields
log e
\Gammaffi(min
We notice that this essentially means that we average capacities over N sliding blocks,
each block representing Ms units of time, and feed back the minimum of these values (see
Figure
7).
The formula that has been obtained (Equation 31) has a particularly simple form.
The above derivation should be viewed more as a motivation for this formula. The
formula, however, has independent intuitive appeal; see below. In the derivation it was
required that M and N should be large. We can, however, study the effect of the choice
of M and N (large or small) on the performance of effective capacity feedback. This
is done in Section 6, where we also provide guidelines for selecting values of M and N
under various situations.
The formula in Equation 31 is intuitively satisfying; we will call it EFFCAP
feedback. Consider the case when the network changes are very slow. Then, all N values
of the average capacity will be the same, and each one will be equal to the capacity
of the bottleneck link. Hence, the rate that is fed back to the ABR source will be
the instantaneous free capacity of the bottleneck link; i.e., in this situation EFFCAP
is the same as INSTCAP. When the network variations are very fast, EFFCAP will
be the mean capacity of the bottleneck link which is what should be done to get the
best throughput. Hence, EFFCAP behaves like INSTCAP for slow network changes and
adapts to the mean bottleneck link capacity for fast changes. For intermediate rates of
changes, EFFCAP is (necessarily) conservative and feeds back the minimum link rate.
There is another benefit we could get by using EFFCAP. As EFFCAP assumes
a large value of ffi, this means the the cell loss probability ffl is very small. This implies
that the TCP throughput will essentially be the ABR throughput. Thus EFFCAP when
used along with the Minimum Cell Rate feature of the ABR service can guarantee a
minimum throughput to TCP connections. Some of our simulation results to be presented
demonstrate this.
5 Numerical and Simulation Results
In this section, we first compare our analytical results for the throughput of TCP, without
ABR and with ABR with INSTCAP feedback, with simulation results from a hybrid TCP
simulator involving actual TCP code, and a model for the network implemented in the
loopback driver of a Linux Pentium machine. We show that the performance of TCP
improves when ABR is used for end-to-end data transport below TCP. We then study
the performance of the EFFCAP scheme and compare it with the INSTCAP scheme.
Figure 8: Analysis and simulation results: INSTCAP feedback. Throughput of TCP over ABR (Efficiency vs. mean time per state, in rtds); curves: conservative analysis (10 and 12 packets), optimistic analysis (12 packets), and test-bed results (10 and 12 packets). The round trip propagation delay is 40 time units. The bottleneck link buffers are either 10 or 12 packets. Notice that, for ψ > 80, the lowest two curves are test-bed results, the uppermost two curves are the optimistic analysis, and the middle two curves are the conservative analysis.
5.1 Instantaneous Rate Feedback Scheme
We recall from the previous section that the bottleneck link is Markov modulated. In
our analysis, we have assumed that the modulating chain has two states which we call
the high state and the low state. In the low state, with some link capacity being used by
higher priority traffic, the link capacity is some fraction of the link capacity in the high
state (where the full link rate is available). In the set of results that we present in this
section, we will assume that this fraction is 0.5. Further, we will also assume that the
mean time in each state is the same, i.e., the Markov chain is symmetric. We denote the
mean time in each state by τ, and denote the mean time in each state normalized to Δ
by ψ, i.e., ψ := τ/Δ. For example, if Δ is 200 msec, then ψ = 2 means that the mean time
per state is 400 msec. Note that our analysis only applies to ψ > 1. A large value of ψ
means that the network changes are slow compared to the round trip propagation delay
(rtd), whereas ψ ≪ 1 means that the network transients occur several times per round
trip time. In the Linux kernel implementation of our network simulator, the Markov
chain can make transitions at most once every 30 msec. Hence we take this also to be the
measurement interval, and the explicit rate feedback interval (i.e., s = 30 msec).
We denote one packet transmission time at the bottleneck link in the high rate
state as one time unit. Thus, in all the results presented here, the packet transmission
time in the low rate state is 2 time units.
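For concreteness, the following toy sketch (ours, not the authors' Linux-kernel simulator) generates such a two-state capacity trace; the 0.5 low-rate fraction and 30 ms slot are from the text, and the per-slot switching probability is chosen so that the mean sojourn time per state is τ:

```python
# Symmetric two-state Markov-modulated capacity: each slot the chain
# switches with probability slot/tau, giving mean sojourn time tau.
import random

def capacity_trace(tau_ms, n_slots, slot_ms=30, high=1.0, low=0.5):
    p = slot_ms / tau_ms
    state, trace = high, []
    for _ in range(n_slots):
        trace.append(state)
        if random.random() < p:
            state = low if state == high else high
    return trace
```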
We plot the bottleneck link efficiency vs. the mean time that it spends in each state
(i.e., vs. ψ). We define efficiency as the throughput as a fraction of the mean capacity of
the bottleneck link. We include the TCP/IP headers in the throughput, but account for
ATM headers as overhead. We use the words throughput and efficiency interchangeably.
With the modulating Markov chain spending the same time in each state, the mean
capacity of the link is 0.75.
Figure 8 shows the throughput of TCP over ABR with the INSTCAP scheme.^2
Here, we compare an optimistic analysis, a conservative one (see Section 3.3), and the
test-bed (i.e., simulation) results for different buffer sizes. In our analysis, the processes
are embedded at multiples of one round trip propagation delay, and the feedback from
the bottleneck link is sent once every rtd. This feedback reaches the ABR source after
one round trip propagation delay. In the simulations, however, feedback is sent to the
ABR source every 30msec. This reaches the ABR source after one round trip propagation
delay.
In Figure 8, we can see that, except for very small ψ, the analysis and the simulations
match to within a few percent. Both analyses underestimate the observed
throughputs by about 10-20% for small ψ. This can be explained if we note that in our
model, we assume that packets arrive at (and leave) the ABR source back to back.
When a rate change occurs at the bottleneck link, since the packets arrive back to back and
the source sends at twice the rate of the bottleneck link (in our example), one of every two
packets arriving at the bottleneck link gets queued. However, in reality, the packets
need not arrive back to back and hence the queue buildup is slower. This means that
the probability that packet loss occurs at the bottleneck link buffer is actually lower than
in our analytical model. This effect becomes more and more significant as the rate of
bottleneck link variation increases. However, we observe from the simulations that this
effect is not significant for most values of ψ.
Figure 9 shows the throughput of TCP without ABR. We can see that the simulation
results give a throughput of up to 20% less than the analytical ones. This occurs
due to two reasons.
2 Even as ψ → ∞, the throughput of TCP over ABR will not go to 1 because of ATM overheads: for
every 53 bytes transmitted, there are 5 bytes of ATM headers. Hence, the asymptotic throughput is
approximately 90%.
Figure 9: Analysis and simulation results; throughput of TCP without ABR (Efficiency vs. mean time per state, in rtds): The round trip propagation delay is 40 time units. The bottleneck link buffers are either 10 or 12 packets. We observe that TCP is sensitive to bottleneck link buffer size changes.
(i) We assumed in our analysis that no loss occurs in the slow-start phase. It has been
shown in [11] that if the bottleneck link buffer is less than 1/3 of the bandwidth-delay
product (which corresponds to about 13 packets, or a 6500 byte buffer), loss will occur
in the slow-start phase.
(ii) We optimistically compute the throughput of TCP by using an upper bound on
the "reward" in the loss cycle.
We see from Figures 8 and 9 that ABR makes TCP throughput insensitive to
buffer size variations. However, with TCP alone, there is a significant worsening of
throughput with buffer reduction. This can be explained by the fact that once the ABR
control loop has converged, the buffer size is immaterial as no loss takes place when
source and bottleneck link rate are the same. However, without ABR, TCP loses packets
even when no transients occur.
It is useful to observe that since the times in the above curves are normalized
to the packet transmission time (in the high rate state), results for several different
ranges of parameters can be read off these curves. To give an example, if the link has
a capacity of 155Mbps during its high rate state, and TCP packets have a size of 500
bytes each, then one time unit is 25.8 μsec. The round trip propagation delay (Δ) is
then 1.032 msec, and ψ = 100 means that changes in link bandwidth occur,
Figure 10: Simulation results for TCP with and without ABR (INSTCAP feedback) for small values of ψ; curves for TCP alone with 8 and 12 packet buffers are included. The rtd is 40 time units.
on an average, once every 103.2 msec. Consider another example where the link capacity
is 2 Mbps during the high rate period. Let the packet size be 1000 bytes. Then, the delay
corresponding to 40 time units is 160 msec, and ψ = 100 here corresponds to changes
occurring once every 16 seconds. These two examples illustrate the fact that the curves
are normalized and can be used to read off numbers for many scenarios.^3
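The arithmetic of these two examples is easy to reproduce; the snippet below (ours, illustrative only, with ψ = 100 as in the text) computes the time unit, Δ and τ from the link rate and packet size:

```python
# Normalization arithmetic for the two examples above.
def example(link_bps, pkt_bytes, rtd_units=40, psi=100):
    unit = pkt_bytes * 8.0 / link_bps   # one time unit, in seconds
    delta = rtd_units * unit            # round trip propagation delay
    tau = psi * delta                   # mean time per state
    return unit, delta, tau

print(example(155e6, 500))    # ~25.8 us, ~1.03 ms, ~103.2 ms
print(example(2e6, 1000))     # 4 ms, 160 ms, 16 s
```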
From
Figures
8 and 9, we can see that the performance of TCP improves by
about 20% when ABR is employed for data transport, INSTCAP feedback is used, and
the changes in link rate are slow. This improvement in performance with ABR is due to
the fact that ABR pushes the congestion to the network edge. After the TCP window
grows beyond the point where the pipe gets full (i.e., the window exceeds the
bandwidth-delay product of the path), the packets start
queueing in the ABR segmentation buffer, which, being on the end system, is very large.
The window size increases till the maximum window size advertised by the receiver is
reached. No loss occurs in the network as the ABR source sends at just the right rate to
the bottleneck link. Hence, we end up with the TCP window size fixed at the maximum
window size and the pipe is always full.
The assumptions in our analysis render it inapplicable for very small ψ. Figure 10
compares the simulation results for TCP with ABR (INSTCAP feedback) and without
ABR for various buffer sizes. These results are for ψ starting near 1 and going
3 Note however that Δ is an absolute parameter in these curves, since it governs the round trip "pipe".
Thus, although ψ is normalized to Δ, the curves do not yield values for fixed ψ and varying Δ.
down to very small values. We note that even though the throughput improvement due to ABR
is not great for small ψ, the performance of TCP does not significantly worsen due to
ABR. In the next section, we will see how a better rate feedback in ABR can result in a
distinct improvement of TCP/ABR throughput even for this range of ψ.
We see from Figure 10 that when ψ becomes less than 1, the throughput of TCP
increases. This can be explained by the fact that the rate mismatch occurs for an interval
of time less than one round trip propagation delay. As a result, the buffer size required
to handle the overload becomes smaller. As ψ becomes very small, each packet is sent at a
different rate and hence the ABR source effectively sends at the mean capacity. Loss
then occurs very rarely, as the buffers can handle almost all rate mismatches, and hence
the throughput increases.
5.2 Comparison of EFFCAP and INSTCAP Performance: Simulation
Figure 11: Simulation results: comparison of the EFFCAP and INSTCAP feedback schemes for TCP over ABR for various bottleneck link buffers (8-12 packets). As before, the rtd is 40 time units. Here, M and N are chosen as described in the text (see Figure 7). In this figure, we compare their performances for relatively large ψ.
In
Figure
11, we use results from the test-bed to compare the relative performances
of the EFFCAP and INSTCAP feedback schemes for ABR. Recall that the EFFCAP
algorithm has two parameters, namely M , the number of samples used for each block
average, and N , the number of blocks of M samples over which the minimum is taken.
Figure 12: Simulation results: comparison of the EFFCAP and INSTCAP feedback schemes for TCP over ABR for various bottleneck link buffers (8-12 packets). As before, the rtd is 40 time units. Here, M and N are chosen as described in the text (see Figure 7). In this figure, we compare their performances for small values of ψ.
In this figure, the EFFCAP scheme uses M = 7, i.e., we average over one round trip
propagation delay^4 worth of samples. We also maintain a window of 8 rtds worth of
averages, i.e., we maintain N such overlapping averages, over which the bottleneck link
returns the minimum to the ABR source. The source adapts to this rate. In the case of
the INSTCAP scheme, in the simulation, the rate is fed back every 30 msec.
We can see from Figure 11 that for large ψ, the throughput with EFFCAP is worse
than that with the INSTCAP scheme by about 3-4%. This is because of the conservative
nature of the EFFCAP algorithm (it takes the minimum of the available capacity over
several blocks of time in an interval).
However, we can see from Figure 12 that for small ψ, the EFFCAP algorithm
improves over the INSTCAP approach by 10-20%. This is a significant improvement, and
it seems worthwhile to lose a few percent efficiency for large ψ to gain a large improvement
for small ψ.
To summarize, in Figures 13 and 14, we have plotted the throughput of TCP over
ABR using the two different feedback schemes. We have compared these results with
the throughput of TCP without ABR. We can see that the throughput of TCP improves
4 A new sample is generated every 30 msec. The rtd is 200 msec in this example. Hence, M = 200/30 =
6.667, which we round up to 7.
Figure 13: Simulation results: comparison of the throughput of TCP over ABR with the effective capacity scheme, the instantaneous rate feedback scheme, and TCP without ABR, for a buffer of 10 packets, the other parameters remaining the same as in the other simulations.
Figure 14: Simulation results: comparison of the throughput of TCP over ABR with the effective capacity scheme, the instantaneous rate feedback scheme, and TCP without ABR, for a buffer of 10 packets, the other parameters remaining the same as in the other simulations.
if ABR is employed for link level data transport. These plots clearly bring out the
merits of the effective capacity scheme. We can see that for all values of ψ, the EFFCAP
scheme performs considerably better than TCP alone. For large ψ, we have a throughput
improvement of about 30%, while for small ψ, the improvement is of the order of 10-15%.
Further, while being adaptive to ψ, EFFCAP succeeds in keeping the TCP throughput
better than the minimum link rate, which INSTCAP fails to do (for small ψ). Thus an
MCR in the ABR connection may be used to guarantee a minimum TCP throughput.
6 Choice of M and N for EFFCAP
6.1 Significance of M and N
We begin by recalling the results shown in Figures 11 and 12. From these figures, we can
identify three broad regions of performance in relation to ψ.
For ψ very large (ψ > 50), the rate mismatch occurs for a small fraction of τ.
Also the rate mismatches are infrequent, implying infrequent losses, thereby increasing
the throughput. Hence, it is sufficient to track the instantaneous available capacity by
choosing small values of M and N. This is verified from Figure 11, which shows that the
INSTCAP feedback performs better in this region.
On the other hand, when τ is a small fraction of Δ (ψ < 0.2), there are frequent
rate mismatches, but of very small durations as compared to Δ. This reduces the buffer
requirement, and hence losses occur rarely. Because of the rapid variations in the capacity,
even a small M provides the mean capacity. Also all the N averages roughly equal the
mean capacity. Thus, the source essentially transmits at the mean capacity in EFFCAP
as well as INSTCAP feedback. Hence a high throughput for both feedbacks is seen from
Figure
12.
For the intermediate values of ψ (0.5 < ψ < 20), the throughput drops substantially
for both types of feedback. For these values of ψ, τ is comparable to Δ. Hence rate
mismatch is frequent, and persists relatively longer, causing the buffer to build up to a
larger value. This leads to frequent losses. Because of frequent losses the throughput is
adversely affected by TCP's blind adaptation window control. In this range, we expect
to see severe throughput loss for sessions with large Δ. Therefore, in this region, we
need to choose M and N properly; essentially, to avoid rate mismatch and hence loss,
the capacity estimate should yield the minimum capacity, implying the need for small
M and large N . A small M helps to avoid averaging over many samples, and a large N
helps to pick out the minimum.
The selection of M and N cannot be based on the value of ψ alone, however.
Δ is an absolute parameter in TCP window control and has a major effect on TCP
throughput, and hence on the selection of M and N. This can be seen from the results
in Section 6.2.
6.2 Simulation Results and Discussion
Simulations were carried out on the hybrid simulator that was also used in Section 5. As
before, the capacity variation process is a two state Markov chain. In the high state, the
capacity value is 100KB/sec (KB= Kilo Bytes) while in the low state it is 50KB/sec. The
mean capacity is thus 75 KB/sec. In all the simulations, the measurement and feedback
interval is s = 30 ms. Throughput is measured for a data transfer of a 10 MB file; the average
throughput over 4 file transfers is reported. The TCP packet size is 500 bytes and the
maximum TCP window is 32 KB.
M denotes the number of measurement samples in the averaging window. The
effective capacity method uses N such overlapping windows (see Figure 7) to determine
the minimum average. Thus N corresponds to the 'memory' of the algorithm. We
introduce the following notation in the simulation results. means that each average
is calculated over measurement intervals corresponding to the round trip propagation
delay, that is, d \Delta
means that
e averages are
compared (or the memory of the algorithm is k round trip times). For example, let
200ms and
(i.e., minimum of 49 averages).
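In code, this notation amounts to deriving M and N from Δ, s and k; the helper below (our sketch, assuming ceiling rounding, which matches M = 7 for Δ = 200 ms and s = 30 ms) makes the convention explicit:

```python
# The M : Delta and N : k*Delta conventions, made explicit.
import math

def params(delta_ms, k, s_ms=30):
    M = math.ceil(delta_ms / s_ms)       # M : Delta
    N = math.ceil(k * delta_ms / s_ms)   # N : k*Delta (memory of k rtds)
    return M, N

print(params(200, 7))   # M = 7; N = 47 under this rounding convention
```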
6.2.1 Study of N
In this section, we study the effect of N on the throughput by carrying out two sets of
simulations.
Case 1: Fixed Δ, varying τ
Figure 15 shows the effect of N on the throughput of a TCP session with a
given Δ, when τ (or equivalently the rate of capacity variation) is varied. These results
corroborate the discussion at the beginning of Section 6.1 for fixed Δ: for large ψ (ψ > 60),
a small value of N performs slightly better, whereas when ψ is very small (ψ < 0.3),
where the throughput increases steeply, there is negligible throughput gain by increasing
N. When 0.3 < ψ < 1, as expected, an improvement is seen for larger N. Beyond this range,
only a slight improvement is seen with varying N.
Figure 15: Efficiency variation with ψ for increasing values of N. τ is varied from 32 ms to 40 s. The link buffer is 10 pkts (5000 bytes). The advantage of choosing a larger value of N in the intermediate range of ψ (0.2 < ψ < 20) is clearly seen.
Case 2: Fixed τ, varying Δ
Figure 16: Efficiency vs. ψ. τ is fixed to 1000 ms. Δ is varied (right to left) from 50 ms to 500 ms. M : Δ; link buffer = 10 pkts.
Figure 17: Efficiency vs. ψ. τ is fixed to 100 ms. Δ is varied (right to left) from 50 ms to 500 ms. M : Δ; link buffer = 10 pkts.
Figures 16 and 17 show the Efficiency variation with ψ for different values of N
when τ is fixed and Δ is varied. We note that, according to the notation, N is different
for different Δs on an N : kΔ curve. For example, N on the N : 4Δ curve (i.e., the
memory of the algorithm is 4 rtds) is respectively 6 and 12 for two of the Δ values considered.
We notice that compared to Figure 15, Figures 16 and 17 show different Efficiency
variations with ψ. This is because, in the former case τ is varied and Δ kept constant,
whereas in the latter case τ is fixed and Δ varied. As indicated in Section 6.1, Δ is
an absolute parameter which affects the throughput (ψ = 2 in Figure 16 corresponds to
Δ = 500 ms, and in Figure 17 to Δ = 50 ms). The considerable throughput
difference demonstrates the dependence on the absolute value of Δ. It can be observed
that the throughput for larger values of Δ is lower than that for small values of Δ. This
difference is because a TCP session with larger Δ needs a larger window^5 to achieve the
desired throughput. A single packet loss causes the TCP window to drop. After a packet
loss, it takes a longer time for a session with larger Δ to rebuild the window than a
session with a smaller Δ. In the intermediate range of ψ, as explained in Section 6.1,
losses are frequent and high. Hence, the throughput of a session with large Δ is severely
affected.
In Figure 17, a substantial improvement in the throughput is seen as the memory
of the EFFCAP algorithm increases. A larger memory gives better throughput over a
wider range of Δ as compared to lower values. The reason for these observations is
as follows. For a given Δ, as N increases we are able to track the minimum capacity
value better. The minimum capacity is 50 KB/sec, which is 66% of the mean capacity of
75 KB/sec. Hence, as N increases we see Efficiency increasing above 0.6.
Recall that for small ψ, the average over Δ yields the average rate, whereas for large ψ the
average over Δ yields the instantaneous (peak or minimum) rate. Thus for large ψ, the minimum over
just a few Δs (4 to 6) is adequate to yield a high throughput, whereas for small ψ many
more averages need to be minimized over to get the minimum rate.
Figure 16 shows that for ψ < 8, larger values of N improve the throughput,
according to the argument given above. For larger ψ, we see that smaller N performs
better, but the improvement is negligible.
The conclusions that can be drawn from the above results are as follows. The choice
of N is based on ψ and Δ. For very large ψ (ψ > 20), N should be small (along with
M); the limiting case being M = N = 1 (i.e., INSTCAP). For very small ψ (ψ < 0.2), N does not matter
much. In the intermediate range, a large N is better.
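These guidelines can be summarized in a small selection rule; the sketch below (ours; the ψ thresholds are the ones quoted in the text, and M_rtd and N_large are placeholders for one rtd worth of samples and a large memory such as 12 rtds) returns (M, N) as a function of ψ:

```python
# A selection rule encoding the broad guidelines above.
def choose_params(psi, M_rtd, N_large):
    if psi > 20:              # very large psi: N (and M) should be small;
        return 1, 1           # the limiting case M = N = 1 is INSTCAP
    if psi < 0.2:             # very small psi: N does not matter much
        return M_rtd, 1
    return M_rtd, N_large     # intermediate range: a large N is better
```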
6.2.2 Study of M
In this section we study the performance of M , the averaging parameter. We have already
seen from Figure 11 that for ψ > 60, a small value of M should be selected. We now
study the lower ranges of ψ.
5 Actually, the TCP window depends directly on the bandwidth-delay product. In all the numerical results,
however, the link speed is fixed; hence the window depends simply on Δ.
Figure 18: Efficiency variation with varying M. τ is set to 1000 ms, and three Δ values (50 ms, 100 ms and 200 ms) are considered. In the left-hand graph, N is set according to N : kΔ with a smaller memory, and in the right-hand, N : 12Δ. M is varied from 1 to 10, i.e., the averaging interval is varied from 30 ms to 300 ms. The link buffer is 10 pkts. ψ ranges from 5 (for Δ = 200 ms) to 20 (for Δ = 50 ms).
Figure 19: Efficiency variation with varying M. τ is set to 100 ms, and three Δ values (50 ms, 100 ms and 200 ms) are considered. In the left-hand graph, N is set according to N : kΔ with a smaller memory, and in the right-hand, N : 12Δ. M is varied from 1 to 10. The link buffer is 10 pkts. ψ ranges from 0.5 (for Δ = 200 ms) to 2 (for Δ = 50 ms).
To study the effect of M, we vary the averaging window size for two N settings, so that the
effect of N can be differentiated. M is varied from 1 to 10 measuring intervals. The
results are shown in Figure 18 (τ = 1000 ms) and Figure 19 (τ = 100 ms). The values
of Δ are 50 ms, 100 ms and 200 ms. Thus the range of ψ (= τ/Δ) under consideration is 0.5
to 20. We have already found that M needs to be small for ψ > 60. In the
range under consideration, however, we observe that the throughput is not very sensitive
to M.
In Figure 18, a clear advantage of increasing N is seen for Δ = 200 ms. For
Δ = 50 ms in this figure (that is, ψ = 20), we see a slight trend of decreasing throughput with
increasing M. This is because Δ is small, and with small M it is possible to track the
instantaneous capacity better for larger τ. For larger Δ values and τ = 1000 ms, the above
mentioned trend is greatly reduced, making the throughput insensitive to M. However,
a slight decrease in the throughput is seen in the case of Δ = 200 ms when M takes larger
values. The reason is as follows. As discussed in Section 6.1, a larger value of M makes
the response of the algorithm sluggish. Hence, to track the minimum in the intermediate
range of ψ, a larger N is needed. In this case N is fixed, hence the decrease for larger
M. A similar effect is seen in Figure 19 for Δ = 50 ms and 100 ms.
In Figure 19, ψ is in the range 0.5 to 2. For N : 12Δ, the throughput is insensitive
to the variation in M. Insensitivity is also observed with the smaller N setting for
small ψ, but for larger ψ, that is, Δ = 50 ms and 100 ms, a 10-15% decrease in the throughput is seen. The
reason, as explained above, is the inability to track the minimum because of the smaller
value of N.
We conclude that in the intermediate range of ψ, the throughput is not very
sensitive to M. For small Δ and larger ψ (e.g., Δ = 50 ms and ψ = 20), a small M performs
better since it is possible to track the instantaneous rate. In general, a small value of M
improves the throughput in the intermediate range. For larger values of M, N needs to
be increased to enable tracking of the minimum.
7 TCP/ABR with EFFCAP Feedback:
Multiple Sessions Sharing a Bottleneck Link
Throughput fairness is a major issue for multiple sessions sharing a bottleneck link. It
is seen that TCP alone is unfair towards sessions that have larger round trip times.
It may be expected however, that TCP sessions over ABR will get a fair share of the
available capacity. In [14], the fairness of the INSTCAP feedback was investigated and
it was shown that for slow variations of the available capacity, TCP sessions over ABR
employing the INSTCAP feedback achieve fairness. In this section we study the fairness
of TCP sessions over ABR with the EFFCAP feedback scheme.
In the simulations, we use 240 ms as the round-trip time for Session 1 and 360 ms
for Session 2. The link buffer size is 18 packets (9000 bytes). Denote by Δ1 and Δ2 the
round-trip times for Session 1 and Session 2, respectively. Other notations are as described
earlier (subscripts denote the session number). In the following graphs, ψ is τ (the mean time
per state of the Markov chain) divided by the larger round-trip time, 360 ms. Simulations are
carried out by calculating the EFFCAP in two different ways, as explained below.
7.1 Case 1: Effective Capacity with per-session M and N
In this case, we calculate the EFFCAP for each session independently. This is done by
selecting M_i proportional to Δ_i; that is (with a 30 ms update interval), we select M_1 = 8 and M_2 = 12
for Sessions 1 and 2. We take N_i proportional to Δ_i as well, with
N_2 = 132 (see Section 6). EFFCAP_i is computed with M_i and N_i; session i is fed back 1/2 of EFFCAP_i.
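A sketch of this per-session computation is given below (our code; the function and argument names are illustrative, not the paper's). Each session gets M_i = Δ_i/s samples per block, its own memory N_i, and half of its own EFFCAP estimate:

```python
# Per-session EFFCAP with fair-share (half) feedback.
def case1_feedback(samples, deltas_ms, memories, s_ms=30, share=0.5):
    rates = []
    for delta, N in zip(deltas_ms, memories):
        M = delta // s_ms                      # e.g. 8 and 12 for 240/360 ms
        avgs = [sum(samples[j:j + M]) / M      # last N sliding M-averages
                for j in range(max(0, len(samples) - M - N + 1),
                               len(samples) - M + 1)]
        rates.append(share * min(avgs) if avgs else None)
    return rates
```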
Figure 20 shows the simulation results. We see that for very small values of ψ
(ψ < 0.3), the sessions receive equal throughput. However, for ψ > 0.3, unfairness is
seen towards the session with the larger propagation delay. This can be explained from the
discussion in Section 6.1. In this range of ψ, due to frequent rate mismatches and hence
losses, TCP behavior is dominant. A packet drop leads to a greater throughput decrease
for a session with larger Δ than for a session with smaller Δ.
Figure 20: Efficiency variation with ψ (mean time per state normalized to the larger rtd) in the case of two sessions sharing a link (Session 1: 240 ms; Session 2: 360 ms). Δ1 is 240 ms and Δ2 is 360 ms. Link buffer is 18 pkts. Each session is fed back the fair share (half) of the EFFCAP calculated.
7.2 Case 2: Effective Capacity with common M and N
In this simulation, M corresponds to the average of Δ1 and Δ2, i.e., 300 ms or 10 measurement
intervals. Correspondingly, N is also chosen based on this average. By choosing
Figure 21: Efficiency variation with ψ (mean time per state normalized to the larger rtd) in the case of two sessions sharing a link. Δ1 is 240 ms and Δ2 is 360 ms. Link buffer is 18 pkts. M and N are common to both sessions, based on the average round-trip time. Each session is fed back the fair share (half) of the EFFCAP calculated.
M and N in this way, we are making the rate calculation independent of individual round-trip
times.
We observe from Figure 21 that the EFFCAP calculated in this way yields somewhat
better fairness than the scheme used in Case 1. We also see that better fairness
is obtained even in the intermediate range of /. However, there is a drop in the overall
efficiency. This is because the throughput of the session with smaller \Delta is reduced.
Figures 22 and 23 show the comparison of TCP alone with TCP over ABR with
EFFCAP feedback, over a longer range of ψ. These curves include results from Figures 20
and 21. We see that for ψ > 20, EFFCAP gives fairness to the sessions, whereas TCP alone
is grossly unfair to the session with larger Δ. There is a slight decrease in the overall
efficiency with TCP over ABR; but note that with TCP over ABR the link actually
carries 10% more bytes (the ATM overhead) than with TCP alone! We also see from
Figure 23 that fairness is retained even when ψ < 20, which is not observed in the
case of INSTCAP in [14].
Conclusions
In this paper, we first developed an analytical model for a wide-area TCP connection
over end-to-end ABR with INSTCAP feedback, running over a single bottleneck link with
time-varying capacity. We have compared our analytical results for the performance of
TCP without ABR, and with ABR (INSTCAP rate feedback) with results from a hybrid
Figure 22: Comparison between Efficiency of sessions with TCP alone and TCP over ABR employing EFFCAP feedback (Case 1). Δ1 is 240 ms and Δ2 is 360 ms. Both cases use a link buffer of 18 pkts.
simulation. We have seen that the analysis and simulation results for TCP over ABR
match quite well, whereas the analysis overestimates the performance of TCP without
ABR. Our results show that the throughput improvement by running TCP over ABR
depends on the relative rate of capacity variation with respect to the round trip delay in
the connection. For slow variations of the link capacity the improvement is significant
(25% to 30%), whereas if the rate variations are comparable to the round trip delay then
the TCP throughput with ABR can be slightly worse than with TCP alone.
We have also proposed EFFCAP, an effective capacity based algorithm for rate
feedback. We have simulated TCP over ABR with EFFCAP feedback and shown that,
unlike INSTCAP feedback, EFFCAP succeeds in keeping the TCP throughput higher
than the minimum bandwidth of the bottleneck link; see Figure 12. The EFFCAP
computation involves two parameters M and N .
The throughput variation with ψ of a TCP session over ABR employing EFFCAP
feedback can be broadly divided into three regions. The selection of the parameters M and
N thus depends on the range of ψ as well as on the round-trip propagation delay.
When ψ is very large (> 60), M and N need to be small; ideally M = N = 1 (i.e.,
Figure 23: Comparison between Efficiency of sessions with TCP alone and TCP over ABR employing EFFCAP feedback (Case 2). Δ1 is 240 ms and Δ2 is 360 ms. Both cases use a link buffer of 18 pkts.
INSTCAP) performs the best in this region. When ψ is small (< 0.3), it is sufficient for
the source to send at the mean bottleneck rate; for such ψ, because of rapid capacity
variations, the mean capacity can be measured by selecting a large M. However, as ψ
becomes very small, the choice of M and N does not matter much. When ψ is in the
intermediate range (0.5 < ψ < 20), the throughput decreases due to frequent losses;
hence M and N need to be selected carefully. In this region a large value of N improves
the throughput, whereas the performance is insensitive to M. In general, a small
value of M performs better for a given value of N. The throughput drop in this region
can be compensated by choosing a large buffer, thereby reducing the losses.
In summary, as a broad guideline, for the buffer sizes that we studied, using
a small M (of the order of one rtd worth of samples) together with a large N provides good throughput performance for TCP
over ABR over a wide range of ψ and Δ values.
In the case of multiple sessions, EFFCAP feedback provides fairness over a wider
range of ψ than INSTCAP. EFFCAP feedback based on the average round-trip time
of the sessions is seen to provide fairness even when the capacity variations are in the
intermediate range. This is an advantage over INSTCAP, which is fair only when the
rate variations are slow compared to Δ [14].
--R
The ATM Forum Traffic Management Specification Version 4.0.
"A Simulation of TCP Performance in ATM Networks".
"Impact of ATM ABR Control on the Performance of TCP-Tahoe and TCP-Reno".
"Congestion avoidance and control".
"Modified TCP Congestion Avoidance Algorithm".
"The ERICA Switch Algorithm for ABR Traffic Management in ATM Networks".
"Performance of TCP/IP over ABR Service on ATM Networks".
"Buffer Requirements for TCP/IP over ABR".
"Performance of TCP over ABR on ATM Backbone and with Various VBR Traffic Patterns".
"Comparative performance analysis of versions of TCP in a local network with a lossy link".
"The Performance of TCP/IP for Networks with High Bandwidth Delay Products and Random Loss".
"TCP over ATM: ABR or UBR".
"Dynamics of TCP Traffic over ATM Networks".
"TCP over End-to-End ABR: A Study of TCP Performance with End-to-End Rate Control and Stochastic Available Capacity".
"Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms".
"Effective Bandwidths: Call Admission, Traffic Policing and Filtering for ATM Networks".
Stochastic Modeling and the Theory of Queues, Prentice Hall.
--TR
The performance of TCP/IP for networks with high bandwidth-delay products and random loss
Comparative performance analysis of versions of TCP in a local network with a lossy link
Analysis of source policy and its effects on TCP in rate-controlled ATM networks
TCP over wireless with link level error control
The ERICA switch algorithm for ABR traffic management in ATM networks
Modeling TCP Reno performance
A new approach for asynchronous distributed rate control of elastic sessions in integrated packet networks
--CTR
Aditya Karnik , Anurag Kumar, Performance of TCP congestion control with explicit rate feedback, IEEE/ACM Transactions on Networking (TON), v.13 n.1, p.108-120, February 2005
Jung-Shian Li , Chuan-Gang Liu , Cheng-Yu Huang, Achieving multipoint-to-multipoint fairness with RCNWA, Journal of Systems Architecture: the EUROMICRO Journal, v.53 n.7, p.437-452, July, 2007
Ahmed E. Kamal, Discrete-time modeling of TCP Reno under background traffic interference with extension to RED-based routers, Performance Evaluation, v.58 n.2+3, p.109-142, November 2004 | TCP over ABR;congestion control;TCP performance |
504650 | Jitter control in QoS networks. | We study jitter control in networks with guaranteed quality of service (QoS) from the competitive analysis point of view: we propose on-line algorithms that control jitter and compare their performance to the best possible (by an off-line algorithm) for any given arrival sequence. For delay jitter, where the goal is to minimize the difference between delay times of different packets, we show that a simple on-line algorithm using a buffer of B slots guarantees the same delay jitter as the best off-line algorithm using buffer space B/2. We prove that the guarantees made by our on-line algorithm hold, even for simple distributed implementations, where the total buffer space is distributed along the path of the connection, provided that the input stream satisfies a certain simple property. For rate jitter, where the goal is to minimize the difference between inter-arrival times, we develop an on-line algorithm using a buffer of size 2B + h for any h ≥ 1, and compare its jitter to the jitter of an optimal off-line algorithm using buffer size B. We prove that our algorithm guarantees that the difference is bounded by a term proportional to B/h. | Introduction
The need for networks with guaranteed quality of service (QoS) is widely recognized today
(see, e.g., [8, 11]). Unlike today's ``best effort'' networks such as the Internet, where the user
has no guarantee on the performance it may expect from the network, QoS networks guarantee
the end-user application a certain level of performance. For example, ATM networks
support guaranteed QoS in various parameters, including end-to-end delay and delay jitter
(called Cell Transfer Delay and Cell Delay Variation, respectively [5, 12]).
Jitter measures the variability of delay of packets in the given stream, which is an important
property for many applications (for example, streaming real-time applications). Ideally,
packets should be delivered in a perfectly periodic fashion; however, even if the source generates
an evenly spaced stream, unavoidable jitter is introduced by the network due to the
variable queuing and propagation delays, and packets arrive at the destination with a wide
range of inter-arrival times. The jitter increases at switches along the path of a connection
due to many factors, such as conflicts with other packets wishing to use the same links, and
non-deterministic propagation delay in the data-link layer.
Jitter is quantified in two ways. One measure, called delay jitter, bounds the maximum
difference in the total delay of different packets (assuming, without loss of generality, that the
abstract source is perfectly periodic). This approach is useful in contexts such as interactive
communication (e.g., voice and video tele-conferencing), where a guarantee on the delay
jitter can be translated to the maximum buffer size needed at the destination. The second
measure, called rate jitter, bounds the difference in packet delivery rates at various times.
More precisely, rate jitter measures the difference between the minimal and maximal inter-arrival
times (inter-arrival time between packets is the reciprocal of rate). Rate jitter is a
useful measure for many real-time applications, such as a video broadcast over the net: a
slight deviation of rate translates to only a small deterioration in the perceived quality.
Another important reason for keeping the jitter under control comes from the network
management itself, even if there are no applications requiring jitter guarantees. For example,
it is well known that traffic bursts tend to build in the network [8, 15]. Jitter control provides
a means for regulating the traffic inside the network so that the behavior of internal traffic
is more easily manageable. A more subtle argument in favor of jitter control (given by [17])
proceeds as follows. When a QoS network admits a connection, a type of "contract" is agreed
upon between the network and the user application: the user is committed to keeping its
traffic within certain bounds (such as peak bandwidth, maximal burst size etc.), and the
network is committed to providing certain service guarantees (such as maximal delay,
loss rate, etc.). Since the network itself consists of a collection of links and switches, its guarantees
must depend on the guarantees made by its components. The guarantees made by a link or a
switch, in turn, are contingent on some bounds on the locally incoming traffic. As mentioned
above, unless some action is taken by the network, the characteristics of the connection may
in fact get worse for switches further down the path, and thus they can only commit to lower
QoS. Jitter control can be useful in allowing the network to ensure that the traffic incoming
into a switch is "nicer," and get better guarantees from the switch.
Jitter control implementation is usually modeled as follows [17, 8]. Traffic incoming into
the switch is input into a jitter-regulator, which re-shapes the traffic by holding packets in an
internal buffer. When a packet is released from the jitter-regulator, it is passed to the link
scheduler, which schedules packet transmission on the output link. In this work we focus on
studying jitter-regulators.
Nature of our results. Before we state concrete results, we would like to explain the
insight we seek. Prior to our work, performance of jitter control algorithm was measured
either by worst-case behavior, or under statistical assumptions. Thus the properties of the
algorithms were either deterministic (given deterministic worst-case assumptions on the input
stream), or probabilistic (given stochastic assumptions on the input stream). In this work,
we prove relativistic guarantees: we compare the performance of the algorithm in question to
the performance of the best possible algorithm, which we treat as an adversary we compete
against. The adversary algorithm is not assumed to be constrained by the on-line nature
of the problem: it is assumed to produce the best possible output for the given input, even
if the best output may be computable only in hindsight (hence the adversary algorithm is
sometimes called the off-line algorithm). Algorithms whose performance can be bounded
with respect to the performance of an off-line adversary are called competitive [10, 7, 1]. We
argue that proving that an algorithm is competitive is meaningful, and sometimes superior,
to proving deterministic or stochastic guarantees: first, deterministic or stochastic guarantees
say nothing about the case where the underlying assumptions do not hold for some reason
(even worse, the underlying assumptions, in particular tractable stochastic assumptions,
are notoriously hard to justify). On the other hand, a competitive algorithm does not
assume anything about the input, and therefore its guarantees are more robust in this sense.
Secondly, worst-case guarantees usually do not say much about individual cases: for example,
an algorithm may be called deterministically optimal even if it always performs as badly as
the worst case; competitive algorithms, by contrast, are guaranteed to do relatively well on
each and every instance. Thirdly, if we add an assumption about the input sequence, the
relativistic guarantee would immediately translate to a specific deterministic guarantee.
We remark that unlike conventional competitive analysis, in most cases we shall compare
the performance of our on-line algorithms to the performance of an (optimal, off-line) adversary
which is restricted to use less buffer space. For example, we prove statements such
as "an algorithm Z using space B produces jitter which never more than the jitter produced
by an optimal algorithm, for the given arrival sequence, using space B=2." One possible
interpretation for this result is that algorithm Z always uses at least half of its buffer space
optimally-as if it knew the future in advance.
Our Results. We consider both delay- and rate-jitter. For delay-jitter, we give a very
simple on-line algorithm, and prove that the delay-jitter in its output is no more than the
delay-jitter produced by an optimal (off-line) algorithm using half the space. We give a
lower bound on delay-jitter showing that doubling the space is necessary. We also consider
a distributed implementation of our algorithm, where the total space of 2B is distributed
along a path. We prove that the distributed algorithm guarantees the same delay-jitter of a
centralized, off-line algorithm using space B, provided that an additional condition on the
beginning of the sequence is met. To complete the picture, we also describe an efficient
optimal off-line algorithm. For all our delay-jitter algorithms, we assume that the average
inter-arrival time of the input stream (denoted X a ) is given ahead of time.
One way to view the relativistic guarantee of our algorithm is the following. Assume that
the specific arrival sequence is such that, using a buffer of size B, one can reduce the jitter
completely (i.e., zero jitter). In such a case, our on-line algorithm, using space 2B, would also
output a completely periodic sequence (i.e., zero jitter).
For rate jitter, we assume that the on-line algorithm receives, in addition to X_a, two
parameters denoted I_min and I_max, which are a lower and an upper bound, respectively, on the desired
time between consecutive packets in the output stream. The on-line algorithm we present
uses a buffer of size 2B + h, where h ≥ 1 is a parameter, and B is such that an off-line
algorithm using buffer space B can release the packets with inter-departure times in the
interval [I_min, I_max] (but the optimal jitter may be much lower). The algorithm guarantees
that the rate jitter of the released sequence is at most the best off-line jitter plus an additive
term proportional to B/h. We also show how the algorithm can adapt to unknown
X_a. Finally, we prove that on-line algorithms using less than 2B buffer space are doomed to
have trivial rate-jitter guarantees with respect to an off-line algorithm using space B.
Related Work. QoS has been the subject of extensive research in the current decade,
starting with the seminal work of Ferrari [2] (see [16] for a comprehensive survey). A number
of algorithms has been proposed for jitter control. Partridge [9] proposed to time-stamp each
message at the source, and fully reconstruct the stream at the destination based on a bound
on the maximal end-to-end delay. Verma et al. [13] proposed the jitter-EDD algorithm, where
a jitter controller at a switch computes for each packet its eligibility time, before which the
packet is not submitted to the link scheduler. The idea is to set the eligibility time to
the difference between maximum delay for the previous link and the actual delay for the
packet: this way the traffic is completely reconstructed at each jitter node. Note that jitter-
EDD requires nodes to have synchronized clocks. The Leave-in-Time algorithm [3] replaces
the synchronized clocks requirement of jitter-EDD with virtual clocks [19]. Golestani [4]
proposed the Stop-and-Go algorithm, which can be described as follows. Time is divided into
frames; all packets arriving in one frame are released in the following frame. This allows for
high flexibility in re-shaping the traffic. Hierarchical Round-Robin (HRR), proposed in [6],
guarantees that in each time frame, each connection has some predetermined slots in which
it can send packets. A comparative study of rate-control algorithms can be found in [18]. A
new jitter control algorithm was proposed in [14].
Paper Organization. In Section 2 we give the basic definitions and notations. In Section
3 we study delay jitter for a single switch. In Section 4 we extend the results of Section 3 to
a distributed implementation. In Section 5 we study rate jitter.
2 Model
Figure 1: Abstract node model. The jitter control algorithm controls packet release from the buffer, based on the arrival sequence.
We consider the following abstract communication model for a node in the network (see
Fig. 1). We are given a sequence of packets, denoted 0, 1, ..., n, where packet k arrives
at time arrival(k). Packets are assumed to have equal size. Each packet is stored in the buffer
upon arrival, and is released some time (perhaps immediately) after its arrival. Packets are
released in FIFO order. The time of packet release (also called packet departure or packet
send) is governed by a jitter control algorithm. Given an algorithm A and an arrival time
sequence, we denote by send A (k) the time in which packet k is released by A.
We consider jitter control algorithms which use bounded-size buffer space. We shall
assume that each buffer slot is capable of storing exactly one packet. All packets must be
delivered, and hence the buffer size limitation can be formalized as follows. The release
time sequence generated by algorithm A using a buffer of size B must satisfy the following
condition for all 0 ≤ k ≤ n:
$$arrival(k) \le send_A(k) \le arrival(k + B), \qquad (1)$$
where we define $arrival(j) = \infty$ for $j > n$. The lower bound expresses the fact that a packet
cannot be sent before it arrives, and the upper bound states that when packet k +B arrives,
packet k must be released due to the FIFOness and the limited size of the buffer. We call
a sequence of departure times B-feasible for a given sequence of arrival times if it satisfies
Eq. (1), i.e., it can be attained by an algorithm using buffer space B. An algorithm is called
on-line if its action at time t is a function of the packet arrivals and releases which occur
before or at t; an algorithm is called off-line if its action may depend on future events too.
A times sequence is a non-decreasing sequence of real numbers. We now turn to define
properties of times sequences, which are our main interest in this paper. Given a times
sequence $\sigma = \{t_i\}_{i=0}^n$, we define its average, minimum, and maximum inter-arrival times as
follows.
• The average inter-arrival time of σ is $X_a^\sigma = (t_n - t_0)/n$.
• The minimum inter-arrival time of σ is $X_{min}^\sigma = \min\{t_{i+1} - t_i : 0 \le i < n\}$.
• The maximum inter-arrival time of σ is $X_{max}^\sigma = \max\{t_{i+1} - t_i : 0 \le i < n\}$.
We shall omit the σ superscript when the context is clear. The average rate of σ is simply
$1/X_a^\sigma$.
We shall talk about the jitter of σ. We distinguish between two different kinds of jitter.
The delay jitter, intuitively, measures how far off is the difference of delivery times of different
1 Note that our definition allows for 0-length intervals where more than B packets are in the system. This
formal difficulty can be overcome by assuming explicitly that each event (packet arrival or release) occurs in
a different time point. For clarity of exposition, we prefer this simplified model, although our results hold in
both models.
packets from the ideal time difference in a perfectly periodic sequence, where packets are
spaced exactly $X_a$ time units apart. Formally, given a times sequence $\sigma = \{t_i\}_{i=0}^n$, we define
the delay jitter of σ to be
$$J_\sigma = \max_{0 \le i,k \le n} |(t_k - t_i) - (k - i)X_a|.$$
We shall also be concerned with the rate jitter of σ, which can be described intuitively as the
maximal difference between inter-arrival times, which is equivalent to the difference between
rates at different times. Formally, we define the rate jitter of σ to be
$$\max_{0 \le i,j < n} |(t_{i+1} - t_i) - (t_{j+1} - t_j)| = X_{max}^\sigma - X_{min}^\sigma.$$
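Both measures are straightforward to evaluate by brute force; the following functions (our code, a direct transcription of the definitions, quadratic in the sequence length) compute them for a times sequence t[0..n]:

```python
def delay_jitter(t):
    n = len(t) - 1
    xa = (t[n] - t[0]) / n
    return max(abs((t[k] - t[i]) - (k - i) * xa)
               for i in range(n + 1) for k in range(n + 1))

def rate_jitter(t):
    gaps = [t[i + 1] - t[i] for i in range(len(t) - 1)]
    return max(gaps) - min(gaps)          # equals X_max - X_min
```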
The following simple property shows the relationship between delay and rate jitter.
Lemma 2.1 Let oe be a times sequence.
(1) The delay jitter of oe equals 0 if and only if the rate jitter of oe equals 0.
(2) If the delay jitter of oe is J, then the rate jitter of oe is at most 2J.
(3) For all ffl ? 0, and M , there exists a sequence oe ffl;M with rate jitter at most ffl and
delay jitter at least M .
Proof: Suppose that $\sigma = \{t_i\}_{i=0}^n$.
1. The delay jitter of σ is 0 iff for all 0 ≤ i ≤ n we have $t_i = t_0 + iX_a$, which is true iff
the rate jitter of σ is 0.
2. If the delay jitter of σ is J, then for all 0 ≤ i < n we have $|(t_{i+1} - t_i) - X_a| \le J$,
and by the triangle inequality we have that the rate jitter of σ is at most 2J.
3. Let $\varepsilon' = \varepsilon/2$. Choose an even number $n > 2M/\varepsilon'$. Let $\sigma = \{t_i\}_{i=0}^n$ be
defined inductively as follows: $t_0 = 0$; $t_{i+1} = t_i + X_a + \varepsilon'$ for $0 \le i < n/2$, and
$t_{i+1} = t_i + X_a - \varepsilon'$ for $n/2 \le i < n$. Clearly, the resulting σ is a times sequence
with average inter-arrival time $X_a$ and rate jitter $2\varepsilon' \le \varepsilon$. However, we have
that $(t_{n/2} - t_0) - (n/2)X_a = (n/2)\varepsilon'$, hence the delay jitter is at least $(n/2)\varepsilon' > M$ by choice of
n.
Our means for analyzing the performance of jitter control algorithms is competitive analysis
[1]. In our context, we shall measure the (delay or rate) jitter of the sequence produced
by an on-line algorithm against the best jitter attainable for that sequence. As expected,
finding the release times which minimize jitter may require knowledge of the complete arrival
sequence in advance, i.e., it can be computed only by an off-line algorithm. Our results are
expressed in terms of the performance of our on-line algorithms using buffer space B_on, as
compared to the best jitter attainable by an off-line algorithm using space B_off, where usually
B_on = 2B_off. We are interested in two parameters of the algorithms: the jitter (guaranteed
by our on-line algorithms as a function of the best possible off-line guarantee) and the buffer
size (used by the on-line algorithm, as a function of the buffer size used by an optimal off-line
algorithm).
3 Delay-Jitter Control
In this section we analyze the best achievable delay-jitter. We first present an efficient off-line
algorithm which attains the best possible delay jitter using a given buffer with space B.
We then proceed to the main result of this section, which is an on-line delay-jitter control
algorithm which attains the best jitter guarantee that can be attained by any (off-line)
algorithm which uses half the buffer space. Finally, we present a lower bound which shows
that any on-line algorithm whose jitter guarantees are a function of the jitter guarantees of
an off-line algorithm, must have at least twice the space used by the off-line algorithm.
3.1 Off-line Delay-Jitter Control
We start with the off-line case. Suppose we are given the complete sequence $\{arrival(k)\}_{k=0}^n$
of packet arrival times. We wish to find a sequence of release times $\{send_{off}(k)\}_{k=0}^n$ which
minimizes the delay jitter, using no more than B buffer space. The off-line algorithm is
defined as follows.
Algorithm A: off-line delay-jitter control.
1. For each 0 ≤ k ≤ n, define the interval $E_k = [arrival(k) - kX_a,\ arrival(k + B) - kX_a]$,
where we define $arrival(j) = \infty$ for $j > n$.
2. Find a minimal interval M which intersects all intervals $E_k$.
3. For each packet k, let $P_k = \max(\min(M), \min(E_k))$, and set $send_{off}(k) = P_k + kX_a$.
Theorem 3.1 The sequence $\{send_{off}(k)\}_{k=0}^n$ is a non-decreasing, B-feasible sequence with
minimal delay jitter.
Proof: It is straightforward to see from the definitions that $P_k \in E_k \cap M$, so $send_{off}(k) =
P_k + kX_a \in [arrival(k), arrival(k + B)]$, and hence the resulting sequence is B-feasible. Proving FIFOness
is done as follows. By the definitions, it is sufficient to prove that $P_k \le P_{k+1} + X_a$. To see this,
first note that by definition,
$$\min(E_{k+1}) = arrival(k+1) - (k+1)X_a \ge arrival(k) - (k+1)X_a = \min(E_k) - X_a. \qquad (2)$$
We distinguish between two cases now. If $\min(M) \ge \min(E_k)$, then $P_k = \min(M) \le P_{k+1}$,
and we are done. The second case is that $\min(M) < \min(E_k)$. In this case $P_k = \min(E_k)$, and
Eq. (2) implies that $P_{k+1} \ge \min(E_{k+1}) \ge \min(E_k) - X_a = P_k - X_a$,
and the proof of correctness is complete. The optimality of the solution follows immediately
from the minimality of M: all the points $P_k$ lie in M, so the delay jitter of the output is at
most the length of M, while every B-feasible sequence must place the points $send(k) - kX_a$ in the
respective intervals $E_k$, so its delay jitter is at least the length of a minimal interval intersecting
all of them.
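Following the reconstruction above, Algorithm A is a few lines of code; the sketch below (ours) computes the shifted windows $E_k$, the anchor $\min(M)$, and the release times $P_k + kX_a$:

```python
def offline_release(arrival, B, xa):
    n = len(arrival) - 1
    lo = [arrival[k] - k * xa for k in range(n + 1)]
    hi = [(arrival[k + B] if k + B <= n else float('inf')) - k * xa
          for k in range(n + 1)]
    a, b = max(lo), min(hi)
    m_lo = a if a <= b else b   # min(M): the E_k share a point iff a <= b;
                                # otherwise M = [b, a] and the jitter is a - b
    return [max(m_lo, lo[k]) + k * xa for k in range(n + 1)]
```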
3.2 On-line Delay-Jitter Control Algorithm
We now turn to our main result for delay-jitter control: an on-line algorithm using 2B buffer
space, which guarantees delay-jitter bounded by the best jitter achievable by an off-line
algorithm using B space. The algorithm is simple: first the buffer is loaded with B packets,
and when the (B+1)-st packet arrives, the algorithm releases the first buffered packet. From
this time on, the algorithm tries to release packet k after kX_a time. Formally, the algorithm
is defined as follows.
Algorithm B: on-line delay-jitter control. Define $send^\circ_{on}(k) = arrival(B) + kX_a$ for all
0 ≤ k ≤ n. The release sequence is defined by
$send_{on}(k) = send^\circ_{on}(k)$, if $arrival(k) \le send^\circ_{on}(k) \le arrival(k + 2B)$;
$send_{on}(k) = arrival(k)$, if $send^\circ_{on}(k) < arrival(k)$;
$send_{on}(k) = arrival(k + 2B)$, if $send^\circ_{on}(k) > arrival(k + 2B)$
(again with $arrival(j) = \infty$ for $j > n$).
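A compact sketch of Algorithm B follows (our code). Although written as a batch loop, each release time depends only on arrivals that have occurred by that time: the clamp to arrival(k + 2B) models the forced release when the 2B buffer fills.

```python
def online_release(arrival, B, xa):
    n = len(arrival) - 1
    send = []
    for k in range(n + 1):
        ideal = arrival[min(B, n)] + k * xa          # send_on^o(k)
        upper = arrival[k + 2 * B] if k + 2 * B <= n else float('inf')
        send.append(min(max(ideal, arrival[k]), upper))
    return send
```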
Clearly, Algorithm B is an on-line algorithm. We prove its jitter-control property.
Theorem 3.2 If for a given arrival sequence, an off-line algorithm using space B can attain
delay jitter J , then the release sequence generated by Algorithm B has delay-jitter at most J
using no more than 2B buffer space.
Proof: Obviously, the buffer space used by Algorithm B is at most 2B. The bound on the
delay-jitter follows from Lemma 3.4 and Lemma 3.6 proved below.
Figure 2: An example for oriented jitter bounds. A point at coordinates (x, y) denotes that packet y is released at time x. The slope of the dashed lines is 1/X_a.
The following definition is useful in the analysis (see Fig. 2).
Definition 3.1 Let $\sigma = \{t_k\}_{k=0}^n$ be a times sequence. The oriented jitter bounds for packet k
are
$$\underline{J}_\sigma(k) = \max_{0 \le i \le n} \{(t_k - t_i) - (k - i)X_a\}$$
$$\overline{J}_\sigma(k) = \max_{0 \le i \le n} \{(t_i - t_k) - (i - k)X_a\}$$
Intuitively, $\underline{J}_\sigma(k)$ says by how much packet k is late compared to the earliest packet, and
$\overline{J}_\sigma(k)$ says by how much k is premature compared to the latest packet. We have the following
immediate properties for oriented jitter bounds.
Lemma 3.3 Let $\sigma = \{t_k\}_{k=0}^n$ be a times sequence with average inter-arrival time $X_a$ and
delay jitter J. Then
(1) For all k, $\underline{J}_\sigma(k) \ge 0$ and $\overline{J}_\sigma(k) \ge 0$.
(2) For all k, $\underline{J}_\sigma(k) + \overline{J}_\sigma(k) = J$.
(3) There exist k and k' such that $\underline{J}_\sigma(k) = J$ and $\overline{J}_\sigma(k') = J$.
Proof:
1. Follows by choosing i = k in Definition 3.1.
2. Let $i_0$ and $j_0$ be such that $J = |(t_{j_0} - t_{i_0}) - (j_0 - i_0)X_a|$; assume w.l.o.g. that
$(t_{j_0} - t_{i_0}) - (j_0 - i_0)X_a \ge 0$. From the definitions it follows that, for every k,
$$\underline{J}_\sigma(k) \ge (t_k - t_{i_0}) - (k - i_0)X_a \qquad (3)$$
and
$$\overline{J}_\sigma(k) \ge (t_{j_0} - t_k) - (j_0 - k)X_a. \qquad (4)$$
Summing Eqs. (3,4), we get $\underline{J}_\sigma(k) + \overline{J}_\sigma(k) \ge J$. On the other hand, if i and j attain the
maxima in the definitions of $\underline{J}_\sigma(k)$ and $\overline{J}_\sigma(k)$, respectively, then
$\underline{J}_\sigma(k) + \overline{J}_\sigma(k) = (t_j - t_i) - (j - i)X_a \le J$, and the result follows.
3. Follows by choosing $k = j_0$ and $k' = i_0$ in Eqs. (3,4), respectively, combined with part (2).
The following lemma shows that the deviation of the actual release times generated by
Algorithm B from the ideal 0-jitter sequence $\{send^\circ_{on}(k)\}_k$ is bounded. Somewhat surprisingly,
it is bounded by the oriented jitter bounds of two specific packets in any B-feasible
sequence.
Lemma 3.4 Let $\sigma = \{t_k\}_{k=0}^n$ be any B-feasible sequence for the given arrival sequence.
Then for all 0 ≤ k ≤ n, we have $-\underline{J}_\sigma(B) \le send_{on}(k) - send^\circ_{on}(k) \le \overline{J}_\sigma(0)$.
Proof: We proceed by case analysis. If $send_{on}(k) = send^\circ_{on}(k)$ then we are done by Lemma
3.3 (1). If $send_{on}(k) > send^\circ_{on}(k)$, then by the specification of Algorithm B, we have that
$send_{on}(k) = arrival(k)$. In this case the lemma is proved by the following inequality.
$send_{on}(k) = arrival(k)$
$\le t_k$, by B-feasibility of σ
$\le t_0 + kX_a + \overline{J}_\sigma(0)$, by definition of $\overline{J}_\sigma(0)$
$\le send^\circ_{on}(0) + kX_a + \overline{J}_\sigma(0)$, since $send^\circ_{on}(0) = arrival(B) \ge t_0$ (B-feasibility)
$= send^\circ_{on}(k) + \overline{J}_\sigma(0)$.
The last case to consider is $send_{on}(k) < send^\circ_{on}(k)$. In this case, by the specification of
Algorithm B, we have that $send_{on}(k) = arrival(k + 2B)$. The lemma in this case is proved
by the following inequality.
$send_{on}(k) = arrival(k + 2B)$
$\ge t_{k+B}$, by B-feasibility of σ
$\ge t_B + kX_a - \underline{J}_\sigma(B)$, by definition of $\underline{J}_\sigma(B)$
$\ge send^\circ_{on}(0) + kX_a - \underline{J}_\sigma(B)$, since $send^\circ_{on}(0) = arrival(B) \le t_B$ (B-feasibility)
$= send^\circ_{on}(k) - \underline{J}_\sigma(B)$.
The reader may note that since Lemma 3.3 (1,2) implies that $\overline{J}_\sigma(0), \underline{J}_\sigma(B) \le J_\sigma$, Lemma
3.4 can be used to easily derive a bound of $2J_\sigma$ on the delay jitter attained by Algorithm
B. Proving the promised bound of $J_\sigma$ requires a more refined analysis of the oriented jitter
bounds. To facilitate it, we now introduce the following concept.
Definition 3.2 Let $\sigma = \{t_k\}_{k=0}^n$ be a times sequence, let k be a packet, and let t be a time
point. The times sequence σ perturbed at k to t, denoted σ(k : t), is obtained from σ by assigning
release time t to packet k and moving each other packet the minimal amount needed to keep the
sequence non-decreasing: if t ≤ t_k, every packet i < k with $t_i > t$ is moved to time t; if t ≥ t_k,
every packet i > k with $t_i < t$ is moved to time t.
Intuitively, σ(k : t) is the sequence obtained by assigning release time t to packet k, and
changing the times of other packets to preserve the FIFO order (see Fig. 3 for an example):
if packet k is to be released earlier than t_k, then some packets before k may be moved as
well; and if packet k is to be released later than t_k, then some packets after k may be moved.
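The perturbation is easy to realize; the following sketch (our code) clamps the neighbours of packet k just enough to keep the sequence non-decreasing, handling both directions uniformly:

```python
def perturb(times, k, t):
    out = list(times)
    out[k] = t
    for i in range(k - 1, -1, -1):    # if t < t_k: packets before k move back
        out[i] = min(out[i], t)
    for i in range(k + 1, len(out)):  # if t > t_k: packets after k move up
        out[i] = max(out[i], t)
    return out
```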
The following properties for perturbed sequences are a direct consequence of the definition.
Lemma 3.5 Let $\sigma = \{t_k\}_{k=0}^n$ be a times sequence with delay jitter $J_\sigma$, let k be any packet, and
let t ≤ t_k be any time point. View the sequence as points in the time × packet-number plane (as
in Figure 2), and let the left and right diagonal lines of σ be the two lines of slope $1/X_a$, at
horizontal distance $J_\sigma$ from each other, such that all points lie between them (at the level of
packet i, these lines pass through the times $t_i - \underline{J}_\sigma(i)$ and $t_i + \overline{J}_\sigma(i)$, respectively). Then:
(B1) If the point (t, k) does not lie to the left of the left diagonal line (i.e., $t \ge t_k - \underline{J}_\sigma(k)$),
then all points of σ(k : t) remain between the two diagonal lines; in particular, $J_{\sigma(k:t)} \le J_\sigma$.
(B2) If t < t_k and $\underline{J}_\sigma(k) > 0$, the horizontal distance between point k and the left diagonal
line strictly decreases.
(B3) For every point below point k (i.e., every packet i < k), the horizontal distance to the left
diagonal line does not increase.
The analogous statements hold for t ≥ t_k, with respect to the right diagonal line (exchanging
the roles of $\underline{J}$ and $\overline{J}$).
Figure 3: An example of perturbation. Left: a sequence σ. Right: σ(5 : t). Note that in σ(5 : t), several packets were moved with respect to σ.
Proof: The simplest way to verify these claims is geometrical: Consider Figure 3, which corresponds to the case of $t \ge t_k$. Assertion (B1) says that if point k is not moved left of the left diagonal line, then all points remain between the two diagonal lines, and that there are points which lie on the diagonal lines. Assertion (B2) states that the horizontal distance between point k and the left diagonal line strictly decreases, and Assertion (B3) states that for points below point k, the horizontal distance to the left diagonal line does not increase. The case of $t \le t_k$ is analogous.
To prove Theorem 3.2, we prove an interesting property of oriented jitter bounds in
optimal sequences. Intuitively, the lemma below says the following. Fix an arrival sequence,
and consider all optimal release sequences using B buffer space. Fix any two packets at
most B apart. Then it cannot be the case that in all optimal release sequences both the first
packet is too early and the second packet is too late. Formally, we have the following.
Lemma 3.6 Let J be the minimal delay jitter for a given arrival sequence using space B, and let $0 \le i \le j \le n$ be packets such that $j \le i + B$. Then there exists a B-feasible sequence σ for the given arrival sequence with delay jitter J such that $\overleftarrow{J}_\sigma(i) + \vec{J}_\sigma(j) \le J$.
Note that Lemma 3.6 with i = 0 and j = B, combined with Lemma 3.4, completes the proof of Theorem 3.2. We shall use the general statement in Section 4.
Proof: Let σ be an optimal release sequence attaining jitter J for the given arrival sequence, in which $\overleftarrow{J}_\sigma(i) + \vec{J}_\sigma(j)$ is minimal among all optimal sequences. First, note that if either $\overleftarrow{J}_\sigma(i) = 0$ or $\vec{J}_\sigma(j) = 0$ then we are done, since by Lemma 3.3 (1,2) we have that $\overleftarrow{J}_\sigma(i), \vec{J}_\sigma(j) \le J$. So assume from now on that $\overleftarrow{J}_\sigma(i) > 0$ and $\vec{J}_\sigma(j) > 0$. We claim that in this case, $t_i = t_j$, i.e., packets i, . . . , j are all released together. We prove this claim by contradiction: suppose that $t_i < t_j$. Then it must be the case that either (i) $t_i < arrival(j)$ or (ii) $t_j > arrival(j)$, or both (i) and (ii) hold. If case (i) holds, let $t = \min\{t_j, arrival(j)\}$ and consider the perturbed sequence σ(i : t) in which packet i is released at time t. By choice of t, we have that $t_i < t$. The perturbed sequence σ(i : t) has the following properties.
(1) σ(i : t) is B-feasible, since it may differ from σ at most by packets i, . . . , j − 1. These packets are held a little longer in σ(i : t), but they are released at time $t \le arrival(j) \le arrival(i + B)$.
(2) σ(i : t) has delay jitter J, by Lemma 3.5 (B1).
(3) $\overleftarrow{J}_{\sigma(i:t)}(i) < \overleftarrow{J}_\sigma(i)$, by Lemma 3.5 (B2).
(4) $\vec{J}_{\sigma(i:t)}(j) \le \vec{J}_\sigma(j)$, by Lemma 3.5 (B3).
The claim now follows for case (i), since Properties (1,2) imply that σ(i : t) is a sequence using B buffer space which attains jitter J, but Properties (3,4) contradict the assumed minimality of $\overleftarrow{J}_\sigma(i) + \vec{J}_\sigma(j)$. A similar argument shows that if case (ii) holds, then for $t = \max\{t_i, arrival(j)\}$, the perturbed sequence σ(j : t) contradicts the minimality of $\overleftarrow{J}_\sigma(i) + \vec{J}_\sigma(j)$.
Thus we have proved that for an optimal sequence, either $\overleftarrow{J}_\sigma(i) = 0$ or $\vec{J}_\sigma(j) = 0$ (in which cases the lemma is proved), or else, for a sequence minimizing $\overleftarrow{J}_\sigma(i) + \vec{J}_\sigma(j)$, it must be the case that $t_i = t_j$. We now proceed to bound $\overleftarrow{J}_\sigma(i) + \vec{J}_\sigma(j)$ using the fact that $t_i = t_j$. First, note that by definition there exists a packet $k_1$ such that
$$\vec{J}_\sigma(j) = (t_j - t_{k_1}) - (j - k_1)X_a \quad (5)$$
Similarly, we have that
$$\overleftarrow{J}_\sigma(i) = (t_{k_2} - t_i) - (k_2 - i)X_a \quad (6)$$
Adding Equations (5,6) and using $t_i = t_j$, we get that
$$\overleftarrow{J}_\sigma(i) + \vec{J}_\sigma(j) = (t_{k_2} - t_{k_1}) - (k_2 - k_1)X_a - (j - i)X_a.$$
Since $(t_{k_2} - t_{k_1}) - (k_2 - k_1)X_a \le J$ and $j \ge i$, we conclude that $\overleftarrow{J}_\sigma(i) + \vec{J}_\sigma(j) \le J$, as required.
3.3 A Lower Bound for On-line Delay-Jitter Control Algorithms
We close this section with a lower bound for on-line delay-jitter control algorithms. The
following theorem says that any on-line algorithm using less than 2B buffer space pays
heavily in terms of delay jitter when compared to an off-line algorithm using space B.
Theorem 3.7 Let $1 \le \ell < B$. There exist arrival sequences for which an off-line algorithm using space B gets jitter 0, and any on-line algorithm using $2B - \ell$ buffer space gets delay-jitter at least $\ell X_a$. Moreover, there exist arrival sequences for which an off-line algorithm using space B gets 0-jitter, and no on-line algorithm using less than B buffer space can guarantee any finite delay jitter.
Proof: Consider the following scenario. At time 0, packets 0, . . . , B − 1 arrive, and at time $B \cdot X_a$, packets B, . . . , 2B arrive. First, note that there is an off-line algorithm attaining 0 jitter by releasing each packet k at time $k \cdot X_a$. Consider now any on-line algorithm Z. We first claim that Z cannot release packet 0 before packet B arrives: otherwise, packet B may arrive arbitrarily far in the future, making the delay jitter of the on-line algorithm arbitrarily large. Hence, at time $B \cdot X_a$, when B + 1 new packets arrive, algorithm Z still stores the first B packets, and since it has buffer space $2B - \ell$ by assumption, it is forced to release at least $\ell + 1$ packets immediately. Since the delays of packets 0 and $\ell$ are equal, it follows from the definition of delay-jitter that the delay-jitter of the release sequence is at least $\ell X_a$.
For the case of an on-line algorithm with less than B space, consider the scenario where a batch of B packets arrive together at time 0, and then a batch of B more packets arrive at time T for some very large T. Since the on-line algorithm has to release packet 0 at time 0, we have that its delay jitter is at least $T/(B - 1)$, which can be arbitrarily large.
4 Distributed Delay-Jitter Control
In Section 3 we have considered a single delay-jitter regulator. In this section we prove an
interesting property of composing many delay-jitter regulators employing our Algorithm B.
Specifically, we consider a path of m links connecting nodes $v_0, \ldots, v_m$, where $v_0$ is the source and $v_m$ is the destination. We make the simplifying assumption that the propagation delay in each link is deterministic. We denote the event of the arrival of packet k at node $v_j$ by arrival(k, j), and the release of packet k from node $v_j$ by send(k, j). The input stream, generated by the source, is $\{send(k, 0)\}_k$ (or $\{arrival(k, 1)\}_k$), and the output stream is $\{send(k, m)\}_k$. Each node has 2B/m buffer space, and for simplicity we assume that m divides B. The distributed algorithm is the following.
Algorithm BD: distributed on-line delay-jitter control. For each $1 \le j \le m$, node $v_j$ has buffer space 2B/m. Specifically, node j sets $send^*_{on}(k, j) = arrival(B/m, j) + kX_a$, and it releases packet k as close as possible to $send^*_{on}(k, j)$ subject to 2B/m-feasibility (see Algorithm B).
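Under these conventions, the distributed algorithm is just Algorithm B run locally at every hop. A sketch, reusing the algorithm_b helper defined earlier (the list-based modeling of deterministic link delays is our own convention):

```python
def algorithm_bd(source_release, link_delay, B, m, Xa):
    """Release times at v_m when every node runs Algorithm B with 2B/m space.

    source_release[k] = send(k, 0); link_delay[j] is the fixed delay of the
    link into node v_{j+1}, so arrival(k, 1) = send(k, 0) + link_delay[0].
    """
    assert B % m == 0, "we assume m divides B"
    arrivals = [t + link_delay[0] for t in source_release]
    for j in range(1, m + 1):
        sends = algorithm_b(arrivals, B // m, Xa)   # node v_j, buffer 2B/m
        if j == m:
            return sends
        arrivals = [t + link_delay[j] for t in sends]
```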
We prove that the jitter control capability of Algorithm BD is the same as the jitter
control capability of a centralized jitter control algorithm with B total buffer space, under a
certain condition for the beginning of the sequence (to be explained shortly). Put differently,
one does not lose jitter control capability by dividing the buffer space along the path. The
precise result is given in the theorem below.
Theorem 4.1 Suppose that for a given arrival sequence $\{arrival(k, 1)\}_{k=0}^n$, there exists a centralized off-line algorithm attaining jitter J using space B, with packet 0 released before time arrival(B/m, 1). Then the release sequence $\{send_{on}(k, m)\}_k$ generated by Algorithm BD at node $v_m$ has delay jitter at most J.
Intuitively, the additional condition is that there is a way to release the first packet relatively early by a centralized optimal algorithm. This condition suffices to compensate for the distributed nature of Algorithm BD. The condition is also necessary: if all packets are input into the system at the start of the algorithm, then an off-line algorithm can still wait arbitrarily long before starting to release packets, while Algorithm BD is bound to start releasing packets even if only 2B/m packets are input.
The proof is essentially adapting the proofs of Algorithm B in Section 3 to the distributed
setting. We highlight the distinguishing points.
Let the propagation delay over link $(v_j, v_{j+1})$ be denoted $d_j$, and let $D = \sum_j d_j$ be the total delay of links on the path.
The first lemma below bounds the desired release times of all packets at one node in
terms of the desired release times in upstream nodes.
Lemma 4.2 For all nodes $1 \le j \le i \le m$ and all packets k,
$$send^*_{on}(k, j) + D(j, i) \le send^*_{on}(k, i) \le send^*_{on}(k, j) + D(j, i) + (i - j)\frac{B}{m}X_a,$$
where D(j, i) denotes the total propagation delay of the links between $v_j$ and $v_i$.
Proof: Consider the lower bound first. By the algorithm, we have that for all ℓ, $send^*_{on}(0, \ell + 1) = arrival(B/m, \ell + 1) = send_{on}(B/m, \ell) + d_\ell \ge send_{on}(0, \ell) + d_\ell$. Since for all ℓ, $send_{on}(0, \ell) \ge send^*_{on}(0, \ell)$, we obtain by induction on i − j that $send^*_{on}(k, i) \ge send^*_{on}(k, j) + D(j, i)$, proving the lower bound.
We now prove the upper bound. First, we claim that for all $1 \le \ell \le m$,
$$send_{on}(k, \ell) \le send^*_{on}(k, \ell) \quad \text{for } 0 \le k \le B/m. \quad (7)$$
Eq. (7) follows from the fact that by the specification of Algorithm B, a node starts releasing packets only if all first B/m packets are in its buffer, and therefore none of the first B/m packets is released too late in any node. We now prove the upper bound by induction on i − j. The base case, i = j, is trivial. For the inductive step, we have
$$send^*_{on}(0, i + 1) = arrival(B/m, i + 1) = send_{on}(B/m, i) + d_i \quad \text{by the algorithm}$$
$$\le send^*_{on}(B/m, i) + d_i \quad \text{by (7)}$$
$$= send^*_{on}(0, i) + \frac{B}{m}X_a + d_i$$
$$\le send^*_{on}(0, j) + D(j, i + 1) + (i + 1 - j)\frac{B}{m}X_a \quad \text{by induction}$$
Adding $kX_a$ to both sides and rearranging, the upper bound follows.
For the case of underflow, we argue that if a packet is "late" in the output node $v_m$, then it was late in all nodes on its way.
Lemma 4.3 If $send_{on}(k, m) > send^*_{on}(k, m)$, then $send_{on}(k, m) = send_{on}(k, 0) + D$.
Proof: First, we show that for any node $v_j$, if $send_{on}(k, j) > send^*_{on}(k, j)$, then $send_{on}(k, j) = send_{on}(k, j - 1) + d_{j-1}$. This is true since by the specification of Algorithm B, at time $send^*_{on}(k, j)$ packet k is not yet in the buffer at node j, and hence node $v_{j-1}$ has not sent packet k by time $send^*_{on}(k, j) - d_{j-1}$; packet k is then released by node $v_j$ immediately upon its arrival. Since $send^*_{on}(k, j) \ge send^*_{on}(k, j - 1) + d_{j-1}$ by Lemma 4.2, this implies that $send_{on}(k, j - 1) > send^*_{on}(k, j - 1)$. Therefore, for all nodes $v_j$, we have that $send_{on}(k, j) = send_{on}(k, j - 1) + d_{j-1}$, and by summation we obtain that $send_{on}(k, m) = send_{on}(k, 0) + D$.
For the case of overflow, we show the analogous property: if there is an overflow in the output node, then it is the result of a "chain reaction" of overflows in all nodes.
Lemma 4.4 If $send_{on}(k, m) < send^*_{on}(k, m)$, then $send_{on}(k, m) = arrival(k + 2B, 1) + D$.
Proof: We prove that if $send_{on}(k, i) < send^*_{on}(k, i)$ then $send_{on}(k, i) = send_{on}(k + 2B/m, i - 1) + d_{i-1}$:
$$send_{on}(k, i) = arrival(k + 2B/m, i) \quad \text{by the bound on the buffer size}$$
$$= send_{on}(k + 2B/m, i - 1) + d_{i-1},$$
and $send_{on}(k + 2B/m, i - 1) < send^*_{on}(k + 2B/m, i - 1)$, since otherwise, by Lemma 4.2, $send_{on}(k, i) \ge send^*_{on}(k, i)$, contradicting our assumption. In other words, if packet k is overflowing at node m, then packet $k + (m - i)\frac{2B}{m}$ is overflowing in node i, for each $1 \le i \le m$. Hence for each i, we have $send_{on}(k + (m - i)\frac{2B}{m}, i) = send_{on}(k + (m - i + 1)\frac{2B}{m}, i - 1) + d_{i-1}$. The lemma follows by summation, since $k + m \cdot \frac{2B}{m} = k + 2B$.
The lemmas above are used in the proof of the following variant of Lemma 3.4.
Lemma 4.5 Let $\sigma = \{send_{off}(k)\}_{k=0}^n$ be any B-feasible sequence for the given arrival sequence such that $send_{off}(0) \le arrival(B/m, 1)$. Then for all $0 \le k \le n$, we have $-\vec{J}_\sigma(B/m) \le send_{on}(k, m) - send^*_{on}(k, m) \le \overleftarrow{J}_\sigma(0)$.
Proof: If $send_{on}(k, m) = send^*_{on}(k, m)$ we are done by Lemma 3.3 (1). If $send_{on}(k, m) > send^*_{on}(k, m)$, then
$$send_{on}(k, m) = send_{on}(k, 0) + D \quad \text{by Lemma 4.3}$$
$$\le send_{off}(k) + D \quad \text{since } send_{on}(k, 0) \le send_{off}(k)$$
$$\le send_{off}(0) + kX_a + \overleftarrow{J}_\sigma(0) + D \quad \text{by definition of } \overleftarrow{J}_\sigma(0)$$
$$\le send^*_{on}(k, m) + \overleftarrow{J}_\sigma(0) \quad \text{since } send_{off}(0) \le arrival(B/m, 1) \text{ and } send^*_{on}(0, m) \ge arrival(B/m, 1) + D.$$
If $send_{on}(k, m) < send^*_{on}(k, m)$, then
$$send_{on}(k, m) = arrival(k + 2B, 1) + D \quad \text{by Lemma 4.4}$$
$$\ge send_{off}(k + B) + D \quad \text{since the off-line algorithm has B space}$$
$$\ge send_{off}(B/m) + (k + B - B/m)X_a - \vec{J}_\sigma(B/m) + D \quad \text{by definition of } \vec{J}_\sigma(B/m)$$
$$\ge send^*_{on}(k, m) - \vec{J}_\sigma(B/m) \quad \text{since } send_{off}(B/m) \ge arrival(B/m, 1) \text{ and, by Lemma 4.2, } send^*_{on}(k, m) \le arrival(B/m, 1) + kX_a + (m - 1)\frac{B}{m}X_a + D.$$
Theorem 4.1 follows from Lemma 4.5, when combined with Lemma 3.6 (which is independent of the on-line algorithm), with i = 0 and j = B/m.
5 Rate-Jitter Control
In this section we consider the problem of minimizing the rate-jitter, i.e., how to keep the
rate at which packets are released within the tightest possible bounds. We shall use the
equivalent concept of minimizing the difference between inter-departure times. We present
an on-line algorithm for rate-jitter control using space 2B + h, and compare it to an off-line algorithm using space B and guaranteeing jitter J. Our algorithm guarantees rate jitter at most J + c/h for a constant c. We also show how to obtain rate jitter which is within a multiplicative factor of optimal, with a simple modification of the algorithm. The
algorithm can work without knowledge of the exact average inter-arrival time: in this case,
jitter guarantees will come into effect after an initial period in which packets may be released
too slowly. We also show that without doubling the space, no guarantees in terms of the
optimal rate-jitter can be made. As an aside, we remark that off-line rate-jitter control can
be solved optimally using linear-programming technique.
5.1 On-line Rate-Jitter Control Algorithm
We now turn to describe the main result for this section: an on-line algorithm for rate-jitter
control. The algorithm is specified with the following parameters:
• B, the buffer size of an off-line algorithm.
• h, a space parameter for the on-line algorithm, such that $B_{on} = 2B + h$.
• $I_{min}, I_{max}$: bounds on the minimum and maximum inter-departure time of an off-line algorithm.
• $X_a$, the average inter-departure time in the input (and also the output) sequence.
The parameters I min and I max can be thought of as requirements: these should be the worst
rate jitter bounds the application is willing to tolerate. The goal of a rate-jitter control
algorithm is to minimize the rate jitter, subject to the assumption that space B is sufficient
(for an off-line algorithm) to bound the inter-departure times in the range $[I_{min}, I_{max}]$. A
trivial choice for I min and I max is X min and X max , which are the minimal and maximal inter
arrival times in the input sequence. However, using tighter I min and I max , one may get a much
stronger guarantee. The jitter guarantees will be expressed in terms of B, h, $I_{min}$, $I_{max}$, and J, the best rate jitter for the given arrival sequence attainable by an off-line algorithm
using space B.
Note that for an on-line algorithm, even achieving rate jitter $I_{max} - I_{min}$ may be non-trivial.
These are bounds on the performance of an off-line algorithm, whose precise specification
may depend on events arbitrarily far in the future.
The basic idea in our algorithm is that the next release time is a monotonically decreasing
function of the current number of packets in the buffer. In other words, the more packets
there are in the buffer, the lower the inter-departure time between the packets (and thus the
higher the release rate).
Algorithm C: on-line rate-jitter control. The algorithm uses $B_{on} = 2B + h$ buffer space. With each possible number $0 \le j \le 2B + h$ of packets in the buffer, we associate an inter-departure time denoted IDT(j), defined as follows:
$$IDT(j) = I_{max} \quad \text{for } 0 \le j \le B$$
$$IDT(j) = I_{max} - (j - B)\frac{I_{max} - I_{min}}{h} \quad \text{for } B < j < B + h$$
$$IDT(j) = I_{min} \quad \text{for } B + h \le j \le 2B + h$$
Note that IDT(j) is a monotonically decreasing function in j. The algorithm starts with a buffer loading stage, in which packets are only accumulated (and not released) until the first time that the number j of packets in the buffer satisfies $IDT(j) \le X_a$. Let $S = \min\{j : IDT(j) \le X_a\}$, and let T denote the first time in which the number of packets in the buffer reaches S. At time T, the loading stage is over: the first packet is released and the following rule governs the remainder of the execution of the algorithm. A variable last_departure is maintained, whose value is the time at which the last packet was sent. If at time t, we have $t \ge last\_departure + IDT(j)$, where j is the number of packets currently in the buffer, then we deliver a packet and update last_departure.
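An event-driven rendering of Algorithm C in Python (a sketch of ours, assuming $I_{min} < X_a < I_{max}$; the piecewise-linear IDT follows the definition above, but for simplicity the buffer occupancy used for IDT is sampled at the previous departure, whereas the rule above re-evaluates it continuously):

```python
import bisect

def make_idt(B, h, I_min, I_max):
    """Monotonically decreasing inter-departure time as a function of occupancy j."""
    step = (I_max - I_min) / h
    def idt(j):
        if j <= B:
            return I_max
        if j >= B + h:
            return I_min
        return I_max - (j - B) * step
    return idt

def algorithm_c(arrival, B, h, I_min, I_max, Xa):
    """Approximate release times generated by Algorithm C."""
    idt = make_idt(B, h, I_min, I_max)
    S = next(j for j in range(2 * B + h + 1) if idt(j) <= Xa)  # loading threshold
    T = arrival[S - 1]              # loading ends when S packets are present
    releases = []
    last = None
    for k in range(len(arrival)):
        t = max(arrival[k], T)      # packet k must have arrived, and t >= T
        if last is not None:
            j = bisect.bisect_right(arrival, last) - len(releases)  # occupancy
            t = max(t, last + idt(j))
        releases.append(t)
        last = t
    return releases
```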
The rate-jitter bound of Algorithm C is given in the following theorem.
Theorem 5.1 Let J be the best rate-jitter attainable (for an off-line algorithm) using buffer space B for a given arrival sequence. Then the maximal rate-jitter in the release sequence generated by Algorithm C is at most $J + (B + 2)\frac{I_{max} - I_{min}}{h}$, and never more than $I_{max} - I_{min}$.
The idea in the proof of Theorem 5.1 is that the number of packets in the buffer is never more than B + 2 slots away from the slots which correspond to rates generated by an optimal off-line algorithm. We now formally analyze Algorithm C. Fix an optimal execution of the off-line algorithm. Let us denote the maximum and minimum inter-departure times of the off-line execution by $Y_{max}$ and $Y_{min}$, respectively. (Hence the jitter attained by the off-line algorithm is $Y_{max} - Y_{min}$.) Using these quantities, we also define the following terms:
$$L = \max\{j : IDT(j) \ge Y_{max}\}, \qquad U = \min\{j : IDT(j) \le Y_{min}\}.$$
Note that $L \le S \le U$. We shall also use the following shorthand notation. Let $B_{on}(t)$ and $B_{off}(t)$ denote the number of packets stored at time t in the buffers of Algorithm C and of the off-line algorithm, respectively, and let $diff(t) = B_{on}(t) - B_{off}(t)$, i.e., how many packets Algorithm C has more than the off-line algorithm at time t. We use extensively the following trivial property of the difference.
Lemma 5.2 For all t, $-B \le diff(t) \le B_{on}(t)$.
Proof: Immediate from the fact that $0 \le B_{off}(t) \le B$.
Let $S_{on}(t_1, t_2)$ denote the number of packets sent by Algorithm C in the time interval $(t_1, t_2]$, and define $S_{off}(t_1, t_2)$ analogously for the off-line algorithm. The following lemma states that the difference is modified only according to the difference in the packets released.
Lemma 5.3 For any two time points $t_1 \le t_2$, $diff(t_2) = diff(t_1) - (S_{on}(t_1, t_2) - S_{off}(t_1, t_2))$.
Proof: Consider the events in the time interval $(t_1, t_2]$: a packet arrival increases the number of stored packets for both the off-line algorithm and Algorithm C, and hence does not change their difference. It follows that $diff(t_2) - diff(t_1)$ is exactly the difference in the number of packets sent by the two algorithms in the given interval.
The significance of Lemma 5.3 is in that it allows us to ignore packet arrivals when
analyzing the space requirement of an algorithm: all we need is to consider the difference
from the space requirement of the off-line algorithm. The following lemma, which bounds
the minimal inter-departure time of Algorithm C, is an example for that.
Lemma 5.4 For all times t, $diff(t) \le U + 1$.
Proof: Let t be any point in time. If $B_{on}(t) \le U + 1$, the lemma follows immediately from Lemma 5.2. So assume that $B_{on}(t) > U + 1$. Let $t_0 < t$ be a point such that $B_{on}(t_0) \le U$ and $B_{on}(t') > U$ for all $t' \in (t_0, t]$. Such a point exists since $B_{on}(T) = S \le U$. Consider the time interval $(t_0, t]$: in this interval, at most $\lceil (t - t_0)/Y_{min} \rceil$ packets were released by the off-line algorithm, while Algorithm C has released at least $\lfloor (t - t_0)/IDT(U) \rfloor \ge \lfloor (t - t_0)/Y_{min} \rfloor$, and hence $S_{on}(t_0, t) \ge S_{off}(t_0, t) - 1$. Since by Lemma 5.2 we have that $diff(t_0) \le B_{on}(t_0) \le U$, the result follows from Lemma 5.3.
Similarly, we bound the difference from below.
Lemma 5.5 For all times $t > T$, $diff(t) \ge L - B - 1$.
Proof: Let $t > T$ be a point in time. The case of $B_{on}(t) \ge L$ is immediate from Lemma 5.2. Otherwise, let $t_0$ be a point such that $B_{on}(t_0) \ge L$ and $B_{on}(t') < L$ for all $t' \in (t_0, t]$. The point $t_0$ must exist since $B_{on}(T) = S \ge L$. For the time interval $(t_0, t]$ we have that $S_{off}(t_0, t) \ge \lfloor (t - t_0)/Y_{max} \rfloor$, and $S_{on}(t_0, t) \le \lceil (t - t_0)/IDT(L) \rceil \le \lceil (t - t_0)/Y_{max} \rceil$. Since by Lemma 5.2, $diff(t_0) \ge B_{on}(t_0) - B \ge L - B$, the result follows from Lemma 5.3.
We now prove Theorem 5.1.
Proof of Theorem 5.1: By Lemma 5.4, at all times t, $B_{on}(t) \le U + B + 1$; hence the minimal inter-departure time of Algorithm C is smaller than $Y_{min}$ by less than $(B + 2)\frac{I_{max} - I_{min}}{h}$. By Lemma 5.5, for all times $t > T$, $B_{on}(t) \ge L - B - 1$; hence the maximal inter-departure time of Algorithm C is larger than $Y_{max}$ by less than $(B + 2)\frac{I_{max} - I_{min}}{h}$. Since no packets are released before time T, the theorem follows.
It is worthwhile noting that doubling the space is mandatory for on-line rate-jitter control
(as well as for delay-jitter control), as the following theorem implies.
Theorem 5.6 Let $1 \le \ell < B$. There exist arrival sequences for which an off-line algorithm using space B gets 0-jitter, and any on-line algorithm using $2B - \ell$ buffer space gets rate-jitter at least $X_a$.
The proof of Theorem 5.6 is similar to the proof of Theorem 3.7, and we therefore omit it.
5.2 Adapting to Unknown X a
We can avoid the need of knowing X a in advance, if we are willing to tolerate slow rate in
an initial segment of the online algorithm. This is done by changing the specification of
the loading stage of Algorithm C to terminate when the buffer contains B packets (which
corresponds to inter-arrival time of I max , as opposed to inter-arrival time of X a in the original
specification). Thereafter, the algorithm starts releasing packets according to the specification
of IDT. Call the resulting algorithm C−. Below, we bound the time which elapses in an execution of C− until the buffer size reaches the value of L. Clearly, from that point onward, all guarantees made in Theorem 5.1 hold true for Algorithm C− as well.
Lemma 5.7 Consider an execution of Algorithm C−. Let T be the time where the initial loading ends, and let $T^+$ be the first time such that $B_{on}(T^+) \ge L$.
Proof: For to be the first time after T where B on (t i )
Consider a time interval its length by i . Denote the number of packets
arriving in the interval by A i . Consider the off-line algorithm: For all 1 i
have that
Consider now the execution of Algorithm in the time interval the inter-departure
time is at least Y therefore S on
Ymax+iffi . Using Eq. (8)
and since B on (t by definition, we have
i.e.,
\GammaY
Summing over noting that
and that \GammaB
5.3 Multiplicative Rate Jitter
For some applications, it may be useful to define jitter as the ratio between the maximal
and minimal inter-arrival times. We call this measure the multiplicative rate jitter, or m-rate
jitter for short. It is easy to adapt Algorithm C to the case where we are interested in the
m-rate jitter. All that is needed is to replace the arithmetic interpolation in IDT by a geometric one:
$$IDT_m(j) = I_{max} \quad \text{for } 0 \le j \le B$$
$$IDT_m(j) = I_{max} \cdot (I_{min}/I_{max})^{(j - B)/h} \quad \text{for } B < j < B + h$$
$$IDT_m(j) = I_{min} \quad \text{for } B + h \le j \le 2B + h$$
In this case we obtain the following result, using the same proof technique as for Theorem 5.1.
Theorem 5.8 Let J be the best m-rate-jitter attainable (for an off-line algorithm) using buffer space B for a given arrival sequence. Then the maximal m-rate-jitter in the release sequence generated by Algorithm C using function $IDT_m$ is at most $J \cdot (I_{max}/I_{min})^{(B+2)/h}$.
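The only change to the earlier sketch of Algorithm C is the interpolation; a geometric variant of make_idt (our own rendering of the definition above):

```python
def make_idtm(B, h, I_min, I_max):
    """Geometric inter-departure-time function for multiplicative rate jitter."""
    ratio = (I_min / I_max) ** (1.0 / h)      # per-slot multiplicative step
    def idtm(j):
        if j <= B:
            return I_max
        if j >= B + h:
            return I_min
        return I_max * ratio ** (j - B)
    return idtm

# algorithm_c can be reused with make_idtm substituted for make_idt.
```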
6 Conclusion
In this paper we have studied jitter control algorithms, measured in terms of guarantees
relative to the best possible by an off-line algorithm. Our results for delay jitter show that
the simple algorithm of filling half the buffer has a very strong relative property. For rate
jitter, we proposed a simple algorithm where the release rate is proportional to the fill level of
the buffer, and showed that its relative guarantees are quite strong as well. We have studied
a very simple distributed model for jitter control. We leave for further work analyzing
more realistic models of systems, including multiple streams and more interesting network
topology.
--R
Online Computation and Competitive Analysis.
Client requirements for real-time communication services
ATM Networks: Concepts
Rate control servers for very high-speed networks
Competitive snoopy caching.
An Engineering Approach to Computer Networking.
Isochronous applications do not require jitter-controlled networks
Amortized efficiency of list update and paging rules.
Computer Networks.
The ATM Forum Technical Committee.
Guaranteeing delay jitter bounds in packet switching networks.
Characterizing traffic behavior and providing end-to-end service guarantees within ATM networks
Service disciplines for guaranteed performance service in packet-switched networks
Comparison of rate-based services disciplines
A New Architecture for Packet Switched Network Protocols.
--TR
Amortized efficiency of list update and paging rules
A stop-and-go queueing framework for congestion management
Comparison of rate-based service disciplines
On per-session end-to-end delay distributions and the call admission problem for real-time applications with QOS requirements
ATM networks (2nd ed.)
Leave-in-Time
Computer networks (3rd ed.)
An engineering approach to computer networking
Online computation and competitive analysis
Characterizing Traffic Behavior and Providing End-to-End Service Guarantees within ATM Networks
--CTR
Samir Khuller, Problems column, ACM Transactions on Algorithms (TALG), v.2 n.1, p.130-134, January 2006
Pal , Mainak Chatterjee , Sajal K. Das, A two-level resource management scheme in wireless networks based on user-satisfaction, ACM SIGMOBILE Mobile Computing and Communications Review, v.9 n.4, October 2005
Yiping Gong , Bin Liu , Wenjie Li, On the performance of input-queued cell-based switches with two priority classes, Proceedings of the 15th international conference on Computer communication, p.507-514, August 12-14, 2002, Mumbai, Maharashtra, India | buffer overflow and underflow;competitive analysis;streaming connections;jitter control;quality of service networks |
504917 | Prefetching for improved bus wrapper performance in cores. | Reuse of cores can reduce design time for systems-on-a-chip. Such reuse is dependent on being able to easily interface a core to any bus. To enable such interfacing, many propose separating a core's interface from its internals by using a bus wrapper. However, this separation can lead to a performance penalty when reading a core's internal registers. In this paper, we introduce prefetching, which is analogous to caching, as a technique to reduce or eliminate this performance penalty, involving a tradeoff with power and size. We describe the prefetching technique, classify different types of registers, describe our initial prefetching architectures and heuristics for certain classes of registers, and highlight experiments demonstrating the performance improvements and size/power tradeoffs. We further introduce a technique for automatically designing a prefetch unit that satisfies user-imposed register-access constraints. The technique benefits from mapping the prefetching problem to the well-known real-time process scheduling problem. We then extend the technique to allow user-specified register interdependencies, using a Petri net model, resulting in even more efficient prefetch schedules. | Overview
Separating a core's interface behavior and internal behavior can lead to performance
penalties. For example, consider the core architectures shown in
Figures
1(a), 1(b) and 1(c), showing a core with no bus wrapper, a core with a bus
wrapper (BW) but without prefetching, and a core with a BW with prefetching,
respectively. The latter two architectures are similar to that being proposed
by the VSIA. The BW interfaces with the system bus, whose protocol may be
arbitrarily complex, include a variety of features like arbitration. The BW also
interfaces with the core internals, over a core internal bus; this bus is typically
extremely simple, implementing a straightforward data transfer. It is this
internal bus that the VSI On-Chip Bus group is standardizing. Without a BW,
a read of a core's internal register from the on-chip bus may take as little as
two cycles, as shown in Figure 2(a). With a BW, the read of a core's internal
register may require four cycles, two from the internal module to the BW, and
Fig. 3. PVCI's location in a system-on-a-chip.
two from the BW to the bus. Thus, a read may require extra cycles compared
with a core whose interface and internal behavior are combined.
However, a core with its interface behavior separated into a bus wrapper
is believed to be much easier to retarget to different buses than a core whose
interface behavior is integrated with its internal behavior. By standardizing
the interface between the core's internals and the bus wrapper, retargeting of
a core may become easier.
3.2 PVCI
After deciding that a single on-chip bus standard was unlikely, the VSIA developed
the VCI [Virtual Socket Interface Association 1997b]. The VCI is a proposed
standard interface between a core's internals and a core's bus wrapper,
as illustrated in Figure 3. Retargeting a core using VCI will involve roughly the
same changes to the bus wrapper, since the VCI ensures that the changes are
limited to the wrapper and not the internals, and since a bus provider can even
provide bus wrapper templates between the bus and the VCI. The VCI is a far
simpler protocol than a typical bus protocol, since it is a point-to-point transfer
protocol. In contrast, a bus protocol may involve more advanced features, such
as arbitration, data multiplexing, pipelining, and so on. Thus, standardizing
the VCI is far simpler than standardizing a bus protocol.
The PVCI is a simplified version of the VCI, specifically intended for peripherals. PVCI cores would reside on a lower-speed peripheral bus as shown in
Figure 3, and thus would not need some of the high-speed features of the VCI,
e.g., packet chaining. The general structure of the PVCI is shown in Figure 4.
It consists of two unidirectional buses. One bus leads from the wrapper to the
internals. The wrapper sets the read line to indicate a read or a write, and sets
the address lines with a valid address. For a write, it also sets the wdata lines. It
asserts the val line to actually initiate the read or write. The wrapper must hold
all these lines constant until the internals assert the ack line. For a write, this
means that the internals have captured the write data. For a read, this means
that the internals have put the read data on the rdata bus. The transaction
Fig. 4. PVCI's general structure.
is completed on the next rising clock edge. A fast internals module can keep
ack asserted continuously to provide for fast transfers, similar in spirit to the
synchronous wait protocol [Vahid and Givargis 1999].
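A behavioral sketch of the wrapper's side of this handshake, written in Python for illustration (the core object, its signal attributes, and the clock_tick method are our own modeling conventions, not part of the PVCI specification):

```python
def pvci_read(core, address):
    """One PVCI read transaction as seen from the bus wrapper."""
    core.read, core.address = 1, address   # select a read at this address
    core.val = 1                           # initiate the transaction
    while not core.ack:                    # hold all lines until the internals ack
        core.clock_tick()
    data = core.rdata                      # internals placed read data on rdata
    core.clock_tick()                      # transaction completes on the next edge
    core.val = 0
    return data
```

A write proceeds symmetrically, driving wdata instead of sampling rdata; as noted above, internals that keep ack asserted allow one transfer per clock.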
3.3 Experiments with Bus Wrappers
We sought to evaluate the impact of a wrapper and of PVCI using a simple
peripheral bus. We used a bus with a two-phase handshake protocol to ensure
that the communication was as fast as possible for a given peripheral. As
previously demonstrated, using a wrapper results in a two-cycle overhead per
read as compared with an integrated core.
Figure 2(a) illustrates the timing of a read cycle of this peripheral bus for an
integrated core. The peripheral bus master (in our case, the bridge) places an
address on addr and then strobes rd. The peripheral responds by placing data
on data and strobing rdy as early as once cycle after receiving the rd strobe.
Thus, the total read cycle could be as little as two clock cycles.
Figure 2(b) illustrates the read cycle of the bus for a core using a bus wrapper.
After the bus master places the address and strobes rd, the wrapper responds
by translating this read request into a read request over the internal bus. This
translation involves translating the address to one appropriate for the core and
then placing that address on wrp addr, and then asserting wrp read. The core's
internals respond by placing data on wrp data and then asserting wrp rdy. The
wrapper receives the data, puts it on the peripheral bus, and strobes rdy.
A write cycle need not incur any performance overhead in the wrapper ver-
sions. When the bus master sets the addresses and strobes the appropriate
ready line, the wrapper can respond immediately by capturing the data and
strobing the ready line, just like an integrated core will do. The wrapper can
then proceed to write the captured data to the core internals, while the bus
master proceeds with other activities.
The example we evaluated was a simple version of a digital camera system,
illustrated in Figure 5. The camera system consists of a (simpli?ed) MIPS mi-
croprocessor, BIOS, and memory, all on a system bus, with a bridge from the
system bus to a peripheral bus (ISA) having a CCD (charge-coupled device)
Fig. 5. Digital camera example system.
preprocessor and a simple CODEC (compressor/decompressor). The two-level
bus structure is in accord with the hierarchical bus concept described in [Virtual
Socket Interface Association 1997a]. The camera is written in register-transfer
level synthesizable VHDL, and synthesizes to about 100,000 cells. We used the
Synopsys Design Compiler as well as the Synopsys power analysis tools to evaluate
different design metrics. Power and performance were measured for the
processing of one frame.
We made changes to the CCD preprocessor and CODEC cores since they
represent the peripherals on the peripheral bus. These cores are used heavily
while processing a frame. We created three versions of the camera system:
(1) Integrated: The CCD preprocessor and CODEC cores were written with
the interface behavior inlined into the internal behavior of the core. Thus,
synthesis generates one entity for each core.
(2) Non-PVCI wrapper: The CCD preprocessor and CODEC cores were written
with the interface behavior separated into a wrapper. Thus, synthesis
generates two connected entities for the core. The interface between these
two wrapper and internal entities consisted of a single bidirectional bus, a
strobe control line and a read/write control line, and however many address
lines were necessary to distinguish among internal registers.
(3) PVCI wrapper: Same as the previous version, except that the interface
between the wrapper and internal entities was PVCI.
The non-PVCI wrapper version was created for another purpose, well before
the PVCI standard was developed and with no knowledge that the version
would be used in these experiments. Thus, its structure was developed to be as
simple as possible.
Table I summarizes size, performance, and power results. Size is reported
in equivalent NAND gates, time in nanoseconds, and power in milliwatts. The
size overhead when using a bus-wrapper (non-PVCI) compared to the integrated
version was roughly 1500 gates per core. This overhead comes from extra control
Table I. Comparison of Interface Versions Using a Custom Bus

Version        Ex.     Size of Wrapper (gates)  Size of Core (gates)  Power (mW)  Total time (ns)  I/O time (ns)
Integrated     CCD
               CODEC
Wrapper        CCD     1661                     34556                 8.11        79055            15520
               CODEC   1674                     1904
PVCI Wrapper   CCD     1439                     33978                 7.98        79055            15520
               CODEC   1434                     1588
and registers. In the integrated version, the core's internals includes control to
interface to the peripheral bus. In the wrapper version, this control is replaced
by control for interfacing to the wrapper, so the size of the core's internals
stays the same. However, the wrapper now must implement control for interfacing
to the internals, and for interfacing to the peripheral bus, representing
overhead. The wrapper must also include registers whose contents are copied
to/from the internals, representing additional overhead. The reason that the
non-PVCI wrapper version shows more size overhead than the PVCI wrapper
version is because the non-PVCI version used a single bus for transfers both to and from the core internals, whereas PVCI specifies two separate buses, resulting in less logic but more wires. Fifteen hundred gates of size overhead seems quite reasonable, given the continued increase of chips' gate capacities, and given that peripheral cores typically possess 20,000 gates or more [Mentor Graphics n.d.].
The system power overhead was only about 1%. The extra power comes from
having to transfer items twice per access. On a write, an item must be transferred
?rst from the bus to the wrapper, then from the wrapper to the internals.
On a read, an item must be transferred ?rst from the internals to the wrap-
per, then from the wrapper to the bus. However, the power consumed by the
memory, system bus, and processor dominate, so the extra power due to the
wrappers is very small, even though the CCD and CODEC are heavily used
when processing a frame.
In Table I, we can see that there is a 100% increase in peripheral I/O access
time when bus wrappers are employed. This overhead is due to the use of a
wrapper, which would have occurred whether using PVCI or another wrapper.
In our experiments, the CCD was accessed 256 times per image frame, while the
CODEC was accessed a total of 128 times per frame. Because the MIPS processor
executed approximately 5000 instructions per frame, the overall overhead
of the bus wrappers amounts to approximately 5%.
One difference between the non-PVCI and PVCI interface that does not appear
in the results is the number of wires internal to the core. The non-PVCI
version uses a multiplexed bus, and has fewer signals (some PVCI signals were
not shown), and thus would have fewer internal wires.
Noting that our CCD and CODEC cores are relatively small and have simple
interfaces, it took us 6 designer hours, excluding synthesis and simulation
time, to retarget a design from one wrapper to another, e.g., to convert the
CCD's non-PVCI wrapper to a PVCI implementation. Synthesis time for the
CCD and CODEC was approximately 1 hour. Simulation time for capturing
one image frame was slightly over 10 hours and power analysis was an additional
5 hours. These times were obtained by synthesizing the models down
to gates using Synopsys Design Compiler with medium mapping effort, using
the lsi-10k library supplied by Synopsys, with no area or delay constraints
speci?ed. We used a dual 200-MHz Ultra Sparc II machine to perform both
our synthesis and simulation. Synthesis and simulation times were relatively
the same between the integrated bus implementations and those using a bus
wrapper. We note that peripheral devices capable of DMA or burst mode I/O
with interrupts will require more time to integrate into a system.
Although the use of bus wrappers improves the usefulness of a core by making
it easier to retarget to varying systems, this reusability comes at a cost. Bus
wrappers introduce both performance and power overhead, as we have demon-
strated. In tightly constrained systems where peripheral access time is critical,
this overhead is often infeasible. Ideally, the use of bus wrappers could allow
for quick retargeting of a core while not degrading performance. In the next
section, we present a technique called prefetching that effectively eliminates
the performance overhead of bus wrappers.
4. BASIC PREFETCHING
4.1 Overview
Our focus is to minimize this performance penalty in order to maximize the
usefulness of the core. We seek to do so in a manner transparent to both the
developers of the core internal behavior as well as developers of the on-chip bus.
Because of the continued exponential growth in chip capacity, we seek to gain
performance by making the tradeoff of increased size, since size constraints
continue to ease. However, we note that our approach increases the switching
activity of the core, and thus we must also evaluate the increased power
consumption and seek to minimize this increase.
We focus on peripheral cores, whose registers will be read by a microprocessor
over an on-chip bus (perhaps via a bus bridge) with the idea being to minimize
the read latency experienced by the microprocessor.
The basic technique that we propose is called prefetching. Prefetching is the
technique of copying a core's internal register data into a prefetch register in
a core's BW, so that when a read request from the bus occurs, the core can
immediately output prefetched data without spending extra cycles to first get the data from the core's internal module. We use the terms hit and miss in a manner identical to caches; a hit means that the desired data is in a prefetch register, while a miss means that the data must first be fetched into a prefetch register before being output to the on-chip bus. For example, Figure 2(c) shows that prefetching a core's internal register D into a BW register D′ results in a system read again requiring only two cycles, rather than four.
4.2 Classification of Core Registers
We immediately recognized the need to classify common types of registers found
in peripheral cores, since different types would require different prefetching
approaches.
After examining cores, primarily from the Inventra library [Mentor Graphics n.d.], focusing on bus peripherals, serial communication, encryption, and compression/decompression, we defined a register classification scheme based on four attributes: update type, access type, notification type, and structure type:
(1) The update type of a register describes how the register's contents are modified. Possible types include:
(a) A static-update register is updated by the system only, where the system is the device (or devices) that communicate with the core over the on-chip bus. An example of a static register is a configuration register. After the system updates the register, the register's content does not change until the system updates it again.
(b) A volatile-update register is updated by a source other than the system (e.g., internally by the core or externally by the core's environment) at either a random or fixed rate. An example is an analog-to-digital converter, which samples external data, converts the data to digital, and stores the result in a register, at a fixed rate.
(c) An induced-update register is updated as a direct result of another register within the core being updated. Thus, we associate this register with the inducing register. Typically, an induced register is one that provides status information.
(2) The access type of a register describes whether the system reads and/or writes the register, with possible types including: (a) read-only access, (b) write-only access, and (c) read/write access.
(3) The notification type describes how the system is made aware that a register has been updated, with possible types including:
(a) An interrupt notification in which the core generates an interrupt when the register is updated.
(b) A register-based flag notification in which the core sets a flag bit (where that bit may be part of another register).
(c) An output flag notification in which the core has a specific output signal that is asserted when the register is updated.
(d) No notification in which the system is not informed of updates and simply uses the most recent register data.
(4) The structure type of the register describes the actual storage capability of the register, with possible types including:
(a) A singly structured register is accessed through some address and is internally implemented as one register.
(b) A queue-structured register is a register that is accessed through some address but is internally implemented as a block of memory. A common example is a buffer register in a UART.
(c) A block-structured register is a block of registers that can be accessed through consecutive addresses, such as a register file or a memory.
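For concreteness, the four attributes can be captured in a small data structure; the Python rendering below is our own and purely illustrative, enumerating the attribute values named above.

```python
from dataclasses import dataclass

@dataclass
class RegisterClass:
    """One core register described by the four classification attributes."""
    update: str        # 'static', 'volatile', or 'induced'
    access: str        # 'read-only', 'write-only', or 'read/write'
    notification: str  # 'interrupt', 'register-flag', 'output-flag', or 'none'
    structure: str     # 'single', 'queue', or 'block'

# e.g., a typical configuration register (Core1 below):
config_reg = RegisterClass('static', 'read/write', 'none', 'single')
```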
4.3 Commonly Occurring Register Types
For our first attempt at developing prefetching techniques for cores, we focused on the following three commonly occurring combinations of registers in cores:
(1) Core1—configuration registers: Many cores have configurable settings controlled by a set of configuration registers. A typical configuration register has the features of static update, read/write access, no notification, and singly structured. We refer to this example as Core1.
(2) Core2—task registers: Many cores carry out a specific task from start to completion and have a combination of a data input register, a data output register, and a status register that indicates completion of the core's task. For example, a CODEC (compress/decompress) core typically has such a set of registers. We looked at how to prefetch the data output and status registers. The data output register has the following features: volatile-update at a random rate, read-only access, register-based flag notification with the flag stored in the status register, and singly structured. The status register has the following features: induced update by an update to the data output register, read-only access, no notification, and singly structured. Although the data input register will not be prefetched, its features are: volatile-update at a random rate, write-only access, no notification, and singly structured. We refer to this example as Core2.
(3) Core3—input-buffer registers: Many cores have a combination of a queue data buffer that receives data and a status register that indicates the number of bytes in the buffer. A common example of such a core is a UART. Features of the data buffer include: volatile-update at a random rate, read-only access, register-based flag notification stored in the status register, and queue-structured. The status register features include: induced-update by an update to the data register, read-only access, no notification, and singly structured. We refer to this example as Core3.
4.4 Prefetching Architectures and Heuristics
4.4.1 Architecture. To implement prefetching for each of the combinations of registers listed above, we developed a corresponding bus wrapper architecture. Figure 6 illustrates the architectures for the three combinations,
respectively. Each BW architecture has three regions:
(1) Controller: The controller's main task is to interface with the on-chip bus. It
thus handles reads and writes from and to the core's registers. For a write,
the controller writes the data over the core internal bus to the core internal
register. For a read, the controller outputs the appropriate prefetch register
data onto the bus; for a hit, this outputting is done immediately, while for
a miss, it is done only after forcing the prefetch unit to first read the data
from the core internals.
(2) Prefetch registers: These registers are directly connected to the on-chip bus
for fast output. Any output to the bus must pass through one of these
registers.
Fig. 6. Bus wrapper architecture and timing diagrams for (a) Core1, (b) Core2, and (c) Core3.
(3) Prefetch unit: The PFU implements the prefetch heuristics, and is responsible
for reading data from the core internals to the prefetch registers. Its
goal is to maximize hits.
The architecture for the Core1 situation is shown in Figure 6(a), showing one
register D and its corresponding prefetch register D′. Since D is only updated by the on-chip bus, no prefetch unit is needed; instead, we can write to D′ whenever
we write to D. Such a lack of a PFU is an exception to the normal situation.
Figure
6(b) shows the architecture for the Core2 situation. The data output
register DO and status register S both have prefetch registers in the BW, but
the data input register DI does not since it is never read by the on-chip bus. The
PFU carries out its prefetch heuristic (see next section), unless the controller
asserts the "writing" line, in which case the PFU suspends prefetching so that
the controller may write to DI over the core internal bus. Figure 6(c) shows the
architecture for the Core3 example, which has no write-access registers and
hence does not include the bus between the controller and the core internal bus.
4.4.2 Heuristics. We applied the following prefetch heuristics within each
core's bus wrapper:
Core1: Upon a system write to the data register D, simultaneously write the data into the prefetched data register D′. This assumes that a write to the data register will occur prior to a read from the register.
Core2: After the system writes to the data input register DI, we read the core's internal status register S into the prefetched status register S′. If the status indicates completion, we read the core's internal data output register DO into the prefetched data-output register DO′. We repeat this process.
Core3: We continuously read the core's internal status register S into the prefetched status register S′ until the status indicates the buffer is no longer empty. We then read the core's data register D into the prefetched data register D′. While waiting for the system to read the data, we continuously read the core's internal status register into the prefetched status register, thereby providing the most current status information. When the data is read by the system, depending on whether the buffer is empty, we either read the next data item from the core or repeat the process.
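The Core3 heuristic is essentially a polling loop; a Python sketch of ours (read_internal stands for a two-cycle read over the core internal bus, system_read_occurred for the controller's hit signal, and bit 0 of S is assumed to flag a nonempty buffer; all three are illustrative assumptions):

```python
def core3_prefetch_loop(pfu):
    """Endless Core3 prefetch heuristic (one read_internal = 2 bus cycles)."""
    while True:
        pfu.S_prime = pfu.read_internal("S")          # poll status into S'
        if pfu.S_prime & 0x1:                         # buffer not empty
            pfu.D_prime = pfu.read_internal("D")      # prefetch data into D'
            while not pfu.system_read_occurred():     # until the system reads D'
                pfu.S_prime = pfu.read_internal("S")  # keep status current
```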
Figure
6 shows timing diagrams for the three cores with a BW and prefetch-
ing. In all three cores, the read latency for each core with a BW and prefetching
was equal to the latency of that core without a BW, thus eliminating the performance
penalty.
Note that a BW's architecture and heuristic are dependent on the core
internals. This is acceptable since the core developer builds the BW. The BW
controller's bus interface is not, however, dependent on the core internals, as
desired.
4.5 Experiments
We implemented cores representing the three earlier common examples, in
order to evaluate performance, power, and size tradeoffs achievable through
Table II. Impact of Prefetching on Several Cores

Size w/o BW (gates); Size w/BW w/o PF (gates); Size w/BW w/PF (gates): 26692638617211506
Performance w/o BW (ns); Performance w/BW w/o PF (ns); Performance w/BW w/PF (ns): 9835551555454305
Power w/o BW (microwatts); Power w/BW w/o PF (microwatts); Power w/BW w/PF (microwatts): 13994805601521
Energy w/o BW (nJ); Energy w/BW w/o PF (nJ); Energy w/BW w/PF (nJ): 13.762.653.116.55

prefetching. Results are summarized in Table II. All three cores were written as
soft cores in register-transfer-level behavioral VHDL. The three cores required
136, 220, and 226 lines of VHDL, respectively. We synthesized the cores using
Synopsys Design Compiler. Performance, average power, and energy metrics
were measured using Synopsys analysis tools, using a suite of core test vectors
for each core. It is important to note that these cores have simple internal
behavior and were used for experimentation purposes only. Although these
examples are small, because the PFU unit is independent of the core internals
our approach can be applied to larger examples as well.
In all three cores, when prefetching was added to the BW's, any performance
penalty was effectively eliminated. In Core2 and Core3, there was a trivial one-time
30-ns and 10-ns overhead associated with the initial time required to start
and restart the prefetching process for the particular prefetch heuristics.
The addition of a BW to cores adds size overhead to the design, but size constraints
continue to relax as chip capacities continue their exponential growth.
In the three cores described above, there was an average increase in the size
of each core by 1352 gates. The large percentage increase in size for two of the cores was due to the fact that these cores were unusually small to begin with
since they had only simple internal behavior, having only 1000 or 2000 gates;
more typical cores would have closer to 10,000 or 20,000 gates, so the percentage
increase caused by the few thousand extra gates would be much smaller.
In order for prefetching to be a viable solution to our problem, power and
energy consumption must also be acceptable. Power is a function of the amount
of switching in the core, while energy is a function of both the switching and
the total execution time. BWs without prefetching caused both an increase in
power (due to additional internal transfers to the BW) and an increase in overall
energy consumption (due to longer execution time) in all three cores. Compared
to BWs without prefetching, BWs with prefetching may increase or decrease
power depending on the prefetch heuristic and particular application. For ex-
ample, in Core1 and Core3, there was an increase in power due to the constant
activity of the prefetch unit, but in Core2, there was a decrease in power due to
the periods of time during which the prefetch unit was idle. However, in all three
cores, the use of prefetching in the BW decreased energy consumption over the
ACM Transactions on Design Automation of Electronic Systems, Vol. 7, No. 1, January 2002.
Table III. Impact of Prefetching on Digital Camera Performance

                                        Reads   Cycles w/o prefetching   Cycles w/ prefetching
CCD-Status                               256           1024                     512
CCD-Data                                 257           1028                     514
CODEC-Status
CODEC-Data
Total for 2 cores                        772           3088                     1544
Digital Camera Peripheral I/O Access
Digital Camera Processor Execution
Digital Camera                                        48,616                   47,072
Table IV. Impact of Prefetching on Digital Camera Power/Energy

             w/o BW    BW w/o prefetching    BW w/ prefetching
Power, mW     95.4           98.1                  98.1
Energy, J     44.9           47.7                  46.2
BW without prefetching because of reduced execution time. In addition, the
increase in energy consumption relative to the core without a bus wrapper was
fairly small.
To further evaluate the usefulness of prefetching, we analyzed a digital
camera as shown in Figure 5. We initially had implemented the CCD and
CODEC cores using BWs without prefetching. We therefore modi?ed them to
use prefetching, and compared the two versions of the digital camera system.
Table
III provides the number of cycles for reading status and data registers
for the two cores to capture one picture frame. The number of cycles required
for these cores with prefetching was half of the number of cycles required
without prefetching. The improvement in performance for reads from the CCD
and CODEC was 50%. The overall improvement in performance for the digital
camera was over 1500 cycles just by adding prefetching to these two cores, out
of a total of about 47,000 cycles to capture a picture frame. The prefetching
performance increase of the digital camera was directly related to the ratio of
I/O access to processor computation. Because the digital camera spends 78% of
execution time performing computation and only 12% performing I/O access,
prefetching did not have a large impact on overall performance. However,
the increase in performance for peripheral I/O access was 25%. Therefore, for
a design that is more I/O intensive, one would expect a greater percentage
performance increase. Furthermore, if the processor was pipelined, the number
of cycles required for program execution would decrease, and the percentage
of time required for I/O access would increase. Thus, one would again expect a
greater percentage performance increase from prefetching. Adding prefetching
to other cores would of course result in even further reductions. The power
and energy penalties are shown in Table IV. We see that, in this example,
prefetching is able to eliminate any performance overhead associated with
keeping interface and internals separated in a core.
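The cycle totals in Table III follow directly from the per-read latencies discussed earlier (four bus cycles per read through a plain wrapper, two once the data is prefetched); a two-line check of the arithmetic:

```python
reads = 772                 # total register reads for CCD + CODEC per frame
without_pf = 4 * reads      # plain bus-wrapper read costs 4 cycles
with_pf = 2 * reads         # prefetched read costs 2 cycles
print(without_pf, with_pf)  # 3088 and 1544: the ~1500-cycle saving per frame
```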
Prefetching enables elimination of the performance penalty while fully supporting
the idea of a VSI standard for the internal bus between the BW and
core internals. It can also be varied to tradeoff performance with size and power;
ideally, a future tool would synthesize a BW satisfying the power, performance,
and size constraints given by the user of a core.
5. "REAL-TIME" PREFETCHING
5.1 Overview
One of the drawbacks to the prefetching technique described above is that the
prefetch unit was manually designed and created. We desired to also investigate
an automatic solution to designing a prefetch unit. The bus wrapper in our
automated approach has an identical architecture to our previous bus wrapper
with prefetching. However, we now redefine the task of the prefetch unit (PFU).
The prefetch unit is responsible for keeping the prefetch registers as up-to-date
as possible, by prefetching the core's internal registers over the internal bus,
when the internal bus is not being used for a write by the controller, i.e., during
internal bus idle cycles. Only one register can be read from the core internals
at a time.
We assume we are given a list of the core's readable registers, which must be
prefetched. We also assume that the bus wrapper can accommodate one copy
of each such register. Each register in the list is annotated with two important
read-access constraints:
—Register age constraint: This constraint represents the number of cycles old
that data may be when read. In other words, it represents the period during
which the prefetch register must be updated at least once. An age constraint
of 0 means that the data must be the most recent data, which in turn means
that the data must come directly from the core and hence prefetching is
not allowed, since prefetched data is necessarily at least one cycle old. A
constraint of 0 also means that the access-time constraint must be at least
four cycles.
—Register access-time constraint: This constraint represents the maximum
number of cycles that a read access may take. The minimum is two, in which
case the register must be prefetched. An access-time constraint greater than
2 denotes that additional cycles may be tolerated.
We wish to design a PFU that reads the core internal registers into the prefetch
registers using a schedule that satis?es the age and access-time constraints on
those registers. Note that certain registers may be prefetched more frequently
than others if this is required to satisfy differing register access constraints.
The tradeoff of prefetching is performance improvement at the expense of size
and power. Our main goal is performance improvement, but we should ensure
that size and power do not grow more than an acceptable amount. Future work
may include optimizing a cost function of performance, size, and power.
For example, Figure 7 shows a core with three registers, A, B, and C. We
assume that registers A and B are independent registers that are read-only,
and updated randomly by the core internals. Assume that A and B have register
age constraints of four and six cycles, respectively. We might use a naive
prefetching heuristic that prefetches on every idle cycle, reading A 60% and
Prefetching for Improved Bus Wrapper Performance 19
Fig. 7. Bus wrapper with prefetching.
Table V. Prefetch Scheduling for the Core in Figure 7. (Columns: the idle cycle (1, 3, 5, 7, 9, . . .) and the register, A or B, prefetched in that cycle under Schedule 1 and under Schedule 2.)
B 40% of the time, leading to Schedule 1 in Table V. However, we can create a
more efficient schedule, as shown in Schedule 2. Although both schedules will meet the constraints, the first schedule will likely consume more power. The
naive scheduler also does not consider the effects of register writes, which will
be taken into consideration using real-time scheduling techniques.
During our investigation for heuristics to solve the prefetching problem, we
noticed that the problem could be mapped to the widely studied problem of
real-time process scheduling, for which a rich set of powerful heuristics and
analysis techniques already exist. We now describe the mapping and then provide
several prefetching heuristics (based on real-time scheduling heuristics)
and analysis methods.
5.2 Mapping to Real-Time Scheduling
A simple definition of the real-time scheduling problem is as follows. Given a
set of N independent periodic processes, and a set of M processors, we must
order the execution of the N processes onto the M processors. Each process
has a period, Pi, a deadline, Di, and a computation time, Ci. The period of a
process is the rate at which the process requests execution. The deadline is the
length of time in which a process must complete execution after it requests to be
executed. Finally, the computation time is the length of time a process takes to
perform its computation. Therefore, real-time scheduling is the task of ordering
the execution of the N processes among the M processors, to ensure that each
process executes once every period Pi and within its deadline Di, where each
process takes Ci time to complete.
A mapping of the prefetching problem to the real-time process-scheduling
problem is as follows.
- Register → process: A register that must be scheduled for prefetching corresponds
to a process that must be scheduled for execution.
- Internal bus → processor: The internal bus can accommodate only one
prefetch at a time. Likewise, a processor can accommodate only one process
execution at a time. Thus, the internal bus corresponds to a processor.
- Prefetch → process execution: A prefetch occurs over the internal bus, and
thus corresponds to a process execution occurring on a processor.
- Register age constraint → process period: The register age constraint defines
the period during which the register must be prefetched, which corresponds
to the period during which a process must be scheduled.
- Register access-time constraint → process deadline: The access-time constraint
defines the amount of time a read may take relative to the read
request, which corresponds to the amount of time a process must complete
its execution relative to the time it requested service.
- Prefetch time → process computation time: A prefetch corresponds to a process
execution, so the time for a prefetch corresponds to the computation
time for a process. In this paper, we assume a prefetch requires two cycles,
although the heuristics and analysis would of course apply if we extended
the register model to allow for (the rather rare) situation where different
registers would require different amounts of time to read them from the core
internals.
Given this mapping, we can now use several known real-time scheduling and
analysis techniques to solve the prefetching problem.
5.3 Heuristics
5.3.1 Cyclic Executive Approach. The cyclic executive approach [Burns
and Wellings 1997] is a straightforward process scheduling method that can
be used for a fixed set of periodic processes. The approach constructs a fixed repeating
schedule called a major cycle, which consists of several minor cycles of
fixed duration. The minor cycle is the rate at which the process with the highest
priority will be executed. The minor cycle is therefore equal to the smallest age
of the registers to be prefetched. This approach is attractive due to its simplicity.
However, it does not handle sporadic processes (in our case, sporadic writes),
all process periods (register-age constraints) must be a multiple of the minor
cycle time, and constructing the executive may be computationally infeasible
for a large number of processes (registers).

Table VI. Prefetch Core Descriptions (columns: Core, Register, Max Age, D,
Priority RM, PF Time, Response Time, Util., and Util. Bound).
To serve as examples, we describe three cores with various requirements.
Table VI contains data pertaining to all three of our cores: the maximum
register age constraint (Max Age), the register access-time constraint or deadline
(D), the rate monotonic priority assignment (Priority RM), the time required
to prefetch the register (PF Time), the response time of the register (Response
Time), the utilization for the register set (Util.), and the utilization bound for
the register set (Util. Bound). Core1 implements a single-channel DAC.
Although the analog portion of the converter could not be modeled in VHDL,
the technique for converting the analog input was implemented. The core has a
single register, DATA, that is read-only and updated randomly externally from
the system. Core2 calculates the Greatest Common Divisor (GCD) of three inputs
while providing checksum information for the inputs and the result. The
core contains three registers, GCD1, GCD2, and CS. The result from the GCD
calculator is valid when GCD1 is equal to GCD2. Registers GCD1, GCD2, and
CS are independent read-only registers that are updated externally from the
system. Core3 has ?ve registers, STAT, BIAS, A, B, and RES. STAT is a status
register that is read-only, and indicates the status of the core, i.e., busy or not
busy. Registers A and B are read-only registers that are updated randomly from
outside the system. RES is a read-only register containing the results of some
computation on registers A, B, and BIAS, where BIAS is a write-only register
that represents some programmable adjustment in the computation.
We can use the cyclic executive approach to create a schedule for each of our
three cores. For Core1, both the minor cycle and the major cycle are three. For
Core2, the minor cycle is 10 and the major cycle is 20. Finally, for Core3, we can
construct a cyclic executive with a minor cycle of five and a major cycle of 25.
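A minimal sketch of how these cycle parameters can be computed. It assumes, as in Core3, that a register may be prefetched more often than its age constraint strictly requires, which can shorten the major cycle:

    from math import gcd
    from functools import reduce

    def cyclic_executive_cycles(prefetch_periods):
        # Minor cycle: rate of the highest-priority register (smallest period).
        # Major cycle: one full repetition of the schedule (lcm of the periods).
        minor = min(prefetch_periods)
        major = reduce(lambda a, b: a * b // gcd(a, b), prefetch_periods)
        return minor, major

    print(cyclic_executive_cycles([3]))           # Core1: (3, 3)
    print(cyclic_executive_cycles([10, 10, 20]))  # Core2: (10, 20)
    # Core3: prefetching RES every 5 cycles instead of every 10 keeps every
    # period a multiple of the minor cycle and yields the (5, 25) quoted above.
    print(cyclic_executive_cycles([5, 25, 25, 5]))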
5.3.2 Rate Monotonic Priority Assignment. A more general scheduling approach
can be used for more complex examples, wherein we determine which
process to schedule (register to prefetch) next based on a priority scheme. A
rate monotonic priority assignment [Burns and Wellings 1997] assigns a priority
to each register based upon its age. The register with the smallest age will
have the highest priority. Likewise, the register with the largest age will have
the lowest priority. For our examples we will use a priority of 1 to indicate the
highest priority possible. Rate monotonic priority assignment is known to be
ACM Transactions on Design Automation of Electronic Systems, Vol. 7, No. 1, January 2002.
22 Lysecky and Vahid
optimal in the sense that if a process set can be scheduled with a fixed-priority
assignment scheme, then the set can also be scheduled with a rate monotonic
assignment scheme.
We again refer to Table VI for data pertaining to all three of our cores.
For Core1, the register age constraint of the register DATA is three cycles.
Given that DATA is the only register present, it is assigned the highest prior-
ity. Core2's registers GCD1, GCD2, and CS have age constraints of 10, 10, and
respectively. Therefore, the corresponding priorities from highest to lowest
are GCD1, GCD2, and CS. However, because the register age constraint for
GCD1 and GCD2 are equal, the priorities for Core2 could also be, from highest
to lowest, GCD2, GCD1, and CS. It is important to note that the priorities of
registers with the same age constraint can be assigned arbitrary relative priorities
as long as the constraints are met. For Core3, the age constraints for the
registers STAT, A, B, and RES are respectively 5, 25, 25, and 10. Therefore, the
priority of the registers from highest to lowest would be STAT, RES, A, and B.
5.3.3 Utilization-Based Schedulability Test. The utilization-based schedulability
test [8] is used to quickly indicate whether a set of processes can be
scheduled, or in our case whether the registers can be prefetched. All N registers
of a register set can be prefetched if Equation (1) is true, where C_i is the
computation time for register i, A_i is the age constraint of register i, and N
is the number of registers to be prefetched. The left-hand side of the equation
represents the utilization bound for a register set with N registers, and the
right-hand side represents the current utilization of the given register set:

    N(2^(1/N) - 1) >= sum_{i=1..N} C_i / A_i                    (1)
If the register set passes this test, all registers can be prefetched and no
further schedulability analysis is needed. However, if the register set fails the
test, a schedule for this register set that meets all constraints might still exist.
In other words, the utilization-based schedulability test can show that a
register set can be prefetched, but it cannot show that a register set cannot
be prefetched.
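A sketch of the test as reconstructed in Equation (1); register parameters are given as (C_i, A_i) pairs:

    def utilization_test(regs):
        # regs: list of (C_i, A_i) = (prefetch time, age constraint) pairs.
        # Schedulable if sum(C_i / A_i) <= N * (2**(1/N) - 1); a failed test
        # is inconclusive -- a valid schedule may still exist.
        n = len(regs)
        util = sum(c / a for c, a in regs)
        bound = n * (2 ** (1 / n) - 1)
        return util <= bound, util, bound

    # Core2 from Table VI: 2-cycle prefetches with ages 10, 10, and 20
    print(utilization_test([(2, 10), (2, 10), (2, 20)]))  # passes: 50.0% vs 78.0%
    # The bound for a four-register set is ~75.7%, which Core3's 86.0%
    # utilization exceeds, so Core3 fails this test.
    print(4 * (2 ** (1 / 4) - 1))  # 0.7568...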
We can analyze our cores to determine whether we can schedule them. From
Table VI, we can see that both Core1 and Core2 pass the utilization-based
schedulability test with respective utilizations of 66.7% and 50.0%, where the
corresponding utilization bounds were 100% and 78.0%. This indicates that we
can create a schedule for both of these cores and we do not need to perform any
further analysis. However, Core3 has a utilization of 86.0%, but the utilization
bound for four registers is 75.7%. Therefore, we have failed the utilization-based
schedulability test, though a schedule might still exist.
5.3.4 Response-Time Analysis. Response-time analysis [Burns and
Wellings 1997] is another method for analyzing whether a process set (in
our case, register set) can be scheduled. However, in addition to testing the
schedulability of a set of registers, it also provides the worst-case response
time for each register. We calculate the response of a register using Equation
(2), where R_i is the response time for register i, C_i is the computation time of
register i, and I_i is the maximum interference that register i can experience
in any time interval [t, t + R_i). The interference of a register is the amount of
time that a process must wait while other higher-priority processes execute.

    R_i = C_i + I_i                                             (2)
A register set is schedulable if all registers in the set have a response time
less than or equal to their age constraint. From Table VI, we can see that the
registers of all three cores will meet their register age constraints. Therefore, it
is possible to create a prefetching schedule for all three cores. It is interesting
to note that although the utilization-based schedulability test failed for Core3,
response time analysis indicates that all of the registers can be prefetched. We
refer the reader to Burns and Wellings [1997] for further details on response-time
analysis.
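The response times in Table VI can be reproduced with the standard fixed-point iteration used in response-time analysis; this sketch assumes 2-cycle prefetches and deadlines equal to the age constraints:

    from math import ceil

    def response_time(i, regs):
        # regs: (C, A) pairs sorted by priority, highest first; interference
        # I_i comes from preemption by the higher-priority registers regs[:i].
        c_i, a_i = regs[i]
        r = c_i
        while True:
            r_next = c_i + sum(ceil(r / a_j) * c_j for c_j, a_j in regs[:i])
            if r_next == r:
                return r       # converged: R_i = C_i + I_i
            if r_next > a_i:
                return None    # exceeds the age constraint: not schedulable
            r = r_next

    # Core3 in rate-monotonic order STAT, RES, A, B (ages 5, 10, 25, 25)
    print([response_time(i, [(2, 5), (2, 10), (2, 25), (2, 25)])
           for i in range(4)])  # [2, 4, 8, 10] -- all within their ages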
5.3.5 Writes. We now consider the impact of writes to
core registers. Writes come at unknown intervals, and a write ties up the core's
internal bus and thus delays prefetches until done. We can therefore view a
register write as a high-priority sporadic process. We can attribute a maximum
rate at which write commands will be sent to the core. We will also introduce a
deadline for a write. The deadline of a write is similar to the access-time for a
register being prefetched. This deadline indicates that when a write occurs, it
must be completed within the specified number of cycles.
In order to analyze how a register write will impact this scheduling, we can
create a dummy register, WR, in our register set. The age of the WR register
will be the period that corresponds to the maximum rate at which a write will
occur. WR's access-time will be equal to its deadline. We can now analyze the
register set to determine if a prefetching schedule exists for it. This analysis
will provide us with an analysis of the worst case scenario in which a write will
occur once every period.
5.3.6 Deadline Monotonic Priority Assignment. Up to this point, we have
been interested mainly in a static schedule of the register set. However, because
writes are sporadic and cannot be accurately predicted, we must provide a
dynamic mechanism for handling them. Therefore, we use a more advanced
priority assignment scheme, deadline monotonic priority assignment [Burns and
Wellings 1997]. Deadline monotonic priority assignment assigns a priority to
each process (register) based upon its deadline (access-time), where a smaller
access-time corresponds to a higher priority. We can still incorporate rate monotonic
priority assignment in order to assign priorities to registers with equal
access-times. Deadline monotonic priority assignment is known to be optimal
in the sense that if a process set can be scheduled by a priority scheme, then it
can be scheduled by deadline monotonic priority assignment.
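A sketch of the combined assignment: deadline-monotonic ordering with a rate-monotonic tie-break. The deadline values below are illustrative assumptions, since Table VI's entries are not recoverable here:

    def assign_priorities(regs):
        # regs: (name, age, access_time) triples; sort by access-time (deadline
        # monotonic), breaking ties by age (rate monotonic); highest priority first.
        return [name for name, age, d in sorted(regs, key=lambda r: (r[2], r[1]))]

    core3 = [("STAT", 5, 5), ("A", 25, 25), ("B", 25, 25), ("RES", 10, 10),
             ("BIAS", 2, 1)]   # BIAS gets a small deadline so writes win
    print(assign_priorities(core3))  # ['BIAS', 'STAT', 'RES', 'A', 'B']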
For example, in order to accommodate writes to the BIAS register in Core3,
we can add the BIAS register to the prefetching algorithm. The deadline for
the BIAS register will be such that we can ensure that writes will always have
the highest priority when we use the deadline monotonic priority assignment.
Using this priority assignment mechanism, the priority of the registers from
highest to lowest would be BIAS, STAT, RES, A, and B.

Fig. 8. Performance in ns (top), size in gates (middle), and energy in nJ (bottom).
5.4 Experiments with "Real-Time" Prefetching
In addition to implementing the ADJUST core as described above, we implemented
two additional examples in order to evaluate the impact on perfor-
mance, size, and energy using our real-time prefetching. The CODEC core
contains three registers DIN, DOUT, and STAT. This core behaves like a simple
compressor/decompressor, whereby the input data is modified via some arbitrary
translation, after which the STAT register is updated to reflect completion.
The FIFO core contains two registers DATA and STAT. This core represents a
simple FIFO that has data stored in DATA and the current number of items in
the FIFO stored in STAT.
We modeled the cores as synthesizable register-transfer VHDL models, requiring
215, 204, and 253 lines of code, respectively; note that we intentionally
did not describe internal behavior of the cores, but rather just the register-
access-related behavior, so we could see the impacts of prefetching most clearly.
We used Synopsys Design Compiler for synthesis as well as Synopsys power
analysis tools.
Figure 8 summarizes the results for the three cores. For each core, we examined
three possible bus wrapper configurations: no bus wrapper (No BW),
a bus wrapper without prefetching (BW), and a bus wrapper with real-time
prefetching (RTPF).
The first chart in Figure 8 summarizes performance results. Using our
real-time prefetching heuristic, we can see a good performance improvement
when compared to a bus wrapper without prefetching. However, in FIFO, we
only see a very small performance improvement using real-time prefetching.
This small improvement is due to the fact that the DATA register in FIFO
cannot be prefetched using this approach: if we were to prefetch DATA, we
would empty the FIFO and lose data. Furthermore, without any prefetching,
we can see a significant performance penalty.
The second chart in Figure 8 summarizes size results. As expected, the size
of the cores increased when a bus wrapper was added, and further increased
when prefetching was added to the bus wrapper. The average increase in size
caused by adding real-time prefetching to the bus wrapper was only 1.4K gates.
This increase in design complexity was due to the need to keep track of current
register ages. Furthermore, this size increase was relatively small when
compared to a typical core size of 10K to 20K gates.
The third chart in Figure 8 summarizes energy consumption for our test
vectors. In all three cores, there was an overall increase in energy consumption
when a bus wrapper was added to the core. However, the addition of prefetching
to the bus wrappers did not always strictly increase or decrease energy
consumption. In fact, real-time prefetching increased energy consumption in
CODEC and FIFO, and decreased energy consumption in ADJUST. As expected,
when compared to the core without a bus wrapper, prefetching resulted in an
increase in energy consumption.
6. UPDATE-DEPENDENCY BASED PREFETCHING USING PETRI NETS
6.1 Overview
In some cases, a core designer may be able to provide us more information
regarding when the core's internal registers get updated; in particular, update
dependencies among registers, e.g., if register A is updated externally, then
register B will be updated one cycle later. Using this information, we can design
a schedule that performs fewer prefetches to satisfy given constraints, and thus
can yield advantages of being able to handle more complex problems, or of using
less power.
6.2 General Register Attributes
We need a method for capturing the information a designer provides regarding
register updates. In Section 3.2, we provided a taxonomy of register attributes
that can be used to categorize how a register is used. We extend this by introducing
update dependencies. Update dependencies provide further details on when a
register gets updated as a result of other updates (inducements). There are two
kinds of update dependencies:
- Internal dependencies: Dependencies between registers must be accurately
described. Dependencies between registers affect both the operation of the
core and the time at which registers are updated. Therefore, these dependencies
are extremely important in providing an accurate model of a core's
behavior.
- External dependencies: Updates to registers via reads and writes over the
OCB also need to be included in our model. This information is important
because reads and writes can directly update registers or trigger updates to
other registers, e.g., a write to a control register of a CODEC core will trigger
an event that will update the output data register. Likewise, updates from
external ports to internal core registers must also be present in our model.
These events occur at random intervals and cannot be directly monitored by
a bus wrapper and are therefore needed to provide a complete model of a
core.
We needed to create a model to capture the above information. After analyzing
many possible models to describe both internal and external update
dependencies, we concluded that a Petri net model would best fit our requirements.
6.3 Petri Net Model Construction
As in all Petri net models, we have places, arcs, and transitions. In our model,
a place represents data storage, i.e., a register, or any bus that the bus wrapper
can monitor. In this model, a bus place will generate tokens that will be output
over all outgoing arcs and consumed by data storage places whenever an
appropriate transition is ?red. A transition represents an update dependency
between either the bus and a register or between two registers. Transitions
may be labeled with conditions that represent some requirement on the data
coming into a transition. However, in many cases, a register may be updated
from some external source, i.e., the register's update-type is volatile. Therefore,
we need a mechanism to describe such updates. We will use a transition without
incoming arcs and without an associated condition to represent this behavior.
We will refer to such a transition as a random transition. Given random
transitions, tokens can also be generated by external sources that cannot be directly
monitored by the bus wrapper. Thus, our model provides a complete description
of the core's internal register dependencies without providing all details of the
core's internal behavior.
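The following sketch shows one possible encoding of this model; the class names and structure are ours, not from the original work.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Transition:
        inputs: list                                  # source places; [] => random
        output: str                                   # register place updated
        condition: Optional[Callable[[dict], bool]] = None

    # ADJUST (Figure 9): GO is written over the OCB; S is updated randomly;
    # MD is updated when GO == 1 and a write to MD arrives on the OCB (the
    # external event that also gates MD would be one more random input in a
    # fuller model).
    adjust = [
        Transition(inputs=["OCB"], output="GO"),
        Transition(inputs=[], output="S"),            # random transition
        Transition(inputs=["OCB", "GO"], output="MD",
                   condition=lambda state: state["GO"] == 1),
    ]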
We implemented three core examples to analyze our update dependency
model and prefetching technique. In order to demonstrate the usefulness of our
model we will describe one of the cores we implemented, which we will refer
to as ADJUST, and elaborate on this example throughout the paper. ADJUST
contains three registers GO, MD, and S. First, we annotate each register with
the general register attributes described earlier. The GO register has the attributes
of static-update, write access, no notification, and singly structured.
The MD register has the attributes of volatile-update, read/write access, no
notification, and singly structured. Finally, the S register has the attributes of
volatile-update, read-only access, no notification, and singly structured. Next,
we constructed the Petri net for ADJUST.
Figure 9 shows the register update dependency model for ADJUST. From
this model we can see how each register is updated. GO is updated whenever a
write request for GO is initiated on the OCB. S is updated randomly by some
external event that is unknown to the prefetch unit. MD is updated when GO
is equal to 1, a write request for MD is initiated on the OCB, and some external
Fig. 9. ADJUST register dependencies.
event occurs. Therefore, we now have a complete model of the ADJUST core
that can be used to create a prefetching algorithm.
Using the current model of ADJUST, we need three prefetch registers in
the bus wrapper, namely, GO′, MD′, and S′. GO′ would be updated whenever a
write to GO was initiated over the OCB. For MD and S, we need some method
of refreshing the prefetch registers to keep them as up-to-date as possible. We
will later discuss the heuristics for updating registers with incoming random
transitions. However, we further know that prefetching the MD register would
not be required until a write to MD was made over the OCB and the GO register
was equal to 1. This simple interpretation of the model will reduce the power
consumed by the prefetch unit by not prefetching MD if it is not needed.
6.4 Model Refinement for Dependencies
Further refinement of our register update dependency model can be made to
eliminate some random transitions. Although the model of a particular core may
have many random transitions, there may exist some relationships between the
registers with random transitions. If two registers are both updated by the same
external event, it is possible that a relationship may exist between the registers.
For example, in a typical CODEC core, we would ?nd a data register and a
status register. When the data register is updated, the status register is also
updated to indicate the operation has completed. Although both registers are
updated at random times, we know that if the status register indicates
completion, then the data register has been updated.
We can thus eliminate one random transition by replacing the random transition
with a transition having an incoming arc from the related register and assigning
an appropriate condition to this transition. Thus, we have successfully
refined our model to eliminate a random transition. The goal of this refinement
is to eliminate as many random transitions as possible, but it is important
to note that it is not possible to eliminate all random transitions. Therefore,
we still need a method for refreshing the contents of registers with incoming
random transitions.
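Continuing the encoding sketched in Section 6.3 (reusing the Transition class from that sketch), the refinement replaces S's random influence on MD with a conditioned transition; again illustrative only:

    # Refined ADJUST model: one random transition is dropped by conditioning
    # MD's update on the related register S (S itself remains random).
    refined_adjust = [
        Transition(inputs=["OCB"], output="GO"),
        Transition(inputs=[], output="S"),
        Transition(inputs=["OCB", "GO", "S"], output="MD",
                   condition=lambda state: state["GO"] == 1 and state["S"] == 1),
    ]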
Figure 10 shows a refined register update dependency model for the ADJUST
core. In this new model, we have eliminated one random transition by replacing
Fig. 10. Refined ADJUST register dependencies.
it with a transition that will fire if S is equal to 1. Hence, we need to modify our
prefetching algorithm to accommodate this change. We now know that we only
need to prefetch MD if S is equal to 1, GO is equal to 1, and a write to MD
was initiated over the OCB. This refinement further simplifies our prefetching
algorithm and will again reduce power consumption.
6.5 Prefetch Scheduling
Given an update dependency model of a core, we need to construct a schedule to
prefetch the core's registers into the bus wrapper's prefetch registers. Figure 11
describes our update dependency model prefetching heuristic using pseudocode.
Our heuristic uses the update dependency model in conjunction with our
real-time prefetching to create a schedule for prefetching the core's registers.
The following description will further elaborate on the heuristic.
In order to implement our prefetching heuristic, we will need two data
structures. The first data structure needed is a prefetch register heap, or priority
queue, used to store the registers that need to be prefetched. Second, we need
a list of update arcs that must be analyzed after a register is prefetched or a
read or write request is detected on the OCB. Using these data structures, we
will next describe how the prefetch unit will be designed.
The first step in our prefetching heuristic is to add all registers with incoming
random transitions to the prefetch register heap. These registers will always
remain in the heap because they will need to be repeatedly prefetched in order
to satisfy their register age constraints.
Next, our prefetch heuristic needs to respond to read and write requests on
the OCB. In the event of a read request, the prefetch unit will add any outgoing
arcs to the list of arcs needed to be analyzed. As described in our real-time
prefetching work, a write is treated as another register with special age and
access-time constraints, i.e., the register age constraint is 0 and the access-time
constraint is initially set to infinity. Because the core internal bus may
be currently in use performing a prefetch, we use this mechanism to eliminate
any contention. As described below, by setting the access-time constraint on
the write register to 0, we will ensure that the write will be the next action
performed. Therefore, a write request will be handled by first copying the data
into the corresponding prefetch register, setting the access-time constraint to
0, and adding the write register to the prefetch register heap. In addition, any
outgoing arcs will be added to the list of update arcs.

Fig. 11. General register model prefetching heuristic used to implement PFU.
We will use our real-time prefetching to prefetch registers according to their
priorities as assigned by the deadline monotonic priority assignment. When
two registers have the same priority assigned by this mechanism, we will use
the priority assigned by the rate-monotonic priority assignment to schedule
the prefetching. According to this heuristic, registers with an access-time constraint
of 0 will be prefetched first. That means that all write requests and, as
we will describe later, all registers that have been updated will be prefetched
first. Note that writes will still take highest priority because their register
age constraint is 0. If no write requests or registers without incoming random
transitions need to be prefetched, our prefetching heuristic will next schedule
registers with incoming random transitions according to their rate-monotonic
priority assignment. Therefore, our prefetch register heap will be sorted ?rst by
deadline-monotonic priority assignment and further by rate-monotonic priority
assignment.
After each register prefetch is made or a read or write request is detected on
the OCB, we need to analyze all arcs in the update arc list. If any transition
fires, the outgoing arcs of this transition will be added to the list. If a token
reaches another place, we set the corresponding register's access-time to 0 and
add it to the heap, thus ensuring that this register is prefetched as soon as
possible.
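A condensed sketch of one decision step of this heuristic. The data layout (heap entries ordered by access-time, then age) is an assumption consistent with the priority rules above; names are illustrative:

    import heapq
    from collections import namedtuple

    Arc = namedtuple("Arc", "output condition")  # condition: f(state)->bool, or None

    def pfu_step(heap, pending_arcs, state, ages, random_regs, prefetch):
        # Heap entries are (access_time, age, register): deadline-monotonic
        # order first, rate-monotonic order as the tie-break.
        # 1. Analyze update arcs; a fired transition forces an immediate
        #    refetch by giving its target an access-time constraint of 0.
        for arc in list(pending_arcs):
            if arc.condition is None or arc.condition(state):
                heapq.heappush(heap, (0, ages[arc.output], arc.output))
                pending_arcs.remove(arc)
        # 2. Service the highest-priority entry; a pending write is modeled as
        #    a register with age constraint 0, so it always comes out first.
        if heap:
            _, age, reg = heapq.heappop(heap)
            prefetch(reg)
            if reg in random_regs:  # keep refreshing randomly updated registers
                heapq.heappush(heap, (2, age, reg))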
In order to better understand this prefetching heuristic, we will look at the
ADJUST core. In ADJUST, we have one random transition which is connected
to the S register. We noticed that in our design, on average, we only needed to
read the contents of S every six cycles. Therefore, we set the register age constraint
to six cycles, and the register access-time constraint to two, indicating
that the register S must be prefetched every six cycles. For MD, both the register
age and access-time constraints are two cycles. GO, however, has neither an
age constraint nor an access-time constraint because it is a write-only register.
Note that even though GO is a write-only register, a copy must be maintained
in the bus wrapper, as it is needed in order to analyze the update dependencies.
Our prefetching algorithm will monitor the OCB. If a write to the GO register
is made, the data will be copied into GO′, and the write register access-time
will be set to 0. On a write to the MD register, the access-time of S will be set to
0. Also, if GO is equal to 1 and S is equal to 1, then the access-time for MD is set
to 0. Finally, we will use the scheduling above to prefetch the registers when
needed and perform write operations.
6.6 Experiments with Update-Dependency Prefetching
We implemented the update dependency prefetching on the same three cores as
above, namely ADJUST, CODEC, and FIFO. Figure 12 summarizes the results
for the three cores. We now have four possible bus wrapper con?gurations for
each core: no bus wrapper (no BW), a bus wrapper without prefetching (BW),
a bus wrapper with real-time prefetching (RTPF), and a bus wrapper with our
update dependency prefetching model (UDPF).
The first chart in Figure 12 summarizes performance results. In all three
cores, the use of our update dependency prefetching method almost entirely
eliminated the performance penalty associated with the bus wrapper. There
was still a slight overhead caused by starting the prefetch unit. Using our real-time
prefetching heuristic, we can see that although there is a performance
improvement when compared to a bus wrapper without prefetching, it did not
perform as well as our update dependency model.
The second chart in Figure 12 summarizes size results. The average increase
in size caused by adding the update dependency prefetching technique to the
bus wrapper was only 1.5K gates. In comparison, real-time prefetching resulted
in an average increase of 1.4K gates. It is interesting to note why the two
approaches, although quite different, resulted in approximately the same size
increase. As stated earlier, using real-time prefetching, we increase the design
complexity due to the need to keep track of current register ages. However,
using our extended approach, complexity increases due to added logic needed
to analyze update dependencies.

Fig. 12. Performance in ns (top), size in gates (middle), and energy in nJ (bottom).
The third chart in Figure 12 summarizes energy consumption for our test
vectors. As before, the addition of prefetching to the bus wrappers does not always
strictly increase or decrease energy consumption. In fact, we can see that
in ADJUST and FIFO, there is a decrease in energy consumption when our
update dependency prefetching is added to the bus wrapper, but in CODEC,
there is an increase. On the other hand, real-time prefetching increases energy
consumption in CODEC and FIFO, and decreases energy consumption in
ADJUST.
More importantly, if we compare the results of our real-time prefetching
to our update dependency prefetching, we notice that the update dependency
prefetching results in significantly less energy consumption. This is easily explained
by the fact that this approach only prefetches registers when they have
been updated whereas our real-time prefetching will prefetch registers more often
to keep them as up-to-date as possible. Therefore, by eliminating the need
to prefetch all registers within their register age constraints, we can reduce
energy consumption.
7. CONCLUSIONS
While keeping a core's interface and internal behavior separated is key to a
core's marketability, we demonstrated that the use of such bus wrappers, both
non-PVCI and PVCI, results in size, power, and performance overhead. Thus,
the retargetability advantages of such a standard seem to come with some
penalties.
We introduced prefetching as a technique to overcome the performance overhead.
We demonstrated that in some common cases of register combinations,
prefetching eliminates the performance degradation at the expense of acceptable
increases in size and power. By overcoming the performance degradation
associated with bus wrappers, prefetching thus improves the usefulness of
cores.
We have further provided a powerful solution to this problem by mapping the
problem to the real-time process-scheduling domain, and then applying heuristics
and analysis techniques from that domain. We also provided a general
register update dependency model that we used to construct a more ef?cient
prefetching schedule, in conjunction with our real-time prefetching. We demonstrated
the effectiveness of these solutions through several experiments, showing
good performance improvements with acceptable size and energy increases.
Furthermore, we demonstrated that using our update dependency model we
were able to better prefetch registers when compared to our real-time prefetching
methodology. The two approaches are thus complementary: the real-time
approach can be used when only register constraints are provided, while the
model-based approach of this paper can be used when register update information
is also provided.
8. FUTURE WORK
Although prefetching works well, there are many possibilities for improvements.
In our current approach we assume that all registers of the core will be
prefetched. However, for cores with large numbers of registers, this approach
is not feasible. Thus, we are considering restricting the number of registers
that can appear in a bus wrapper. This creates new cache-like issues such as
mapping, replacement, and coherency issues that are not present in our current
design. In addition, we can further evaluate the effects of prefetching on
larger core examples. Another direction involves developing prefetching heuristics
that optimize a given cost function of performance, power, and size.
--R
Interface co-synthesis techniques for embedded systems
Fast prototyping: a system design
Java driven codesign and prototyping of networked embedded systems.
A new direction for computer architecture research.
Description and simulation of hardware/software systems with Java
Interface design for core-based systems
Inventra core library.
Computer architecture
Introduction to rapid silicon prototyping: hardware-software co-design for embedded systems-on-a-chip ICs
ASSOCIATION. 1999. International technology roadmap for semiconductors.
The case for a configure-and-execute paradigm
An object-oriented communication library for hardware-software co-design
Experiences with system level design for consumer ICs.
Constructing application-specific heterogeneous embedded architectures from custom HW/SW applications
VIRTUAL SOCKET INTERFACE ASSOCIATION.
VIRTUAL SOCKET INTERFACE ASSOCIATION.
VIRTUAL SOCKET INTERFACE ASSOCIATION.
--TR
Computer architecture: a quantitative approach
Real-time systems and their programming languages
Interface co-synthesis techniques for embedded systems
Constructing application-specific heterogeneous embedded architectures from custom HW/SW applications
Interface-based design
The case for a configure-and-execute paradigm
Fast prototyping
Java driven codesign and prototyping of networked embedded systems
Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment
A New Direction for Computer Architecture Research
Interface Design for Core-Based Systems
An Object-Oriented Communication Library for Hardware-Software CoDesign
Bus-Based Communication Synthesis on System-Level
--CTR
Ken Batcher , Robert Walker, Cluster miss prediction for instruction caches in embedded networking applications, Proceedings of the 14th ACM Great Lakes symposium on VLSI, April 26-28, 2004, Boston, MA, USA
Minas Dasygenis , Erik Brockmeyer , Bart Durinck , Francky Catthoor , Dimitrios Soudris , Antonios Thanailakis, A combined DMA and application-specific prefetching approach for tackling the memory latency bottleneck, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.14 n.3, p.279-291, March 2006 | PVCI;VSIA;system-on-a-chip;interfacing;cores;bus wrapper;on-chip bus;design reuse;intellectual property |
505243 | An optimal minimum spanning tree algorithm. | We establish that the algorithmic complexity of the minimum spanning tree problem is equal to its decision-tree complexity. Specifically, we present a deterministic algorithm to find a minimum spanning tree of a graph with n vertices and m edges that runs in time O(T*(m,n)) where T* is the minimum number of edge-weight comparisons needed to determine the solution. The algorithm is quite simple and can be implemented on a pointer machine. Although our time bound is optimal, the exact function describing it is not known at present. The current best bounds known for T* are Ω(m) and O(m α(m,n)), where α is a certain natural inverse of Ackermann's function. Even under the assumption that T* is superlinear, we show that if the input graph is selected from G_{n,m}, our algorithm runs in linear time with high probability, regardless of n, m, or the permutation of edge weights. The analysis uses a new martingale for G_{n,m} similar to the edge-exposure martingale for G_{n,p}. | Introduction
The minimum spanning tree (MST) problem has been studied for much of this century and
yet despite its apparent simplicity, the problem is still not fully understood. Graham and Hell
[GH85] give an excellent survey of results from the earliest known algorithm of Boruvka [Bor26]
to the invention of Fibonacci heaps, which were central to the algorithms in [FT87, GGST86].
Chazelle [Chaz97] presented an MST algorithm based on the Soft Heap [Chaz98] having complexity
O(m α(m,n) log α(m,n)), where α is a certain inverse of Ackermann's function. Recently Chazelle
[Chaz00] modified the algorithm in [Chaz97] to bring down the running time to O(m α(m,n)).
Later, and in independent work, a similar algorithm of the same running time was presented in
Pettie [Pet99], which gives an alternate exposition of the O(m α(m,n)) result. This is the tightest
time bound for the MST problem to date, though not known to be optimal.
This is an updated version of UTCS Technical Report TR99-17 which includes performance analysis on random
graphs and new references. Part of this work was supported by Texas Advanced Research Program Grant 003658-
0029-1999. Seth Pettie was also supported by an MCD Fellowship.
All algorithms mentioned above work on a pointer machine [Tar79] under the restriction that
edge weights may only be subjected to binary comparisons. If a more powerful model is assumed,
the MST can be computed optimally. Fredman and Willard [FW90] showed that on a unit-cost
RAM where the bit-representation of edge weights may be manipulated, the MST can be computed
in linear time. Karger et al. [KKT95] presented a randomized MST algorithm that runs in linear
time with high probability, even if edge weights are only subject to comparisons.
It is still unknown whether these more powerful models are necessary to compute the MST
in linear time. However, in this paper we give a deterministic, comparison-based MST algorithm
that runs on a pointer machine in O(T*(m,n)) time, where T*(m,n) is the minimum number of edge-weight
comparisons needed to determine the MST on any graph with m edges and n vertices. Additionally,
we show that our algorithm runs in linear time for the vast majority of graphs, regardless of density
or the permutation of edge weights.
Because of the nature of our algorithm, its exact running time is not known. This might seem
paradoxical at first. The source of our algorithm's optimality, and its mysterious running time, is the
use of precomputed 'MST decision trees' whose exact depth is unknown but nonetheless provably
optimal. A trivial lower bound on our algorithm is Ω(m); the best upper bound, O(m α(m,n)),
is due to Chazelle [Chaz00]. We should point out that precomputing optimal decision trees does
not increase the constant factor hidden by big-Oh notation, nor does it result in a non-uniform
algorithm.
Our optimal MST algorithm should be contrasted with the complexity-theoretic result that any
optimal verification algorithm for some problem can be used to construct an optimal algorithm
for the same problem [Jo97]. Though asymptotically optimal, this construction hides astronomical
constant factors and proves nothing about the relationship between algorithmic complexity and
decision-tree complexity. See Section 8 for a discussion of these and other related issues.
In the next section we review some well-known MST results that are used by our algorithm.
In section 3 we prove a key lemma and give a procedure for partitioning the graph in an MST-
respecting manner. Section 4 gives an overview of the optimal algorithm and discusses the structure
and use of pre-computed decision-trees for the MST problem. Section 5 gives the algorithm and
a proof of optimality. Section 6 shows how the algorithm may be modied to run on a pointer
machine. In section 7 we show our algorithm runs in linear-time w.h.p. if the input graph is
selected at random. Sections 8 & 9 discuss related problems and algorithms, open questions, and
the actual complexity of MST.
Preliminaries
The input is an undirected graph G = (V, E) where each edge is assigned a distinct real-valued
weight. The minimum spanning forest (MSF) problem asks for a spanning acyclic subgraph of
G having the least total weight. In this paper we assume for convenience that the input graph
is connected, since otherwise we can find its connected components in linear time and then solve
the problem on each connected component. Thus the MSF problem is identical to the minimum
spanning tree problem.
It is well-known that one can identify edges provably in the MSF using the cut property, and
edges provably not in the MSF using the cycle property. The cut property states that the lightest
edge crossing any partition of the vertex set into two parts must belong to the MSF. The cycle
property states that the heaviest edge in any cycle in the graph cannot be in the MSF.
2.1 Boruvka steps
The earliest known MSF algorithm is due to Boruvka [Bor26]. The algorithm is quite simple: It
proceeds in a sequence of stages, and in each stage it executes a Boruvka step on the graph G, which
identifies the set F consisting of the minimum-weight edge incident on each vertex in G, adds these
edges to the MSF (since they must be in the MSF by the cut property), and then forms the graph
GnF as the input to the next stage, where GnF is the graph obtained by contracting each
connected component formed by F. This computation can be performed in linear time. Since
the number of vertices reduces by at least a factor of two, the running time of this algorithm is
O(m log n), where n and m are the number of vertices and edges in the input graph.
Our optimal algorithm uses a procedure called Boruvka2(G; F, G′). This procedure executes
two Boruvka steps on the input graph G and returns the contracted graph G′ as well as the set of
edges F identified for the MSF during these two steps.
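For concreteness, here is a compact sketch of a single Boruvka step (Boruvka2 simply applies it twice); edge weights are assumed distinct, as stated above:

    def boruvka_step(n, edges):
        # edges: list of (weight, u, v) with distinct weights over vertices 0..n-1.
        # Select the minimum-weight edge incident on each vertex (these are MSF
        # edges by the cut property), then contract the components they form.
        cheapest = {}
        for w, u, v in edges:
            for x in (u, v):
                if x not in cheapest or w < cheapest[x][0]:
                    cheapest[x] = (w, u, v)
        forest = set(cheapest.values())
        comp = list(range(n))
        def find(a):                      # union-find with path halving
            while comp[a] != a:
                comp[a] = comp[comp[a]]
                a = comp[a]
            return a
        for _, u, v in forest:
            comp[find(u)] = find(v)
        contracted = [(w, find(u), find(v))
                      for w, u, v in edges if find(u) != find(v)]
        return forest, contracted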
2.2 Dijkstra-Jarník-Prim Algorithm
Another early MSF algorithm that runs in O(m log n) time is the one by Jarník [Jar30], re-discovered
by Dijkstra [Dij59] and Prim [Prim57]. We will refer to this algorithm as the DJP algorithm. Briefly,
the DJP algorithm grows a tree T, which initially consists of an arbitrary vertex, one edge at a
time, choosing the next edge by the following simple criterion: Augment T with the minimum
weight edge (x, y) such that x ∈ T and y ∉ T. By the cut property, all edges in T are in the MSF.
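A short sketch of DJP growth with an ordinary (uncorrupted) heap; stopping after a given number of steps yields exactly the kind of partially grown tree T referred to in Lemma 2.1 below:

    import heapq

    def djp_tree(adj, start=0, steps=None):
        # adj[u]: list of (weight, v) neighbors; grows T from `start` by always
        # taking the minimum-weight edge with exactly one endpoint in T.
        in_tree = {start}
        heap = [(w, start, v) for w, v in adj[start]]
        heapq.heapify(heap)
        tree = []
        while heap and (steps is None or len(tree) < steps):
            w, u, v = heapq.heappop(heap)
            if v in in_tree:
                continue
            in_tree.add(v)
            tree.append((u, v, w))
            for w2, x in adj[v]:
                if x not in in_tree:
                    heapq.heappush(heap, (w2, v, x))
        return tree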
Lemma 2.1 Let T be the tree formed after the execution of some number of steps of the DJP
algorithm. Let e and f be two arbitrary edges, each with exactly one endpoint in T , and let g be the
maximum weight edge on the path from e to f in T . Then g cannot be heavier than both e and f .
Proof: Let P be the path in T connecting e and f, and assume the contrary, that g is the heaviest
edge in P ∪ {e, f}. Now consider the moment when g is selected by DJP and let P′ be the portion
of P present in the tree. There are exactly two edges in (P ∪ {e, f}) − P′ which are eligible to
be chosen by the DJP algorithm at this moment, one of which is the edge g. If the other edge is
in P then by our choice of g it must be lighter than g. If the other edge is either e or f then by
our assumption it must be lighter than g. In both cases g could not be chosen next by the DJP
algorithm, a contradiction. □
2.3 The Dense Case Algorithm
The algorithms presented in [FT87, GGST86, Chaz97, Chaz00, Pet99] will find the MSF of a graph
in linear time if the graph is sufficiently dense, i.e., has a sufficiently large edge-to-vertex ratio. For
our purposes, 'sufficiently dense' will mean m/n ≥ log^(3) n, where n is the number of vertices in the
graph. All of the above algorithms run in linear time for that density.
The procedure DenseCase(G; F) takes as input an n-node graph G and returns the MSF F of
G in linear time for graphs with density m/n ≥ log^(3) n.
Our optimal algorithm will call DenseCase on a graph derived from an n-node, m-edge graph
by contracting vertices so that the number of vertices is reduced by a factor of log^(3) n. The
number of edges in the contracted graph is no more than m. It is straightforward to see that
DenseCase will run in O(m + n) time on such a graph.
2.4 Soft Heap
The main data structure used by our algorithm is the Soft Heap [Chaz98]. The Soft Heap is a kind
of priority queue that gives us an optimal tradeoff between accuracy and speed. It supports the
following operations:
MakeHeap(): returns an empty soft heap.
Insert(S, x): insert item x into heap S.
Findmin(S): returns item with smallest key in heap S.
Delete(S, x): delete x from heap S.
Meld(S_1, S_2): create new heap containing the union of items stored in S_1
and S_2, destroying S_1 and S_2 in the process.
All operations take constant amortized time, except for Insert, which takes O(log(1/ε)) time. To
save time the Soft Heap allows items to be grouped together and treated as though they have a
single key. An item adopts the largest key of any item in its group, corrupting the item if its new
key differs from its original key. Thus the original key of an item returned by Findmin (i.e. any
item in the group with minimum key) is no more than the keys of all uncorrupted items in the
heap. The guarantee is that after n Insert operations, no more than εn corrupted items are in the
heap. The following result is shown in [Chaz98].
Lemma 2.2 Fix any parameter 0 < ε < 1/2, and beginning with no prior data, consider a mixed
sequence of operations that includes n inserts. On a Soft Heap the amortized complexity of each operation
is constant, except for insert, which takes O(log(1/ε)) time. At most εn items are corrupted
at any given time.
3 A Key Lemma and Procedure
3.1 A Robust Contraction Lemma
It is well known that if T is a tree of MSF edges, we can contract T into a single vertex while
maintaining the invariant that the MSF of the contracted graph plus T gives the MSF for the
graph before contraction.
In our algorithm we will find a tree of MSF edges T in a corrupted graph, where some of the
edge weights have been increased due to the use of a Soft Heap. In the lemma given below we show
that useful information can be obtained by contracting certain corrupted trees, in particular those
constructed using some number of steps from the Dijkstra-Jarník-Prim (DJP) algorithm. Ideas
similar to these are used in Chazelle's 1997 algorithm [Chaz97], and more explicitly in the recent
algorithms of Pettie [Pet99] and Chazelle [Chaz00].
Before stating the lemma, we need some notation and preliminary concepts. Let V (G) and
E(G) be the vertex and edge sets of G, and n and m be their cardinality, respectively. Let the
G-weight of an edge be its weight in graph G (the G may be omitted if implied from context).
For the following definitions, M and C are subgraphs of G. Denote by G↑M a graph derived
from G by raising the weight of each edge in M by arbitrary amounts (these edges are said to be
corrupted). Let M_C be the set of edges in M with exactly one endpoint in C. Let GnC denote
the graph obtained by contracting all connected components induced by C, i.e. by replacing each
connected component with a single vertex and reassigning edge endpoints appropriately.
We define a subgraph C of G to be DJP-contractible if after executing the DJP algorithm on
G for some number of steps, with a suitable start vertex in C, the tree that results is a spanning
tree for C.
Lemma 3.1 Let M be a set of edges in a graph G. If C is a subgraph of G that is DJP-contractible
w.r.t. G↑M, then MSF(G) is a subset of MSF(C) ∪ MSF(GnC − M_C) ∪ M_C.
Proof: Each edge in C that is not in MSF(C) is the heaviest edge on some cycle in C. Since that
cycle exists in G as well, that edge is not in MSF(G). So we need only show that edges in GnC − M_C
that are not in MSF(GnC − M_C) are also not in MSF(G).
Let H = GnC − M_C; hence we need to show that no edge in H − MSF(H) is in MSF(G). Let
e be the heaviest edge on some cycle c in H (i.e. e ∈ H − MSF(H)). If c does not involve the
vertex derived by contracting C, then it exists in G as well and e ∉ MSF(G). Otherwise, c forms
a path P in G whose end points, say x and y, are both in C. Let the end edges of P be (x, w) and
(y, z). Since H included no corrupted edges with one end point in C, the G-weight of these edges
is the same as their (G↑M)-weight.
Let T be the spanning tree of C↑M derived by the DJP algorithm, Q be the path in T
connecting x and y, and g be the heaviest edge in Q. Notice that P ∪ Q forms a cycle. By our
choice of e, it must be heavier than both (x, w) and (y, z), and by Lemma 2.1, the heavier of (x, w)
and (y, z) is heavier than the (G↑M)-weight of g, which is an upper bound on the G-weights of
all edges in Q. So w.r.t. G-weights, e is the heaviest edge on the cycle P ∪ Q and cannot be in
MSF(G). □
3.2 The Partition Procedure
Our algorithm uses the Partition procedure which is given below. This procedure finds DJP-contractible
subgraphs C_1, ..., C_k, in which edges are progressively being corrupted by the Soft
Heap. Let M_{C_i} contain only those corrupted edges with one endpoint in C_i at the time it is
completed.
Each subgraph C_i will be DJP-contractible w.r.t. a graph derived from G by several rounds of
contractions and edge deletions. When C_i is finished it is contracted and all incident corrupted
edges are discarded. By applying Lemma 3.1 repeatedly we see that after C_k is built, the MSF of
G is a subset of

    MSF(C_1) ∪ ... ∪ MSF(C_k) ∪ MSF(Gn(C_1 ∪ ... ∪ C_k) − M) ∪ M,  where M = M_{C_1} ∪ ... ∪ M_{C_k}.
The Partition procedure is shown in Figure 1. The arguments appearing before the semicolon
are inputs; the others are outputs. M is a set of edges and C = {C_1, ..., C_k} is a set of subgraphs of
G. No edge will appear in more than one of M, C_1, ..., C_k.
Initially, Partition sets every vertex to be live. The objective is to convert each vertex to dead,
signifying that it is part of a component C_i that belongs to a conglomerate of at least maxsize
vertices, where a conglomerate is a connected component of the graph C_1 ∪ ... ∪ C_k.
Intuitively a conglomerate is a collection of C i 's linked by common vertices. This scheme for
growing components is similar to the one given in [FT87].
We grow the C_i's one at a time according to the DJP algorithm, except that we use a Soft
Heap. A component is done growing if it reaches maxsize vertices or if it attaches itself to an
existing component. Clearly if a component does not reach maxsize vertices, it has linked to a
conglomerate of at least maxsize vertices. Hence all its vertices can be designated dead. Upon
completion of a component C_i, we discard the set of corrupted edges with one endpoint in C_i.

Partition(G, maxsize, ε; M, C)
    All vertices are initially "live"
    i := 0;  M := ∅
    While there is a live vertex do
        i := i + 1
        Let V_i := {v}, where v is some live vertex
        Create a Soft Heap consisting of v's edges (uses ε)
        While all vertices in V_i are live and |V_i| < maxsize do
            Repeat
                Find and delete min-weight edge (x, y) from Soft Heap
            Until y ∉ V_i
            Let V_i := V_i ∪ {y}
            If y is live then insert each of y's edges into the Soft Heap
        Set all vertices in V_i to be dead
        Let M_{C_i} be the corrupted edges with one endpoint in V_i
        Let M := M ∪ M_{C_i} and G := G − M_{C_i}
        Dismantle the Soft Heap
    Let C := {C_1, ..., C_i}, where C_z is the subgraph of G induced by V_z
    Exit.

Figure 1: The Partition Procedure.
The running time of Partition is dominated by the heap operations, which depend on ε. Each
edge is inserted into a Soft Heap no more than twice (once for each endpoint), and extracted no
more than once. We can charge the cost of dismantling the heap to the insert operations which
created it, hence the total running time is O(m log(1/ε)). The number of discarded edges is bounded
by the number of insertions scaled by ε, thus |M| ≤ 2εm. Thus we have
Lemma 3.2 Given a graph G, any 0 < ε < 1/2, and a parameter maxsize, Partition finds edge-disjoint
subgraphs M, C_1, ..., C_k in time O(m log(1/ε)) such that:
a) For all i, C_i is DJP-contractible w.r.t. the corrupted graph from which it was grown.
b) For all i, |V(C_i)| ≤ maxsize.
c) For each conglomerate P of C_1 ∪ ... ∪ C_k, |V(P)| ≥ maxsize.
d) |E(M)| ≤ 2ε · |E(G)|.
4 Overview of the Optimal Algorithm
Here is an overview of our optimal MSF algorithm.
In the first stage we find DJP-contractible subgraphs C_1, ..., C_k with their associated sets
of edges M_{C_1}, ..., M_{C_k}, where each M_{C_i}
consists of corrupted edges with one endpoint in C_i.
In the second stage we find the MSF F_i of each C_i, and the MSF F_0 of the contracted
graph Gn(C_1 ∪ ... ∪ C_k) − M, where M = M_{C_1} ∪ ... ∪ M_{C_k}. By Lemma
3.1, the MSF of the whole graph is contained within F_0 ∪ F_1 ∪ ... ∪ F_k ∪ M.
Note that at this point we have not identified any edges as being in the
MSF of the original graph G.
In the third stage we find some MSF edges, via Boruvka steps, and recurse on the graph
derived by contracting these edges.
We execute the first stage using the Partition procedure described in the previous section.
We execute the second stage with optimal decision trees. Essentially, these are hardwired
algorithms designed to compute the MSF of a graph using an optimal number of edge-weight
comparisons. In general, decision trees are much larger than the size of the problem that they solve
and finding optimal ones is very time consuming. We can afford the cost of building decision trees
by guaranteeing that each one is extremely small. At the same time, we make each conglomerate
formed by the C_i sufficiently large so that the MSF F_0 of the contracted graph can be found
in linear time using the DenseCase algorithm.
Finally, in the third stage, we have a reduction in vertices due to the Boruvka steps, and a
reduction in edges due to the application of Lemma 3.1. In our optimal algorithm both vertices
and edges reduce by a constant factor, thus resulting in the recursive applications of the algorithm
on graphs with geometrically decreasing sizes.
4.1 Decision Trees
An MSF decision tree is a rooted tree having an edge-weight comparison associated with each
internal node (e.g. weight(x; y) < weight(w; z)). Each internal node has exactly two children, one
representing that the comparison is true, the other that it is false. The leaves of the tree list
the edges in some spanning tree. An MSF decision tree is said to be correct if the edge-weight
comparisons encountered on any path from the root to a leaf uniquely identify the spanning tree
at that leaf as the MSF. A decision tree is said to be optimal if it is correct and there exists no
correct decision tree with lesser depth.
Let us bound the time needed to find all optimal decision trees for graphs of r vertices by
brute-force search. There are fewer than 2^{r²} such graphs, and for each graph we must check all
possible decision trees of depth at most r². There are < r⁴ possibilities for each internal node,
and hence < r^{2^{r² + O(1)}} decision trees to check. To determine if a decision tree is correct we generate all
possible permutations of the edge weights and, for each, solve the MSF problem on the given graph.
Now we simultaneously check all permutations against a decision tree. First put all permutations
at the root, then move them to the left or right child depending on the truth or falsity of the
edge-weight comparison w.r.t. each permutation. Repeat this step until all permutations reach
a leaf. If, for each leaf, all permutations sharing that leaf agree on the MSF, then the decision tree
is correct. This process takes no longer than (r²)! · r^{O(1)} time for each decision tree. Setting r = log^(3) n
(the thrice-iterated logarithm) allows us to precompute all optimal decision trees in o(n) time.
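As an illustration of this brute-force check, the following Python sketch (our own; the node encoding is hypothetical) routes every edge-weight permutation through a candidate tree, one permutation at a time rather than simultaneously, and compares each reached leaf against the MSF computed by Kruskal's algorithm.

from itertools import permutations

def kruskal_msf(n, edges, weight):
    # Returns the MSF as a frozenset of edge indices.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    msf = set()
    for i in sorted(range(len(edges)), key=lambda i: weight[i]):
        u, v = edges[i]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            msf.add(i)
    return frozenset(msf)

def tree_is_correct(root, n, edges):
    # root: ('cmp', i, j, left, right) compares the weight of edge i with
    # that of edge j; ('leaf', frozenset_of_edge_indices) labels a leaf.
    for perm in permutations(range(len(edges))):  # perm[i] = weight of edge i
        node = root
        while node[0] == 'cmp':
            _, i, j, left, right = node
            node = left if perm[i] < perm[j] else right
        if node[1] != kruskal_msf(n, edges, perm):
            return False
    return True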
Observe that in the high-level algorithm we gave in Section 4, if the maximum size of each
component C_i is sufficiently small, the components can be organized into a relatively small number
of groups of isomorphic components (ignoring edge weights). For each group we use a single
precomputed optimal decision tree to determine the MSF of the components in that group.
In our optimal algorithm we will use a procedure DecisionTree(G, F), which takes as input a
collection of graphs G, each with at most r vertices, and returns their minimum spanning forests
in F using the precomputed decision trees.
5 The Algorithm
As discussed above, the optimal MSF algorithm is as follows. First, precompute the optimal
decision trees for all graphs with at most log^(3) n vertices. Next, divide the input graph into subgraphs
C_1, ..., C_k, discarding the set of corrupted edges M_{C_i} as each C_i is completed. Use the decision
trees found earlier to compute the MSF F_i of each C_i, then contract each connected component
spanned by F = F_1 ∪ ··· ∪ F_k (i.e., each conglomerate) into a single vertex. The resulting graph has at most
n / log^(3) n vertices, since each conglomerate has at least log^(3) n vertices by Lemma 3.2. Hence
we can use the DenseCase algorithm to compute its MSF F_0 in time linear in m. At this point,
by Lemma 3.1, the MSF is contained in the edge set F ∪ F_0 ∪ M_{C_1} ∪ ··· ∪ M_{C_k}. On this
graph we apply two Boruvka steps, reducing the number of vertices by a factor of four, and then
compute the MSF recursively. The algorithm is given below.

OptimalMSF(G)
  If E(G) = ∅ then Return(∅)
  ε := 1/8 (this is used by the Soft Heap in the Partition procedure)
  Precompute optimal decision trees for all graphs with ≤ log^(3) n_0 vertices, where n_0 is the number
    of vertices in the original input graph (done once, at the top level of the recursion)
  r := log^(3) |V(G)|
  (M; C_1, ..., C_k) := Partition(G, r, ε)
  (F_1, ..., F_k) := DecisionTree(C_1, ..., C_k);  F := F_1 ∪ ··· ∪ F_k
  G_a := the graph derived from G by contracting each connected component spanned by F
         and deleting the corrupted edges M
  F_0 := DenseCase(G_a)
  G_b := F ∪ F_0 ∪ M
  Apply two Boruvka steps to G_b, yielding a contracted graph G_c and a set F' of MSF edges
  Return F' ∪ OptimalMSF(G_c)
Apart from the recursive calls and the use of the decision trees, the computation performed by
OptimalMSF is clearly linear, since Partition takes O(m log(1/ε)) time and, owing to the reduction in
vertices, the call to DenseCase also takes linear time. For ε = 1/8, the number of edges passed to the
final recursive call is at most m/4 (the corrupted edges) plus the surviving forest edges, giving a geometric
reduction in the number of edges.
Since no MSF algorithm can do better than linear time, the bottleneck, if any, must lie in using
the decision trees, which are optimal by construction.
More concretely, let T(m, n) be the running time of OptimalMSF. Let T*(m, n) be the optimal
number of comparisons needed on any graph with n vertices and m edges, and let T*(G) be the
optimal number of comparisons needed on a specific graph G. The recurrence relation for T is

  T(m, n) ≤ Σ_i T*(C_i) + T(m/2, n/4) + c1 · m

For the base case, note that the graphs in the recursive calls will be connected if the input
graph is connected. Hence the base-case graph has no edges and one vertex, and we have T(0, 1)
equal to a constant.
It is straightforward to see that if T*(m, n) = O(m), then the above recurrence gives T(m, n) =
O(m). One can also show that T(m, n) = O(T*(m, n)) for many natural functions T* (including
m · α(m, n)). However, to show that this result holds no matter what the function describing
T*(m, n) is, we need to establish some results on the decision-tree complexity of the MSF problem,
which we do in the next section.
5.1 Some Results for MSF Decision Trees
In this section we establish some results on MSF decision trees that allow us to prove our main
result, that OptimalMSF runs in O(T*(m, n)) time.

Proposition 5.1 T*(m, n) ≥ m/2.

Proposition 5.2 For fixed m and n' > n, T*(m, n') ≥ T*(m, n).

Proposition 5.1 is obviously true since every edge should participate in a comparison to determine
its inclusion in or exclusion from the MSF. Proposition 5.2 holds since we can add isolated vertices
to a graph, which obviously does not affect the MSF or the number of necessary comparisons.
We now state a property that is used by Lemmas 5.4 and 5.5.

Property 5.3 The structure of G dictates that MSF(G) = MSF(C_1) ∪ ··· ∪ MSF(C_k), where C_1, ..., C_k
are edge-disjoint subgraphs of G.

If C_1, ..., C_k are the components returned by Partition, it can be seen that the graph C_1 ∪ ··· ∪ C_k
satisfies Property 5.3, since every simple cycle in this graph must be contained in exactly one of
the C_i (so the cycle property applies independently within each component). To see this, consider any simple
cycle and let i be the largest index such that C_i contains an edge in the cycle. Since each C_i shares no more
than one vertex with C_1 ∪ ··· ∪ C_{i−1}, the cycle cannot contain an edge from C_1 ∪ ··· ∪ C_{i−1}.
The proof of the following lemma can be found in [PR99b].
Lemma 5.4 If Property 5.3 holds for G, then there exists an optimal MSF decision tree for G
which makes no comparisons of the form e < f where e ∈ C_i and f ∈ C_j with i ≠ j.

Proof: Consider the subset P of the permutations of all edge weights in which, for every e ∈ C_i
and f ∈ C_j with i < j, it holds that weight(e) < weight(f). Permutations in P have two useful properties which
can be readily verified. First, any number of inter-component comparisons shed no light on the
relative weights of edges in the same component. Second, any spanning forest of a component is
the MSF of that component for some permutation in P.
Now consider any optimal decision tree T for G. Let T' be the subtree of T which contains only
leaves that can be reached by some permutation in P. Each inter-component comparison node in
T' must have only one child, and by the first property, the MSF at each leaf was deduced using
only intra-component comparisons. By the second property, T' must determine the MSF of each
component correctly, and thus by Property 5.3 it must determine the MSF of the graph G correctly.
Hence we can contract T' into a correct decision tree T'' by replacing each one-child node with its
only child. □
Lemma 5.5 If Property 5.3 holds for G, then T*(G) = Σ_i T*(C_i).

Proof: Given optimal decision trees T_i for the C_i, we can construct a decision tree for G by replacing
each leaf of T_1 by T_2, and in general replacing each leaf of T_i by T_{i+1}, and by labeling each leaf of
the last tree by the union of the labels of the original trees along this path. Clearly the height of
this tree is the sum of the heights of the T_i, and hence T*(G) ≤ Σ_i T*(C_i). For the other direction, we need only prove
that no optimal decision tree for G has height less than the sum of the heights of the T_i.
Let T be an optimal decision tree for G that has no inter-component comparisons (as guaranteed
by Lemma 5.4). We show that T can be transformed into a 'canonical' decision tree T' for G of
the same height as T, such that in T', all comparisons for C_i precede all comparisons for C_{i+1}, for
each i, and further, for each i, the subgraph of T' containing the comparisons within C_i consists
of a collection of isomorphic trees. This establishes the desired result, since T' must contain a path
that is the concatenation of the longest path in an optimal decision tree for each of the C_i.
We first prove this result for the case when there are only two components, C_1 and C_2. Assume
inductively that the subtrees rooted at all vertices at a certain depth d in T have been transformed
to the desired structure of having the C_1 comparisons occur before the C_2 comparisons, and with all
subtrees for C_2 within each of the subtrees rooted at depth d being isomorphic. (This is trivially
the case when d is equal to the height of T.)
Consider any node v at depth d − 1. If the comparison at that node is a C_1 comparison, then
all C_2 subtrees at descendant nodes must compute the same set of leaves for C_2. Hence the subtree
rooted at v can be converted to the desired format simply by replacing all C_2 subtrees by one having
minimum depth (note that there are only two different C_2 subtrees: all C_2 subtrees descendant
to the left (right) child of v must be isomorphic). If the comparison at v is a C_2 comparison, we
know that the C_1 subtrees rooted at its left child x and its right child y must both compute the
same set of leaves for C_1. Hence we pick the C_1 subtree of smaller height (w.l.o.g. let its root be
x) and replace v by x, together with the C_1 subtree rooted at x. We then copy the comparison at
node v to each leaf position of this C_1 subtree. For each such copy, we place one of the isomorphic
copies of the C_2 subtree that is a descendant of x as its left subtree, and the C_2 subtree that is a
descendant of y as its right subtree. The subtree rooted at x, which is now at depth d − 1, is in
the desired form, it computes the same result as in T, and there was no increase in the height of
the tree. Hence by induction T can be converted into a canonical decision tree of no greater height.
Assume inductively that the result holds for up to k − 1 ≥ 2 components. The result easily
extends to k components by noting that we can group the first k − 1 components as C'_1 and let C_k
be C'_2. By the above method we can transform T to a canonical tree in which the C_k comparisons
appear as leaf subtrees. We now strip the C_k subtrees from this canonical tree and then, by the
inductive assumption, we can perform the transformation for the remaining k − 1 components. □
Corollary 5.6 Let the C_i be the components formed by the Partition routine applied to graph G,
and let G have m edges and n vertices. Then Σ_i T*(C_i) ≤ T*(m, n).

Corollary 5.7 For any m and n, 2 · T*(m/2, n/4) ≤ T*(m, n).

We can now solve the recurrence relation for the running time of OptimalMSF given in the
previous section. Assume inductively that T(m', n') ≤ c · T*(m', n') on smaller inputs. Then

  T(m, n) ≤ Σ_i T*(C_i) + T(m/2, n/4) + c1 · m
       ≤ T*(m, n) + c · T*(m/2, n/4) + c1 · m            (Corollary 5.6)
       ≤ T*(m, n) + (c/2) · T*(m, n) + 2c1 · T*(m, n)    (Corollary 5.7 and Propositions 5.1, 5.2)
       ≤ c · T*(m, n)   (for sufficiently large c; this completes the induction)
This gives us the desired theorem.
Theorem 5.8 Let T*(m, n) be the decision-tree complexity of the MSF problem on graphs with
m edges and n nodes. Algorithm OptimalMSF computes the MSF of a graph with m edges and n
vertices deterministically in O(T*(m, n)) time.
6 Avoiding Pointer Arithmetic
We have not precisely specified what is required of the underlying machine model. Upon examination,
the algorithm does not seem to require the full power of a random access machine (RAM). No
bit manipulation is used, and arithmetic can be limited to just the increment operation. However, if
procedure DecisionTree is implemented in the obvious manner it will require a table lookup,
and thus random access to memory. In this section we describe an alternate method of handling the
decision trees which can run on a pointer machine [Tar79], a model which does not allow random
access to memory. Our method is similar to that described in [B+98], but we ensure that the time
overhead in performing the table lookups during a call to DecisionTree is linear in the size of the
current input to DecisionTree.
A pointer machine distinguishes pointers from all other data types. The only operations allowed
on pointers are assignment, comparison for equality, and dereferencing. Memory is organized into
records, each of which holds some constant number of pointers and normal data words (integers,
floats, etc.). Given a pointer to a particular record, we can refer to any pointer or data word in that
record in constant time. On non-pointer data, the usual array of logical, arithmetic, and binary
comparison operations is allowed.
We first describe the representation of a decision tree. Each decision tree has associated with it a
generic graph with no edge weights. This decision tree will determine the MSF of this generic graph
under each permutation of edge weights. At each internal node of the decision tree are four pointers:
the first two point to the edges in the generic graph being compared, and the second two point to the
left and right child of the node. Each leaf lists the edges in some spanning tree of the generic graph.
Since a decision tree is a pointer-based structure, we can construct each precomputed decision tree
(by enumerating and checking all possibilities) without using table lookups.
We now describe our representation of the generic graphs. The vertices of a generic graph are
numbered in order by integers starting with 1, and the representation consists of a listing of the
vertices in order, starting from 1, followed by the adjacency list for each vertex, starting with vertex
1. Each generic graph will have a pointer to the root of its decision tree.
Recall that we precomputed decision trees for all generic graphs with at most log^(3) n_0 vertices
(where n_0 is the number of vertices in the input graph whose MSF we need to find). The generic
graphs will be generated and stored in lexicographically sorted order. Note that with our representation,
in the sorted order the generic graphs will appear in nondecreasing order of the number of
vertices in the graph.
Before using a decision tree on an actual graph (which must be isomorphic to the generic graph
for that decision tree), we must associate each edge in the actual graph with its counterpart in the
generic graph. Thus a comparison between edge weights in the generic graph can be substituted
by a comparison between the corresponding weights in the actual graph in constant time.
On a random access machine, we can encode each possible graph in a single machine word (say,
as an adjacency matrix), then index the generic graphs in an array according to this representation.
Thus given a graph we can find the associated decision tree in constant time. On a pointer machine,
however, converting a bit vector or an integer to a pointer is specifically disallowed.
We now describe our method to identify the generic graph for each C_i efficiently. We assume
that each C_i is specified by the adjacency-lists representation, and that each edge (x, y) has a pointer
to the occurrence of (y, x) in y's adjacency list. Each edge also has a pointer to a record containing
its weight. Let m and n be the number of edges and vertices, respectively, in the C_i taken together.
We rewrite each C_i in the same form as the generic graphs, which we will call the numerical
representation. Let C_i have p vertices (note that p ≤ r). We assign the vertices numbers from 1 to
p in the order in which they are listed in the adjacency-lists representation, and we rewrite each
edge as a pair of such numbers indicating its endpoints. Each edge will retain the pointer to its
weight, but that is separate from its numerical representation.
We then change the format of each graph as follows. Instead of a list of numbers, each in the
range [1..r], we represent the graph as a list of pointers. For this we initialize a linked list with
r buckets, labeled 1 through r. If the number j appears in the numerical representation, it is
replaced by a pointer to the j-th bucket.
We transform a graph into this pointer representation by traversing first the list of vertices and
then the list of edges in order, and traversing the list of buckets simultaneously, replacing each
vertex entry, and the first vertex entry for each edge, by a pointer to the corresponding bucket.
Thus edge (x, y), also appearing as (y, x), will now appear as (ptr(x), y) and (ptr(y), x). We then
employ the twin pointers to replace the remaining y and x with their equivalent pointers. Clearly
this transformation can be performed in O(m) time, where m is the sum of the sizes of all of the C_i.
We will now perform a lexicographic sort [AHU74] on the sequence of C_i's in order to group
together isomorphic components. With our representation we can replace each bucket-indexing step
performed by the traditional lexicographic sort by an access to the bucket pointer that we have placed
for each element. Hence the running time for the pointer-based lexicographic sort is O(Σ_i l_i + r),
where l_i is the length of the i-th vector. Since DecisionTree is called
with graphs of size r = O(log^(3) n), and the sum of the sizes of the graphs is
O(m), the radix sort can be performed in O(m) time.
Finally, we march through the sorted list of the C_i's and the sorted list of generic graphs,
matching them up as appropriate. We will only need to traverse an initial sequence of the sorted
generic graphs containing O(2^{r²}) entries in order to match up the graphs. This takes time O(m + 2^{r²}) = O(m).
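A small Python analogue of this grouping step (our own; Python offers no pointer machine, so ordinary dictionaries stand in for the bucket pointers) rewrites each component into its numerical representation and groups components whose representations coincide, so that each group can share one precomputed decision tree.

from collections import defaultdict

def numerical_representation(adj):
    # adj: dict mapping each vertex to its neighbor list, with vertices
    # listed in a fixed order; relabel vertices 1..p by that order.
    number = {v: i + 1 for i, v in enumerate(adj)}
    rep = [len(number)]
    for v in adj:
        for u in adj[v]:
            rep.append((number[v], number[u]))
    return tuple(rep)

def group_components(components):
    groups = defaultdict(list)
    for comp in components:
        groups[numerical_representation(comp)].append(comp)
    return groups      # one decision-tree lookup per group suffices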
7 Performance on Random Graphs
Even if we assume that MST has some super-linear complexity, we show below that our algorithm
runs in linear time for nearly all graphs, regardless of edge weights. This improves upon the
expected linear-time result of Karp and Tarjan [KT80], which depended on the edge weights being
chosen randomly. Our result may also be contrasted with the randomized algorithm of Karger et al.
[KKT95], which is shown to run in O(m) time w.h.p. by a proof that depends on the permutation of
edge weights and the random bits chosen, not the graph topology. In fact, none of the earlier published
MST algorithms appear to have this property of running in linear time w.h.p. on random graphs
for all edge weights. Using the analysis of this section and suitably souped-up versions of earlier
algorithms [FT87, GGST86, Chaz00], we may obtain the same high-probability result.
Our analysis hinges on the observation that for sparse random graphs, w.h.p. any subgraph
constructed by the Partition routine has only a minuscule number of edges in excess of the number
of spanning forest edges in that subgraph. The MST of such graphs can be computed in linear time,
and hence the computation on optimal decision trees takes linear time on these graphs.
Throughout this section, α will denote α(m, n), the inverse-Ackermann function.
Theorem 7.1 The MST of a graph can be found in linear time with probability
(1) 1 − e^{−Ω(m/α²)}, if the graph is drawn from G_{n,m};
(2) 1 − e^{−Ω(pn²/α²)}, if the graph is drawn from G_{n,p}.
Both (1) and (2) hold regardless of the permutation of edge weights.
In the next section we describe the edge-addition martingale for the G_{n,m} model. In Section 7.2
we use this martingale and Azuma's inequality to prove part (1) of Theorem 7.1. Part (2) is shown
to follow from part (1).
7.1 The Edge-Addition Martingale
Consider the G_{n,m} random graph model, in which each graph with n labeled vertices and m edges
is equally likely. For analytical purposes, we select a random graph by beginning with n vertices
and adding one edge at a time [ER61]. Let X_i be a random edge such that X_i ∉ {X_1, ..., X_{i−1}},
and let G_i be the graph made up of the first i edges, with G_0 being the graph on n vertices
having no edges.
A martingale is a sequence of random variables Y_0, Y_1, ..., Y_m such that E[Y_{i+1} | Y_0, ..., Y_i] = Y_i.
We now prove that if g is any graph-theoretic function and Y_i = E[g(G_m) | G_i]
for 0 ≤ i ≤ m, then Y_0, ..., Y_m is a martingale.

Lemma 7.2 The sequence Y_i = E[g(G_m) | G_i], 0 ≤ i ≤ m, is a martingale, where g is any
graph-theoretic function, G_0 is the edge-free graph on n vertices, and G_i is derived from G_{i−1} by
adding a random edge not in G_{i−1} to G_{i−1}.

Proof: Given that G_{i−1} has been fixed, Y_{i−1} = E[g(G_m) | G_{i−1}] is the average of E[g(G_m) | G_i]
over the equally likely choices of the random edge X_i, and this average is exactly E[Y_i | G_{i−1}]. □
We call the sequence proved to be a martingale in Lemma 7.2 the edge-addition martingale, in
contrast to the edge-exposure martingale for G_{n,p}.
We now recall the well-known Azuma's inequality (see, e.g., [AS92]).

Theorem 7.3 (Azuma's Inequality) Let Y_0, ..., Y_m be a martingale with |Y_{i+1} − Y_i| ≤ 1 for all
0 ≤ i < m. Let λ > 0 be arbitrary. Then Pr[|Y_m − Y_0| > λ√m] < e^{−λ²/2}.
To facilitate the application of Azuma's inequality to our edge-addition martingale we establish
the following lemma.

Lemma 7.4 Consider the sequence proved to be a martingale in Lemma 7.2, and let g be any graph-theoretic
function such that |g(G) − g(G')| ≤ 1 for any pair of graphs G and G' that differ in at most
one edge. Then |Y_i − Y_{i−1}| ≤ 1 for all i.

Proof: Y_i and Y_{i−1} are the averages of g(G_m) as the remaining random edges
range over their possible outcomes, given G_i and G_{i−1} respectively. We identify each outcome
of (X_{i+1}, ..., X_m) with an equal-size disjoint set of outcomes of (X_i, ..., X_m), such that these sets
cover all outcomes of (X_i, ..., X_m). Then Y_{i−1} may be regarded as an average of set averages. If, for each set corresponding to an
outcome P of (X_{i+1}, ..., X_m), we establish that the set average differs from g(G_i ∪ P) by no more than 1,
the Lemma follows.
The correspondence is as follows. For each outcome P = (x_{i+1}, ..., x_m), the corresponding set
consists of the outcomes of (X_i, ..., X_m) obtained by letting the first edge range over all edges not
appearing in G_{i−1} and completing the outcome with the edges of P. For each outcome P
and every Q in P's associated set, |g(G_i ∪ P) − g(G_{i−1} ∪ Q)| ≤ 1,
since the two graphs differ in at most one edge. Hence the set average
also differs from g(G_i ∪ P) by no more than 1, where the average is over outcomes Q in P's associated
set. □
7.2 Analysis
We define the excess of a subgraph H to be |E(H)| − |F(H)|, where F(H) is any spanning forest
of H. Let f(G) be the maximum excess of the graph made up of intra-component edges, where the
sets of components range over all possible sets returned by the Partition procedure. (Recall that the
size of any component is no more than k.)
The key observation leading to our linear-time result is that each pass of our optimal algorithm
definitely runs in linear time if f(G) ≤ m/α(m, n). To see this, note that if this bound on f(G)
holds, we can reduce the total number of intra-component edges to 2m/α in linear time using
log α Boruvka steps, and then, clearly, the MST of the resulting graph can be determined in O(m)
time. We show below that if a graph is randomly chosen from G_{n,m}, then f(G) ≤ m/α(m, n) with high
probability.
We now show that Lemma 7.4 applies to the graph-theoretic function f, and then apply Azuma's
inequality to obtain our desired result.
Lemma 7.5 Let G = (V, E ∪ {e}) and G' = (V, E ∪ {e'}) be two graphs on a set of labeled vertices which
differ by no more than one edge. Then |f(G) − f(G')| ≤ 1.

Proof: Suppose w.l.o.g. that f(G) − f(G') > 1. Then we could apply the optimal set of components
of G to G'. Every intra-component edge of G remains an intra-component edge, except possibly
e. This can reduce the excess by no more than one, a contradiction. The possibility that e' may
become an intra-component edge can only help the argument. □
Lemma 7.6 Let G be chosen from G_{n,m}. Then f_E(G_m) = E[f(G_m)] ≤ m/(2α).

Proof: Notice that if m is very small it is simply impossible to have m/α excess intra-component edges, so we
assume m is large enough for this to be possible.
An upper bound on f_E(G_m) is the expected number of indices i s.t. edge X_i completed a cycle
of length ≤ k in G_{i−1}, since all edges which caused f to increase must have satisfied this criterion.
Let q_{i,k} be the probability that X_i completed a cycle of length k. By bounding the number of
such cycles, and the probability that they exist in the graph (recall that i ≤ m), a direct calculation
shows that in either case f_E(G_m) ≤ m/(2α). □

Lemma 7.7 Let G be chosen from G_{n,m}. Then Pr[f(G) > m/α] < e^{−Ω(m/α²)}.

Proof: By Lemmas 7.2, 7.4, and 7.5, the sequence Y_i = E[f(G_m) | G_i] is a martingale with differences
bounded by 1. By applying Azuma's inequality, we have that Pr[|f_E(G_m) − f(G_m)| > λ√m] < e^{−λ²/2}.
Setting λ = √m/(2α) gives the Lemma. Note that by Lemma 7.6, f_E(G_m) ≤ m/(2α), which is
insignificant relative to the threshold m/α. □
We are now ready to prove Theorem 7.1.

Proof: We examine only the first log k passes of our optimal algorithm, since all remaining passes
certainly take o(m) time. Lemma 7.7 assures us that the first pass runs in linear time w.h.p.
However, the topology of the graph examined in later passes does depend on the edge weights.
Assuming the Boruvka steps contract all parts of the graph at a constant rate, which can easily
be enforced, a partition of the graph in one pass of the algorithm corresponds to a partition of the
original graph into components of size less than k^c, for some fixed c. Using k^c in place of k does
not affect Lemma 7.6, which gives the Theorem for G_{n,m}, that is, part (1). For G_{n,p}, note that the
probability that there are not Θ(pn²) edges is exponentially small in pn², hence the probability that
the algorithm fails to run in linear time is dominated by the bound in part (1). □
For the sparse case, where m is too small for an excess of m/α to be possible, Theorem 7.1 part (1) holds with
probability 1; and for p < 1/n, by a Chernoff bound, part (2) holds with probability 1 − e^{−Ω(n/α)}.
8 Remarks

An intriguing aspect of our algorithm is that we do not know its precise deterministic running
time, although we can prove that it is within a constant factor of optimal. Results of this nature
have been obtained in the past for sensitivity analysis of minimum spanning trees [DRT92] and
convex matrix searching [Lar90]. Also, for the problem of triangulating a convex polygon, it
was observed in [DRT92] that an alternate linear-time algorithm could be obtained using optimal
decision trees on small subproblems. However, these earlier algorithms make use of decision trees
in more straightforward ways than the algorithm presented here.
As noted in Section 4.1, the construction of optimal decision trees takes sub-linear time. Thus, it
is important to observe that our use of decision trees does not result in a large constant factor in the
running time. Further, this construction of optimal decision trees is performed by a straightforward
brute-force search, hence the resulting algorithm is uniform.
It was mentioned in the introduction that an optimal algorithm can be constructed for any problem,
given an optimal verification algorithm for that problem [Jo97]. This construction produces an
algorithm which enumerates programs (for some machine model) and executes them incrementally.
Whenever one of the programs halts, the verifier checks its output for correctness. Using a linear-time
MST verification algorithm such as [DRT92, K97, B+98], this construction yields an optimal
MST algorithm; however, it is unsatisfactory for several reasons. Aside from truly astronomical
constant factors (roughly exponential in the size of the optimal program), the algorithm is optimal
only with respect to a particular machine model (say a TM, a RAM, or a pointer machine). Our
result, in contrast, is robust in that it ties the algorithmic complexity of MST to its decision-tree
complexity, a limiting factor in any machine model. It is not always the case that algorithmic complexity
and decision-tree complexity are asymptotically equivalent. In fact, one can easily concoct
simple problems which are NP-hard but nevertheless have polynomial-depth decision trees (e.g., find
the lightest edge on any Hamiltonian path). See [GKS93] and [PR01, Section 8] for two sorting-type
problems whose decision-tree complexity and algorithmic complexity provably diverge.
9 Conclusion
We have presented a deterministic MSF algorithm that is provably optimal. The algorithm runs
on a pointer machine, and on graphs with n vertices and m edges its running time is O(T*(m, n)),
where T*(m, n) is the decision-tree complexity of the MSF problem on n-node, m-edge graphs.
Also, on random graphs our algorithm runs in linear time with high probability for all possible
edge weights. Although the exact running time of our algorithm is not known, we have shown that
the time bound depends only on the number of edge-weight comparisons needed to determine the
MSF, and not on any data-structural issues.
Determining the worst-case complexity of our algorithm is the main open question remaining
in the MSF problem; however, there is a subtler open question. We have given an optimal uniform
algorithm for the MSF problem. Is there an optimal uniform algorithm which does not use
precomputed decision trees (or some similar technique)? More generally, are there problems where
precomputation is necessary? One may wish to study this issue in a simpler setting, say the MSF
verification problem on a pointer machine. Here there is still an α(m, n) factor separating the best
pointer-machine algorithm which uses precomputed decision trees [B+98] and the one which does
not [Tar79b].
One may also ask for the parallel complexity of the MSF problem. Here, the randomized
work-time complexity [PR99] and the deterministic time complexity [CHL99] of the MSF problem
on the EREW PRAM were resolved recently. An open question that remains here is to obtain a
deterministic work-time optimal parallel MSF algorithm. Parallelizing our optimal algorithm is
not at all straightforward. Although handling decision trees does not present any problems in the
parallel context, we still need a method for identifying contractible components in parallel and a
base-case algorithm that performs linear work for graph densities of log^(3) n. Existing sequential
algorithms which are suitable for the base case, such as the one in [FT87], are also not easily
parallelizable.
--R
The Design and Analysis of Computer Algorithms.
The Probabilistic Method.
A faster deterministic algorithm for minimum spanning trees.
A minimum spanning tree algorithm with inverse-Ackermann type complexity
On the parallel time complexity of undirected connectivity and minimum spanning trees.
A note on two problems in connexion with graphs.
Fibonacci heaps and their uses in improved network optimization algorithms.
On the history of the minimum spanning tree problem.
Optimal randomized algorithms for local sorting and set-maxima
Computability and Complexity: From a Programming Perspective.
A randomized linear-time algorithm to find minimum spanning trees
Linear expected-time algorithms for connectivity problems
A simpler minimum spanning tree verification algorithm
An optimal algorithm with unknown time complexity for convex matrix searching.
A randomized time-work optimal parallel algorithm for finding a minimum spanning forest
An optimal minimum spanning tree algorithm.
Computing undirected shortest paths with comparisons and additions.
Finding minimum spanning trees in O(m alpha(m,n)) time
Shortest connection networks and some generalizations.
A class of algorithms which require nonlinear time to maintain disjoint sets.
Applications of path compression on balanced trees.
--TR
Efficient algorithms for finding minimum spanning trees in undirected and directed graphs
Fibonacci heaps and their uses in improved network optimization algorithms
An optimal algorithm with unknown time complexity for convex matrix searching
Verification and sensitivity analysis of minimum spanning trees in linear time
Optimal randomized algorithms for local sorting and set-maxima
Trans-dichotomous algorithms for minimum spanning trees and shortest paths
A randomized linear-time algorithm to find minimum spanning trees
Computability and complexity
Linear-time pointer-machine algorithms for least common ancestors, MST verification, and dominators
Applications of Path Compression on Balanced Trees
The soft heap
A minimum spanning tree algorithm with inverse-Ackermann type complexity
Concurrent threads and optimal parallel minimum spanning trees algorithm
Computing shortest paths with comparisons and additions
Minimizing randomness in minimum spanning tree, parallel connectivity, and set maxima algorithms
The Design and Analysis of Computer Algorithms
A Randomized Time-Work Optimal Parallel Algorithm for Finding a Minimum Spanning Forest
A Faster Deterministic Algorithm for Minimum Spanning Trees
Finding Minimum Spanning Trees in O(m alpha(m,n)) Time
--CTR
Jess Cerquides , Ramon Lpez Mntaras, TAN Classifiers Based on Decomposable Distributions, Machine Learning, v.59 n.3, p.323-354, June 2005
Artur Czumaj , Christian Sohler, Estimating the weight of metric minimum spanning trees in sublinear-time, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Tzu-Chiang Chiang , Chien-Hung Liu , Yueh-Min Huang, A near-optimal multicast scheme for mobile ad hoc networks using a hybrid genetic algorithm, Expert Systems with Applications: An International Journal, v.33 n.3, p.734-742, October, 2007
Seth Pettie, A new approach to all-pairs shortest paths on real-weighted graphs, Theoretical Computer Science, v.312 n.1, p.47-74, 26 January 2004
Ran Mendelson , Robert E. Tarjan , Mikkel Thorup , Uri Zwick, Melding priority queues, ACM Transactions on Algorithms (TALG), v.2 n.4, p.535-556, October 2006
Amos Korman , Shay Kutten, Distributed verification of minimum spanning trees, Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing, July 23-26, 2006, Denver, Colorado, USA | optimal complexity;graph algorithms;minimum spanning tree |
505422 | Closing the smoothness and uniformity gap in area fill synthesis. | Control of variability in the back end of the line, and hence in interconnect performance as well, has become extremely difficult with the introduction of new materials such as copper and low-k dielectrics. Uniformity of chemical-mechanical planarization (CMP) requires the addition of area fill geometries into the layout, in order to smoothen the variation of feature densities across the die. Our work addresses the following smoothness gap in the recent literature on area fill synthesis. (1)The very first paper on the filling problem (Kahng et al., ISPD98 [7]) noted that there is potentially a large difference between the optimum window densities in fixed dissections vs. when all possible windows in the layout are considered. (2)Despite this observation, all filling methods since 1998 minimize and evaluate density variation only with respect to a fixed dissection. This paper gives the first evaluation of existing filling algorithms with respect to "gridless" ("floating-window") mode, according to both the effective and spatial density models. Our experiments indicate surprising advantages of Monte-Carlo and greedy strategies over "optimal" linear programming (LP) based methods. Second, we suggest new, more relevant methods of measuring a local uniformity based on Lipschitz conditions, and empirically demonstrate that Monte-Carlo methods are inherently better than LP with respect to the new criteria. Finally, we propose new LP-based filling methods that are directly driven by the new criteria, and show that these methods indeed help close the "smoothness gap". | INTRODUCTION
Chemical-mechanical planarization (CMP) and other manufacturing
steps in nanometer-scale VLSI processes have
varying effects on device and interconnect features, depending
on the local characteristics of the layout. To improve
manufacturability and performance predictability, foundry
rules require that a layout be made uniform with respect
to prescribed density criteria, through insertion of area fill
(dummy fill) geometries.
All existing methods for synthesis of area fill are based
on discretization: the layout is partitioned into tiles, and
filling constraints or objectives (e.g., minimizing the maximum
density variation) are enforced for square windows that
each consist of r × r tiles. Thus, to practically control layout
density in arbitrary windows, density bounds are enforced in
only a finite set of windows. More precisely, both foundry
rules and EDA physical verification and layout tools attempt
to enforce density bounds within r² overlapping fixed dissections,
where r determines the "phase shift" w/r by which
the dissections are offset from each other. The resulting
fixed r-dissection (see Figure 1) partitions the n × n layout
into tiles T_ij, then covers the layout by w × w windows W_ij,
1 ≤ i, j ≤ nr/w − r + 1, such that each window W_ij consists of
r² tiles T_kl, i ≤ k ≤ i + r − 1, j ≤ l ≤ j + r − 1.
Two main filling objectives are considered in the recent
literature:
• (Min-Var Objective) the variation in window density
(i.e., maximum window density minus minimum window
density) is minimized while the window density
does not exceed the given upper bound U;
• (Min-Fill Objective) the number of inserted fill geometries
is minimized while the density of any window
remains in the given range (L, U).
Recent methods for area fill synthesis have also focused exclusively
on the fixed-dissection context, including:
• Linear Programming (LP) methods based on rounding
a relaxation of the corresponding integer linear program
formulations. The LP formulations for filling were first
proposed by Kahng et al. in [6] and adapted to other
objectives and CMP models in [12, 13];
• Greedy methods, which iteratively find the best tile for
the next filling geometry to be added into the layout.
These methods were first used in [3] for ILD thickness
control, and also used for the shallow-trench isolation
(STI) CMP model in [13];
• Monte-Carlo (MC) methods, which are similar to greedy
methods but insert the next filling geometry randomly.
Due to their efficiency and accuracy, these were used for
both flat [3, 4] and hierarchical [2] layout density control
(a minimal sketch of the insertion loop follows this list); and
• Iterated Greedy (IGreedy) and Iterated Monte-Carlo (IMC)
methods, which improve the solution quality
by iterating the insertions and deletions of dummy fill
features with respect to the density variation [3].
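As a minimal illustration of the Monte-Carlo approach referenced above (our own Python sketch; the unit-area fill geometry and the per-tile slack bookkeeping are simplifying assumptions), each step locates a minimum-density window and inserts one fill geometry into a random tile of that window that still has slack:

import random

def mc_fill(tile_area, slack, r, w, steps):
    # tile_area: n x n list of per-tile feature area; slack: remaining
    # fill capacity per tile. One unit of fill is inserted per step.
    n = len(tile_area)
    def win_density(i, j):
        return sum(tile_area[a][b] for a in range(i, i + r)
                                   for b in range(j, j + r)) / (w * w)
    for _ in range(steps):
        _, i, j = min((win_density(i, j), i, j)
                      for i in range(n - r + 1) for j in range(n - r + 1))
        cand = [(a, b) for a in range(i, i + r) for b in range(j, j + r)
                if slack[a][b] > 0]
        if not cand:
            break                  # the emptiest window is saturated
        a, b = random.choice(cand)
        tile_area[a][b] += 1
        slack[a][b] -= 1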
The motivation for our present work is a "smoothness
gap" in the fill literature. All existing filling methods fail to
consider the potentially large difference between extremal
densities in fixed-dissection windows and extremal densities
when all possible windows are considered. On the other
hand, the very first paper in the fill synthesis literature
(Kahng et al., ISPD98 [7]) already pointed out the gap between
fixed-dissection and "gridless" analyses, for both tile
density and window density. 1 The potential consequence
of the smoothness gap is that the fill result will not satisfy
either given upper bounds on post-fill window density,
or given bounds on density variation between windows. As
post-CMP variation for oxide ILD polishing is essentially
monotone in window density variation [11], this smoothness
gap can compromise manufacturability of the layout, particularly
given the small values of r in recent design rules.
We first address the discretization gap in existing analysis
(i.e., evaluation) methods. Previous works compare density
control methods only with respect to a given fixed grid,
which underestimates the actual "gridless" density variation,
but this has been justified on the grounds that gridless analysis
is impractical. In this paper, we show for the first time the
viability of gridless or floating-window analyses, originally
developed for the spatial density model [6], and extend them to
the more accurate effective density model [9]. Second, previous
research in layout density control concentrated on the
global uniformity achieved by minimizing the window density
variation over the entire layout. However, the density
variation between locations which are far from each other
is actually not so critical, in that the pressure/speed of the
polishing pad can be (self-)adjusted during CMP. Thus, we
propose and analyze criteria for "local uniformity" as a measure
of smoothness in filling solutions. We evaluate existing
methods with respect to the new criteria, and we suggest LP-based
methods that directly optimize filling solutions with
respect to smoothness.
The rest of the paper is organized as follows. In Section
2 we show how to apply floating window density analysis
1 Bounding the spatial density in a fixed set of w × w windows
can incur substantial error, since other windows may still
violate the density bounds [6].
methods (such as extremal-density window and multilevel
density analyses) to spatial and effective density models.
We then give the first "gridless" evaluation of existing filling
algorithms, under the effective as well as spatial density
models. Our experiments indicate surprising advantages of
Monte-Carlo and greedy methods over "optimal" linear programming
(LP) based methods. In Section 3 we introduce
new Lipschitz-like measures for layout smoothness and describe
new LP-based filling methods driven by such mea-
sures. We also compare the results of existing and new filling
approaches, with respect to the new smoothness criteria.
Section 4 concludes with directions for future work.
2. LAYOUT DENSITY ANALYSES
As noted above, for the sake of tractability, previous works
have used fixed dissections to decide the amount and positions
of dummy fill features [6]. A smoothness gap thus exists
because a filling solution based on a fixed-dissection does not
address the true post-filling density variation. Here we first
summarize the two main density models used in the current
literature. We then introduce two extremal-density analysis
algorithms (for spatial and effective density, respectively)
which we use to compute post-fill layout density. Finally, we
evaluate existing filling methods according to (near-)gridless
density variation.
2.1 Density Models for Oxide CMP
We focus on layout density control for (oxide) interlevel
dielectric CMP. 2 Several models have been proposed in [8],
including the model of [10], where the interlevel dielectric
thickness z at location (x, y) is calculated as:

  z = z0 − K·t/ρ(x, y)                  for t < ρ(x, y)·z1/K
  z = z0 − z1 − K·t + ρ(x, y)·z1        for t ≥ ρ(x, y)·z1/K        (1)

where K is the blanket polish rate, t is the polish time, and z0 and z1 are
process-dependent height parameters.
The crucial element of this model is the determination of the
effective initial pattern density, ρ(x, y). The simplest model
for ρ(x, y) is the local areal feature density, i.e., the window
density is simply equal to the normalized sum

  ρ(x, y) = (1/w²) · Σ_{T_kl ∈ W(x,y)} area(T_kl)                   (2)

where area(T_kl) denotes the original layout area of the tile
T_kl and the sum is over the tiles of the w × w window at (x, y).
This spatial density model is due to [6], which solved
the resulting filling problem using linear programming.
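For concreteness, the following Python sketch (our own; it assumes per-tile feature areas have already been extracted, and normalizes by the window area w²) computes the spatial density of every fixed-dissection window:

import numpy as np

def window_densities(tile_area, r, w):
    # tile_area: 2-D array of feature area per (w/r x w/r) tile.
    T = np.asarray(tile_area, dtype=float)
    n = T.shape[0]
    dens = np.zeros((n - r + 1, n - r + 1))
    for i in range(n - r + 1):
        for j in range(n - r + 1):
            dens[i, j] = T[i:i + r, j:j + r].sum() / (w * w)
    return dens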
A more accurate model considers the deformation of the
polishing pad during the CMP process [5]: the effective local
density ρ(x, y) is calculated as the sum of weighted spatial
pattern densities within the window, relative to an elliptical
weighting function

  f(x, y) = c0 · exp(−c1 · (x² + y²)^{c2})                          (3)

with experimentally determined constants c0, c1, and c2 [12].
The discretized effective local pattern density ρ for a window
W_ij in the fixed-dissection regime (henceforth referred to as
effective density) is:
2 Recent works, particularly by Wong et al., have
studied alternative arenas for dummy fill, including shallow-trench
isolation and dual-damascene copper. For such arenas,
density calculations and physical polish mechanisms are
different from those in the oxide context. Consideration of
these alternate models is orthogonal to our contribution;
certainly, the concept of a "smoothness gap" applies to all
filling contexts.
Figure 2: An arbitrary floating w × w window W
always contains a shrunk (r − 1) × (r − 1)-tile window
of a fixed r-dissection, and is always covered by a
bloated (r + 1) × (r + 1)-tile window of the fixed r-dissection.
A standard r × r fixed-dissection window is shown
with a thick border. A floating window is shown in
light gray. The white window is the bloated fixed-dissection
window, and the dark gray window is the
shrunk fixed-dissection window.
  ρ(W_ij) = Σ_{T_kl ∈ W_ij} area(T_kl) · f(x_kl, y_kl)              (4)

where the arguments of the elliptical weighting function f
are the x- and y-distances of the tile T_kl from the center of
the window W_ij.
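A matching sketch for the effective density model (again our own; the weighting function f is passed in, since its constants are process-dependent) weights each tile by its distance from the window center:

import numpy as np

def effective_densities(tile_area, r, w, f):
    # f(dx, dy): elliptical weighting function, evaluated at the x- and
    # y-distance of each tile center from the window center.
    T = np.asarray(tile_area, dtype=float)
    n = T.shape[0]
    t = w / r                          # tile side length
    c = (r - 1) / 2.0                  # window center, in tile units
    Wgt = np.array([[f((a - c) * t, (b - c) * t) for b in range(r)]
                    for a in range(r)])
    out = np.zeros((n - r + 1, n - r + 1))
    for i in range(n - r + 1):
        for j in range(n - r + 1):
            out[i, j] = (T[i:i + r, j:j + r] * Wgt).sum()
    return out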
2.2 Window Density Analyses
The authors of [6] proposed optimal extremal-density (i.e.,
minimum or maximum window density in the layout) analysis
algorithms. Their ALG1, with complexity O(k²), where k is the
number of rectangles in the layout, is proposed as a means of
checking the gridless post-filling density variation. However,
with a large number of original and dummy fill features, this
algorithm may be infeasible in practice.
Another method of [6] overcomes the intractability of optimal
extremal-density analysis, based on the following fact
(see Figure 2).

Lemma 1. Given a fixed r-dissection, any arbitrary w × w
window will contain some shrunk w(1 − 1/r) × w(1 − 1/r)
window of the fixed r-dissection, and will be contained in
some bloated w(1 + 1/r) × w(1 + 1/r) window of the fixed
r-dissection.
The authors of [6] implemented the above Lemma within
a multi-level density analysis algorithm (see Figure 3). Here ε is
used to denote the required user-defined accuracy in
finding the maximum window density. The lists TILES and
WINDOWS are byproducts of the analysis. Since any floating
w × w window W is contained in some bloated window,
the filled area in W ranges between Max (the maximum w × w
window filled area found so far) and BloatMax (the maximum
bloated-window filled area found so far). The algorithm terminates
when the relative gap between Max and BloatMax
is at most 2·ε, and then outputs the middle of the range
(Max, BloatMax).
We use this algorithm (with accuracy ε = 1.5%) throughout
this paper to achieve an accurate, efficient post-filling density
analysis. 3 To handle the effective density model, the
3 For the test cases used in this paper, the runtimes of the
Multi-Level Density Analysis Algorithm
Input: n × n layout and accuracy ε > 0
Output: maximum density of a w × w window, with accuracy ε
(1) Make a list ActiveTiles of all w/r × w/r tiles
(2) Accuracy := 1
(3) While Accuracy > ε do
    (a) Find all rectangles in the w/r × w/r tiles from ActiveTiles
    (b) Find the area of each window consisting of tiles from
        ActiveTiles, and add each such window to the list WINDOWS
    (c) Max := maximum area of a standard window with tiles
        from ActiveTiles
    (d) BloatMax := maximum area of a bloated window
        with tiles from ActiveTiles
    (e) For each tile T from ActiveTiles which does not belong to
        any bloated window of area more than Max do
        remove T from ActiveTiles
    (f) Replace in ActiveTiles each tile with four of its subtiles,
        and set Accuracy := (BloatMax − Max)/(2 · Max)
(4) Move all tiles from ActiveTiles to TILES and output (Max + BloatMax)/2

Figure 3: Multi-level density analysis algorithm.
multi-level density analysis based on bloated and shrunk
windows must be refined somewhat. To obtain more accurate
results, the multi-level density analysis algorithm divides
the r-dissection into smaller grids, so that more windows
will be considered. With the effective density model,
the discretized formulation (4) shows that the effective
local pattern density depends on the window size w
and the r-dissection. That is, we have to consider the effect
of the further division of the layout on the formulation during
post-filling density analysis. We assume here that the effective
local pattern density is still calculated with the value of the
r-dissection used in the filling process. The only difference is
that the window phase-shift will be smaller. For example,
in Figure 4(a) we calculate the effective density of the window
shown in light gray by considering 5 × 5 tiles (also called
"cells") during the filling process. In Figure 4(b) the layout
is further partitioned by a factor of 4. The effective density
of the light gray window will still be calculated with the 5 × 5
"cells". Here each "cell" has the same dimension as a tile in
the filling process and consists of 2 × 2 smaller tiles. More
windows (e.g., the window with the thick border) with smaller
phase-shifts will be considered in the more finely gridded layout.
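Lemma 1 is what makes the refinement loop work: at any grid granularity, the fixed-dissection maximum and the bloated-window maximum bracket the floating-window maximum. A compact Python sketch of one bracketing step (our own; the tile subdivision between steps is omitted):

import numpy as np

def bracket_floating_max(tile_area, r, w):
    # Returns (lo, hi) with lo <= max floating-window density <= hi.
    T = np.asarray(tile_area, dtype=float)
    n = T.shape[0]
    def max_block_sum(k):              # max area over all k x k tile blocks
        best = 0.0
        for i in range(n - k + 1):
            for j in range(n - k + 1):
                best = max(best, T[i:i + k, j:j + k].sum())
        return best
    lo = max_block_sum(r) / (w * w)       # some fixed window attains this
    hi = max_block_sum(r + 1) / (w * w)   # bloated windows cover every window
    return lo, hi       # subdivide tiles and repeat until hi - lo is small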
2.3 Accurate Analysis of Existing Methods
Here we compare the performance of existing fill synthesis
methods, using the accurate multi-level floating-window
density analysis. All experiments are performed using part
of a metal layer extracted from an industry standard-cell
layout 4 (Table 1). Benchmark L1 is the M2 layer from an
8,131-cell design and benchmark L2 is the M3 layer from a
20,577-cell layout.
multi-level analysis with accuracy ε = 1.5% appear reasonable.
Our (unoptimized) implementation has the following
runtimes for Min-Var LP solutions and the spatial density
model: L1/32 (45 sec), L1/16 (183 sec), L2/28 (99 sec),
L2/14 (390 sec). For the effective density model, the runtimes
are: L1/32 (49 sec), L1/16 (194 sec), L2/28 (109 sec).
4 Our experimental testbed integrates GDSII Stream input,
conversion to CIF format, and internally-developed geometric
processing engines, coded in C++ under Solaris. We use
CPLEX version 7.0 as the linear programming solver. All
runtimes are CPU seconds on a 300 MHz Sun Ultra-10 with
1GB of RAM.
Figure 4: Post-filling density analysis for the effective
density model. (a): a fixed dissection, where
each window consists of 5 × 5 cells (the same size
as tiles); (b): a fixed dissection for post-filling density
analysis, where each window consists of 10 × 10
smaller tiles and each cell consists of 2 × 2 tiles.
Table 2 shows that the underestimation of the window density
variation, as well as the violation of the maximum window density,
in fixed-dissection filling can be severe: e.g., for the LP
method applied to the case L2/28/4 under the spatial (resp.
effective) density model, the density variation is underestimated
by 210% (resp. 264%) and the maximum density
is violated by 21% (resp. 15%). Even for the finest grid
(L2/28/16), the LP method may still yield considerable error:
11% (resp. 23%) in density variation and 1.2% (resp.
3.2%) in maximum density violation. Note that the LP
method cannot easily handle a finer grid, since its runtime
is proportional to r^6.
Our comparisons show that the winning method is IMC
and the runner-up is IGreedy. IMC and IGreedy can be run
on much finer grids, since their runtime is proportional to
r² log r. Although for L2/28/16 the errors in density
variation and the maximum density violation are similar,
the iterative methods become considerably more accurate.
test case        L1        L2
layout size n    125,000   112,000
rectangles k     49,506    76,423

Table 1: Parameters of the two industry test cases.
Here 40 units are equivalent to 1 micron.
3. LOCAL DENSITY VARIATION
The main objective of layout filling is to improve CMP
performance and increase yield. Traditionally, layout uniformity has been
measured by the global spatial or effective density variation over
all windows. Such a measure does not take into account that
the polishing pad can change (adjust) its pressure
and rotation speed during CMP according to the pattern distribution
(see [11]). Boning et al. [1] further point out that while the
effective density model is excellent for local CMP effect prediction,
it fails to take into account global step heights. The
influence of density variation between far-apart regions can
be reduced by a mechanism of pressure adjustment, which
leads to the contact-wear model proposed in [1]. Within each
local region, area fill can be used to improve CMP
performance. Therefore, density variation between two windows
in opposite corners of the layout will not cause problems,
because of the polishing dynamics. According to the
integrated contact-wear and effective density model, only a
significant density variation between neighboring windows
will complicate polishing-pad control and may cause either
dishing or underpolishing. Thus, it is more important to
measure density variation between neighboring windows.
3.1 Lipschitz Measures of Smoothness
Depending on the CMP process and the polishing pad
movement relative to the wafer, we may consider different
window "neighborhoods". Below we propose three relevant
Lipschitz-like definitions of local density variation, which differ
only in which windows are considered to be neighbors (a small
sketch computing the first two measures follows the list).
• Type I: the maximum density variation over every r
neighboring windows in each row of the fixed dissection.
The intuition here is that the polishing pad is moving
along window rows, and therefore only overlapping
windows in the same row define a neighborhood.
• Type II: the maximum density variation over every
cluster of windows which cover one tile. The idea here
is that the polishing pad can touch all overlapping windows
almost simultaneously.
• Type III: the maximum density variation over every
cluster of windows which cover one square consisting
of r/2 × r/2 tiles. The difference between this and the
previous definition is the assumption that the polishing
pad is moving slowly; if windows overlap but are still
too far from each other, then we can disregard their
mutual influence.
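A small Python sketch of the type-I and type-II measures (our own; it takes a precomputed array of fixed-dissection window densities as input):

import numpy as np

def lip_type_I(dens, r):
    # dens[i, j]: density of window W_ij; max variation among any r
    # consecutive windows in the same row.
    D = np.asarray(dens, dtype=float)
    best = 0.0
    for i in range(D.shape[0]):
        for j in range(D.shape[1] - r + 1):
            seg = D[i, j:j + r]
            best = max(best, seg.max() - seg.min())
    return best

def lip_type_II(dens, r):
    # Max variation among the r x r block of windows that all cover a
    # common (interior) tile.
    D = np.asarray(dens, dtype=float)
    best = 0.0
    for i in range(D.shape[0] - r + 1):
        for j in range(D.shape[1] - r + 1):
            blk = D[i:i + r, j:j + r]   # windows covering tile (i+r-1, j+r-1)
            best = max(best, blk.max() - blk.min())
    return best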
We compared the behaviors of existing filling methods
with respect to these Lipschitz characteristics. The results
in Table 3 show that there is a gap between the traditional
Min-Var objective and the new "smoothness" objectives:
the solution with the best Min-Var objective value does not
always have the best value in terms of the "smoothness" objectives.
For the spatial (resp. effective) density model, although
LP yields the best result for the case L2/28/4 with the Min-Var
objective under the fixed-dissection model, it cannot obtain the
best result with respect to the Lipschitz type-I variation. Thus,
the "smoothness" objectives should be considered separately in
the filling process. We also notice that Monte-Carlo methods
can achieve better solutions than LP with respect to
the "smoothness" objectives (note that although LP is "optimal",
it suffers from rounding and discreteness issues when
converting the LP solution to an actual filling solution).
3.2 Smoothness Objectives for Filling
Obviously, all Lipschitz conditions are linear and can be
incorporated into linear programming formulations. We describe
four linear programming formulations for the "smoothness"
objectives with respect to the spatial density model.
(The linear programming formulations for the effective density
model are similar.)
The first linear programming formulation, for the Min-Lip-I
objective, is:

  Minimize: L
  Subject to:
    0 ≤ p_st ≤ slack(T_st)                          for all tiles T_st          (5)
    slack(T_st) = pattern · ((w/r)² − area(T_st))    for all tiles T_st          (6)
    Σ_{T_st ∈ W_ij} (area(T_st) + p_st) ≤ U · w²    for all windows W_ij        (7)
    |ρ(W_ij) − ρ(W_ik)| ≤ L    for all windows W_ij, W_ik in the same row with |j − k| < r   (8)

where ρ(W_ij) = (1/w²) · Σ_{T_st ∈ W_ij} (area(T_st) + p_st) is the filled spatial density of window W_ij.
Table 2: Multi-level density analysis of the results of existing fixed-dissection
filling methods, for both the spatial and the effective density models. Notation:
T/W/r: layout / window size / r-dissection; LP: linear programming method;
Greedy: greedy method; MC: Monte-Carlo method; IGreedy: iterated greedy method;
IMC: iterated Monte-Carlo method; OrgDen: density of the original layout; FD:
fixed-dissection density analysis; Multi-Level: multi-level density analysis;
MaxD: maximum window density; MinD: minimum window density; DenV: density variation.
Test case   LP                   Greedy               MC                   IGreedy              IMC
T/W/r       LipI  LipII LipIII   LipI  LipII LipIII   LipI  LipII LipIII   LipI  LipII LipIII   LipI  LipII LipIII

Spatial Density Model

Effective Density Model
L1/16/4     4.048 4.333 3.864    5.332 5.619 5.190    3.631 4.166 3.448    3.994 4.254 3.132    4.245 4.481 3.315
L2/28/4     2.882 5.782 4.855    2.694 6.587 6.565    1.498 5.579 5.092    2.702 6.317 5.678    2.532 5.640 4.981

Table 3: Different behaviors of existing filling methods on the "smoothness" objectives.
Note: all data for the effective density model have been multiplied by 10³. Notation:
LipI: Lipschitz condition I; LipII: Lipschitz condition II; LipIII: Lipschitz condition III.
Here, U is the given upper bound on the window densities.
The constraints (5) imply that features can be added to,
but not deleted from, any tile. The slack constraints (6) are
computed for each tile; the pattern-dependent coefficient
pattern denotes the maximum pattern area which can be embedded
in an empty unit square. If a tile T_ij is originally
overfilled, then we set slack(T_ij) = 0. In the LP solution,
the values of p_ij indicate the fill amount to be inserted in
each tile T_ij. The constraint (7) says that no window can
have density greater than U (unless it was initially overfilled).
The constraints (8) imply that the auxiliary variable
L is an upper bound on the density variation among neighboring windows
in the same row.
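To make the construction concrete, here is a Python/scipy sketch of the Min-Lip-I program (entirely our own illustration; the per-tile areas and slacks are assumed precomputed, and the variables are the per-tile fill amounts p plus the auxiliary bound L, mirroring constraints (5)-(8) above):

import numpy as np
from scipy.optimize import linprog

def min_lip_I_lp(area, slack, r, w, U):
    # area, slack: n x n arrays of per-tile feature area and fill slack.
    n = area.shape[0]
    nw = n - r + 1                       # windows per row / column
    nv = n * n + 1                       # variables: p_00..p_(n-1)(n-1), L
    def wrow(i, j):                      # p-coefficients of window W_ij
        row = np.zeros(nv)
        for a in range(i, i + r):
            row[a * n + j : a * n + j + r] = 1.0
        return row
    A_ub, b_ub = [], []
    for i in range(nw):                  # constraint (7): density <= U
        for j in range(nw):
            A_ub.append(wrow(i, j))
            b_ub.append(U * w * w - area[i:i+r, j:j+r].sum())
    for i in range(nw):                  # constraint (8): same-row neighbors
        for j in range(nw):
            for k in range(j + 1, min(j + r, nw)):
                d = wrow(i, j) - wrow(i, k)
                o = area[i:i+r, k:k+r].sum() - area[i:i+r, j:j+r].sum()
                for s in (1.0, -1.0):    # encode |difference| <= L * w^2
                    row = s * d
                    row[-1] = -w * w
                    A_ub.append(row)
                    b_ub.append(s * o)
    c = np.zeros(nv)
    c[-1] = 1.0                          # minimize L
    bounds = [(0.0, float(slack[x, y])) for x in range(n) for y in range(n)]
    bounds.append((0.0, None))           # constraints (5)-(6) via bounds
    return linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)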
The second linear programming formulation, for the Min-Lip-II
objective, replaces the constraints (8) with the following
constraints:

  minDen(i, j) ≤ ρ(W_kl) ≤ maxDen(i, j)    for every window W_kl which covers tile T_ij
  maxDen(i, j) − minDen(i, j) ≤ L                                                        (9)

Here, the auxiliary variables minDen(i, j) and maxDen(i, j)
are the minimum and maximum densities over the windows
which cover the tile T_ij. The constraints
above ensure that the density variation among all windows
which cover T_ij is less than the auxiliary variable L.
The Min-Lip-III objective strives to minimize the maximum
density variation over every cluster of windows which
cover one square consisting of r/2 × r/2 tiles. The constraints (9)
are changed to the following:

  minDen(i, j) ≤ ρ(W_kl) ≤ maxDen(i, j)    for every window W_kl which covers the
                                            square of r/2 × r/2 tiles with corner T_ij
  maxDen(i, j) − minDen(i, j) ≤ L                                                        (10)

The constraints (10) ensure that the density variation between
any two windows which cover the same r/2 × r/2 square of tiles is less than
the auxiliary variable L.
Finally, in order to consider the "smoothness" objectives
together with the Min-Var objective, we propose another
LP formulation whose combined objective is a linear combination of the
Min-Var, Lip-I, and Lip-II objectives with specific coefficients:

  Minimize: C1 · LI + C2 · LII − C0 · M

The Lip-I constraints (8), the Lip-II constraints (9), and the Min-Var constraints

  M ≤ ρ(W_ij)    for all windows W_ij

are added for the combined objective. Here, the auxiliary variables LI and LII are the maximum
Lipschitz type-I and type-II variations, and the auxiliary
variable M is a lower bound on all the window densities.
3.3 Computational Experience
We tested the smoothness of filling solutions generated
using the same test cases, with smoothness evaluated using
a finest-grid density analysis with r = 64. The runtimes of the
new methods are substantially longer than for the original
Min-Var LP formulation, because many more constraints are
added for each layout window due to the Lipschitz-condition
objectives. For example, for L2/28/8 and the spatial density
model, the runtime of the Min-Var LP is 6.9 seconds, while
the Lip-I LP runtime is 3.41 seconds, the Lip-II LP runtime
is 994 seconds, and the Lip-III LP runtime is 71.4 seconds.
For L2/28/8 and the effective density model, the runtime
of the Min-Var LP is 2.3 seconds, while the Lip-I LP runtime
is 708 seconds, the Lip-II LP runtime is 5084 seconds, and
the Lip-III LP runtime is 2495 seconds. Since fill generation
is a post-processing step (currently performed in PV tools),
we do not believe that these runtimes are prohibitive. Our
major win is that LP remains tractable with the Lipschitz objectives
(as opposed to intractable with large values of r). Of course,
finding smoothness objectives that result in smaller LPs is
a direction for future work.

Test case   Min-Var LP   LipI LP   LipII LP   LipIII LP   Comb LP
Spatial Density Model
Effective Density Model

Table 4: Comparison among the LP methods on the Min-Var and Lipschitz-condition
objectives. Notation: Min-Var LP: LP with the Min-Var objective; LipI LP: LP with the
Min-Lip-I objective; LipII LP: LP with the Min-Lip-II objective; LipIII LP: LP with the
Min-Lip-III objective; Comb LP: LP with the combined objective.
The performance of the new LP formulations with "smoothness"
objectives is studied in Table 4. We use the coefficients
(0.4/0.4/0.2) in the combined objective; these values
were derived from greedy testing of all coefficient combinations.
Because of LP's rounding error, 5 some of the new LPs do
not achieve the best value on certain test cases. From the
comparison between the new LPs and the Min-Var LP, it appears
that neither the Min-Var LP nor the Lipschitz-condition-derived
LPs are dominant. At the same time, when compared
against the existing filling methods in Table 3, the new
LP with the combined objective normally achieves the best overall
solutions in terms of trading off among MinDen and
Lipschitz conditions I and II. Another interesting observation
is that the LP with the combined objective can achieve
even smaller density variations than the Min-Var LP. This
shows that the solution quality of LP methods can be significantly
damaged by rounding effects, and that a better
non-LP method may be possible.
4. CONCLUSIONS & FUTURE RESEARCH
To improve manufacturability and performance predictability, it is necessary to "smoothen" a layout by the insertion of "filling (dummy) geometries". In this paper, we pointed out the potentially large difference between fixed-dissection filling results and the actual maximum or minimum window density in optimal density analyses. We compared existing filling algorithms in gridless mode using the effective as well as the spatial density models. We also suggested new methods of measuring local uniformity of the layout based on Lipschitz conditions and proposed new filling methods based on these properties. Our experimental results highlight the advantages of Monte-Carlo-based and greedy-based methods over previous linear-programming-based approaches.
⁵ The desired fill area specified for each tile in the LP solution must be rounded to an area that corresponds to an integer number of dummy fill features.
Ongoing work addresses extensions of multi-level density
analyses to measuring local uniformity ("smoothness") with
respect to other CMP physical models. We also seek improved
methods for optimizing fill synthesis with respect to
our new (and possibly other alternative) local uniformity
objectives.
--CTR
Hua Xiang , Liang Deng , Ruchir Puri , Kai-Yuan Chao , Martin D.F. Wong, Dummy fill density analysis with coupling constraints, Proceedings of the 2007 international symposium on Physical design, March 18-21, 2007, Austin, Texas, USA
Performance-impact limited area fill synthesis, Proceedings of the 40th conference on Design automation, June 02-06, 2003, Anaheim, CA, USA
Hua Xiang , Kai-Yuan Chao , Ruchir Puri , Martin D.F. Wong, Is your layout density verification exact?: a fast exact algorithm for density calculation, Proceedings of the 2007 international symposium on Physical design, March 18-21, 2007, Austin, Texas, USA
Andrew B. Kahng, Research directions for coevolution of rules and routers, Proceedings of the international symposium on Physical design, April 06-09, 2003, Monterey, CA, USA | monte-carlo;dummy fill problem;VLSI manufacturability;density analysis;chemical-mechanical polishing |
505522 | High-speed architectures for Reed-Solomon decoders. | New high-speed VLSI architectures for decoding Reed-Solomon codes with the Berlekamp-Massey algorithm are presented in this paper. The speed bottleneck in the Berlekamp-Massey algorithm is in the iterative computation of discrepancies followed by the updating of the error-locator polynomial. This bottleneck is eliminated via a series of algorithmic transformations that result in a fully systolic architecture in which a single array of processors computes both the error-locator and the error-evaluator polynomials. In contrast to conventional Berlekamp-Massey architectures in which the critical path passes through two multipliers and 1 + ⌈log₂(t+1)⌉ adders, the critical path in the proposed architecture passes through only one multiplier and one adder, which is comparable to the critical path in architectures based on the extended Euclidean algorithm. More interestingly, the proposed architecture requires approximately 25% fewer multipliers and a simpler control structure than the architectures based on the popular extended Euclidean algorithm. For block-interleaved Reed-Solomon codes, embedding the interleaver memory into the decoder results in a further reduction of the critical path delay to just one XOR gate and one multiplexer, leading to speedups of as much as an order of magnitude over conventional architectures. | Introduction
Reed-Solomon codes [1], [3] are employed in numerous communications systems such as those for deep space, digital subscriber loops, and wireless systems, as well as in memory and data storage systems. Continual demand for ever higher data rates makes it necessary to devise very high-speed implementations of decoders for Reed-Solomon codes. Recently reported decoder implementations [5], [19] have quoted data rates ranging from 144 Mb/s to 1.28 Gb/s. These high throughputs have been achieved by architectural innovations such as pipelining and parallel processing. A majority of the implementations [2], [8], [15], [19] employ an architecture based on the extended Euclidean (eE) algorithm for computing the greatest common divisor of two polynomials [3]. A key advantage of architectures based upon the eE algorithm is regularity. In addition, the critical path delay in these architectures is at best T_mult + T_add + T_mux, where T_mult, T_add, and T_mux are the delays of the finite-field multiplier, adder, and 2-to-1 multiplexer, respectively, and this is sufficiently small for most applications. In contrast, relatively few decoder implementations have employed architectures based on the Berlekamp-Massey (BM) algorithm [1], [3], [10], presumably because the architectures were found to be irregular and to have a longer critical path delay that was also dependent on the error-correcting capability of the code [5]. In this paper, we show that, in fact, it is possible to reformulate the BM algorithm to achieve extremely regular decoder architectures. Surprisingly, these new architectures can not only operate at data rates comparable to architectures based on the eE algorithm, but they also have lower gate complexity and simpler control structures.
This paper begins with a brief tutorial overview of the encoding and decoding of Reed-Solomon
codes in Section II. Conventional architectures for decoders based on the BM algorithm are
described in Section III. In Section IV, we show that it is possible to algorithmically transform
the BM algorithm so that a homogenous systolic array architecture for the decoder can be
developed. Finally, in Section V, we describe a pipelined architecture for block-interleaved Reed-Solomon
codes that achieves an order of magnitude reduction in the critical path delay over the
architectures presented in Sections III and IV.
II. Reed-Solomon Codes
We provide a brief overview of the encoding and decoding of Reed-Solomon codes.
A. Encoding of Reed-Solomon Codes
Consider k data symbols (bytes) d_{k−1}, ..., d_1, d_0 that are to be transmitted over a communication channel (or stored in memory). These bytes are regarded as elements of the finite field (also called Galois field) GF(2^m),¹ and encoded into a codeword (c_{n−1}, ..., c_1, c_0) of n > k bytes. These codeword symbols are transmitted over the communication channel (or stored in memory).
For Reed-Solomon codes over GF(2^m), the blocklength n = 2^m − 1 is odd, and the code can correct t = (n − k)/2 byte errors. The encoding process is best described in terms of the data polynomial D(z) = d_{k−1}·z^{k−1} + ... + d_1·z + d_0 being transformed into a codeword polynomial C(z) = c_{n−1}·z^{n−1} + ... + c_1·z + c_0. Codeword polynomials C(z) are polynomial multiples of G(z), the generator polynomial of the code, which is defined as

G(z) = ∏_{j=0}^{2t−1} (z − α^{b+j}),   (1)

where b is typically 0 or 1. However, other choices sometimes simplify the decoding process slightly. Since 2t consecutive powers of α are roots of G(z), and C(z) is a multiple of G(z), it follows that

C(α^{b+j}) = 0,  j = 0, 1, ..., 2t − 1,   (2)

for all codeword polynomials C(z). In fact, an arbitrary polynomial of degree less than n is a codeword polynomial if and only if it satisfies (2).
¹ Addition (and subtraction) in GF(2^m) is the bit-by-bit XOR of the bytes. The 2^m − 1 nonzero elements of GF(2^m) can also be regarded as the powers of a primitive element α (where α^{2^m−1} = 1), so that the product of field elements is α^i · α^j = α^{(i+j) mod (2^m−1)}.
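In software, the arithmetic described in the footnote is conveniently realized with log/antilog tables. The sketch below (reused by the later examples) assumes m = 8 and the primitive polynomial x⁸ + x⁴ + x³ + x² + 1 with α = x; these particular choices are illustrative, not dictated by the paper.

```python
# GF(2^8) arithmetic via log/antilog tables (alpha = x, encoded as 2).
PRIM = 0x11D                     # x^8 + x^4 + x^3 + x^2 + 1
EXP = [0] * 512                  # EXP[i] = alpha^i (table doubled to skip one mod)
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:                # reduce modulo the primitive polynomial
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):                # alpha^i * alpha^j = alpha^{(i+j) mod 255}
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_inv(a):                   # a^{-1} = alpha^{255 - log a}
    return EXP[255 - LOG[a]]

def gf_pow(a, n):                # works for negative n (Python's mod is non-negative)
    return EXP[(LOG[a] * n) % 255] if a else 0
```

Addition and subtraction are both simply `a ^ b`.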
A systematic encoding produces codewords that are comprised of data symbols followed by parity-check symbols, and is obtained as follows. Let Q(z) and P(z) denote the quotient and remainder, respectively, when the polynomial z^{n−k}·D(z) of degree n − 1 is divided by G(z) of degree n − k, so that z^{n−k}·D(z) = Q(z)·G(z) + P(z). Then C(z) = z^{n−k}·D(z) − P(z) is a multiple of G(z). Furthermore, since the lowest degree term in z^{n−k}·D(z) is d_0·z^{n−k} while P(z) is of degree at most n − k − 1, it follows that the codeword

C(z) = d_{k−1}·z^{n−1} + ... + d_0·z^{n−k} − p_{n−k−1}·z^{n−k−1} − ... − p_0

consists of the data symbols followed by the parity-check symbols.
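A direct software rendering of this division-based systematic encoder, reusing the GF(2⁸) helpers above (coefficient lists are highest degree first; the choice b = 0 is an assumption):

```python
def rs_generator_poly(two_t, b=0):
    # G(z) = prod_{j=0}^{2t-1} (z - alpha^{b+j}); '-' is '+' in GF(2^m)
    g = [1]
    for j in range(two_t):
        root, h = gf_pow(2, b + j), g
        g = [0] * (len(h) + 1)
        for i, hi in enumerate(h):        # multiply g(z) by (z + root)
            g[i] ^= hi                    # z * h(z) term
            g[i + 1] ^= gf_mul(hi, root)  # root * h(z) term
    return g                              # highest degree first

def rs_encode(data, two_t, b=0):
    # codeword = z^{n-k} D(z) - P(z): data symbols followed by the remainder
    g = rs_generator_poly(two_t, b)
    buf = list(data) + [0] * two_t
    for i in range(len(data)):            # long division by G(z); g[0] == 1
        coef = buf[i]
        if coef:
            for j in range(1, len(g)):
                buf[i + j] ^= gf_mul(g[j], coef)
    return list(data) + buf[len(data):]   # systematic codeword
```

By construction, every codeword this produces evaluates to zero at α^b, ..., α^{b+2t−1}, i.e., it satisfies (2).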
B. Decoding of Reed-Solomon Codes
Let C(z) denote the transmitted codeword polynomial and let R(z) denote the received word polynomial. The input to the decoder is R(z), and it assumes that R(z) = C(z) + E(z), where, if e > 0 errors have occurred during transmission, the error polynomial E(z) can be written as

E(z) = Y_1·z^{j_1} + Y_2·z^{j_2} + ... + Y_e·z^{j_e}.

It is conventional to say that the error values Y_1, Y_2, ..., Y_e have occurred at the error locations X_i = α^{j_i}, 1 ≤ i ≤ e. Note that the decoder does not know E(z); in fact, it does not even know the value of e. The decoder's task is to determine E(z) from its input R(z), and thus correct the errors by subtracting off E(z) from R(z). If e ≤ t, then such a calculation is always possible, that is, t or fewer errors can always be corrected.
The decoder begins its task of error correction by computing the syndrome values

s_i = R(α^{b+i}) = E(α^{b+i}),  i = 0, 1, ..., 2t − 1.   (3)

If all 2t syndrome values are zero, then R(z) is a codeword, and it is assumed that E(z) = 0, that is, no errors have occurred. Otherwise, the decoder knows that e > 0 and uses the syndrome polynomial S(z), which is defined to be S(z) = s_0 + s_1·z + ... + s_{2t−1}·z^{2t−1}, to calculate the error values and error locations. Define the error-locator polynomial Λ(z) of degree e and the error-evaluator polynomial Ω(z) of degree at most e − 1 to be

Λ(z) = ∏_{i=1}^{e} (1 − X_i·z),   (4)

Ω(z) = Σ_{i=1}^{e} Y_i·X_i^b · ∏_{l=1, l≠i}^{e} (1 − X_l·z).   (5)

These polynomials are related to S(z) through the key equation [1], [3]:

Ω(z) ≡ Λ(z)·S(z) mod z^{2t}.   (6)
Solving the key equation to determine both Λ(z) and Ω(z) from S(z) is the hardest part of the decoding process. The BM algorithm (to be described in Section III) and the eE algorithm can be used to solve (6). If e ≤ t, these algorithms find Λ(z) and Ω(z), but if e > t, then the algorithms almost always fail to find Λ(z) and Ω(z). Fortunately, such failures are usually easily detected.
Once Λ(z) and Ω(z) have been found, the decoder can find the error locations by checking whether Λ(α^{−j}) = 0 for each j, 0 ≤ j ≤ n − 1. Usually, the decoder computes the value of Λ(α^{−j}) just before the j-th received symbol r_j leaves the decoder circuit. This process is called a Chien search [1], [3]. If Λ(α^{−j}) = 0, then α^j is one of the error locations (say X_i). In other words, r_j is in error, and needs to be corrected before it leaves the decoder. The decoder can
calculate the error value Y_i to be subtracted from r_j via Forney's error value formula [3]:

Y_i = X_i^{−b} · [ Ω(z) / (z·Λ′(z)) ]_{z = X_i^{−1}} = α^{−bj} · [ Ω(z) / (z·Λ′(z)) ]_{z = α^{−j}},   (7)

where Λ′(z) denotes the formal derivative of Λ(z). Note that the formal derivative simplifies to Λ′(z) = λ_1 + λ_3·z² + λ_5·z⁴ + ... since we are considering codes over GF(2^m). Thus, z·Λ′(z) = λ_1·z + λ_3·z³ + ..., which is just the sum of the terms of odd degree in Λ(z). Hence, the value of z·Λ′(z) at z = α^{−j} can be found during the evaluation of Λ(z) at z = α^{−j} and does not require a separate computation. Note also that (7) can be simplified by choosing b = 0.
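The Chien search and (7) can be modeled in a few lines of software (this models the computation, not the CSEE hardware). The sketch reuses the GF(2⁸) helpers above and assumes exponent arithmetic modulo 2⁸ − 1; lam and omega are coefficient lists, lowest degree first, and may be the scalar multiples produced by the algorithms of Section III, since the common scalar cancels in the Forney ratio.

```python
def poly_eval(p, x):                      # p(x), coefficients lowest degree first
    acc = 0
    for c in reversed(p):
        acc = gf_mul(acc, x) ^ c
    return acc

def chien_forney(lam, omega, n, b=0):
    # returns {error location j: error value Y}, per the Chien search and (7)
    errors = {}
    odd = [c if i % 2 else 0 for i, c in enumerate(lam)]   # z*Lambda'(z) terms
    for j in range(n):
        z = gf_pow(2, -j)                 # z = alpha^{-j}
        if poly_eval(lam, z) == 0:        # alpha^j is an error location X_i
            y = gf_mul(poly_eval(omega, z), gf_inv(poly_eval(odd, z)))
            if b:
                y = gf_mul(y, gf_pow(2, -b * j))           # X_i^{-b} factor
            errors[j] = y
    return errors
```

Correcting the received word is then `r[n - 1 - j] ^= Y` for each entry, given the highest-degree-first symbol order used in the earlier encoder sketch.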
C. Reed-Solomon Decoder Structure
In summary, a Reed-Solomon decoder consists of three blocks:
the syndrome computation (SC) block,
the key-equation solver (KES) block, and
the Chien search and error evaluator (CSEE) block.
These blocks usually operate in pipelined mode in which the three blocks are separately and
simultaneously working on three successive received words. The SC block computes the syndromes
via (3) usually as the received word is entering the decoder. The syndromes are passed to
the KES block which solves (6) to determine the error locator and error evaluator polynomials.
These polynomials are then passed to the CSEE block which calculates the error locations and
error values via (7) and corrects the errors as the received word is being read out of the decoder.
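The SC block's function (not its hardware) amounts to 2t Horner evaluations of R(z), as in (3); a software model reusing the helpers above:

```python
def rs_syndromes(received, two_t, b=0):
    # s_i = R(alpha^{b+i}); received symbols are highest degree first
    synd = []
    for i in range(two_t):
        x, acc = gf_pow(2, b + i), 0
        for c in received:                 # Horner's rule
            acc = gf_mul(acc, x) ^ c
        synd.append(acc)
    return synd                            # [s_0, ..., s_{2t-1}]
```

As a sanity check, `all(s == 0 for s in rs_syndromes(rs_encode(msg, 2 * t), 2 * t))` holds for any message.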
The throughput bottleneck in Reed-Solomon decoders is in the KES block which solves (6); in contrast, the SC and CSEE blocks are relatively straightforward to implement. Hence, in this paper we focus on developing high-speed architectures for the KES block. As mentioned earlier, the key equation (6) can be solved via the eE algorithm (see [19] and [17] for implementations), or via the BM algorithm (see [5] for implementations). In this paper, we develop high-speed architectures for a reformulated version of the BM algorithm because we believe that this reformulated algorithm can be used to achieve much higher speeds than can be achieved by other implementations of the BM and eE algorithms. Furthermore, as we shall show in Section IV.B.4, these new architectures also have lower gate complexity and a simpler control structure than architectures based on the eE algorithm.
III. Existing Berlekamp-Massey (BM) Architectures
In this section, we give a brief description of different versions of the Berlekamp-Massey (BM) algorithm and then discuss a generic architecture, similar to that in the paper by Reed et al. [13], for implementation of the algorithm.
A. The Berlekamp-Massey Algorithm
The BM algorithm is an iterative procedure for solving (6). In the form originally proposed by Berlekamp [1], the algorithm begins with polynomials Λ(0, z) and Ω(0, z) and iteratively determines polynomials Λ(r, z) and Ω(r, z) satisfying the polynomial congruence

Ω(r, z) ≡ Λ(r, z)·S(z) mod z^r

for r = 1, 2, ..., 2t, and thus obtains a solution Λ(2t, z) and Ω(2t, z) to the key equation (6). Two "scratch" polynomials B(r, z) and H(r, z), with initial values B(0, z) and H(0, z), are used in the algorithm. For each successive value of r, the algorithm determines Λ(r, z) and B(r, z) from Λ(r − 1, z) and B(r − 1, z). Similarly, the algorithm determines Ω(r, z) and H(r, z) from Ω(r − 1, z) and H(r − 1, z). Since S(z) has degree 2t − 1, and the other polynomials can have degrees as large as t, the algorithm needs to store roughly 6t field elements. If each iteration is completed in one clock cycle, then 2t clock cycles are needed to find the error-locator and error-evaluator polynomials.
In recent years, most researchers have used the formulation of the BM algorithm given by Blahut [3] in which only Λ(r, z) and B(r, z) are computed iteratively. Following the completion of the 2t iterations, the error-evaluator polynomial Ω(2t, z) is computed as the terms of degree t − 1 or less in the polynomial product Λ(2t, z)·S(z). An implementation of this version thus needs to store only 4t field elements, but the computation of Ω(2t, z) requires an additional t clock cycles. Although this version of the BM algorithm trades space against time, it also suffers from the same problem as the Berlekamp version, viz. during some of the iterations, it is necessary to divide each coefficient of Λ(r, z) by a quantity δ_r. These divisions are most efficiently handled by first computing δ_r^{−1}, the inverse of δ_r, and then multiplying each coefficient of Λ(r, z) by δ_r^{−1}. Unfortunately, regardless of whether this method is used or whether one constructs separate divider circuits for each coefficient of Λ(r, z), these divisions, which occur inside an iterative loop, are more time-consuming than multiplications. Obviously, if these divisions could be replaced by multiplications, the resulting circuit implementation would have a smaller critical path delay and higher clock speeds would be usable.² A less well-known version of the BM algorithm [4], [13] has precisely this property, and has been recently employed in practice [13], [5]. We focus on this version of the BM algorithm in this paper.
The inversionless BM (iBM) algorithm is described by the pseudocode shown below. The iBM algorithm actually finds scalar multiples λ(z) and ω(z) of the Λ(z) and Ω(z) defined in (4) and (5). However, it is obvious that the Chien search will find the same error locations, and it follows from (7) that the same error values are obtained. Hence, we continue to refer to the polynomials computed by the iBM algorithm as Λ(z) and Ω(z). As a minor implementation detail, λ_0 = 1 in (4) and thus requires no latches for storage, but the iBM algorithm must store λ_0(r). Note also that b_{−1}(r), which occurs in Steps iBM.2 and iBM.3, is a constant: it has value 0 for all r.
The iBM Algorithm
Initialization:
  λ_0(0) = b_0(0) = 1, λ_i(0) = b_i(0) = 0 for i = 1, 2, ..., t.
  k(0) = 0, γ(0) = 1.
Input: s_i, i = 0, 1, ..., 2t − 1.
for r = 0 step 1 while r < 2t do
begin
  Step iBM.1  δ(r) = s_r·λ_0(r) + s_{r−1}·λ_1(r) + ... + s_{r−t}·λ_t(r)
  Step iBM.2  λ_i(r + 1) = γ(r)·λ_i(r) − δ(r)·b_{i−1}(r)   (i = 0, 1, ..., t)
  Step iBM.3  if δ(r) ≠ 0 and k(r) ≥ 0
              then
              begin
                b_i(r + 1) = λ_i(r)   (i = 0, 1, ..., t)
                γ(r + 1) = δ(r);  k(r + 1) = −k(r) − 1
              end
              else
              begin
                b_i(r + 1) = b_{i−1}(r)   (i = 0, 1, ..., t)
                γ(r + 1) = γ(r);  k(r + 1) = k(r) + 1
              end
end
Step iBM.4  ω_i(2t) = s_i·λ_0(2t) + s_{i−1}·λ_1(2t) + ... + s_{i−t}·λ_t(2t)   (i = 0, 1, ..., t − 1)

² The astute reader will have noticed that the Forney error value formula (7) also involves a division. Fortunately, these divisions can be pipelined because they are feed-forward computations. Similarly, the polynomial evaluations needed in the CSEE block (as well as those in the SC block) are feed-forward computations that can be pipelined. Unfortunately, the divisions in the KES block occur inside an iterative loop, and hence pipelining the computation becomes difficult. Thus, as was noted in Section II, the throughput bottleneck is in the KES block.

For r < t, Step iBM.1 includes terms s_{−1}·λ_{r+1}(r), s_{−2}·λ_{r+2}(r), ..., involving unknown quantities s_{−1}, s_{−2}, ..., s_{r−t}. Fortunately, it is known [3] that deg Λ(r, z) ≤ r, so that λ_{r+1}(r) = λ_{r+2}(r) = ... = λ_t(r) = 0, and therefore the unknown s_i do not affect the value of δ(r). Notice also the similarity between Steps iBM.1 and iBM.4. These facts have been used to simplify the architecture that we describe next.
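The pseudocode translates almost line for line into software. The following sketch returns the scalar multiples of Λ(z) and Ω(z) as coefficient lists (lowest degree first), reusing gf_mul from the earlier example:

```python
def ibm(synd, t):
    lam = [1] + [0] * t                   # lambda_i(r)
    b = [1] + [0] * t                     # b_i(r); b_{-1}(r) is the constant 0
    k, gamma = 0, 1
    for r in range(2 * t):
        # Step iBM.1: delta(r) = sum_i s_{r-i} * lambda_i(r); unknown s_{-1},... vanish
        delta = 0
        for i in range(min(r, t) + 1):
            delta ^= gf_mul(synd[r - i], lam[i])
        # Step iBM.2 (subtraction is XOR in GF(2^m))
        new_lam = [gf_mul(gamma, lam[i]) ^ gf_mul(delta, b[i - 1] if i else 0)
                   for i in range(t + 1)]
        # Step iBM.3
        if delta and k >= 0:
            b, gamma, k = lam[:], delta, -k - 1
        else:
            b, k = [0] + b[:-1], k + 1    # b_i(r+1) = b_{i-1}(r)
        lam = new_lam
    # Step iBM.4: omega_i(2t), i = 0..t-1
    omega = [0] * t
    for i in range(t):
        for j in range(i + 1):
            omega[i] ^= gf_mul(synd[i - j], lam[j])
    return lam, omega
```

Chaining rs_syndromes, ibm, and chien_forney yields a complete software decoder for up to t errors.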
B. Architectures Based on the iBM Algorithm
Due to the similarity of Steps iBM.1 and iBM.4, architectures based on the iBM algorithm need only two major computational structures, as shown in Fig. 1:
The discrepancy computation (DC) block for implementing Step iBM.1, and
The error locator update (ELU) block, which implements Steps iBM.2 and iBM.3 in parallel.
The DC block contains latches for storing the syndromes s_i, the GF(2^m) arithmetic units for computing the discrepancy δ(r), and the control unit for the entire architecture. It is connected to the ELU block, which contains latches for storing Λ(r, z) and B(r, z) as well as GF(2^m) arithmetic units for updating these polynomials, as shown in Fig. 1. During a clock cycle, the DC block computes the discrepancy δ(r) and passes this value together with γ(r) and a control signal MC(r) to the ELU block, which updates the polynomials during the same clock cycle. To ensure that all GF(2^m) operations are completed in one clock cycle, we assume that m-bit parallel arithmetic units are being employed. Architectures for such Galois field arithmetic units can be found in numerous references, including [7], and will not be discussed here.
Fig. 1. The iBM architecture.
B.1 DC Block Architecture
The DC block architecture shown in Fig. 2 has 2t latches constituting the DS shift register; these are initialized such that the latches DS_0, DS_1, ..., DS_{2t−1} contain the syndromes s_0, s_1, ..., s_{2t−1}. In each of the first 2t clock cycles, the t + 1 multipliers compute the products in Step iBM.1. These are added in a binary adder tree of depth ⌈log₂(t + 1)⌉ to produce the discrepancy δ(r). Thus, the delay in computing δ(r) is T_mult + ⌈log₂(t + 1)⌉·T_add.
A typical control unit such as the one illustrated in Fig. 2 has counters for the variables r and k(r), and storage for γ(r). Following the computation of δ(r), the control unit computes the OR of the m bits in δ(r) to determine whether δ(r) is nonzero. This requires m − 1 two-input OR gates arranged in a binary tree of depth ⌈log₂ m⌉. If the counter for k(r) is implemented in twos-complement representation, then k(r) ≥ 0 if and only if the most significant bit in the counter is 0. The delay in generating the signal MC(r) is thus ⌈log₂ m⌉·T_or + T_and.
Finally, once the MC(r) signal is available, the counter for k(r) can be updated. Notice that a twos-complement arithmetic addition is needed if k(r + 1) = k(r) + 1. On the other hand, negation in twos-complement representation complements all the bits and then adds 1, and hence the update k(r + 1) = −k(r) − 1 requires only the complementation of all the bits in the k(r) counter. We note that it is possible to use ring counters for r and k(r), in which case k(r) is
Fig. 2. The discrepancy computation (DC) block.
updated just T_mux seconds after the MC(r) signal has been computed.
Following the 2t clock cycles for the BM algorithm, the DC block computes the error-evaluator polynomial Ω(z) in the next t clock cycles. To achieve this, the DS_t, DS_{t+1}, ..., DS_{2t−1} latches are reset to zero during the 2t-th clock cycle, so that, at the beginning of the (2t + 1)-st clock cycle, the contents of the DS register (see Fig. 2) are s_0, s_1, ..., s_{t−1} together with zeroes. Also, the outputs of the ELU block are frozen so that these do not change during the computation of Ω(z). From Step iBM.4, it follows that the "discrepancies" computed during the next t clock cycles are just the coefficients ω_0(2t), ω_1(2t), ..., ω_{t−1}(2t) of Ω(z). The architecture in Fig. 2 is an enhanced version of the one described in [13]. The latter uses a slightly different structure and different initialization of the DS register in the DC block, which requires more storage and makes it less adaptable to the subsequent computation of the error-evaluator polynomial.
Note that the total hardware requirements of the DC block are 2t m-bit latches, t + 1 multipliers, t adders, and miscellaneous other circuitry (counters, arithmetic adder or ring counter, gates, inverters and latches) in the control unit. From Fig. 2, the critical path delay of the DC block is

T_DC = T_mult + ⌈log₂(t + 1)⌉·T_add + ⌈log₂ m⌉·T_or + T_and.
B.2 ELU Block Architecture
Following the computation of the discrepancy δ(r) and the MC(r) signal in the DC block, the polynomial coefficient updates of Steps iBM.2 and iBM.3 are performed simultaneously in the ELU block. The processor element PE0 (hereinafter the PE0 processor) that updates one coefficient of Λ(z) and B(z) is illustrated in Fig. 3(a). The complete ELU architecture is shown in Fig. 3(b), where we see that the signals δ(r), γ(r) and MC(r) are broadcast to all the PE0 processors. In addition, the latches in all the PE0 processors are initialized to zero except for PE0_0, which has its latches initialized to the element 1 ∈ GF(2^m). In all, 2(t + 1) latches and multipliers, and t + 1 adders and multiplexers, are needed. The critical path delay of the ELU block is given by

T_ELU = T_mult + T_add.

B.3 iBM Architecture
Ignoring the hardware used in the control section, the total hardware needed to implement the iBM algorithm is 4t + 2 latches, 3t + 3 multipliers, 2t + 1 adders, and t + 1 multiplexers. The total time required to solve the key equation for one codeword is 3t clock cycles. Alternatively, if Ω(2t, z) is computed iteratively, the computations require only 2t clock cycles. However, since the computations required to update Ω(r, z) are the same as those for Λ(r, z), a near-duplicate of the ELU block is needed.³ This increases the hardware requirements to 6t + 2 latches, 5t + 3 multipliers, 3t + 1 adders, and 2t + 1 multiplexers.
³ Since deg Ω(2t, z) < t, the array has only t PE0 processors.
Fig. 3. The ELU block diagram: (a) the PE0 processor, and (b) the ELU architecture. The latches in PE0_0 are initialized to 1 ∈ GF(2^m); those in the other PE0s are initialized to 0.
In either case, the critical path delay of the iBM architecture can be obtained from Figs. 1, 2, and 3 as

T_iBM = 2·T_mult + (1 + ⌈log₂(t + 1)⌉)·T_add,

which is the delay of the direct path that begins in the DC block starting from the DS_i latches, through a multiplier, an adder tree of height ⌈log₂(t + 1)⌉ (generating the signal δ(r)), feeding into the ELU block multiplier and adder before being latched. We have assumed that the indirect path taken by δ(r) through the control unit (generating the signal MC(r)) and feeding into the ELU block multiplexer is faster than the direct path, i.e., T_mult > ⌈log₂ m⌉·T_or + T_and. This is a reasonable assumption in most technologies. Note that more than half of T_iBM is due to the delay in the DC block, and that this contribution increases logarithmically with the error correction capability. Thus, reducing the delay in the DC block is the key to achieving higher speeds.
In the next section, we describe algorithmic reformulations of the iBM algorithm that lead to a systolic architecture for the DC block and reduce its critical path delay to T_ELU.
IV. Proposed Reed-Solomon Decoder Architectures
The critical path in iBM architectures of the type described in Section III passes through two multipliers as well as the adder tree structure in the DC block. The multiplier units contribute significantly to the critical path delay and hence reduce the throughput achievable with the iBM architecture. In this section, we propose new decoder architectures that have a smaller critical path delay. These architectures are derived via algorithmic reformulation of the iBM algorithm. This reformulated iBM (riBM) algorithm computes the next discrepancy δ(r + 1) at the same time that it is computing the current polynomial coefficient updates, that is, the λ_i(r + 1)'s and the b_i(r + 1)'s. This is possible because the reformulated discrepancy computation does not use the λ_i(r + 1)'s explicitly. Furthermore, the discrepancy is computed in a block which has the same structure as the ELU block, so that both blocks have the same critical path delay T_ELU.
the same structure as the ELU block, so that both blocks have the same critical path delay
A. Reformulation of the iBM Algorithm
A.1 Simultaneous Computation of Discrepancies and Updates
Viewing Steps iBM.2 and iBM.3 in terms of polynomials, we see that Step iBM.2 computes
while Step iBM.3 sets B(r+1; z) either to (r; z) or to zB(r; z). Next, note that the discrepancy
-(r) computed in Step iBM.1 is actually - r (r), the coe-cient of z r in the polynomial product
Much faster implementations are possible if the decoder computes all the coe-cients of (r; z)
(and of (r; even though only - r (r) is needed to compute (r and to
decide whether B(r is to be set to (r; z) or to z B(r; z).
Suppose that at the beginning of a clock cycle, the decoder has available to it all the coe-cients
of (r; z) and (r; z) (and, of course, of (r; z) and B(r; z) as well.) Thus,
available at the beginning of the clock cycle, and the decoder can compute (r
DRAFT June 19, 2000
Furthermore, it follows from (10) and (11) that
set to either (r; or to z (r;
z B(r; z) S(z). In short, (r are computed in exactly the same manner
as are (r Furthermore, all four polynomial updates can be computed
simultaneously, and all the polynomial coe-cients as well as - r+1 (r + 1) are thus available at the
beginning of the next clock cycle.
A.2 A New Error-Evaluator Polynomial
The riBM algorithm simultaneously updates the four polynomials Λ(r, z), B(r, z), Δ(r, z), and Θ(r, z), with initial values Δ(0, z) = Θ(0, z) = S(z). The 2t iterations thus produce the error-locator polynomial Λ(2t, z) and also the polynomial Δ(2t, z). Note that since Ω(2t, z) ≡ Λ(2t, z)·S(z) mod z^{2t}, it follows from (11) that the low-order coefficients of Δ(2t, z) are just Ω(2t, z); that is, the 2t iterations compute both the error-locator polynomial Λ(2t, z) and the error-evaluator polynomial Ω(2t, z), so the additional t iterations of Step iBM.4 are not needed. The high-order coefficients of Δ(2t, z) can also be used for error evaluation. Let

Δ(2t, z) = Ω(2t, z) + z^{2t}·Ω^{(h)}(z),

where Ω^{(h)}(z), of degree at most e − 1, contains the high-order terms of Δ(2t, z). Since X_i^{−1} is a root of Λ(2t, z), it follows from (11) that Δ(2t, X_i^{−1}) = 0, and hence Ω(2t, X_i^{−1}) = X_i^{−2t}·Ω^{(h)}(X_i^{−1}). Thus, (7) can be re-written as

Y_i = X_i^{−(b+2t)} · [ Ω^{(h)}(z) / (z·Λ′(z)) ]_{z = X_i^{−1}}.   (12)

We next show that this variation of the error evaluation formula has certain architectural advantages. Note that the choice b = 2^m − 1 − 2t is preferable if (12) is to be used.
A.3 Further Reformulation
Since the updating of all four polynomials is identical, the discrepancies can be calculated using an ELU block like the one described in Section III. Unfortunately, for r = 0, 1, ..., 2t − 1, the discrepancy δ_r(r) is computed in processor PE0_r. Thus, multiplexers are needed to route the appropriate latch contents to the control unit and to the ELU block that computes Λ(r + 1, z) and B(r + 1, z). Additional reformulation of the iBM algorithm, as described next, eliminates these multiplexers. We use the fact that for any i < r, δ_i(r) does not affect the value of any later discrepancy δ_{r+j}(r + j). Consequently, we need not store δ_i(r) and θ_i(r) for i < r. Thus, for r = 0, 1, ..., 2t, define the polynomial δ̂(r, z) with coefficients δ̂_i(r) = δ_{i+r}(r), and the polynomial θ̂(r, z) with coefficients θ̂_i(r) = θ_{i+r}(r), with initial values δ̂_i(0) = θ̂_i(0) = s_i. It follows that these polynomial coefficients are updated as

δ̂_i(r + 1) = γ(r)·δ̂_{i+1}(r) − δ̂_0(r)·θ̂_i(r),

while θ̂_i(r + 1) is set either to δ_{i+1+r}(r) = δ̂_{i+1}(r) or to θ_{i+r}(r) = θ̂_i(r). Note that the discrepancy δ_r(r) = δ̂_0(r) is always in a fixed (zero-th) position with this form of update. As a final comment, note that this form of update ultimately produces δ̂_i(2t) = ω_i^{(h)}, and thus (12) can be used for error evaluation in the CSEE block.
The riBM algorithm is described by the following pseudocode. Note that b_{−1}(r) = 0 and δ̂_{2t}(r) = 0 for all values of r, and these quantities do not need to be stored or updated.
The riBM Algorithm
Initialization:
  λ_0(0) = b_0(0) = 1, λ_i(0) = b_i(0) = 0 for i = 1, 2, ..., t.
  δ̂_i(0) = θ̂_i(0) = s_i for i = 0, 1, ..., 2t − 1.
  k(0) = 0, γ(0) = 1.
for r = 0 step 1 while r < 2t do
begin
  Step riBM.1  λ_i(r + 1) = γ(r)·λ_i(r) − δ̂_0(r)·b_{i−1}(r)   (i = 0, 1, ..., t)
               δ̂_i(r + 1) = γ(r)·δ̂_{i+1}(r) − δ̂_0(r)·θ̂_i(r)   (i = 0, 1, ..., 2t − 1)
  Step riBM.2  if δ̂_0(r) ≠ 0 and k(r) ≥ 0
               then
               begin
                 b_i(r + 1) = λ_i(r);  θ̂_i(r + 1) = δ̂_{i+1}(r)
                 γ(r + 1) = δ̂_0(r);  k(r + 1) = −k(r) − 1
               end
               else
               begin
                 b_i(r + 1) = b_{i−1}(r);  θ̂_i(r + 1) = θ̂_i(r)
                 γ(r + 1) = γ(r);  k(r + 1) = k(r) + 1
               end
end
Output: λ_i(2t) for i = 0, 1, ..., t, and δ̂_i(2t) = ω_i^{(h)} for i = 0, 1, ..., t − 1.

Next, we consider architectures that implement the riBM algorithm.
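A software rendering of the riBM recursion (again reusing gf_mul, and again a sketch rather than the hardware): it produces the same λ_i(2t) as the iBM sketch above, since δ̂_0(r) equals the iBM discrepancy δ(r), plus the Ω^{(h)}(z) coefficients used by (12).

```python
def ribm(synd, t):
    dh = synd[:] + [0]                    # hat-delta_i(r), with hat-delta_{2t}(r) = 0
    th = synd[:] + [0]                    # hat-theta_i(r)
    lam = [1] + [0] * t
    b = [1] + [0] * t
    k, gamma = 0, 1
    for r in range(2 * t):
        d0 = dh[0]                        # the discrepancy, always in position zero
        new_lam = [gf_mul(gamma, lam[i]) ^ gf_mul(d0, b[i - 1] if i else 0)
                   for i in range(t + 1)]
        new_dh = [gf_mul(gamma, dh[i + 1]) ^ gf_mul(d0, th[i])
                  for i in range(2 * t)] + [0]
        if d0 and k >= 0:
            th, b, gamma, k = dh[1:] + [0], lam[:], d0, -k - 1
        else:
            k += 1
        lam, dh = new_lam, new_dh
    return lam, dh[:t]                    # lambda(2t) and Omega^(h) coefficients
```

With (12), the error values become Y_i = X_i^{−(b+2t)}·Ω^{(h)}(z)/(z·Λ′(z)) at z = X_i^{−1}, so the chien_forney sketch needs only the extra X_i^{−2t} factor; the common scalar on λ and Ω^{(h)} again cancels in the ratio.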
B. High-speed Reed-Solomon Decoder Architectures
As in the iBM architecture described in Section III, the riBM architecture consists of a reformulated discrepancy computation (rDC) block connected to an ELU block.
B.1 The rDC Architecture
The rDC block uses the processor PE1 shown in Fig. 4(a) and the rDC architecture shown in Fig. 4(b). Notice that processor PE1 is very similar to processor PE0 of Fig. 3(a). However, the contents of the upper latch "flow through" PE1 while the contents of the lower latch "recirculate". In contrast, the lower latch contents "flow through" in processor PE0 while the contents of the upper latch "recirculate". Obviously, the hardware complexity and the critical path delays of processors PE0 and PE1 are identical. Thus, assuming as before that T_mult > ⌈log₂ m⌉·T_or + T_and, we get that T_rDC = T_mult + T_add. Note that the delay is independent of the error-correction capability t of the code.
The hardware requirements of the proposed architecture in Fig. 4 are 2t PE1 processors, that is, 4t latches, 4t multipliers, 2t adders, and 2t multiplexers, in addition to the control unit, which is the same as that in Fig. 2.
Fig. 4. The rDC block diagram: (a) the PE1 processor, and (b) the rDC architecture.
B.2 The riBM Architecture
The overall riBM architecture is shown in Fig. 5. It uses the rDC block of Fig. 4 and the ELU block of Fig. 3. Note that the outputs of the ELU block do not feed back into the rDC block. Both blocks have the same critical path delay of T_mult + T_add, and since they operate in parallel, our proposed riBM architecture achieves the same critical path delay:

T_riBM = T_mult + T_add,

which is less than half the delay T_iBM = 2·T_mult + (1 + ⌈log₂(t + 1)⌉)·T_add of the enhanced iBM architecture.
As noted in the previous subsection, at the end of the 2t-th iteration, the processors PE1_i, 0 ≤ i ≤ t − 1, contain the coefficients of Δ̂(2t, z), which can be used for error evaluation. Thus, 2t clock cycles are used to determine both Λ(z) and Ω^{(h)}(z) as needed in (12). Ignoring the control unit, the hardware requirement of this architecture is 3t + 1 processors, that is, 6t + 2 latches, 6t + 2 multipliers, 3t + 1 adders, and 3t + 1 multiplexers.
Fig. 5. The systolic riBM architecture.
This compares very favorably with the 6t + 2 latches, 5t + 3 multipliers, 3t + 1 adders, and 2t + 1 multiplexers needed to implement the enhanced iBM architecture of Section III, in which both the error-locator and the error-evaluator polynomials are computed in 2t clock cycles. Using only t − 1 additional multipliers and t additional multiplexers, we have reduced the critical path delay by more than 50%. Furthermore, the riBM architecture consists of two systolic arrays and is thus very regular.
B.3 The RiBM Architecture
We now show that it is possible to eliminate the ELU block entirely, and to implement the BM algorithm in an enhanced rDC block in which the array of 2t PE1 processors has been lengthened into an array of 3t + 1 PE1 processors, as shown in Fig. 6. In this completely systolic architecture, a single array computes both Λ(z) and Ω^{(h)}(z). Since the t + 1 processors eliminated from the ELU block re-appear as the t + 1 additional PE1 processors, the RiBM architecture has the same hardware complexity and critical path delay as the riBM architecture. However, its extremely regular structure is esthetically pleasing, and also offers some advantage in VLSI circuit layouts.
Fig. 6. The homogenous systolic RiBM architecture.
An array of PE0 processors in the riBM architecture (see Fig. 5) carries out the same polynomial computation as an array of PE1 processors in the RiBM architecture (see Fig. 6), but in the latter array, the polynomial coefficients shift left with each clock pulse. Thus, in the RiBM architecture, suppose that the initial loading of PE1_0, PE1_1, ..., PE1_{2t−1} is as in Fig. 4, while PE1_{2t}, ..., PE1_{3t−1} are loaded with zeroes, and the latches in PE1_{3t} are loaded with 1 ∈ GF(2^m). Then, as the iterations proceed, the polynomials δ̂(r, z) and θ̂(r, z) are updated in the processors in the left-hand end of the array (effectively, Δ(r, z) and Θ(r, z) get updated and shifted leftwards). After 2t clock cycles, the coefficients of Ω^{(h)}(z) are in processors PE1_0, PE1_1, ..., PE1_{t−1}. Next, note that PE1_{3t} contains Λ(0, z) and B(0, z), and as the iterations proceed, Λ(r, z) and B(r, z) shift leftwards through the processors in the right-hand end of the array, with λ_i(r) and b_i(r) being stored in processor PE1_{3t−r+i}. After 2t clock cycles, processor PE1_{t+i} contains λ_i(2t) and b_i(2t) for 0 ≤ i ≤ t. Thus, the same array is carrying out two separate computations. These computations do not interfere with one another. Polynomials Λ(r, z) and B(r, z) are stored in processors numbered 3t − r or higher. On the other hand, since deg Δ(r, z) ≤ l(r) + 2t − 1, where l(r) is known to be an upper bound on deg Λ(r, z), and it is known [3] that l(r) is a nondecreasing function of r with maximum value t if at most t errors have occurred, we have deg δ̂(r, z) ≤ 2t − 1 − r + l(r) ≤ 3t − 1 − r < 3t − r; thus, as Λ(r, z) and B(r, z) shift leftwards, they do not over-write the coefficients of δ̂(r, z) and θ̂(r, z).
We denote the contents of the array in the RiBM architecture as polynomials δ̃(r, z) and θ̃(r, z), with initial values δ̃(0, z) = θ̃(0, z) = S(z) + z^{3t}. Then, the RiBM architecture implements the following pseudocode. Note that δ̃_{3t+1}(r) = 0 for all values of r, and this quantity does not need to be stored or updated.
The RiBM Algorithm
Initialization:
  δ̃_i(0) = θ̃_i(0) = s_i for i = 0, 1, ..., 2t − 1.
  δ̃_i(0) = θ̃_i(0) = 0 for i = 2t, ..., 3t − 1.
  δ̃_{3t}(0) = θ̃_{3t}(0) = 1.
  k(0) = 0, γ(0) = 1.
for r = 0 step 1 while r < 2t do
begin
  Step RiBM.1  δ̃_i(r + 1) = γ(r)·δ̃_{i+1}(r) − δ̃_0(r)·θ̃_i(r)   (i = 0, 1, ..., 3t)
  Step RiBM.2  if δ̃_0(r) ≠ 0 and k(r) ≥ 0
               then
               begin
                 θ̃_i(r + 1) = δ̃_{i+1}(r)   (i = 0, 1, ..., 3t)
                 γ(r + 1) = δ̃_0(r);  k(r + 1) = −k(r) − 1
               end
               else
               begin
                 θ̃_i(r + 1) = θ̃_i(r)   (i = 0, 1, ..., 3t)
                 γ(r + 1) = γ(r);  k(r + 1) = k(r) + 1
               end
end
Output: λ_i(2t) = δ̃_{t+i}(2t) for 0 ≤ i ≤ t, and ω_i^{(h)} = δ̃_i(2t) for 0 ≤ i ≤ t − 1.
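The homogeneity is easiest to appreciate in software: a single pair of length-(3t + 1) arrays replaces all of the per-polynomial state. The following is again a sketch reusing gf_mul, not a hardware model:

```python
def ribm_single_array(synd, t):
    # tilde-delta / tilde-theta over 3t+1 positions; tilde-delta_{3t+1}(r) = 0
    d = synd[:] + [0] * t + [1]
    th = synd[:] + [0] * t + [1]
    k, gamma = 0, 1
    for r in range(2 * t):
        d0 = d[0]
        new_d = [gf_mul(gamma, d[i + 1]) ^ gf_mul(d0, th[i]) for i in range(3 * t)]
        new_d.append(gf_mul(d0, th[3 * t]))     # i = 3t term uses d[3t+1] = 0
        if d0 and k >= 0:
            th, gamma, k = d[1:] + [0], d0, -k - 1
        else:
            k += 1
        d = new_d
    return d[t:2 * t + 1], d[:t]                # lambda_i(2t) and Omega^(h)
```

For a single error, one can check by hand that the returned λ is a scalar multiple of (1 − X_i·z) and that the Ω^{(h)} coefficients, used in (12), recover the error value exactly.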
B.4 Comparison of Architectures
Table I summarizes the complexity of the various architectures described so far. It can be seen that, in comparison to the conventional iBM architecture (Berlekamp's version), the proposed riBM and RiBM systolic architectures require t − 1 more multipliers and t more multiplexers. All three architectures require the same numbers of latches and adders, and all three architectures require 2t cycles to solve the key equation for a t-error-correcting code. The riBM and RiBM architectures require considerably more gates than the conventional iBM architecture (Blahut's version), but also require only 2t clock cycles as compared to the 3t clock cycles required by the latter. Furthermore, since the critical path delay in the riBM and RiBM architectures is less than half the critical path delay in either of the iBM architectures, we conclude that the new architectures significantly reduce the total time required to solve the key equation (and thus achieve higher throughput) with only a modest increase in gate count. More important, the regularity and scalability of the riBM and RiBM architectures create the potential for automatically generating regular layouts (via a core generator) with predictable delays for various values of t and m.
Comparison of the riBM and RiBM architectures with eE architectures is complicated by the fact that most recent implementations use folded architectures in which each processor element in the systolic array has only a few arithmetic units, and these units carry out all the needed computations via time-division-multiplexing. For example, the hypersystolic eE architecture in [2] has 2t cells, each containing only one multiplier and one adder. Since each iteration of the Euclidean algorithm requires 4 multiplications, the processors of [2] need several multiplexers to route the various operands to the arithmetic units, and additional latches to store one addend until the other addend has been computed by the multiplier, etc. As a result, the architecture described in [2] requires not only many more latches and multiplexers, but also many more clock cycles than the riBM and RiBM architectures. Furthermore, the critical path delay is slightly larger because of the multiplexers in the various paths. On the other hand, finite-field multipliers themselves consist of large numbers of gates (possibly as many as 2m², but fewer if logic minimization techniques are used), and thus a complete comparison of gate counts for the two architectures requires very specific details about the multipliers. Nonetheless, a rough
TABLE I
Comparison of Hardware Complexity and Path Delays
(The table lists the adders, multipliers, latches, muxes, clock cycles, and critical path delay of the iBM, riBM/RiBM, and Euclidean architectures, including the folded Euclidean architecture of [2].)
comparison is that the riBM and RiBM architectures require three times as many gates as the
hypersystolic eE architecture, but solve the key equation in one-sixth the time.
It is, of course, possible to implement the eE algorithm with more complex processor elements, as described by Shao et al. [14]. Here, the 4 multiplications in each processor are computed using 4 separate multipliers. The architecture described in [14] uses only 2t + 1 processors as compared to the 3t + 1 processors needed in the riBM and RiBM architectures, but each processor in [14] has 4 multipliers, 4 multiplexers, and 2 adders. As a result, the riBM and RiBM architectures compare very favorably to the eE architecture of [14]: the new architectures achieve the same (actually slightly higher) throughput with much smaller complexity.
One final point to be made with respect to the comparison between the riBM and RiBM architectures and the eE architectures is that the controllers for the systolic arrays in the former are actually much simpler. In the eE architecture of [14], each processor also has a "control section" that uses an arithmetic adder, comparator, and two multiplexers. 2⌈log₂ t⌉ bits of arithmetic data are passed from processor to processor in the array, and these are used to generate multiplexer control signals in each processor. Similarly, the eE architecture of [2] has a separate control circuit for each processor. The delays in these control circuits are not accounted for in the critical path delays for the eE architectures that we have listed in Table I. In contrast, all the multiplexers in the riBM and RiBM architectures receive the same signal, and the computations in these architectures are purely systolic in the sense that all processors carry out exactly the same computation in each cycle, with all the multiplexers set the same way in all the processors; there are no cell-specific control signals.
Preliminary Layout Results
Preliminary layout results from a core generator are shown in Fig. 7 for the KES block for a 4-error-correcting Reed-Solomon code over GF(2⁸). The processing element PE1 is shown in Fig. 7(a), where the upper 8 latches store the element δ̃_i while the lower 8 latches store the element θ̃_i. A complete RiBM architecture is shown in Fig. 7(b), where the 13 PE1 processing elements are arrayed diagonally, and the error-locator and error-evaluator polynomial output latches can be seen to be arrayed vertically. The critical path delay of the RiBM architecture as reported by the synthesis tool in SYNOPSYS was 2.13 ns in TSMC's 0.25 μm, 3.3 V CMOS technology.
Fig. 7. The RiBM architecture synthesized in a 3.3 V, 0.25 μm CMOS technology: (a) the PE1 processing element, and (b) the RiBM architecture.
In the next section, we develop a pipelined architecture that further reduces the critical path delay by as much as an order of magnitude by using a block-interleaved code.
V. Pipelined Reed-Solomon Decoders
The iterations in the original BM algorithm were pipelined using the look-ahead transformation [12] by Liu et al. [9], and the same method can be applied to the riBM and RiBM algorithms. However, such pipelining requires complex overhead and control hardware. On the other hand, pipeline interleaving (also described in [12]) of a decoder for a block-interleaved Reed-Solomon code is a simple and efficient technique that can reduce the critical path delay in the decoder by an order of magnitude. We describe our results for only the RiBM architecture of Section IV, but the same techniques can also be applied to the riBM architecture as well as to the decoder architectures described in Section III.
A. Block-Interleaved Reed-Solomon Codes
A.1 Block Interleaving
Error-correcting codes for use on channels in which errors occur in bursts are often interleaved so that symbols from the same codeword are not transmitted consecutively. A burst of errors thus causes single errors in multiple codewords rather than multiple errors in a single codeword. The latter occurrence is undesirable since it can easily overwhelm the error-correcting capabilities of the code and cause a decoder failure or decoder error. Two types of interleavers, block interleavers and convolutional interleavers, are commonly used (see, e.g., [16], [18]). We restrict our attention to block-interleaved codes.
Block-interleaving an (n, k) code to depth M results in an (nM, kM) interleaved code whose codewords have the property that (c_{(n−1)M+i}, c_{(n−2)M+i}, ..., c_{M+i}, c_i) is a codeword in the (n, k) code for each i, 0 ≤ i ≤ M − 1. Equivalently, a codeword of the (nM, kM) code is a multichannel data stream in which each of the M channels carries a codeword of the (n, k) code.
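The index pattern in this definition is just a column-wise read of an M × n array; in software:

```python
def block_interleave(codewords):
    # codewords: M rows of length n; output symbol j*M + i is codewords[i][j],
    # so the symbols of any one codeword sit M positions apart
    M, n = len(codewords), len(codewords[0])
    return [codewords[i][j] for j in range(n) for i in range(M)]

def block_deinterleave(stream, M):
    n = len(stream) // M
    return [[stream[j * M + i] for j in range(n)] for i in range(M)]
```

A burst of up to M consecutive channel errors then lands in M distinct codewords, one symbol each.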
A.2 Interleaving via Memory Arrays
The usual description (see, e.g., [16], [18]) of an encoder for the block-interleaved (nM, kM) code involves partitioning kM data symbols (d_{kM−1}, ..., d_1, d_0) into M blocks of k consecutive symbols, and encoding each block into a codeword of the (n, k) code. The M codewords are stored row-wise into an M × n memory array. The memory is then read out column-wise to form the block-interleaved codeword. Notice that the block-interleaved codeword is systematic in the sense that the parity-check symbols follow the data symbols, but the Reed-Solomon encoding process described in Section II-A results in a block-interleaved codeword in which the data symbols are not transmitted over the channel in the order in which they entered the encoder.⁴ At the receiver, the interleaving process is reversed by storing the nM received symbols column-wise into an M × n memory array. The memory is then read out row-wise to form M received words of length n that can be decoded by a decoder for the (n, k) code. The information symbols appear in the correct order in the de-interleaved stream, and the decoder output is passed on to the destination.
A.3 Embedded Interleavers
An alternative form of block interleaving embeds the interleaver into the encoder, thereby transforming it into an encoder for the (nM, kM) code. For interleaved Reed-Solomon codes, the mathematical description of the encoding process is that the generator polynomial of the interleaved code is G(z^M), where G(z) denotes the generator polynomial of the (n, k) code as defined in (1), and the codeword is formed as described in Section II-A; i.e., with D(z) now denoting the data polynomial d_{kM−1}·z^{kM−1} + ... + d_1·z + d_0 of degree kM − 1, the polynomial z^{(n−k)M}·D(z) is divided by G(z^M) to obtain the remainder P(z) of degree at most (n − k)M − 1. The transmitted codeword is z^{(n−k)M}·D(z) − P(z). In essence, the data stream is treated as if it were a multichannel data stream, and the stream in each channel is encoded with the (n, k) code. The output of the encoder is a codeword in the block-interleaved Reed-Solomon code (no separate interleaver is needed) and it has the property that the data symbols are transmitted over the channel in the order in which they entered the encoder.
The astute reader will have observed already that the encoder for the (nM, kM) code is just a delay-scaled encoder for the (n, k) code. The delay-scaling transformation of an architecture replaces every delay (latch) in the architecture with M delays, and re-times the architecture to account for the additional delays. The encoder treats its input as a multichannel data stream and produces a multichannel output data stream, that is, a block-interleaved Reed-Solomon codeword. Note also that while the interleaver array has been eliminated, the delay-scaled encoder uses M times as much memory as the conventional encoder.
⁴ In fact, the data symbol ordering is that which is produced by interleaving the data stream in blocks of k symbols to depth M.
Block-interleaved Reed-Solomon codewords produced by delay-scaled encoders contain the data symbols in the correct order. Thus, a delay-scaled decoder can be used to decode the received word of nM symbols, and the output of the decoder also will have the data symbols in the correct order. Note that a separate de-interleaver array is not needed at the receiver. However, the delay-scaled decoder uses M times as much memory as the conventional decoder. For example, delay-scaling the PE1 processors in the RiBM architecture of Fig. 6 results in the delay-scaled processor DPE1 shown in Fig. 8. Note that for 0 ≤ i ≤ 2t − 1, the top and bottom sets of M latches in DPE1_i are initialized with the syndrome set s_{i,0}, s_{i,1}, ..., s_{i,M−1}, where s_{i,j} is the i-th syndrome of the j-th codeword. For 2t ≤ i ≤ 3t − 1, the latches in DPE1_i are initialized to 0, while the latches in DPE1_{3t} are initialized to 1 ∈ GF(2^m). After 2tM clock cycles, processors DPE1_0 to DPE1_{t−1} contain the interleaved error-evaluator polynomials, while processors DPE1_t to DPE1_{2t} contain the interleaved error-locator polynomials.
Fig. 8. The delay-scaled DPE1 processor. Initial conditions in the latches are indicated in ovals. The delay-scaled RiBM architecture is obtained by replacing the PE1 processors in Fig. 6 with DPE1 processors and delay-scaling the control unit as well.
We remark that delay-scaled decoders can also be used to decode block-interleaved Reed-Solomon codewords produced by memory array interleavers. However, the data symbols at the output of the decoder will still be interleaved, and an M × k memory array is needed for de-interleaving the data symbols into their correct order. This array is smaller than the M × n array needed to de-interleave the M codewords prior to decoding with a conventional decoder, but the conventional decoder also uses less memory than the delay-scaled decoder.
Delay-scaling the encoder and decoder eliminates the separate interleaver and de-interleaver and is thus a natural choice for generating and decoding block-interleaved Reed-Solomon codewords. However, a delay-scaled decoder has the same critical path delay as the original decoder, and hence cannot achieve higher throughput than the original decoder. On the other hand, the extra delays can be used to pipeline the computations in the critical path, and this leads to significant increases in the achievable throughput. We discuss this concept next.
B. Pipelined Delay-Scaled Decoders
The critical path delay in the RiBM architecture is mostly due to the finite-field multipliers in the processors. For the delay-scaled processors DPE1 shown in Fig. 8, these multipliers can be pipelined and the critical path delay reduced significantly. We assume that M ≥ m and describe a pipelined finite-field multiplier with m stages.
B.1 A Pipelined Multiplier Architecture
While pipelining a multiplier, especially if it is a feedforward structure, is trivial, it is not so in this case. This is because for RS decoders the pipelining should be done in such a manner that the initial conditions in the pipelining latches are consistent with the syndrome values generated by the SC block. The design of finite-field multipliers depends on the choice of basis for the representation. Here, we consider only the standard polynomial basis in which the m-bit byte (y_{m−1}, ..., y_1, y_0) represents the Galois field element Y = y_{m−1}·α^{m−1} + ... + y_1·α + y_0.
The pipelined multiplier architecture is based on writing the product of two GF(2^m) elements X and Y as

X·Y = Σ_{i=0}^{m−1} y_i·(α^i·X) = Σ_{i=0}^{m−1} y_i·X_i,  where X_i = α^i·X.

Let pp_i denote the sum of the first i terms in the sum above. The multiplier processing element MPE_i shown in Fig. 9(a) computes pp_{i+1} by adding either X_i (if y_i = 1) or 0 (if y_i = 0) to pp_i. Simultaneously, MPE_i multiplies X_i by α. Since α is a constant, this multiplication requires only XOR gates, and can be computed with a delay of only T_xor. On the other hand, the delay in computing pp_{i+1} is T_xor + T_mux. Thus, the critical path delay is an order of magnitude smaller than T_mult + T_add, and tremendous speed gains can be achieved if the pipelined multiplier architecture is used in decoding a block-interleaved Reed-Solomon code. Practical considerations such as the delays due to pipelining latches, clock skew and jitter will prevent the fullest realization of the speed gains due to pipelining. Nevertheless, the pipelined multiplier structure in combination with the systolic architecture will provide significant gains over existing approaches.
Fig. 9. The pipelined multiplier block diagram: (a) the multiplier processing element (MPE), and (b) the multiplier architecture. Initial conditions of the latches at the y input are indicated in ovals.
The pipelined multiplier thus consists of m MPE processors connected as shown in Fig. 9(b), with inputs pp_0 = 0 and the y_i's. The initial conditions of the latches at the y input are zero, and therefore the initial conditions of the lower latches in the MPEs do not affect the circuit operation. The product X·Y appears in the upper latch of MPE_{m−1} after m clock cycles, and each succeeding clock cycle thereafter computes a new product. Notice also that during the first m clock cycles, the initial contents of the upper latches of the MPEs appear in succession at the output of MPE_{m−1}. This property is crucial to the proper operation of our proposed pipelined decoder.
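A bit-level software model of this multiplier (for the GF(2⁸) tables used earlier) makes the two data paths explicit; the constant-α multiplication is the one-shift-plus-conditional-XOR inside the loop:

```python
def pipelined_mult(X, Y, m=8, prim=0x11D):
    pp = 0                         # pp_0 = 0
    Xi = X                         # X_0 = X; thereafter X_{i+1} = alpha * X_i
    for i in range(m):
        if (Y >> i) & 1:           # MPE_i adds X_i if y_i = 1, else 0
            pp ^= Xi               # pp_{i+1} = pp_i + y_i * X_i
        Xi <<= 1                   # multiply by alpha: shift ...
        if Xi & (1 << m):
            Xi ^= prim             # ... and reduce (XOR gates only)
    return pp                      # equals X*Y

assert all(pipelined_mult(a, b) == gf_mul(a, b)
           for a in (0, 1, 2, 87, 255) for b in (0, 1, 3, 200, 254))
```

The model, of course, captures only the arithmetic; the latency and initial-condition behavior described above belong to the hardware pipeline.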
B.2 The Pipelined Control Unit
If the pipelined multiplier architecture described above (and shown in Fig. 9) is used in the DPE1 processors of Fig. 8, the critical path delay of DPE1 is reduced from T_mult + T_add to just T_xor + T_mux. Thus, the control unit delay in computing MC(r), which is inconsequential in the RiBM architecture (as well as in the iBM and riBM architectures, and the delay-scaled versions of all these), determines the largest delay in a pipelined RiBM architecture.
Fortunately, the computation of MC(r) can also be pipelined in (say) d stages. This can be done by noting that d delays from DPE1_0 in the M-delay-scaled RiBM architecture (see Fig. 8) can be retimed to the outputs of the control unit and then subsequently employed to pipeline it. Note, however, that the d latches in DPE1_0 that are being retimed are initialized to s_{0,0}, ..., s_{0,d−1} at the beginning of every decoding cycle. Hence, the retimed latches in the control unit will need to be initialized to values that are a function of the syndromes s_{0,0}, ..., s_{0,d−1}. This is not a problem because these syndromes will be produced by the SC block in the beginning of each decoding cycle.
B.3 Pipelined Processors
If pipelined multiplier units as described above are used in a delay-scaled DPE1 processor, and the control unit is pipelined as described above, then we get the pipelined PPE1 processor shown in Fig. 10 (and the pipelined RiBM (pRiBM) architecture also described in Fig. 10). The initial values stored in the latches are the same as were described earlier for the DPE1 processors. Note that some of the latches that store the polynomial coefficients are part of the latches in the pipelined multiplier. However, the initial values in the latches in the lower multiplier in Fig. 10 are 0. Thus, during the first m clock cycles, the initial contents flow through into the leftmost latches without any change.
From the above description, it should be obvious that the pRiBM architecture based on the PPE1 processor of Fig. 10 has a critical path delay of

T_pRiBM = T_xor + T_mux.
Fig. 10. The pipelined PPE1 processor. Initial conditions in the latches are indicated in ovals. The pipelined RiBM architecture is obtained by replacing the PE1 processors in Fig. 6 with PPE1 processors and employing the pipelined delay-scaled controller.
Thus, the pRiBM architecture can be clocked at speeds that can be as much as an order of magnitude higher than those achievable with the unpipelined architectures presented in Sections III and IV.
C. Decoders for Non-interleaved Codes
The pRiBM architecture can decode a block-interleaved code at significantly faster rates than the RiBM architecture can decode a non-interleaved code. In fact, the difference is large enough that a designer who is asked to devise a decoder for non-interleaved codes should give serious thought to the following design strategy.
Read in M successive received words into a block-interleaver memory array.
Read out a block-interleaved received word into a decoder with the pRiBM architecture.
Decode the block-interleaved word and read out the data symbols into a block-deinterleaver memory array.
Read out the de-interleaved data symbols from the deinterleaver array.
Obviously, similar decoder design strategies can be used in other situations as well. For example, to decode a convolutionally interleaved code, one can first de-interleave the received words, and then re-interleave them into block-interleaved format for decoding. Similarly, if a block-interleaved code has very large interleaving depth M, the pRiBM architecture may be too large to implement on a single chip. In such a case, one can de-interleave first and then re-interleave to a suitable depth. In fact, the "de-interleave and re-interleave" strategy can be used to construct a universal decoder around a single decoder chip with fixed interleaving depth.
VI. Concluding Remarks
We have shown that the application of algorithmic transformations to the Berlekamp-Massey algorithm results in the riBM and RiBM architectures, whose critical path delay is less than half that of conventional architectures such as the iBM architecture. The riBM and RiBM architectures use systolic arrays of identical processor elements. For block-interleaved codes, the de-interleaver can be embedded in the decoder architecture via delay-scaling. Furthermore, pipelining the multiplications in the delay-scaled architecture results in an order of magnitude reduction in the critical path delay. In fact, the high speeds at which the pRiBM architecture can operate make it feasible to use it to decode non-interleaved codes by the simple stratagem of internally interleaving the received words, decoding the resulting interleaved word using the pRiBM architecture, and then de-interleaving the output.
Future work is being directed towards integrated circuit implementations of the proposed architectures and their incorporation into broadband communications systems such as those for very high-speed digital subscriber loops and wireless systems.
VII. Acknowledgments
The authors would like to thank the reviewers for their constructive criticisms, which have resulted in significant improvements in the manuscript.
--R
Algebraic Coding Theory
Theory and Practice of Error-Control Codes
Applied Coding and Information Theory for Engineers
Error Control Systems for Digital Communication and Storage
--CTR
Kazunori Shimizu , Nozomu Togawa , Takeshi Ikenaga , Satoshi Goto, Reconfigurable adaptive FEC system with interleaving, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Tong Zhang , Keshab K. Parhi, On the high-speed VLSI implementation of errors-and-erasures correcting reed-solomon decoders, Proceedings of the 12th ACM Great Lakes symposium on VLSI, April 18-19, 2002, New York, New York, USA
Y. W. Chang , T. K. Truong , J. H. Jeng, VLSI architecture of modified Euclidean algorithm for Reed-Solomon code, Information Sciences: an International Journal, v.155 n.1-2, p.139-150, 1 October
Zhiyuan Yan , Dilip V. Sarwate, Universal Reed-Solomon decoders based on the Berlekamp-Massey algorithm, Proceedings of the 14th ACM Great Lakes symposium on VLSI, April 26-28, 2004, Boston, MA, USA
Jung H. Lee , Jaesung Lee , Myung H. Sunwoo, Design of application-specific instructions and hardware accelerator for reed-solomon codecs, EURASIP Journal on Applied Signal Processing, v.2003 n.1, p.1346-1354, January
Zhiyuan Yan , Dilip V. Sarwate, New Systolic Architectures for Inversion and Division in GF(2^m), IEEE Transactions on Computers, v.52 n.11, p.1514-1519, November | systolic architectures;interleaved codes;berlekamp-massey algorithm;pipelined decoders;reed-solomon codes |
505524 | Delay fault testing of IP-based designs via symbolic path modeling. | Predesigned blocks called intellectual property (IP) cores are increasingly used for complex system-on-a-chip (SOC) designs. The implementation details of IP cores are often unknown or unavailable, so delay testing of such designs is difficult. We propose a method that can test paths traversing both IP cores and user-defined blocks, an increasingly important but little-studied problem. It models representative paths in IP circuits using an efficient form of binary decision diagram (BDD) and generates test vectors from the BDD model. We also present a partitioning technique, which reduces the BDD size by orders of magnitude and makes the proposed method practical for large designs. Experimental results are presented which show that it robustly tests selected paths without using extra logic and, at the same time, protects the intellectual contents of IP cores. | Introduction
While reusable predesigned circuits called intellectual property (IP) circuits or cores are becoming
increasingly popular for VLSI system-on-a-chip (SOC) designs [1, 3, 8, 11, 12, 14, 21], they present difficult
testing problems that existing methodologies cannot adequately handle. Path delay verification of IP-based
designs is among the most challenging problems because the implementation details of the IP circuits
are hidden. This is particularly the case when the paths traverse both IP circuits and user-defined
circuits. Conventional delay fault testing methods using standard scan [6, 22], boundary scan [3, 8,
21], or enhanced scan methods [7, 9] cannot test such paths effectively. We previously proposed a method
called STSTEST [12] which can test complete paths between IP and UD circuits, but requires extra scan
logic. Nikolos et al. [16] suggest calculating the delays of complete paths by measuring the delays of partial
paths. This method appears suited to delay evaluation of a prototype circuit but is impractical for production
testing of high-speed circuits due to the difficulty of accurately measuring analog delay values.
To address these problems, we propose a delay testing method dubbed symbolic path modeling-
based testing (SPMTEST) which can directly test selected complete paths between IP and UD circuits
without using extra logic. It employs an IP modeling scheme that abstracts the information of the IP cir-
cuit's paths using a special style of binary decision diagram (BDD), and protects the intellectual content of
the IP circuits. We also present an associated ATPG algorithm that generates robust delay tests for the
paths using the symbolic IP path models.
Figure 1 shows an example design where a UD circuit UDB1 and an IP circuit IPB1 form a single
combinational block. Like many delay fault testing methods [7, 9, 13, 17, 18], we assume that to sensitize
the target paths, two test patterns are applied via an enhanced scan register R1 which uses two flip-flops in
each scan cell to hold a pair of test patterns. Complete single-cycle paths exist from register R1 to R2 that
traverse both UDB1 and IPB1, such as the one marked by the thick solid line. Neither the IP providers nor
Fig. 1: Delay fault testing of a circuit containing IP and UD blocks, using either boundary scan, which can test only partial paths, or selectively transparent scan, which can test complete paths.
the system designers can generate tests for these complete paths using conventional ATPG methods for
path-delay faults. This is because UDB1's implementation details are unknown to the IP providers, while
IPB1's implementation is hidden from the system designers. For stuck-at fault testing, extra logic such as
boundary scan registers [3, 8, 21] or multiplexers are often inserted between UDB1 and IPB1. However,
precomputed tests applied to the IP circuit via such extra logic cannot detect a delay fault involving a complete
path from R1 to R2. For example, precomputed tests applied via boundary scan in Fig. 1 can sensitize
only a partial path such as the one indicated by the thick dashed line.
To allow testing of the complete paths linking UD and IP circuits, the STSTEST method [12] we
proposed previously employs a new type of scan register called a selectively transparent scan (STS) regis-
ter. With the STS register in Fig. 1 replacing the boundary scan, any complete path like the highlighted one
can be tested. In the test mode, part of the STS register on the path is made transparent, while other parts of
the STS register hold values pre-selected to satisfy the conditions required for the path sensitization. An IP
modeling technique for STSTEST is defined in [12] that can test complete paths of a specified delay range
and protect the implementation details of the IP circuits. The overhead of the STS registers can limit their
use in high-performance or area-critical circuits. This overhead tends to be more significant in designs like
Fig. 2(a) where complete paths traverse more than one IP and UD block, and STS registers need to be
inserted between every two blocks.
The SPMTEST method proposed here and illustrated in Fig. 2(b) can test complete paths without
needing extra scan registers. As in STSTEST, we require the IP providers to supply IP models that allow
system designers to generate test vectors for complete paths. Unlike STSTEST which specifies a test cube
Fig. 2: Testing complete paths that traverse multiple IP and UD blocks using (a) STSTEST, which requires extra STS registers between every two blocks, and (b) SPMTEST, which requires no extra logic.
for each selected path in its IP models, SPMTEST abstracts all the conditions required to compute tests for
the selected paths by means of an efficient form of BDD [2]. This symbolic IP modeling technique eliminates
the need for STS registers. To handle large IP circuits, we propose a circuit partitioning technique
that decomposes the BDDs and leads to IP models of practical size. We also present an ATPG algorithm
that acts directly on the decomposed BDDs and thus can protect the IP circuit's implementation details.
Given symbolic IP models, SPMTEST finds 2-pattern robust tests for complete paths of a specified delay
range, if the tests exist. Finally, we present a CAD tool implementing SPMTEST and experimental results
which show that SPMTEST is a cost-efficient solution for the delay testing problem of IP-based designs.
The remainder of the paper is organized as follows. Section 2 introduces the BDD-based IP modeling
procedure, while Sec. 3 describes the circuit partitioning technique for large IP designs. Section 4 presents
the ATPG procedure that computes the final test vectors using the IP models. Section 5 describes
experimental results obtained with the ISCAS benchmark circuits.
2 IP Modeling
In order to allow the system designers to generate test vectors, an IP model should specify the sensitization
conditions for selected paths in the IP circuit. First we show how we construct such a model using
BDDs for the selected paths. Then we describe a path selection scheme that yields all complete paths
whose delays exceed some specified threshold.
Symbolic path modeling: The basic idea of our symbolic path modeling approach is inspired by (1)
the conditional delay model proposed by Yalcin and Hayes [23] which employs BDDs to find a critical
path, and (2) the BDD-based path-delay fault testing method in [2]. The conditional delay model demonstrates
that a hierarchical representation of path sensitization conditions can efficiently identify many false
paths. SPMTEST also exploits hierarchical structures consisting of IP and UD blocks to identify untestable
paths and to generate test vectors.
Bhattacharya et al. [2] show that BDDs can be successfully used for delay-fault ATPG, and report
promising results for many benchmark circuits. They represent each path's sensitization conditions by a
BDD from which a test vector can be derived. To avoid the difficulty of representing the rising and falling
transition values by BDDs (which can represent only 1 and 0 values explicitly), they assume that all the
off-path primary inputs have stable 0 or 1 values. This assumption allows a BDD to represent the conditions
required to sensitize the path and avoid static hazards on the path. This assumption cannot be made
for IP modeling, however, since any primary input of the IP circuits can receive transition or hazard signals
from other blocks that drive the IP circuit's inputs. Therefore, we employ an encoding technique that represents
each signal by 2-bit values so that any signal transitions and hazard conditions can be represented by
BDDs.
For example, Fig. 3 shows the ISCAS-85 benchmark circuit c17 regarded as an IP circuit. Suppose
we want to model the highlighted paths P IP1 and P IP2 using the robust test conditions proposed in [13]. To
test a path robustly, the following conditions on the side input values of the gates along the path that need
to be satisfied. When the on-path input has a controlling final value, the side inputs must have non-controlling
stable values; when the on-path input has a non-controlling final value, the side inputs must have non-controlling
final values with arbitrary initial values. Here we use a 7-valued logic to represent the signal
values as in [5, 11, 13]; this logic is defined in Fig. 4.
Figure 4 also shows how the seven values are encoded for BDD representation. A similar encoding technique was employed earlier in a delay fault testing method based on a CNF formulation of the satisfiability problem [5]. Here v_f represents a signal's final value, while v_s represents the stability of the signal, that is, v_s = 1 iff the signal is stable. Let R(P_IPi) denote the robust test condition for path P_IPi. The
robust test conditions for P IP1 and P IP2 are encoded as follows:
Fig. 3: The ISCAS benchmark circuit c17 viewed as a small IP circuit.

Fig. 4: The 7-valued logic for robust tests and the corresponding 2-bit encoding used for BDD representation.

    Value  Interpretation            BDD encoding (final value v_f, stability v_s)
    0      Stable 0                  (0, 1)
    1      Stable 1                  (1, 1)
    F      Falling transition        (0, 0)
    R      Rising transition         (1, 0)
    f      Unknown-to-0 transition   (0, X)
    r      Unknown-to-1 transition   (1, X)
    X      Unknown                   (X, X)
• R(P_IP1) = …
• R(P_IP2) = …
Note that R(P_IPi) is constructed by ANDing all v_f's and v_s's of non-X values. In order to construct BDDs representing R(P_IPi), the primitive logic operations AND, OR, and NOT are modified to apply to signal values of the form (v_f, v_s); see Fig. 5. The same encoding scheme is found in [5]. Each encoded output value (z_f, z_s) in the tables of Fig. 5 is obtained by applying the indicated logic operations to x_f, x_s, y_f, and y_s. For example, in the AND case, z_f = x_f · y_f, and the output is stable whenever both inputs are stable or either input is a stable 0.
We apply the modified logic operations to every gate in the IP circuit recursively, starting from the
primary inputs, until all BDDs representing each encoded signal are obtained. Then, for each selected path P_IPi, a symbolic path model is constructed by ANDing the BDDs representing each component of R(P_IPi).
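As a toy illustration of the encoded gate operations, the sketch below evaluates AND and NOT over (v_f, v_s) pairs with ternary values. These are not the exact tables of Fig. 5, whose entries are not recoverable from the extracted text; the z_s expression is our reading of the stated stability semantics.

    X = 'X'   # unknown ternary value

    def t_and(a, b):
        if a == 0 or b == 0: return 0
        return 1 if a == 1 and b == 1 else X

    def t_or(a, b):
        if a == 1 or b == 1: return 1
        return 0 if a == 0 and b == 0 else X

    def t_not(a):
        return X if a == X else 1 - a

    def AND(x, y):
        (xf, xs), (yf, ys) = x, y
        zf = t_and(xf, yf)
        # Stable if both inputs are stable, or if either input is a stable 0.
        zs = t_or(t_and(xs, ys), t_or(t_and(xs, t_not(xf)), t_and(ys, t_not(yf))))
        return (zf, zs)

    def NOT(x):
        return (t_not(x[0]), x[1])

    # Example: AND of a falling transition (0,0) and a stable 1 (1,1) yields a
    # falling transition (0,0); AND with a stable 0 (0,1) yields a stable 0 (0,1).

The OR case is analogous, with the roles of the controlling values 0 and 1 exchanged.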
Figure 6 shows symbolic path models constructed for Fig. 3 in this way. The variables of these BDDs, listed at the left, are the primary inputs in the form of encoded value pairs (I_if, I_is). It is
not possible to reverse-engineer the symbolic path models to recover the circuit's gate-level structure, so
this modeling method protects the intellectual content of the IP circuits. Symbolic path modeling also can
easily identify untestable IP paths a priori and exclude them from the IP model. This follows from the fact
that if P IPi is untestable, the BDD for R(P IPi ) must denote the zero function. The foregoing technique can
be easily extended to handle other delay fault test conditions by using different encoding schemes. Our
Fig. 5: Encoded logic operations for BDD construction; each output pair (z_f, z_s) is defined in terms of the input pairs (x_f, x_s) and (y_f, y_s).
CAD tool implementing SPMTEST can also handle the hazard-free robust test conditions [10] using a 3-
bit signal encoding scheme. We focus only on robust testing with 2-bit encoding in this paper.
The ATPG procedure which we discuss later computes tests by justifying the robust test conditions
given by the symbolic path models of an IP block B via other IP or UD blocks that drive B. Consequently,
the IP block's output functions are needed for test generation, so the IP models also contain BDDs representing
functions of O jf and O js for all outputs O j 's of the IP block. The output functions of IP blocks often
must be provided to the system designers for simulation and verification of the entire system, and are not
intellectual content for many circuits such as arithmetic circuits whose functions are well known. Finally,
for each selected IP path, we include in the IP model the following path information: (1) the input and output
terminals of the path, (2) the transition direction R (rising) or F (falling) at the path terminals, and (3)
the delay of the path. Figure 7 shows an IP model constructed for the example of Fig. 3. It consists of
Fig. 6: BDDs representing the robust test conditions for (a) P_IP1 and (b) P_IP2 in Fig. 3 (solid edges branch to the high child; dashed edges branch to the low child; dotted edges branch to the low child with complement).

Fig. 7: IP model for c17 when two IP paths are selected, comprising the BDDs and the selected IP path information (path ID, I/O terminals, I/O transitions, and delay in ns).
BDDs for the four output functions and the two selected paths, and the associated path information. We
next describe the path selection method for constructing IP models.
STSTEST introduced a path selection method for IPB's that derives all complete paths of a certain
delay range in (UDB, IPB) block pairs. The same method is used by SPMTEST. However, SPMTEST can
be also applied to any combination of IPBs and UDBs with only minor modifications to the path selection
scheme. We first describe path selection for the (UDB, IPB) case, and then generalize to other cases.
Due to the enormous number of paths in large circuits, we only test paths whose delays exceed some
specified threshold, an approach commonly employed by delay fault testing methods [13, 22]. To test such
complete paths in (UDB, IPB), therefore, we consider all IP paths that can potentially yield complete paths
exceeding the threshold when combined with certain UD paths. Figure 8 shows an example (UDB, IPB)
pair consisting of the smallest ISCAS-89/85 benchmark circuits cs27 and c17. I k denotes the k-th input port
of IPB. We compute the path delays using the Synopsys cell library [20], and treat each path as two separate
paths with rising and falling transitions, as in [13, 17, 18].
If the IP models just include paths that meet a certain path-length threshold, they may not yield all
required complete paths. For example, suppose only the critical IP path P marked by the dashed line
exceeds the threshold delay and so is included in the IP model. Then, the critical complete path of (UDB,
IPB) indicated by thick solid line cannot be derived from path P. To avoid such problems, we select IP
paths by assuming that all UD paths have their maximum allowable delay (slack limit), which is the delay
margin for each IPB input I k left after subtracting from the clock period the longest delay of the IP paths
starting from I k . Figure 9(a) shows the maximum allowable delays (the length of the thick arrows) for one
clock period, which is formed by positioning all the IP paths to align the longest IP paths starting from
every I k with the right end of the critical IP path. From Fig. 9(a) we select IP paths that extend beyond the
IP path threshold denoted by the dashed line; in this example the six paths P1-P6 are selected. It follows from Fig. 9(a) that
all complete paths exceeding the complete-path threshold of Fig. 9(b) can be derived from the six selected
Fig. 8: A (UDB, IPB) pair consisting of the ISCAS benchmark circuits cs27 and c17; the dashed line marks the critical IP path P of IPB, and the thick solid line marks the critical complete path.
IP paths. For example, Fig. 9(b) shows six such complete paths, which are guaranteed to be tested. This
approach yields all complete paths longer than the threshold delay determined by the clock period and the
IP path threshold delay. For convenience, we represent the IP path threshold by T_IP × (critical IP-path delay), where T_IP denotes a threshold factor, 0 ≤ T_IP ≤ 1. For example, for the value of T_IP chosen by the IP provider for the IPB of Fig. 8, the selected paths will be included in the IP model in the form shown in Fig. 7.
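The selection rule of Fig. 9(a) can be sketched as follows. This is our reading of the alignment geometry, with illustrative (port, delay) data rather than the actual cs27/c17 numbers.

    def select_ip_paths(ip_paths, t_ip):
        # ip_paths: list of (input_port, delay) pairs for the IPB.
        critical = max(d for _, d in ip_paths)
        longest = {}
        for port, d in ip_paths:
            longest[port] = max(longest.get(port, 0.0), d)
        threshold = t_ip * critical   # IP path threshold delay
        # Align the longest path from each port with the right end of the
        # critical path; keep paths whose right end crosses the threshold line.
        return [(port, d) for port, d in ip_paths
                if (critical - longest[port]) + d > threshold]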
Next we discuss path selection for a few other IPB and UDB combinations. The above selection
scheme for (UDB, IPB) can be modified for the (IPB, UDB) pair by reversing the IP paths in Fig. 9. Since
the IPB drives the UDB in (IPB, UDB), we position all the IP paths to align the longest IP paths ending at
every output port of the IPB with the left end of the critical IP path. Then we select IP paths whose left
ends extend beyond the specified threshold. The path selection scheme for (UDB, IPB) can also be easily
extended to the case of (UDB1, IPB, UDB2), where an IPB is surrounded by two UDBs. In this case, we
assume that all paths within the UDBs have their maximum allowable delays. We position all the IP paths
such that the longest IP paths having the same I/O terminals are aligned with the right or left end of the
critical IP path. Then we select IP paths that exceed the specified threshold.
Fig. 9: (a) IP path selection using the method in STSTEST [12], showing the slack limits for UD paths, the critical IP path delay, and the IP path threshold delay within one clock period for (UDB, IPB); (b) all the complete paths corresponding to the IP paths in (a), which also exceed the complete path threshold delay.
3 Circuit Partitioning
The fact that BDD size can explode when handling large circuits limits the applications of many
BDD-based methods [2] to control circuits or relatively small datapath circuits. In order to enable
SPMTEST to handle a broad range of IP circuits, we use circuit partitioning to reduce the BDDs to a manageable
size. Functional decomposition techniques that reduce BDD size have been previously proposed
for formal design verification [15]. Here we use a structural BDD decomposition technique that partitions
an IP circuit into a set of blocks and constructs BDDs for the partitioned blocks. This approach has the
advantage of reducing the number of paths that must be included in the IP models, since a few partitioned
paths often cover a large number of paths. We can also easily adapt existing structural ATPG algorithms to
deal with partitioned BDDs.
Symbolic models for the partitioned IP paths may not identify all untestable IP paths a priori. To
alleviate this drawback, we propose an algorithm dubbed SPM-PART that maximizes the chance of untestable paths being identified by exploiting a property of untestable paths. An untestable path P contains a fanout point F_i and a reconvergence point R_j that conflict with the robust test conditions for P. Symbolic path modeling is guaranteed to identify P as untestable if, for every such (F_i, R_j) pair on P, all paths linking F_i and R_j are in the same partition. SPM-PART partitions an IP circuit in a way that maximizes the number of (F_i, R_j) pairs and the paths linking F_i and R_j contained in the same partition, while limiting the partition to
a specified size. We describe SPM-PART below.
Let N_{i,j} be the number of fanin lines to R_j that have a path from F_i, and let D_{i,j} be the distance between F_i and R_j in terms of the number of levels. SPM-PART first computes, for every (F_i, R_j) pair, a gain factor G(F_i, R_j) defined as N_{i,j} / D_{i,j}. Combining an (F_i, R_j) pair of large G(F_i, R_j) can lead to a small partition (due to a small D_{i,j}) that contains a large number of paths linking F_i and R_j. SPM-PART creates each partition B_k by adding such (F_i, R_j) pairs one at a time. It first selects an F_i from the primary inputs of IPB and inserts F_i into the current partition B_k. SPM-PART then selects an R_j that maximizes the sum of the G(F_i, R_j)'s for all the F_i's in B_k that have a path to R_j. It inserts into B_k all non-partitioned gates in the transitive fanin region of R_j to maximize the number of paths in B_k linking the F_i's and R_j. If the current partition exceeds a specified size, B_{k+1} is set to the current partition. In this way, SPM-PART continues to insert the next R_j's into B_k until no R_j remains. The complexity of SPM-PART, including the gain factor computation, is O(N^2).
Figure 10(a) shows a 2-bit adder viewed as an IPB and partitioned into two blocks by SPM-PART. Figure 10(b) shows a graph whose nodes represent the F_i's and R_j's in IPB and whose edges represent the (F_i, R_j) pairs. Figure 10(c) lists the gain factors computed for every (F_i, R_j) pair. The size limit of each partition in terms of I/O line numbers is set to 3/2. With a_0 as the first F_i inserted in B1, we select p_0 as R_j, since p_0 yields the largest gain factor G(a_0, p_0). The partition B1 indicated by A is formed by including p_0 and its transitive fanin node b_0. Next s_0 is selected, which yields the largest gain factor G(a_0, s_0) + G(b_0, s_0), and by including s_0 and c_in, B1 now becomes the partition indicated by B in Fig. 10(b). After including c_1, B1 exceeds the size limit, so the next nodes are added to B2. Figure 10(a) indicates by dashed lines the final two partitions created in this way.
Observe that in Fig. 10(a), most (F_i, R_j) pairs, like the ones marked by X, and the paths linking F_i and R_j are contained in the same partition. In this example, the symbolic path modeling can identify all the untestable paths, such as the ones highlighted. Figure 11(a) shows the same circuit but partitioned arbitrarily without using SPM-PART. In this case, symbolic path modeling cannot identify any untestable
Fig. 10: (a) Partition of the 2-bit adder produced by the proposed algorithm; (b) fanout-reconvergence graph and partition steps; (c) gain values computed for every (F_i, R_j) pair.

Fig. 11: (a) Arbitrary partition of the 2-bit adder obtained without using our algorithm; (b) comparison of the partitions (no partitioning vs. Fig. 10(a) vs. Fig. 11(a)) in terms of the number of paths in the IP model, the number of untestable paths identified, and the BDD size of the IP model.
paths, because for all the (F_i, R_j) pairs that make the paths untestable, F_i and R_j are in different partitions. Figure 11(b) compares the unpartitioned IPB with the partitions of Figs. 10(a) and 11(a). Note that the partition
of Fig. 10(a) allows all untestable paths to be identified, so the test generation procedure needs to
be run for only 74 testable paths. On the other hand, the partition of Fig. 11(a) requires the test generation
procedure to be run for all 90 paths in the IPB. Although the BDD size reduction looks minor in this small
example, very significant reductions are obtained for larger circuits, as our experiments show.
4 Test Generation
Assuming that the IP models for all IPBs are constructed by the method described above, system
designers can generate test vectors using the ATPG procedure SPMTEST (Fig. 12) which is an extension
of the PODEM algorithm to handle BDDs in a block framework. SPMTEST takes symbolic models of
IPBs as inputs, and creates symbolic models for UDBs. For example, Fig. 13 shows the (UDB, IPB) pair of
Fig. 8 where each block is treated as a black box specified by its symbolic model. Let P Bi denote a partial
path derived from (partitioned) block B_i. SPMTEST selects a complete path P_B1-P_B2-...-P_Bn that exceeds the complete path threshold delay derived by the method described in Sec. 2. For example, Fig. 13 shows one such complete path P_UD1-P_IP1. To speed up the test generation, SPMTEST simplifies the BDDs of the symbolic models by assigning initial input values, setting v_s = 1 for primary inputs that are held stable together with the corresponding v_f values. (Note that enhanced scan can assign stable values to the primary inputs.) For example, Fig. 13 shows such values assigned for the case of P_UD1-P_IP1. Figures 14(a) and (b) show IPB's BDDs
Fig. 12: The SPMTEST test generation algorithm.

    Inputs: symbolic IP models (from IP providers); UDBs
    Create symbolic models for the UDBs
    Repeat until all target complete paths are tested:
        Select a complete path P_B1-P_B2-...-P_Bn
        Simplify BDDs by assigning initial input values
        BDD-based PODEM algorithm:
            Select objective: one input variable of some P_Bi
            Backtrace to a target primary input and assign a value
            Evaluate BDDs using cofactor-based implication
            Backtrack if any R(P_Bi) = 0
            Repeat until ∧_{i=1:N} R(P_Bi) = 1
    Output: 2-pattern test cubes
substantially simplified with these assigned values; compare with the original BDDs in Figs. 6(a) and 7,
respectively.
Next the BDD-based PODEM algorithm attempts to satisfy the condition ∧_{i=1:N} R(P_Bi) = 1, which is represented by the conjunction of the BDDs for all P_Bi's symbolic path models. For each R(P_Bi), it first selects as an objective a support variable s_i from R(P_Bi)'s maximal cube. The backtrace step then finds a primary input as follows: for the output function f_i corresponding to s_i, select f_i's support variable s_i from f_i's maximal cube; repeat this step until s_i is a primary input. In the example of Fig. 13, first we select n6_f as an objective from R(P_IP1)'s maximal cube over n3_f, n3_s, and n6_f; next we select g0_f from the corresponding output function's maximal cube and stop backtracing, since g0_f is a primary input. The next phase of SPMTEST
is a ternary implication step that evaluates all the BDDs with their variables assigned the values 0, 1, and X. (Note that the initial values of all the BDD variables are X.) We implement ternary implication by computing the cofactors of a BDD with respect to its non-X input values. If a resulting cofactor is constant 1 (0), the BDD evaluates to 1 (0); otherwise, the result is X. For example, given the input values n3_f n3_s n6_f n6_s = 1X0X, the cofactor of R(P_IP1) with respect to n3_f = 1 and n6_f = 0 is shown in Fig. 14(c). Since this cofactor is not constant, R(P_IP1) is found to be X. These steps are repeated until ∧_{i=1:N} R(P_Bi) = 1 and a 2-pattern test cube is obtained, as in the example of Fig. 13.
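The cofactor-based ternary implication can be mimicked enumeratively for small examples; a real implementation computes cofactors directly on the BDDs. The condition function R below is an illustrative stand-in for R(P_IP1), not the actual condition from Fig. 6.

    from itertools import product

    def ternary_eval(f, assignment, variables):
        # Fix the non-X variables (the cofactor) and test whether the result
        # is constant over all completions of the X variables.
        fixed = {v: assignment[v] for v in variables if assignment[v] != 'X'}
        free = [v for v in variables if assignment[v] == 'X']
        vals = set()
        for bits in product((0, 1), repeat=len(free)):
            env = dict(fixed)
            env.update(zip(free, bits))
            vals.add(f(env))
        return vals.pop() if len(vals) == 1 else 'X'

    # Stand-in condition: with n3_f = 1 and n6_f = 0 assigned and the stability
    # variables left at X, the cofactor is not constant, so the result is X.
    R = lambda e: e['n3f'] & e['n3s'] & (1 - e['n6f']) & e['n6s']
    print(ternary_eval(R, {'n3f': 1, 'n3s': 'X', 'n6f': 0, 'n6s': 'X'},
                       ['n3f', 'n3s', 'n6f', 'n6s']))   # prints X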
Fig. 13: The (UDB, IPB) pair of Fig. 8 represented by black boxes.

Fig. 14: Simplified BDDs for (a) the output functions of IPB and (b) the symbolic path model of P_IP1 in Fig. 13; (c) the cofactor of (b) with respect to n3_f = 1 and n6_f = 0.
For each complete path, SPMTEST either computes a robust test or concludes that the path is
robustly untestable. Since it acts on a structure consisting of multiple (partitioned) blocks in symbolic
form, SPMTEST can handle any combination of IPBs and UDBs without needing extra scan registers.
5 Experimental Results
We have implemented the SPMTEST method in a CAD tool composed of 17,000 lines of C++ code and the existing BDD package CUDD [19]. We have applied it to a number of benchmark circuits, including ISCAS-85, ISCAS-89 (combinational versions), and datapath circuits, which have been artificially paired
as UD and IP blocks. Figure 15 compares the symbolic IP models constructed by SPMTEST with and
without circuit partitioning. The first column lists the benchmark circuits regarded as IPBs, while the next
three columns give the circuit partitioning results. For the specified limits on the number of I/O lines of
each partition, the number of resulting partitioned blocks and the CPU time spent for partitioning are
listed. Next, the results of symbolic IP modeling using the partitioned IP circuits are listed for the specified
IP path delay threshold factor T IP . The untestable path identification ratio UPI is given for circuits that have
a large number of untestable paths. UPI is defined as the number of untestable IP paths identified by the IP
models divided by the number of all untestable IP paths. Then the BDD size in terms of the number of
Fig. 15: IP modeling with and without circuit partitioning.

    Circuit  I/O     No. of      CPU       T_IP  UPI    BDD      CPU       BDD size    CPU time
             limits  partitions  time (s)               size     time (s)  (no part.)  (no part.)
    cs1423   20/15   14          1.43      0.8   70.8%  10425    4.8       541812      ~2 hours
    c2670    25/25   17          2.57      0.8   93.7%  44425    29.69     Exploded    >12 hours
    c3540    25/25   26          14.67     0.9   72.9%  68502    62.86     Exploded    >12 hours
    c7552    25/25   38          20.61     0.8   56.1%  164055   171.2     Exploded    >12 hours
nodes, and the CPU time spent for IP modeling are listed. The last two columns list the results of symbolic
IP modeling without partitioning, in which case UPI is 100%.
In all cases, IP modeling with partitioning finishes within reasonable CPU time with relatively small
BDDs. For example, modeling the largest ISCAS-85 circuit c7552 is completed in 171 seconds with BDDs
containing a total of 164K nodes. On the other hand, IP modeling for most large circuits without partitioning
either takes several hours or cannot finish due to the excessive BDD size. It is well known that BDDs
representing larger ISCAS-85 benchmark circuits such as c2670 and c7552 tend to explode, so BDD-based
methods like that of [2] have not been applied to these circuits. Furthermore, IP modeling with partitioning
can identify a large number of the untestable paths in most circuits. For example, cs9234's IP model identifies
99.9% of untestable paths with 20 partitioned blocks, which indicates that the proposed partition
algorithm is highly efficient. Some low UPI ratios for circuits like cs1196 can be explained by their structural
property that the separation of most fanout-reconvergence pairs is very large, so it is difficult to contain
such pairs within the same partition.
Figure 16 gives the results of applying SPMTEST to a number of (UDB, IPB) pairs whose IP models appear in Fig. 15. Although we limit our attention to (UDB, IPB) pairs, other combinations of IPBs and UDBs show similar results. The first two columns list the (UDB, IPB) pairs tested. The next two columns
Fig. 16: SPMTEST test generation results for benchmark circuits configured as (UDB, IPB) pairs.

    UDB      IPB      I/O limits  No. of      Complete path   Tried for   Robustly  CPU
                      per part.   partitions  threshold T_C   test gen.   tested    time (s)
    cs1238   shift32  20/15       23          0.801           343         150       91.92
    shift32  cs1238   30/30       8           0.860*          347         ...       ...
    shift16  c5315    25/25       2           0.839           4118        558       1491.3
show the symbolic modeling results for the UDBs. The next column lists the complete path threshold delay factor T_C for each (UDB, IPB) pair. Given T_IP and the clock period T_clock, T_C is determined by T_C = 1 - D_IP(1 - T_IP) / T_clock, where D_IP is IPB's critical path delay; see Fig. 9. All testable complete paths exceeding the threshold delay T_C · T_clock are guaranteed to be tested. In the cases indicated by * in column T_C, we have chosen values of T_C smaller than the values calculated in the above way, because either the (UDB, IPB) pairs have too many complete paths, or T_IP = 0. The next column lists the number of complete paths tried for test generation and the number of complete paths robustly tested. In most cases, the tried complete paths are much fewer in number than all complete paths meeting T_C, because many untestable paths are eliminated a priori in the IP modeling step, which speeds up test generation. The fact that only a few complete paths are robustly tested in many cases is not surprising, because the artificial (functionally meaningless) connections between UDBs and IPBs tend to make a large number of complete paths untestable. The time listed in the last column of Fig. 16 is reasonable for most cases except the circuit pair containing c2670. This large ISCAS-85 benchmark circuit is well known to have very few robustly testable paths due to its large amount of reconvergent fanout, and so path delay testing for it is inherently very difficult.
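As a small worked example of the threshold relation (the numbers below are illustrative, not taken from Fig. 16):

    # T_C = 1 - D_IP * (1 - T_IP) / T_clock
    D_IP, T_IP, T_clock = 8.0, 0.8, 10.0      # ns (illustrative values)
    T_C = 1 - D_IP * (1 - T_IP) / T_clock     # = 0.84
    threshold_delay = T_C * T_clock           # = 8.4 ns; all testable complete
                                              # paths longer than this are tested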
In STSTEST [12], some untestable complete paths are robustly testable due to the STS registers,
whereas in SPMTEST, only robustly testable complete paths are considered as testable and counted in Fig.
16. Therefore, the results of SPMTEST cannot be directly compared with those of STSTEST. Comparison
with other methods is also difficult, since most delay testing methods are not aimed at IP-based designs; in
the case of [16], no experimental results are provided.
6 Conclusions
We have presented the SPMTEST method for path delay testing of designs containing IP cores, a
difficult problem not addressed by existing methods. SPMTEST can test complete paths linking IP and
user-defined blocks via a symbolic modeling technique that abstracts an IP block's paths in a compact
form. Hence it does not require extra scan logic, an advantage over STSTEST. The ATPG algorithm in
SPMTEST generates tests for the complete paths using only symbolic models, and hence protects the
implementation details of the IP blocks. Our experimental results show that for all the benchmark circuits
chosen, SPMTEST constructs compact symbolic IP models, and robustly tests all testable complete paths
of a specified delay range. Therefore SPMTEST appears to be an ideal approach to path delay testing of IP-based designs. SPMTEST has the limitation that some complex circuits such as multipliers can require IP
models of excessive size. To address this problem, we are investigating alternative symbolic modeling
approaches.
--R
[1] "Scan Chain Design for Test Time Reduction in Core-Based ICs."
[2] "Test Generation for Path Delay Faults Using Binary Decision Diagrams."
[3] "Hierarchical Test Access Architecture for Embedded Cores in an Integrated Circuit."
[4] "On Variable Clock Methods for Path Delay Testing of Sequential Circuits."
[5] "A Satisfiability-Based Test Generation for Path Delay Faults in Combinational Circuits."
[6] "Robust Delay-Fault Test Generation and Synthesis for Testability Under a Standard Scan Design Methodology."
[7] "A Partial Enhanced-Scan Approach to Robust Delay-Fault Test Generation for Sequential Circuits."
[8] "Test Methodology for Embedded Cores which Protects Intellectual Property."
[9] "Design for Testability: Using Scanpath Techniques for Path-Delay Test and Measurement."
[10] "Synthesis of Robust Delay-Fault-Testable Circuits: Theory."
[11] "High-Coverage ATPG for Datapath Circuits with Unimplemented Blocks."
[12] "Delay Fault Testing of Designs with Embedded IP Cores."
[13] "On Delay Fault Testing in Logic Circuits."
[14] "Testing ICs: Getting to the Core of the Problem."
[15] "Partitioned ROBDDs: A Compact, Canonical and Efficiently Manipulable Representation for Boolean Functions."
[16] "Path Delay Fault Testing of ICs with Embedded Intellectual Property Blocks."
[17] "NEST: A Nonenumerative Test Generation Method for Path Delay Faults in Combinational Circuits."
[18] "Advanced Automatic Test Pattern Generation Techniques for Path Delay Faults."
[19] CUDD: CU Decision Diagram Package.
[20] Synopsys Inc.
[21] "Testing Embedded Cores Using Partial Isolation Rings."
[22] "Fastpath: A Path-Delay Test Generator for Standard Scan Designs."
[23] "Hierarchical Timing Analysis Using Conditional Delays."
| automatic test pattern generation ATPG;binary decision diagram BDD decomposition;system-on-a-chip SOC;delay fault testing;intellectual property IP core testing |
505588 | An algebraic approach to IP traceback. | We present a new solution to the problem of determining the path a packet traversed over the Internet (called the traceback problem) during a denial-of-service attack. This article reframes the traceback problem as a polynomial reconstruction problem and uses algebraic techniques from coding theory and learning theory to provide robust methods of transmission and reconstruction. | Introduction
A denial of service attack is designed to prevent legitimate access to a resource. In
the context of the Internet, an attacker can "flood" a victim's connection with random
packets to prevent legitimate packets from getting through. These Internet denial of service
attacks have become more prevalent recently due to their near untraceability and
relative ease of execution [9]. Also, the availability of tools such as Stacheldraht [11]
and TFN [12] greatly simplify the task of coordinating hundreds or even thousands of
compromised hosts to attack a single target.
These attacks are so difficult to trace because the only hint a victim has as to the
source of a given packet is the source address, which can be easily forged. Although
ingress filtering can help by preventing a packet from leaving a border network without
a source address from the border network [14], attackers have countered by choosing
legitimate border network addresses at random. The traceback problem is also difficult
because many attacks are launched from compromised systems, so finding the source
of the attacker's packets may not lead to the attacker. Disregarding the problem of
finding the person responsible for the attack, if a victim was able to determine the path
of the attacking packets in near real-time, it would be much easier to quickly stop the
attack. Even finding out partial path information would be useful because attacks could
be throttled at far routers.
This paper presents a new scheme for providing this traceback data by having
routers embed information randomly into packets. This is similar to the technique
used by Savage, et al [24], with the major difference being that our schemes are based
on algebraic techniques. This has the advantage of providing a scheme that offers more
flexibility in design and more powerful techniques that can be used to filter out attacker
generated noise and separate multiple paths. Our schemes share similar backwards
compatibility and incremental deployment properties to the previous work.
More specifically, our scheme encodes path information as points on polynomials.
We then use algebraic methods from coding theory to reconstruct these polynomials at
the victim. This appears to be a powerful new approach to the IP traceback problem.
We note that although the study of traceback mechanisms was motivated by denial
of service attacks, there are other applications as well. These methods might be useful
for the analysis of legitimate traffic in a network. For example, congestion control,
robust routing algorithms, or dynamic network reconfiguration might benefit from real-time
traceback mechanisms.
The rest of the paper is organized as follows: Section 2 discusses related work,
Section 3 contains an overview of the problem and our assumptions, Section 4 presents
our approach for algebraically coding paths, Section 5 gives detailed specifications for
some of our schemes, Section 6 provides a mathematical analysis of the victim's re-construction
task, Section 7 discusses the issue of encoding marking data in IP packets,
and 8 gives conclusions and future work.
2 Related Work
The idea of randomly encoding traceback data in IP packets was first presented by
Savage, et al [24]. They proposed a scheme in which adjacent routers would randomly
insert adjacent edge information into the ID field of packets. Their key insight was
that traceback data could be spread across multiple packets because a large number
of packets was expected. They also include a distance field which allows a victim to
determine the distance that a particular edge is from the host. This prevents spoofing
of edges from closer than the nearest attacker. The biggest disadvantage of this scheme
is the combinatorial explosion during the edge identification step and the few feasible
parameterizations. The work of Song and Perrig provides a more in depth analysis of
this scheme [25].
There have been many other notable proposals for IP traceback since the original
proposal. Bellovin has proposed having routers create additional ICMP packets with
traceback information at random and a public key infrastructure to verify the source of
these packets [4]. This scheme can also be used in a non-authenticated mode, although
the attackers can easily forge parts of routes that are farther from the victim than the
closest of the attackers.
Song and Perrig have an improved packet marking scheme that copes with multiple
attackers [25]. Unfortunately, this scheme requires that all victims have a current map
of all upstream routers to all attackers (although Song and Perrig describe how such
maps can be maintained). Additionally, it is not incrementally deployable as it requires
all routers on the attack path to participate (although Song and Perrig note that it also
suffices for the upstream map to indicate which routers are participating).
Doeppner, Klein, and Koyfman proposed adding traceback information to an IP option
[13]. Besides the large space overhead, this solution would cause serious problems
with current routers, as they are unable to process IP packets with options in hardware.
Figure 1: Our example network.
It also causes other issues, for example, adding the option may require the packet to
be fragmented.
Burch and Cheswick have a scheme that uses UDP packets and does not require the
participation of intermediate ISPs [8]. This scheme, however, assumes that the denial
of service attack is coming from a single source network. This differs from us as we
aim to distinguish multiple attacking hosts.
Lee and Park have analyzed packet marking schemes in general [19]. Their paper
contains general tradeoffs between marking probability, recovered path length, and
packets received, that can be applied to any of the probabilistic marking schemes, including
the one in this paper.
We refer the reader to Savage's paper for a discussion of other methods to detect
and prevent IP spoofing and denial of service attacks.
The algebraic techniques we apply were originally developed for the fields of coding
theory [15] and machine learning [2]. For an overview of algebraic coding theory,
we refer the reader to the survey by Sudan [27] or the book by Berlekamp [6].
3 Overview
This paper addresses what Savage, et al call the approximate traceback problem. That
is, we would like to recover all paths from attacker to victim, but we will allow for
paths to have invalid prefixes. For example, for the network shown in Figure 1, the
true path from the attacker A1 to the victim V is R4, R2, R1. We will allow our technique to also produce paths of the form R2, R6, R4, R2, R1, because the true path is a suffix of the recovered path.
Our family of algebraic schemes was motivated by the same assumptions as used
in previous work:
1. Attackers are able to send any packet
2. Multiple attackers can act together
3. Attackers are aware of the traceback scheme
4. Attackers must send at least thousands of packets
5. Routes between hosts are in general stable, but packets can be reordered or lost
6. Routers can not do much per-packet computation
7. Routers are not compromised, but not all routers have to participate
4 Algebraic Coding of Paths
We will now present a series of schemes that use an algebraic approach for encoding
traceback information. All of these schemes are based on the principal of reconstructing
a polynomial in a prime field. The basic idea is that for any polynomial f (x) of
degree d in the prime field GF(p), we can recover f (x) given f (x) evaluated at (d +1)
unique points. Let A 1 ; A be the 32-bit IP addresses of the routers on path P.
We associate a packet id x j with the
jth packet. We then somehow evaluate f P as the packet travels along the path,
accumulating the result of the computation in a running total along the way. When
enough packets from the same path reach the destination, then f P can be reconstructed
by interpolation. The interpolation calculation might be a simple set of linear equa-
tions, if all of the packets received at the destination traveled the same path. Otherwise,
we will need to employ more sophisticated interpolation strategies that succeed even in
the presence of incorrect data or data from multiple paths [5, 28, 15, 7]. These methods
were developed originally for use in coding theory and learning theory.
A naive way to evaluate f_P(w) would be to have the jth router add A_j w^{n-j} into an accumulator that kept the running total. Unfortunately, this would require that each router know its position in the path and the total length of the path. We could eliminate the need for each router to know the total length of the path (while still requiring each router to know its position in the path) by reordering the coefficients of f_P, i.e., by having the jth router add A_j w^{j-1}. However, we can do even better by sticking with our original ordering, and using an alternative means of computing the polynomial. Specifically, to compute f_P(w), each router R_j multiplies the amount in the accumulator by w, adds its own address, returns the result to the accumulator, and passes the packet on to the next router in the path (Horner's rule [18]). For example, ((((0 · w) + R_1)w + R_2)w + R_3)w + R_4 = R_1 w^3 + R_2 w^2 + R_3 w + R_4. Notice that the router doesn't need to know the total length of the path or its position in the path for this computation of f_P.
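A minimal sketch of this accumulation over GF(p) follows; the router addresses and the packet value x are illustrative.

    P = 4294967311                 # smallest prime larger than 2**32 - 1

    def mark(acc, router_id, x):
        # Horner step performed by each router on the path.
        return (acc * x + router_id) % P

    acc, x = 0, 12345
    for router_id in (0x0A000001, 0x0A000002, 0x0A000003):   # R1, R2, R3
        acc = mark(acc, router_id, x)
    assert acc == (0x0A000001 * x**2 + 0x0A000002 * x + 0x0A000003) % P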
4.1 Deterministic Path Encoding
The simplest scheme that uses this algebraic technique encodes an entire path. At the beginning of a path, let FullPath_{0,j} = 0. Each router i on the path calculates FullPath_{i,j} = FullPath_{i-1,j} · x_j + R_i (mod p), where x_j is a random value passed in each packet, R_i is the router's IP address, and p is the smallest prime larger than 2^32 - 1. The value FullPath_{i,j} is passed in the packet, along with x_j, to the next router. At the packet's destination, FullPath_{n,j} will equal R_1 x_j^{n-1} + R_2 x_j^{n-2} + ... + R_n (mod p), which can be reconstructed by solving the following matrix equation over GF(p):

    [ x_1^{n-1}  x_1^{n-2}  ...  x_1  1 ]   [ R_1 ]   [ FullPath_{n,1} ]
    [ x_2^{n-1}  x_2^{n-2}  ...  x_2  1 ]   [ R_2 ]   [ FullPath_{n,2} ]
    [ x_3^{n-1}  x_3^{n-2}  ...  x_3  1 ] · [ ... ] = [ FullPath_{n,3} ]
    [   ...        ...      ...  ...    ]   [ R_n ]   [      ...       ]

As long as all of the x_i's are distinct, the matrix is a Vandermonde matrix (and thus has full rank) and is solvable in O(n^2) field operations [22].
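For concreteness, here is a straightforward Gaussian-elimination solver for this system over GF(p); a dedicated O(n^2) Vandermonde solver would be used in practice.

    def solve_vandermonde(points, p):
        # points: n pairs (x_j, FullPath_{n,j}) with distinct x_j; returns [R1..Rn].
        n = len(points)
        rows = [[pow(x, n - 1 - k, p) for k in range(n)] + [y % p]
                for x, y in points]
        for col in range(n):
            piv = next(r for r in range(col, n) if rows[r][col])
            rows[col], rows[piv] = rows[piv], rows[col]
            inv = pow(rows[col][col], p - 2, p)        # inverse via Fermat
            rows[col] = [v * inv % p for v in rows[col]]
            for r in range(n):
                if r != col and rows[r][col]:
                    f = rows[r][col]
                    rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[col])]
        return [rows[k][n] for k in range(n)]

    p = 4294967311
    f = lambda x: (7 * x * x + 11 * x + 13) % p        # toy path: R1=7, R2=11, R3=13
    print(solve_vandermonde([(2, f(2)), (3, f(3)), (5, f(5))], p))  # [7, 11, 13]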
Assuming that we get a unique x_j in each packet, we can recover a path of length d with only d packets. The downside, however, is that this scheme would require ⌈log2 d⌉ + 2⌈log2 p⌉ bits per packet (the first term is the encoding of the number of routers in the path, and the second term is the encoding of the x_j and y_j values). Even for modest maximum path lengths of 16, the space required (68 bits, counting 4 bits for recording the number of routers in the path, and 32 bits each for the x coordinate and y coordinate of the point on the polynomial) far exceeds the number of bits available to us in an IP header.
We could split a router's IP address into c chunks and add ⌈log2(c)⌉ bits to indicate which chunk was represented in a given packet. Another approach would be to have each router add all of its chunks into each packet. That is, each router would update FullPath c times, substituting each chunk of their IP address in order. The destination could then trivially reconstruct the IP addresses by interpolating to recover R_{1,1} x^{cn-1} + R_{1,2} x^{cn-2} + ... + R_{n,c}, where R_{j,1}, ..., R_{j,c} are the successive chunks of R_j. This would increase the degree of f by a factor of c, which would impact the performance of the reconstruction algorithm.
4.2 Randomized Path Encoding
In the above schemes, we require FullPath 0; This implies that there is some
way for a router to know that it is the "first" participating router on a particular path.
In the current Internet architecture there is no reliable way for a router to have this
information. We must therefore extend our scheme to mitigate this problem.
In our revised scheme a router first flips a weighted coin. If it came up tails the
router would assume it was not the first router and simply follow the FullPath algorithm
presented above, adding its IP address (or IP address chunk) data. On the other
hand, if the coin came up heads, the router would assume it was the first router and
randomly choose a x j to use for the path. We will refer to this state as "marking mode."
This overall approach - which might be called the "reset paradigm" - was also used by
Savage et al. for their traceback solutions.
At the destination, we would receive a number of different polynomials, all representing
suffixes of the full path. In our example network, packets from A 1 could contain
R
We could change our marking strategy slightly. Whenever a router receives a
packet, it still flips a weighted coin. But now, instead of simply going into marking
mode for one packet when the coin comes up heads, the router could stay in marking
mode for the next t packets it receives. More generally, the reset behavior could follow
any Markov Process.
One problem is that attackers can cause more false paths than true paths to be
received at the victim. This is due to the fact that our choice of a small p creates a large number of packets in which no router on the packet's path is in marking mode. The attacker can thus insert any path information he wishes into such packets. Because the attacker can generally find out the path to his victim (using traceroute, for example), he can compute FullPath values for an arbitrary fake prefix prepended to the true path and initialize the packet's accumulator accordingly. This choice will cause the victim to receive FullPath values that decode to the true suffix preceded by router addresses of the attacker's choosing. When trying to reconstruct paths, the victim will have no indication as to which paths are real and which paths are faked. Two solutions to this problem are to increase p or to store a hop count (distance field) in the packet that each participating router would increment. Increasing the marking probability makes it even harder to receive long paths. Adding a hop count would prevent an attacker from forging paths (or suffixes of paths) that are closer than its actual distance from the victim, but would require ⌈log2(d)⌉ more bits in the packet.
Our schemes could also make use of the HMAC techniques discussed by Song
and Perrig to ensure that edges are not faked, but this would require us to either use
additional space in the packets to store the hash or lose our incremental deployment
properties [25]. If we decided to make one of these tradeoffs, our scheme would be
comparably secure against multiple attackers.
4.3 Edge Encoding
We could add another parameter, ℓ, that represents the maximum length of an encoded path. The value of ℓ is set by the marking router and decremented by each participating router who adds in their IP information. When the value reaches 0, no more routers add in their information. For example, in the full path encoding scheme ℓ is effectively unbounded. When ℓ = 2, the result is an encoding of edges between routers; we call this an "algebraic edge encoding" scheme.
The benefit of this change would be to decrease the maximum degree d of the polynomials in order to reduce the number of packets needed out of a given set of packets to recover a route. The cost of this change is that it would add ⌈log2(ℓ+1)⌉ bits to the packets.
Of course, if ℓ is less than the true path length, then reconstruction finds arbitrary subsequences of the path (not just suffixes as in Full Path encoding). The victim still has some work to do to combine these subsequences properly (as described in Savage et al. [24]). Thus reconstruction in this scheme has an algebraic step followed by a combinatorial step.
combinatorial step.
5 Pseudocode for Sample Algebraic Schemes
In this section, we present pseudocode for some sample algebraic marking schemes
that are based on the principles described in the previous section. Recall that each
router has a unique 32-bit id.
5.1 Algebraic Edge Encoding
Here is the router's pseudocode for Edge1, an algebraic edge encoding scheme. Each packet is marked with 32 + ⌈log n⌉ + 1 bits, where n is the number of x values. The degree of the polynomial is one.

Marking procedure at router R:
  for each packet w
    with probability p
      w.xval := random()
      w.yval := R
      w.flag := 1
    otherwise if w.flag = 1
      w.yval := w.yval * w.xval + R
      w.flag := 0
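A direct Python rendering of Edge1's router-side behavior follows; the field arithmetic modulus and the x-value domain are illustrative choices.

    import random

    P = 4294967311

    def edge1_mark(w, router_id, prob, n_xvals):
        if random.random() < prob:           # enter marking mode
            w['xval'] = random.randrange(n_xvals)
            w['yval'] = router_id
            w['flag'] = 1
        elif w.get('flag') == 1:             # next router completes the edge
            w['yval'] = (w['yval'] * w['xval'] + router_id) % P
            w['flag'] = 0
        return w

    # Each completed mark delivers a point (xval, yval) on the degree-1
    # polynomial y = R_a * x + R_b for an adjacent router pair (R_a, R_b).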
Here is Edge2, algebraic edge encoding with c "chunks" per "hop". Each packet is marked with ⌈32/c⌉ + ⌈log n⌉ + 1 bits. The degree of the polynomial is 2c - 1.

Marking procedure at router R:
  for each packet w
    with probability p
      w.xval := random()
      w.yval := R[1]·w.xval^{c-1} + R[2]·w.xval^{c-2} + ... + R[c]
      w.flag := 1
    otherwise if w.flag = 1
      w.yval := w.yval·w.xval^c + R[1]·w.xval^{c-1} + ... + R[c]
      w.flag := 0
Here is Edge3, which is identical to Edge2 except that each packet also has a distance field ("hop count"). Following Savage et al., we reserve five bits for the distance field. Each packet is marked with ⌈32/c⌉ + ⌈log n⌉ + 6 bits. The degree of the polynomial is 2c - 1.

Marking procedure at router R:
  for each packet w
    with probability p
      w.xval := random()
      w.yval := R[1]·w.xval^{c-1} + R[2]·w.xval^{c-2} + ... + R[c]
      w.flag := 1
      w.dist := 0
    otherwise
      if w.flag = 1
        w.yval := w.yval·w.xval^c + R[1]·w.xval^{c-1} + ... + R[c]
        w.flag := 0
      w.dist := w.dist + 1
Here is Edge4, which is identical to Edge3 except that the second router only contributes half of the bits of its router id. This lowers the degree of the polynomial, and introduces a little uncertainty into the reconstruction process (if two routers at the same distance from the victim had router id's that agreed on all of the contributed bits). Each packet is marked with ⌈32/c⌉ + ⌈log n⌉ + 6 bits. The degree of the polynomial is 1.5c - 1.

Marking procedure at router R:
  for each packet w
    with probability p
      w.xval := random()
      w.yval := R[1]·w.xval^{c-1} + R[2]·w.xval^{c-2} + ... + R[c]
      w.flag := 1
      w.dist := 0
    otherwise
      if w.flag = 1
        w.yval := w.yval·w.xval^{c/2} + R[1]·w.xval^{c/2-1} + ... + R[c/2]
        w.flag := 0
      w.dist := w.dist + 1
5.2 Algebraic Full Path Encoding
Here is the router's pseudocode for Full1, the full path encoding scheme. Each packet is marked with 32 + ⌈log n⌉ bits, where n is the number of possible x values. The degree of the path polynomial is at most L, the length of the path.

Full1 Marking procedure at router R:
  for each packet w
    with probability p
      w.xval := random()
      w.yval := 0
    w.yval := w.yval * w.xval + R
Here is the router's pseudocode for Full2, the full path encoding scheme with a distance field ("hop count"). Following Savage, we reserve five bits for the distance field, so each packet is marked with 37 + ⌈log n⌉ bits.

Full2 Marking procedure at router R:
  for each packet w
    with probability p
      w.xval := random()
      w.yval := 0
      w.dist := 0
    w.yval := w.yval * w.xval + R
    w.dist := w.dist + 1
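An end-to-end simulation sketch of Full2 along one attack path is shown below; the path, marking probability, and x-value domain are all illustrative. Grouping the received points by their dist field (the suffix length) yields, for each suffix, a set of (x, y) pairs to interpolate as in Sec. 4.1.

    import random

    P = 4294967311
    PATH = (0x0A000004, 0x0A000002, 0x0A000001)    # R4, R2, R1 toward the victim

    def send_packet(prob=0.04):
        w = {'xval': random.randrange(2**16), 'yval': 0, 'dist': 0}
        for router_id in PATH:
            if random.random() < prob:             # probabilistic reset
                w['xval'] = random.randrange(2**16)
                w['yval'] = 0
                w['dist'] = 0
            w['yval'] = (w['yval'] * w['xval'] + router_id) % P
            w['dist'] += 1
        return w

    by_suffix = {}
    for w in (send_packet() for _ in range(10000)):
        by_suffix.setdefault(w['dist'], []).append((w['xval'], w['yval']))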
6 Path Reconstruction by the Victim
In this section, we look more closely at the problem of path reconstruction by the
victim. Let k denote the number of attack paths. Let L denote the expected length of
an attack path. For simplicity, we will assume that all attack paths are very close to L
in length.
For the main scheme of Savage et al. (which uses a total of 16 bits), the complexity of path reconstruction by the victim is O(Lk^8). The exponent of eight reflects a combinatorial task that the victim must try by brute force. Of course, if they had more room to work with in their marking scheme, then the reconstruction complexity would go down. For example, if they used 23 bits for their marking scheme (and divided the "padded" router id into four 16-bit chunks), then the victim's reconstruction task reduces to O(Lk^4).
Our goal is to design algebraic schemes that improve on the reconstruction complexity
of Savage et al. There are two main algebraic reconstruction approaches that
we consider:
Reed-Solomon List Decoding: Given N points (x_1, y_1), ..., (x_N, y_N) in a field, find all polynomials of degree at most d that pass through at least m of these points. Guruswami-Sudan give an algorithm to solve this problem in time O(N^3) when N < m^2/d. An improvement by Olshevsky and Shokrollahi reduces the time to O(N^{2.5}).
More precisely, the reconstruction algorithm due to Guruswami and Sudan [15]
can be implemented in a number of ways. The most straightforward implementation
would take time O(n^3 d) to recover all edges for which we received at least
√(dn) out of n packets. However, this drops to O(n^3), requiring only slightly more packets:
√(dn(1+δ)) out of n, for any δ ≥ 1. By scaling δ appropriately, this allows us to trade
off computation time (and memory) for accuracy. A recent algorithmic breakthrough
by Olshevsky and Shokrollahi would reduce our reconstruction time even further, to
O(n^2.5) [21]. Moreover, this new algorithm is highly parallelizable (to up to O(n)
processors), which suggests that distributing the reconstruction task might speed things
up even more.
Noisy Polynomial Interpolation: Given n distinct values x_1, ..., x_n and sets S_1, ..., S_n, each of size
at most m, find all polynomials f of degree at most d such that f(x_i) is in S_i for all i.
Bleichenbacher-Nguyen give an algorithm to solve this problem whenever m < n/d,
with running time identical to the Reed-Solomon List Decoding problem. They give
other algorithms that work even when the bound m < n/d is not met.
Types of Packets: Let us assume that each packet that the victim receives is one of
three possible types. A "true packet" contains a point on a polynomial that corresponds
to a real attack path. A "bogus packet" contains a point created by an attacker
outside the periphery, and never reset by any honest router along an attack path. A
"stray packet" contains a point on a polynomial that corresponds to normal non-attack
traffic. When a denial of service attack is underway, we assume that the fraction of
stray packets is very small compared to true and bogus packets.
False Positives: A "false positive" is a polynomial that is recovered by the reconstruction
algorithm, but does not correspond to part of an actual attack path. For Reed-Solomon
list decoding, the expected number of false positives in a random sample is
about (N!=(m!(N m)!)) (1=q) m d 1 . For noisy polynomial interpolation, the expected
number of false positives in a random sample is about m n =q n d 1 . For the main
scheme of Savage et al., the expected number of false positives is about m 8 =2
When the marking scheme has no distance field, then we must also be concerned
with "bogus edges" or "bogus paths" that the attacker can cause to appear in our sam-
ple. We will consider this separately from the issue of false positives that arise at
random.
A moderate number of false positives is not a serious problem. Consider our marking
scheme Edge3. The victim reconstructs a set of candidate edges for each distance.
Each set of candidate edges includes true edges and "false positives" (but no "bogus
edges" from the attacker, assuming that no attacker is within this distance of the vic-
tim). Now the victim attempts to assemble paths by connecting edges from distance ℓ
with edges from distance ℓ + 1. There is certainly no problem unless the first endpoint
of a false positive edge from some distance ℓ matches the second endpoint of a false
positive or true edge from distance ℓ + 1.
Let f be the expected number of false positives at each distance, and let k be the
number of true edges at each distance. Then there are f expected false positives at
distance ℓ, and f + k expected false positives and true edges at distance ℓ + 1. Let M
be the number of distinct router id's or partial router id's that are possible (e.g., 2^32
for Edge3). The probability of an accidental match is less than 1 - ((M - f)/M)^(f+k),
which is very close to 1 - e^(-f(f+k)/M). (When f is close to √M, this probability is
unacceptably high.) The probability of an accidental match at any distance is less than
L times this quantity, where L is the length of the longest path.
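A few lines of arithmetic make the bound tangible; the values of f, k, M, and L below are illustrative, not taken from the paper.

import math

def accidental_match(f, k, M, L):
    per_distance = 1 - math.exp(-f * (f + k) / M)    # ~ 1 - ((M - f)/M)**(f + k)
    return per_distance, min(1.0, L * per_distance)  # union bound over L distances

print(accidental_match(f=10, k=25, M=2**32, L=20))     # negligible
print(accidental_match(f=2**16, k=25, M=2**32, L=20))  # f near sqrt(M): near-certain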
We now analyze the effectiveness of these approaches to path reconstruction. For
each approach, the best known algorithms impose constraints on the design parameters
for our marking schemes. It will be convenient for us to consider separately marking
schemes that have a distance field and marking schemes that do not.
6.1 Reconstruction With a Distance Field
When the marking scheme has a distance field, the task of the victim is simplified. The
victim can select a sample of packets for which w.dist = ℓ, for any given ℓ. As long as
no attacker is within distance ℓ of the victim, this sample will contain only true packets
with points on polynomials that were last reset by routers at distance ℓ.
6.1.1 Guruswami-Sudan Reconstruction With a Distance Field
The path reconstruction problem faced by the victim can be viewed as a Reed-Solomon
list decoding problem. The distinct points are chosen from a random sample of the
distinct points in packets that reach the victim.
The victim can filter out packets that were last reset at distance ℓ, for every ℓ. This
simplifies the Reed-Solomon list decoding problem, by creating a smaller problem
instance for each distance. We need nk packets from distance ℓ to have n distinct
points from each of k polynomials. The victim collects the largest possible sample
of distinct points from packets with w.dist = ℓ for every ℓ. We need N < n^2/d to
reconstruct the polynomials using the Guruswami-Sudan algorithm. Lastly, we need
(nk)^3 < k^8 for the efficiency of reconstruction to improve on Savage et al.
This has at least a few solutions, but the improvements are not so compelling. For
example, using Edge3 with three 11-bit chunks can be competitive with Savage et al.
for certain values of k.
6.1.2 Bleichenbacher-Nguyen Reconstruction With a Distance Field
The problem faced by the victim can be viewed as a noisy polynomial interpolation
problem. The values x_1, ..., x_n are all of the possible x values. Each set S_i contains
all of the distinct y values such that (x_i, y) occurs in some packet within a random
sample of all received packets. The polynomial f could be any of the polynomials that
corresponds to a true attack path or a stray path.
The victim could proceed as follows. He looks at a sample of N packets (for suitably
large N), and for each x_i he chooses a set S_i of size m from all of the y values occurring with x_i
in the sample. If the number of distinct y values for which (x_i, y) occurs in the sample
is greater than m, then the victim chooses which m values to include in S_i at random.
The victim can filter out packets that were last reset at distance ℓ, for every ℓ. This
creates a smaller problem instance for each distance. For each problem instance, the
number of S_i sets is equal to n, the number of possible x values. The size of each S_i
is k, the number of attack paths. The degree of each polynomial is at most d, which
depends on the particular algebraic encoding method we are using.
False Positives: There are k^n ways of taking one y value from each set. Each of these
will actually be a polynomial of degree d or less with probability at most 1/q^(n-d-1).
Here q is the size of the finite field, which is essentially the number of distinct y values.
We need the expected number of false positives k^n/q^(n-d-1) to be reasonably small.
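As a sanity check, the estimate k^n/q^(n-d-1) can be evaluated directly; the parameter choices below anticipate the examples of Section 6.1.2.

def expected_false_positives(k, n, d, q):
    return k ** n / q ** (n - d - 1)

# Edge3-style parameters (Example 1 below): n = 12, d = 7, q = 256
print(expected_false_positives(16, 12, 7, 256))  # 16^12 / 2^32 = 65536.0
# Edge4-style parameters (Example 2 below): n = 12, d = 5, q = 256
print(expected_false_positives(16, 12, 5, 256))  # 16^12 / 2^48 = 1.0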
For the basic reconstruction algorithm of Bleichenbacher-Nguyen, we need k <
n/d. Three other algorithms by Bleichenbacher-Nguyen work for many k, n, d even if
they do not satisfy k < n/d.
A "meet-in-the-middle" algorithm has running time (n - d) * m^(n/2), with precomputation
that uses memory which is O(m^(n/4) * log q). Note that this is independent
of k.
A "Gröbner basis reduction" algorithm computes a Gröbner basis reduction on a
system of k polynomial equations in d + 1 unknowns. The best known Gröbner
basis algorithms are super-exponential in d, but reasonably efficient for small d
(e.g., d < 20).
A "lattice basis reduction" algorithm performs a lattice basis reduction on a
high-dimensional lattice in Z^(nk) over a finite field of size about n. This
method will be ineffective for our application because the size of the finite field
is too small.
For efficiency over Savage et al., we need (nk)^2.5 < k^8. Here are some interesting
instantiations of our schemes with respect to this method of reconstruction:
Example 1: Edge3 encoding with 12 distinct x values (represented in a 4-bit xval
field) and an 8-bit yval field. Then the noisy polynomial problem has 12 S_i sets, where
the size of each S_i is k, the number of attack paths. The degree of each polynomial is
at most 7. The size of the finite field is 256. The meet-in-the-middle algorithm takes
time 8k^6, which compares favorably to the k^8 required by Savage et al. The Gröbner
basis reduction algorithm should also be reasonably efficient here. The total size of
this marking scheme is 18 bits. However, the number of false positives is unacceptably
high here: k^12/2^32.
Example 2: Edge4 encoding with 12 distinct x values (in a 4-bit xval field) and an 8-
bit yval field. Then the degree of each polynomial is at most 5. The running time
for the meet-in-the-middle algorithm is about 8k^6. The running time for the Gröbner
basis reduction algorithm is faster than in the previous example. The expected number
of false positives is lower than in the previous example: k^12/2^48. If k = 16,
we expect about one false positive at each distance. (Of course, the risk from false
positives is slightly greater than in the previous case, because the number of possible
partial router id's is only 2^22. Thus there will be slightly more accidental matches of
endpoints involving a bogus edge, but it is not significantly worse.) The total size of
this marking scheme is 18 bits.
Example 3: Edge4 encoding with 11-bit chunks, 10 distinct x values and an 11-bit yval field.
Then the degree of each polynomial is at most 4 (using 22 bits of second router id).
Running time for meet-in-the-middle is about 6k^5, versus k^8 for Savage et al. The number
of false positives is about k^10/2^55, lower than for Savage et al.; for example, this is about 2^-15 expected false
positives for k = 16 and 2^-5 false positives for k = 32, which are quite
manageable. The total size of this marking scheme is 21 bits.
Example 4: Edge4 encoding with 8 distinct x values and an 11-bit yval field, giving a degree
of each polynomial of at most 4. Running time for meet-in-the-middle is about 4k^4.
The number of false positives is about k^8/2^33 (e.g., 1/2 when k = 16).
The total size of this marking scheme is 20 bits.
Example 5: Edge4 encoding with 12 distinct x values and an 11-bit yval field, giving a degree
of each polynomial of at most 4. Running time for meet-in-the-middle is about 8k^6. The number
of false positives is about k^12/2^77 (e.g., 2^-5 when k = 64 and 2^-17 when k = 32,
both of which are quite manageable). The total size of this marking scheme is 21 bits.
6.2 Reconstruction Without a Distance Field
For reconstruction when the marking scheme does not have a distance field, we do not
achieve schemes that are competitive with Savage et al. Our analysis will begin with
some facts and simplifying assumptions about the distribution of received packets by
the victim.
6.2.1 Distribution of Received Packets
Let B_i be the fraction of packets arriving on the ith attack path that reach the victim
as bogus packets. Let T_i be the fraction of packets on the ith attack path that reach the
victim as true packets. Let F_i be the fraction of packets on the ith attack path that reach
the victim as true packets that were only reset by the furthest router on that path. By
our assumptions, B_i + T_i = 1.
For all of the encoding schemes (unless "marking mode" is used), we have F_i =
p(1 - p)^(L-1). Viewed as a function of p over [0, 1], this fraction takes on its maximum
value at p = 1/L, which implies that F_i is at most about 1/(eL).
For all of the encoding schemes, we have B_i = (1 - p)^L.
When p = 1/L, this implies that B_i is about 1/e. The fact that there can be
such a large fraction of bogus packets arriving on each path has serious consequences
for our marking schemes without a distance field.
Let B, T, F be the fractions of bogus packets, true packets, and furthest packets for
all paths to the victim. If we assume that the arrival rate of packets on all attack paths
is approximately the same, then B, T, and F are simply the averages of the B_i, T_i, and F_i.
When "marking mode" is used, the probability that a router is not in reset mode must
also be taken into account.
Coupon Collector's Bound: A sample of λC log C elements, drawn with replacement
from a set of C values according to the uniform distribution, is very likely to contain all C possible
values, for some small constant λ.
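This bound is easy to confirm empirically; the constant λ = 2 below is an arbitrary small choice.

import math, random

C, lam = 1000, 2
draws = int(lam * C * math.log(C))
seen = {random.randrange(C) for _ in range(draws)}
print(len(seen) == C)  # True w.h.p.; expected number of missed values is ~ C**(1 - lam)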
6.2.2 Guruswami-Sudan Reconstruction Without a Distance Field
The victim can choose a random sample of distinct points in the packets that reach him.
Without a distance field, he cannot partition the packets into smaller samples by last
reset distance.
Assume that the routers are using an edge encoding scheme, and assume that we
succeed if we can reconstruct all of the furthest edge polynomials. Let us also assume
that we will search for polynomials that pass through n distinct points, where n is the
number of distinct x values.
There are actually three distinct levels of reconstruction success that can be con-
sidered: (a) The sample of N points contains at least n points on every furthest edge
polynomial with overwhelming probability; (b) The sample of N points contains at
least n points on some furthest edge polynomial with overwhelming probability; (c)
The sample of N points contains at least n points on some furthest edge polynomial
with non-negligible probability q.
For case (a), the Guruswami-Sudan algorithm needs to be applied only once to a
random sample of N points. For case (b), the algorithm needs to be applied λk log k
times to independent random samples of N points (coupon collector's problem on the
set of k furthest edge polynomials). For case (c), the algorithm needs to be applied
λk log k/q times to independent random samples of N points.
For case (a), it suffices to have N ≥ (λnk log(nk))/(p(1 - p)^(L-1)). That is because
we are very likely to get a complete set of all n possible x values for all k edge poly-
nomials. This implies that λnk log(nk) "samples" are sufficient. By the analysis of
the preceding subsection, a "sample" from a furthest edge polynomial is expected in
a p(1 - p)^(L-1) fraction of all of the packets. When combined with the Guruswami-Sudan
bound, we get n^2/d > (λnk log(nk))/(p(1 - p)^(L-1)). Assuming that p = 1/L,
we have n/log(nk) > λdk(L - 1)e.
For case (b), it suffices to have N ≥ M_(n,k)/(p(1 - p)^(L-1)), where M_(n,k) is the answer
to the following "occupancy problem": Throw M_(n,k) balls into k bins, and expect to
find λn log n balls in the bin with the most balls. By the Pigeonhole Principle, it is
certainly true that M_(n,k) < λnk log n. In fact, the actual value for M_(n,k) is quite close to
this. Combined with the Guruswami-Sudan bound, we get n^2/d > λnk log n/(p(1 - p)^(L-1)).
Assuming that p = 1/L, we have
n/log n > λdk(L - 1)e.
For case (c), we can reduce the value of M_(n,k) a little, but it doesn't appear to be
significant for our purposes.
Of course, for any of (a), (b), or (c), we can reduce N by eliminating from the
sample any duplicate points. Since as many as 1/e of all packets in the sample could
be bogus packets from the attacker, removing duplicate points will have limited benefit.
We can find no solution that yields a marking scheme that is more efficient than Savage
et al. Moreover, for any plausible instantiation, the number of false positives and bogus
edges (or bogus paths) is unacceptably high.
6.2.3 Bleichenbacher-Nguyen Reconstruction Without a Distance Field
The victim can proceed as described at the start of Section 6.1.2, although without a
distance field the packets cannot be partitioned by last reset distance.
Figure 2: The IP header. Darkened areas represent underutilized bits (Version, Header Length, Type of Service, Total Length, Fragment ID, Flags, Fragment Offset, Time to Live, Protocol, Header Checksum, Source IP Address, Destination IP Address).
Suppose that N is chosen to be large enough that all of the points on some furthest
polynomial are included with high probability. The probability that a given S_i
includes a point from f is at least m/n. The probability that reconstruction succeeds is
at least (m/n)^n. The basic algorithm of Bleichenbacher and Nguyen solves the noisy
polynomial interpolation problem whenever m < n/k.
This approach does not seem too promising when k > 1. In this case, (m/n)^n < 2^-n.
Thus reconstruction is unlikely to succeed.
Alternatively, the victim can choose m = n - c for some small positive integer c. The
probability that reconstruction succeeds is at least (1 - c/n)^n, which is about e^-c. Un-
fortunately, either the number of false positives is unacceptably large, or the success
probability is unacceptably small.
Another approach would be to have the victim bias his sample with respect to how
frequently different points occurred in the packets that reached him. Unfortunately, this
does not appear to work well either: since the attacker can inflate the frequency of
bogus points, the victim will not be able
to recognize the true packets that contain points from the furthest polynomials.
We conclude that when the marking scheme does not have a distance field, we do
not see how to use the Bleichenbacher-Nguyen method of polynomial reconstruction,
at least using their simplest algorithm. It is possible that their other algorithms, e.g.,
based on Grobner basis reduction, might be more effective.
7 Encoding Path Data
We now need a way to store our traceback data in IP packets. We will try to maximize
the number of bits available to us while preserving (for the most part) backwards
compatibility.
7.1 IP options
An IP option seems like the most reasonable alternative for storing our path informa-
tion. Unfortunately, most current routers are unable to handle packets with options in
hardware [3]. Even if future routers had this ability, there are a number of problems associated
with this approach as presented by Savage, et al [24]. For all of these reasons
we have concluded that storing data in an IP option is not feasible.
7.2 Additional Packets
Instead of trying to add our path data to the existing IP packets, we could instead send
the data out of band using a new protocol that would encapsulate our data. While this
may have limited uses for special cases (such as dealing with IP fragments), a general
solution based on inserting additional packets requires a means of authenticating these
packets. This is because, presumably, the number of inserted packets is many orders
of magnitude less than the number of packets inserted by the attacker. Thus, because
we assume that an attacker can insert any packet into the network, the victim can be
deluged with fake traceback packets, preventing any information from being gained from the
legitimate packets.
7.3 The IP Header
Our last source of bits is the IP header. There are several fields in the header that may
be exploited for bits, with varying tradeoffs. As shown in Figure 2, we have found 25
bits that might possibly be used.
7.3.1 The TOS Field
The type of service field is an 8 bit field in the IP header that is currently used to allow
hosts a way to give hints to routers as to what kind of route is important for particular
packets (maximized throughput or minimized delay, for example) [1]. This field has
been little used in the past, and, in some limited experiments, we have found that
setting this field arbitrarily makes no measurable difference in packet delivery. There
is a proposed Internet standard [20] that would change the TOS field to a "differentiated
services field." Even the proposed DS field has unused bits; however, there are already
other proposed uses for these bits (e.g., [23]).
7.3.2 The ID Field
The ID field is a 16-bit field used by IP to permit reconstruction of fragments. Naive
tampering with this field breaks fragment reassembly. Since less than 0.25% of all
Internet traffic is fragments [26], we think that overloading this field is appropriate.
A more in-depth discussion of the issues related to its overloading can be found in
Savage's work [24].
7.3.3 The Unused Fragment Flag
There is an unused bit in the fragment flags field that current Internet standards require
to be zero. We have found that setting this bit to one has no effect on current imple-
mentations, with the exception that when receiving the packet, some systems will think
it is a fragment. The packet is still successfully delivered however, because it looks to
those systems as though it is fragment 1 of 1.
7.3.4 Our Selection
We could choose to use up to 25 bits out of the ID, flag, and TOS fields. This would
suffice for all of the examples given in Section 6.1.2. The implications of using multiple
fields in the IP header simultaneously are modest, since the lost functionality appears to
be the union of what would break due to overwriting each field separately. The impact
on header checksum calculation is modest, as this can be done in hardware using the
standard algorithm.
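For reference, the standard algorithm is just the RFC 1071 ones'-complement sum, so recomputing it after overwriting the TOS, ID, or flag bits is cheap (an incremental update per RFC 1624 is cheaper still); the sketch below assumes the checksum field itself has been zeroed before the computation.

def ipv4_header_checksum(header: bytes) -> int:
    # RFC 1071 ones'-complement sum over 16-bit big-endian words;
    # the checksum field must be zero when this is computed.
    if len(header) % 2:
        header += b"\x00"
    s = 0
    for i in range(0, len(header), 2):
        s += (header[i] << 8) | header[i + 1]
        s = (s & 0xFFFF) + (s >> 16)  # fold in the end-around carry
    return (~s) & 0xFFFF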
Of course, the algebraic marking scheme is independent of the choice of bits. The
decision of where to put the marking data must be seen as conditional, subject to change
as new standards arise.
7.4 IPsec
The interoperability of a traceback scheme with IPsec should be considered. The Encapsulating
Security Payload (ESP) [17], which encrypts a datagram for confidentiality,
poses no problem for a traceback scheme, as it does not assume anything about the
surrounding datagram's headers. The Authentication Header (AH) [16], however, does present
an issue for IPv4. Using the AH, the contents of the surrounding datagram's headers
are hashed. Certain header fields are considered mutable (e.g., Fragment Offset), and
not included in the hash computation. Unfortunately, the mutable fields in the IPv4
header are unusable for traceback: they either are necessary for basic IP functionality,
or reusing them breaks backward compatibility with current IP implementations.
7.5 IPv6
Since IPv6 does not have nearly as many backwards compatibility issues as IPv4,
the logical place to put traceback information is a hop-by-hop option in the IPv6
header [10]. However, schemes such as those presented here are still valuable because
they use a fixed number of bits per packet thereby avoiding the generation of
fragments. Unlike the case in IPv4, we can set the appropriate bit in the Option Type
field to indicate that the data in the option is mutable, and should be treated as zero for
the purposes of the Authentication Header.
We have not worked out the best way to accommodate IPv6's 128-bit addresses,
but note that due to alignment issues, one is likely to select an option length of 8n + 6
bytes, n ≥ 0. It would likely be the case that n = 0.
8 Conclusion and Future Work
We have presented a new algebraic approach for providing traceback information in IP
packets. Our approach is based on mathematical techniques that were first developed
for problems related to error correcting codes and machine learning. Though we have
proposed it in the context of a probabilistic packet marking scheme, our algebraic approach
could also be applied to an out-of-packet scheme. The resulting scheme would
have the desirable property of allowing multiple routers to act on the extra packet while
it remains at a small constant size. Our marking schemes have applications for other
network management scenarios besides defense against denial of service.
One important open problem is to find better instantiations of the specific methods
we have proposed. In particular, a successful approach based on full path tracing would
be attractive. More generally, it would be interesting to explore resource and security
tradeoffs for the many parametrizations of our schemata. Lower bounds on the size of
any marking scheme would be most helpful. It would also be interesting to explore the
use of algebraic geometric codes in marking schemes.
Acknowledgments
We would like to thank David Goldberg and Dan Boneh for valuable discussions. We
would also like to thank Dawn Song, Adrian Perrig, Ramarathnam Venketesan, Glenn
Durfee, and the anonymous referees for helpful comments on earlier versions of this
paper.
--R
Type of service in the internet protocol suite.
Reconstructing algebraic functions from mixed data.
Personal Communications
ICMP traceback messages.
correction of algebraic block codes.
Algebraic Coding Theory.
Bleichenbacher and Nguyen.
Tracing anonymous packets to their approximate source.
CERT coordination center denial of service attacks.
Internet protocol
"stacheldraht"
"Tribe Flood Network"
Using router stamping to identify the source of IP packets.
Network ingress filtering: Defeating denial of service attacks which employ IP source address spoofing.
Improved decoding of Reed-Solomon and algebraic-geometric codes
IP authentication header
IP encapsulating security payload (ESP)
The Art of Computer Programming
On the effectiveness of probabilistic packet marking for ip traceback under denial of service attack.
Definition of the Differentiated Services field (DS field) in the IPv4 and IPv6 headers.
A displacement approach to efficient decoding of algebraic-geometric codes
Numerical Recipes in FORTRAN: The Art of Scientific Computing.
A proposal to add Explicit Congestion Notification (ECN) to IP.
Practical network support for IP traceback.
Advanced and authenticated marking schemes for IP traceback.
Providing guaranteed services without per flow management.
Algorithmic issues in coding theory.
Decoding of Reed Solomon codes beyond the error-correction bound
--TR
Numerical recipes in FORTRAN (2nd ed.)
The art of computer programming, volume 2 (3rd ed.)
Decoding of Reed Solomon codes beyond the error-correction bound
A displacement approach to efficient decoding of algebraic-geometric codes
Providing guaranteed services without per flow management
Practical network support for IP traceback
Using router stamping to identify the source of IP packets
Algorithmic Issues in Coding Theory
--CTR
Karthik Lakshminarayanan , Daniel Adkins , Adrian Perrig , Ion Stoica, Taming IP packet flooding attacks, ACM SIGCOMM Computer Communication Review, v.34 n.1, January 2004
Hikmat Farhat, Protecting TCP services from denial of service attacks, Proceedings of the 2006 SIGCOMM workshop on Large-scale attack defense, p.155-160, September 11-15, 2006, Pisa, Italy
Florian P. Buchholz , Clay Shields, Providing process origin information to aid in computer forensic investigations, Journal of Computer Security, v.12 n.5, p.753-776, September 2004
David G. Andersen, Mayday: distributed filtering for internet services, Proceedings of the 4th conference on USENIX Symposium on Internet Technologies and Systems, p.3-3, March 26-28, 2003, Seattle, WA
Katerina Argyraki , David R. Cheriton, Loose source routing as a mechanism for traffic policies, Proceedings of the ACM SIGCOMM workshop on Future directions in network architecture, August 30-30, 2004, Portland, Oregon, USA
Haining Wang , Danlu Zhang , Kang G. Shin, Change-Point Monitoring for the Detection of DoS Attacks, IEEE Transactions on Dependable and Secure Computing, v.1 n.4, p.193-208, October 2004
Andrey Belenky , Nirwan Ansari, On deterministic packet marking, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.10, p.2677-2700, July, 2007
Sherif Khattab , Rami Melhem , Daniel Moss , Taieb Znati, Honeypot back-propagation for mitigating spoofing distributed Denial-of-Service attacks, Journal of Parallel and Distributed Computing, v.66 n.9, p.1152-1164, September 2006
Brent Waters , Ari Juels , J. Alex Halderman , Edward W. Felten, New client puzzle outsourcing techniques for DoS resistance, Proceedings of the 11th ACM conference on Computer and communications security, October 25-29, 2004, Washington DC, USA
Hassan Aljifri, IP Traceback: A New Denial-of-Service Deterrent?, IEEE Security and Privacy, v.1 n.3, p.24-31, May
Zhiqiang Gao , Nirwan Ansari, A practical and robust inter-domain marking scheme for IP traceback, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.3, p.732-750, February, 2007
Christos Siaterlis , Vasilis Maglaris, One step ahead to multisensor data fusion for DDoS detection, Journal of Computer Security, v.13 n.5, p.779-806, October 2005
Christos Douligeris , Aikaterini Mitrokotsa, DDoS attacks and defense mechanisms: classification and state-of-the-art, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.44 n.5, p.643-666, 5 April 2004 | traceback;internet protocol |
505628 | Extraction and Optimization of B-Spline PBD Templates for Recognition of Connected Handwritten Digit Strings. | Recognition of connected handwritten digit strings is a challenging task due mainly to two problems: poor character segmentation and unreliable isolated character recognition. In this paper, we first present a rational B-spline representation of digit templates based on Pixel-to-Boundary Distance (PBD) maps. We then present a neural network approach to extract B-spline PBD templates and an evolutionary algorithm to optimize these templates. In total, 1,000 templates (100 templates for each of 10 classes) were extracted from and optimized on 10,426 training samples from the NIST Special Database 3. By using these templates, a nearest neighbor classifier can successfully reject 90.7 percent of nondigit patterns while achieving a 96.4 percent correct classification of isolated test digits. When our classifier is applied to the recognition of 4,958 connected handwritten digit strings (4,555 2-digit, 355 3-digit, and 48 4-digit strings) from the NIST Special Database 3 with a dynamic programming approach, it has a correct classification rate of 82.4 percent with a rejection rate of as low as 0.85 percent. Our classifier compares favorably in terms of correct classification rate and robustness with other classifiers that are tested. | Introduction
In the field of automated handwritten document processing, handwritten
digit recognition still poses a great challenge to scientists and practitioners. The
difficulties arise not only from the different ways in which a single digit can be written,
but also from varying requirements imposed by the specific applications, such as recognition
of connected digit strings in ZIP codes reading and cheque reading.
The techniques for recognition of connected digit strings can be categorized into two
classes: segmentation-based algorithms [1], [2], [3], [4], [5], [6], and segmentation-free
algorithms [7], [8], [9], [10]. Recently, some techniques have also been reported that combine
the two approaches to achieve more reliable performance [11], [12]. In the first
class of algorithms, a segmentation procedure is applied before recognition, while
the latter class combines segmentation and recognition.
Segmentation-based techniques were first applied to the recognition of connected char-
acters, while in recent years more attention has been focused on segmentation-free
techniques. One reason is that segmentation is ambiguous and prone to failure. Another
reason that researchers are paying more attention to segmentation-free techniques, especially
in handwritten word recognition, is the belief that the lexical information used in
post-processing is powerful enough to achieve reliable performance, so that only a fast,
not necessarily very robust, classifier is needed [13]. However, in the recognition of connected digit strings,
little lexical information can be applied in post-processing to help choose the correct
result from a set of recognition candidates, as is done in word recognition. From the experience
gained in our previous research on segmentation-based handwritten digit string
recognition [14], [5], [6], we believe that, for both techniques, a reliable and robust classifier
that can distinguish legible digits from unsure patterns is of utmost importance to system
performance; this is the motivation for the new classifier introduced in this paper.
Fig. 1. Some samples selected randomly from NIST Special Database 3: (a) single digits; (b) connected
digit strings: from left to right, two, three, and four connected digits.
Fig. 1 shows some samples of single and connected digits. We can see that the degradation
caused by connection and overlap increases the difficulty of recognition.
Many new techniques have been presented in recent years for the recognition of isolated
digits; they differ in the feature extraction and classification methods employed. Some
comprehensive reviews were given by Govindan [15] and Impedovo [16]. S. Lee et al. gave
a classifier based on a radial basis network and spatio-temporal features [17]; Z. Chi et
al. proposed a classifier with combined ID3-derived fuzzy rules and Markov chains [18];
another classifier, based on a multilayer cluster neural network and wavelet features, was
presented by S. W. Lee et al. [19]; and D. H. Cheng et al. presented a classifier based
on morphology [20]. Besides these, Trier presented a survey of feature extraction
techniques [21].
Another technique applied in classifier design is template matching, which is one of
the earliest pattern recognition techniques applied to digit recognition. In recent years, it
has re-attracted a lot of attention from many researchers because it is intuitive and may
have the ability to achieve practical recognition performance. A number of studies
have been reported in the literature which have applied template-based techniques to
digit recognition. H. Yan proposed an optimized nearest-neighbor classifier for the
recognition of handprinted digits [22]; the templates are 8 × 8 gray-scale images, rescaled
from the original 160 × 160 normalized binary images. Wakahara uses iterated local affine
transformation (LAT) operations to deform binary images to match templates [23]; the
templates here are 32 × 32 binary images. Nishida presented a structure-based template
matching algorithm [24]; the templates are composed of straight lines, arcs and
corners, and the matching is based on geometrical deformation of the templates.
Cheung et al. proposed a kind of template based on splines [25]: digits are modeled
with a spline, and the spline parameters are assumed to have a multivariate Gaussian distribution.
Revow et al. give another digit recognition algorithm based on spline templates [26]; the
templates are elastic spline curves with ink-generating Gaussian "beads" strung along
their length. A. K. Jain et al. presented a deformable template matching algorithm based on
object contours [27], [28]; the recognition procedure minimizes an objective function
by iteratively updating the transformation parameters to alter the shape of the templates
so as to obtain the best match between the templates and unknown digits.
In this paper, we present a new template representation scheme, a neural network approach
for extracting templates, and an evolutionary algorithm for optimizing the templates based
on this new representation. Each template, represented by a distance-
to-boundary distribution map, is approximated by a rational B-spline surface with a set
of control knots. In training, a cost function of the amplitude and gradient of the
distribution map is first minimized by a neural network; then the templates are optimized by
an optimization algorithm based on evolutionary computation. In the matching step,
a similarity measure that takes into account both the amplitude and gradient of the
distribution map is adopted to match an input pattern to templates.
The rest of this paper is organized as follows. The representation of the new templates is
given in Section II; Section III introduces the neural network for template extraction; the
template optimization based on an evolutionary algorithm is proposed in Section IV;
Section V presents the experimental results and discussion; and the conclusion is given in Section
VI.
II. Representation of Templates
In our algorithm, the templates are represented by a rational B-spline surface with a set
of control knots. Curve and surface representation and manipulation using the B-spline
form (non-rational or rational) are widely used in geometric design. In 1975, Versprille
proposed rational B-splines for the geometric design of curves and surfaces [29]. That work
outlined the important properties of rational B-splines, such as continuity, the local control
property, etc. In recent years, B-spline curves (snakes) have been used in the description
of objects [30], [31], as well as digits [25], [26]. With the B-spline estimation, information
concerning the shape of desired objects can be incorporated into the control parameters
of curve- or surface-based templates.
However, there is a shortcoming in these approaches: the estimation of a B-spline curve
involves many uncertainties and is prone to failure, because the corresponding parameters of the
template and the input data must reflect information about the same aspect of the image. In
our algorithm, we therefore use a B-spline surface to estimate the distribution map of the digit image.
The number of control points of the surface and their positions are pre-specified, so
for each control point the control area is fixed, and the shortcoming can be avoided.
Fig. 2. Definition of the distance-to-boundary distribution (O_fg: a foreground pixel; O_bg: a background pixel; O_b: a pixel on the boundary).
Fig. 3. Converting binary data into a distribution map (one-dimensional example): input binary data, distance map, distribution map.
A. Representation of Binary Digit Images
A binary digit image can be converted into a distribution map according to the distance
from a pixel O_(x,y) to the boundary between the foreground and background, as shown in
Fig. 2. For a foreground pixel we have d(x, y) ≥ 0; for a background pixel we assume
it has a negative value. That is,

  d(x, y) = |D_fg(x, y)| if O_(x,y) is a foreground pixel, and
  d(x, y) = -|D_bg(x, y)| if O_(x,y) is a background pixel,    (1)

where D_fg and D_bg denote the distances to the boundary measured on the foreground and background sides.
Fig. 4. Control points of a template.
Let I(x, y) denote the value of pixel O_(x,y) in the distribution map; it can be defined via
a monotone transformation of d(x, y), where d_max is the maximum value in the distribution map and a
constant is chosen to make sure that a point O_b on the boundary has I(O_b) = 0.5, as shown in Fig. 3.
Obviously, the values in the distribution map reflect the posterior possibility of pixels in the input binary image
belonging to the foreground, and the ridges of the map form the skeleton of the digit.
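A map of this kind is straightforward to compute with a distance transform. The sketch below uses SciPy; the linear squashing function is an assumption on our part, since the paper only requires that boundary pixels map to I = 0.5.

import numpy as np
from scipy.ndimage import distance_transform_edt

def distribution_map(binary_img: np.ndarray) -> np.ndarray:
    fg = binary_img > 0
    # Signed pixel-to-boundary distance d(x, y): >= 0 on foreground,
    # negative on background, as in Eq. (1).
    d = distance_transform_edt(fg) - distance_transform_edt(~fg)
    d_max = np.abs(d).max() + 1e-9
    return 0.5 * (d / d_max + 1.0)  # assumed linear transform; boundary maps near 0.5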
B. Templates Represented by a Rational B-spline Surface
Assume that the rational B-spline surfaces for the templates are {S(P^j)},
where P^j = {p^j_kl}, k, l = 0, ..., N-1,
are the control points of template j, as shown in Fig. 4.
The rational B-spline surface can be defined as

  S_j(x, y) = B_r(x)^T P^j B_r(y) = sum_k sum_l p^j_kl B_k,r(x) B_l,r(y),    (3)

where r is the order of the B-spline basis, and B_r(x) is a vector of basis
functions given by

  B_r(x) = (B_0,r(x), B_1,r(x), ..., B_(N-1),r(x))^T.

Each basis function is defined by the Cox-de Boor recursion:

  B_i,1(x) = 1 if u_i <= x < u_(i+1), and 0 otherwise;
  B_i,r(x) = ((x - u_i)/(u_(i+r-1) - u_i)) B_i,r-1(x) + ((u_(i+r) - x)/(u_(i+r) - u_(i+1))) B_(i+1),r-1(x),

where u = (u_0, u_1, ..., u_(N_u - 1)) is a knot vector of length N_u = N + r. It is assumed that
u_0 <= u_1 <= ... <= u_(N_u - 1). The values of u_i determine the positions of the control knots and should be pre-
defined. Here, to simplify the B-spline surface, we define an open uniform knot vector

  u = (0, ..., 0, u_r, ..., u_(N-1), 1, ..., 1),

with r zeros at the beginning, r ones at the end, and uniformly spaced interior knots. In most applications,
the number of input data N_i >> N × N, the number of control points in a template.
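For completeness, here is a self-contained evaluator for the (non-rational) tensor-product form of the surface above, using the Cox-de Boor recursion and the open uniform knot vector just defined; the order r = 4 and the 11 × 11 control grid mirror the experiments in Section V, while the random control points are only a placeholder.

import numpy as np

def open_uniform_knots(N, r):
    # r zeros and r ones at the ends, uniformly spaced interior knots.
    interior = np.linspace(0.0, 1.0, N - r + 2)[1:-1]
    return np.concatenate([np.zeros(r), interior, np.ones(r)])

def bspline_basis(i, r, u, t):
    if r == 1:
        inside = t[i] <= u < t[i + 1]
        at_end = u == 1.0 and t[i] < t[i + 1] == 1.0  # close the last interval
        return 1.0 if (inside or at_end) else 0.0
    a = (u - t[i]) / (t[i + r - 1] - t[i]) if t[i + r - 1] > t[i] else 0.0
    b = (t[i + r] - u) / (t[i + r] - t[i + 1]) if t[i + r] > t[i + 1] else 0.0
    return a * bspline_basis(i, r - 1, u, t) + b * bspline_basis(i + 1, r - 1, u, t)

def surface(P, r, x, y):
    N = P.shape[0]                               # P: N x N control-point grid
    t = open_uniform_knots(N, r)
    Bx = np.array([bspline_basis(k, r, x, t) for k in range(N)])
    By = np.array([bspline_basis(l, r, y, t) for l in range(N)])
    return Bx @ P @ By                           # S(x, y) = B_r(x)^T P B_r(y)

P = np.random.rand(11, 11)
print(surface(P, r=4, x=0.3, y=0.7))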
III. Template Extraction
The template extraction algorithm is adapted from Yan's optimized nearest-neighbor
classifier [32], which can be presented as a multi-layer neural network, as shown in Figure
5. The input data is a distribution map I. The hidden layer contains the patterns to be
selected; each hidden node corresponds to a pattern. z = (z_1, ..., z_N) is the output
of the network, corresponding to N classes.
Fig. 5. The template extraction model (input layer, hidden layer, and output layer).
The output of hidden node j is given by

  y_j = φ_j(I, S(P^j)),    (8)

where φ_j is a distance function that measures the similarity between the input feature map I and
pattern S(P^j).
The connection weight from hidden node j to output node m is w_jm, so the output of
output node m is

  z_m = sum_j w_jm y_j,

where w_jm can be a constant or a linear function, or even a nonlinear function, but must
be pre-defined.
Assume that the desired and actual activities of output node m are z_m0 and z_m,
respectively, for input I. Then for each j, we need to find the p^j_kl
so that the following energy function is minimized:

  E = (1/2) sum_m (z_m - z_m0)^2.

According to the generalized delta rule, a locally minimal p^j_kl can be found by successively
applying an increment given by

  Δp^j_kl = -α ∂E/∂p^j_kl = -α sum_m (∂E/∂z_m)(∂z_m/∂y_j)(∂y_j/∂p^j_kl),

where α is a learning factor.
where ff is a learning factor.
In this algorithm, not only the value of the distribution map, but also the gradient
directions are considered:
1. Assuming the input distribution map is continuous, we have
where C s is a smooth factor, and
and fi j (x; y) is the angle between the gradient of prototype S j (x; y) and the gradient of
the distribution map I(x; y) at point O (x;y) . Obviously, 0 OE 1;j 1 and 0 OE 2;j 1. If
When an input image is given, its distribution map I can be obtained, so are @I
@x
@x
and
I 2
dy, therefore we have
sum
Z 1Z 1i @S j
@x I 0
@y I 0
x
I 0
x
Considering the differentiation property of B-splines [33],

  ∂S_j(x, y)/∂x = sum_k sum_l q^(x,j)_kl B_k,r-1(x) B_l,r(y),

where q^(x,j)_kl = (r - 1)(p^j_(k+1)l - p^j_kl)/(u_(k+r) - u_(k+1)), and similarly

  ∂S_j(x, y)/∂y = sum_k sum_l q^(y,j)_kl B_k,r(x) B_l,r-1(y),

with q^(y,j)_kl = (r - 1)(p^j_k(l+1) - p^j_kl)/(u_(l+r) - u_(l+1)).
The gradients of the templates are therefore also rational B-spline surfaces.
September 16, 1998
kl
kl
kl
kl
I 2
sum
I
e kl
(B i;r (x)B j;r (y)) dx dy (24)
kl
I
x
dx dy (25)
where e kl is the effective control region of p kl and
nh
x I 0
y
(S y
io
io
In order to simplify the extraction algorithm, a clustering strategy is adopted: at
each iteration of the template extraction procedure, for an input distribution map, instead
of updating all templates, only the template with the maximum φ_j is updated. Accordingly, w_jm
is defined to be 1 if j ∈ S_m and 0 otherwise,
where S_m is the set in which each index corresponds to a hidden node that represents
a prototype belonging to the class represented by output node m, and m is the class
to which the input distribution map belongs. The increment Δp^j_kl for the winning
template then follows from the delta rule above, restricted to the effective control
region e_kl.
IV. Template Optimization by an Evolutionary Algorithm
Evolutionary algorithms for optimization have been studied for more than three decades,
since L. J. Fogel et al. published the first works on evolutionary simulation. Many years of
research and application have demonstrated that evolutionary algorithms, which simulate
the search process of natural evolution, are powerful tools for global optimization. Current
evolutionary algorithms (EAs) include Evolutionary Programming (EP), Evolution
Strategies (ES), Genetic Algorithms (GA) and Genetic Programming (GP) [34]. T. Bäck
et al. gave a comparative study of the first three approaches [35], and an introduction
to evolutionary techniques for optimization is given by D. B. Fogel [36].
Several applications of evolutionary template optimization have been reported in [37],
[38], [39], [40]. However, in these algorithms, only one best template is extracted from
one set of training samples. Obviously, for most object recognition problems, only
one template per object class is not enough for a reliable recognition system; more
templates should be extracted to achieve better performance. Sarkar [41] presented
a fitness-based clustering algorithm, which can be adapted for template optimization.
In that algorithm, one opponent (parent or offspring) is a set of cluster centers of
varying size. In the selection procedure, the fitness values of parents and offspring are
compared: a number of opponents are selected randomly as the comparison
reference, and an opponent that performs better than a reference opponent receives
a win. Based on the wins, some opponents are selected as the parents of the next generation.
The shortcoming of the algorithm is that its computational requirement grows exponentially
with the number of clusters. If the number of clusters is very large, the
algorithm ceases to be feasible.
Unlike Sarkar [41], who uses a set of cluster centers as a component in evolution and selects
only one set as the winner, we use templates directly as the components in our algorithm,
and the selected survivors are a group of templates. The evolutionary process
simulates the evolution of a natural social system, such as that of ants, bees or human society, or
even a natural ecosystem.
The study of social systems, as a part of sociobiology, began in the middle of the 19th cen-
tury, after Charles Darwin published his most famous book, On the Origin of Species by
Means of Natural Selection, in 1859. Sociobiology seeks to extend the concept of natural
selection to social systems and the social behavior of animals, including humans. Sociobi-
ologists regard a social system, simply put, as a system consisting of a plurality of individuals,
heterogeneous and homogeneous, interacting with each other in a situation which has at
least a physical or environmental aspect; the individuals are motivated by a tendency
toward the "optimization of gratification", and their relations to their situations, including
each other, are defined and mediated in terms of a system of culturally structured and
shared symbols [42]. The social behavior of the individuals can be categorized into two
classes, competition and cooperation, which drive the development of the system and
sustain it.
In our algorithm, we define a small, simple and homogeneous social system. Each
template is like an individual in it. The society can only tolerate a limited number of
individuals, yet the individuals generate a larger number of offspring in
each generation, so only some of the offspring with better abilities, chosen by the "selection
rule", can be kept, and the others must be discarded; that is, "survival of the fittest". Moreover,
the relationship between the individuals is not only competition (only winners can survive)
but also cooperation: the properties of the individuals are reflected by their performance as
a whole. After evolution over many generations, a group of templates is selected
which together achieve a good performance for the recognition system.
Our template optimization algorithm works as follows:
1. Initially, generate a parent population P of N_P templates. In our al-
gorithm, these are just the templates extracted by the neural network, as introduced in
Section III.
2. Then, create a set O of N_O offspring from the parent set P by
evolutionary operations, which are described in Section IV-A.
3. Compare all offspring and select a subset of N_P templates as the
parents of the next generation, as presented in Section IV-B.
4. If the number of generations is less than a pre-specified constant, go to Step 2.
In our algorithm, the templates in each digit class are optimized separately; that is, in
each computation, only one class of templates and training samples is considered. An
advantage is that the algorithm can easily be adapted to parallel computers. A toy
sketch of the whole loop follows.
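In the sketch below, every numeric detail is an assumption for illustration: templates are stood in for by flat parameter vectors, the similarity φ by a negative squared distance, and the survivor selection of Section IV-B by a plain greedy maximization of its fitness criterion.

import numpy as np
rng = np.random.default_rng(0)

def evolve(parents, samples, generations=50, sigma=0.05):
    NP, dim = parents.shape
    for _ in range(generations):
        mutated = parents + sigma * rng.standard_normal(parents.shape)
        j1, j2 = rng.integers(0, NP, NP), rng.integers(0, NP, NP)
        take = rng.random((NP, dim)) < 0.5
        recombined = np.where(take, parents[j1], parents[j2])
        offspring = np.vstack([parents, mutated, recombined])
        # toy similarity phi[i, j] between sample i and offspring j
        phi = -((samples[:, None, :] - offspring[None, :, :]) ** 2).sum(-1)
        # greedy survivor selection: maximize sum_i max_{j in P'} phi[i, j]
        best = np.full(len(samples), -np.inf)
        keep = []
        for _ in range(NP):
            gain = np.maximum(phi, best[:, None]).sum(axis=0)
            gain[keep] = -np.inf         # never select the same offspring twice
            k = int(np.argmax(gain))
            keep.append(k)
            best = np.maximum(best, phi[:, k])
        parents = offspring[keep]
    return parents

samples = rng.random((200, 81))          # e.g. flattened 9 x 9 control grids
parents = evolve(rng.random((10, 81)), samples)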
A. Generation of Offspring
For a better explanation of the procedure, we define:
  S_tr: the training sample set;
  S^i_tr: the ith training sample;
  P: the parent set;
  P_k: the kth component in set P;
  O: the offspring set;
  O_k: the kth component in set O;
  P': the selected subset of O used as the next generation;
  O^(x,y)_k: the control parameter at position (x, y) of O_k;
  P^(x,y)_k: the control parameter at position (x, y) of P_k.
The generation of offspring includes three mechanisms: replication, mutation and recombination.
Replication simply copies the values from the parents to generate new offspring:

  O_k = P_k.

The number of replicated offspring is N^r_O = N_P, where N_P is the number of parents.
Mutation adds some perturbation to the parents. In our algorithm, the perturbations
G(x, y) are Gaussian noises:

  O^(x,y)_k = P^(x,y)_j + G(x, y).

The number of mutated offspring is N^m_O = c_m N_P, where c_m is a positive integer.
Recombination mechanisms are used in EAs either in their usual form, producing one
new individual from two randomly selected parents, or in their global form, allowing
components for one new individual to be taken from potentially all individuals available in the
parent population [35]. In our algorithm, the values of a recombined offspring are selected
randomly from two randomly selected parents, and some Gaussian noise is added as a
perturbation. The positions for adding the perturbation are also controlled by a random
value:

  O^(x,y)_k = P^(x,y)_j1 if R < 0.5, otherwise P^(x,y)_j2, plus R_p G(x, y),

where R is a continuous random value between 0 and 1, R_p is a discrete random value of 0 or 1 that
controls the perturbation positions, and j1 and j2 are the indices of two randomly selected parents. The
number of recombined templates is N^c_O, which is pre-specified.
The total number of offspring is N_O = N^r_O + N^m_O + N^c_O.
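In numpy form, the three operators might look as follows; sigma, the mutation copy count, and the perturbation rate are assumed hyper-parameters, since the paper leaves N^m_O and N^c_O as pre-specified constants.

import numpy as np
rng = np.random.default_rng(1)

def replicate(P):
    return P.copy()                                  # O_k = P_k

def mutate(P, copies=2, sigma=0.05):
    M = np.repeat(P, copies, axis=0)                 # c_m copies of each parent
    return M + sigma * rng.standard_normal(M.shape)  # plus Gaussian noise G

def recombine(P, n_out, sigma=0.05, p_perturb=0.1):
    j1 = rng.integers(0, len(P), n_out)
    j2 = rng.integers(0, len(P), n_out)
    take = rng.random((n_out,) + P.shape[1:]) < 0.5  # pick each control point
    O = np.where(take, P[j1], P[j2])                 # from one of two parents
    Rp = rng.random(O.shape) < p_perturb             # R_p chooses the positions
    return O + Rp * sigma * rng.standard_normal(O.shape)

P = rng.random((100, 9, 9))                          # 100 parents, 9 x 9 inner grid
O = np.concatenate([replicate(P), mutate(P), recombine(P, 100)])
print(O.shape)                                       # (400, 9, 9) offspring pool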
B. Selection Procedure
The selection procedure selects a subset P' of N_P templates from
the offspring set O. Let φ^i_j denote the output from equation (8) between template j and training
sample i. The fitness of P' is

  fitness(P') = sum_i max_{k in P'} φ^i_k.

The selected subset should be such that fitness(P') is bigger than the fitness of any
other subset of the offspring set O of the same size. Considering that the number of possible
size-N_P subsets of O is an extremely large number for our settings of N_P and N_O, in order to save
computation resources and make this algorithm practical, a fast search technique was
developed, based directly on the values φ^i_j.
The whole selection procedure can be divided into the following steps:
1. For each training sample S^i_tr, the N_top templates with the top φ outputs
are recorded and sorted in decreasing order; the others are discarded. We
let {T_i,j, j = 1, ..., N_top} be the set of the recorded output values
and T^index_i,j be the index from T_i,j back to the
templates, so the relation between them can be presented as

  T_i,j = φ^i_(T^index_i,j),    (33)

and, for training sample S^i_tr,

  T_i,N_top >= φ^i_l    (34)

for all l in O with l not in {T^index_i,j}.
Moreover, a flag is also set for each training sample: T^flag_i = 0.
2. For each template O_k, add the values of the remaining outputs together:

  Sum_k = sum of T_i,j over all (i, j) with T^index_i,j = k.    (35)

3. Find the highest Sum_k and move the corresponding template O_k0, k0 = argmax_k Sum_k, to the parent
set P' for the next generation: P' := P' + {O_k0}, O := O - {O_k0}. The training samples whose
recorded top lists contain k0 are given the flag T^flag_i = 1.
4. If there still exist training samples with T^flag_i = 0, then for each template
O_k left in set O, get a new value of Sum_k in which the contribution of a
recorded output T_i,j is kept in full when T^flag_i = 0 and down-weighted
when T^flag_i = 1 (equations 37 and 38);
if the flags of all training samples satisfy T^flag_i = 1, replace equation 37 with equation
35.
5. Similar to Step 3, find the template O_k0
with the highest Sum_k in O and move it to P';
re-set the flags of the training samples whose top lists contain k0: if T^flag_i = 0, replace it
with T^flag_i = 1.
6. Repeat Steps 4 and 5 until the number of templates in set P' reaches N_P.
Finally, the selected subset of templates is taken as the desired output; the
computational resources required by the selection procedure are available on most current
scientific computing devices.
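Given the matrix of outputs phi[i, j] = φ^i_j, the greedy search above can be written compactly; the flag-based down-weighting of equations 37-38 is simplified here to zeroing the contribution of already-covered samples, an assumption made where the extracted formulas are incomplete.

import numpy as np

def select_parents(phi, n_parents, n_top=5):
    n_samples, n_off = phi.shape
    top = np.zeros_like(phi, dtype=bool)             # step 1: per-sample top lists
    idx = np.argsort(-phi, axis=1)[:, :n_top]
    np.put_along_axis(top, idx, True, axis=1)
    flags = np.zeros(n_samples, dtype=bool)          # T_i^flag
    chosen = []
    for _ in range(n_parents):
        weight = np.where(flags, 0.0, 1.0)[:, None]  # covered samples count less
        sums = (phi * top * weight).sum(axis=0)      # steps 2 and 4: Sum_k
        sums[chosen] = -np.inf                       # each template chosen once
        k = int(np.argmax(sums))                     # steps 3 and 5
        chosen.append(k)
        flags |= top[:, k]                           # flag samples covered by k
    return chosen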
V. Experimental Results and Discussion
In total, 10,426 numeral samples from NIST Special Database 3 were extracted as the
training set. Each numeral was normalized into a 48 × 48 binary image with 8-pixel-
wide background borders for the distribution transform, so the actual image size is 64 ×
64. Another independent 10,426 numeral samples were also extracted from NIST Special
Database 3 as the test set.
First, 1000 templates were extracted from the training set by the neural network
approach discussed in Section III, with each numeral class having 100 templates. Every
template contains 11 × 11 control points. Because the amplitude of a pixel on the four borders
is close to 0, we set the control points on the borders to 0; only the inner control points, that
is, 9 × 9 = 81 control points, were updated in each iteration.
In this step, we tested samples with different γ values. Table I gives the results of the
comparison for three sets of templates extracted with different γ values; with γ =
0.5 the algorithm achieves a reasonably good performance.
Then, the 1000 templates were used as the input of our optimization algorithm to obtain
another 1000 new templates. Because each numeral class has 100
templates, in each optimization computation N_P = 100.
In the simulation, we also set N_top = 5.
Figure 6 shows some of the optimized templates, which are presented as grey-scale
images.
For a comparison, we have also applied a few other algorithms to recognize the same
Fig. 6. Extracted templates in grey-scale images.
Fig. 7. Some examples of unsure patterns.
TABLE I
Experimental results on test samples with different γ values
(columns: γ; correct rate without rejection; correct rate with rejection).
TABLE II
A performance comparison of our approach with other numeral recognition
algorithms using the test set.
                          With Rejection              Without Rejection
  Techniques              Rejection(%)  Correct(%)    Rejection(%)  Correct(%)
  Optimized Templates     26.7          98.9          0.0           96.4
  Extracted Templates     27.9          98.7          0.0           95.8
  MLP                     34.4          99.9          0.0           96.9
numeral samples. These algorithms include a three-layer MLP (Multi-Layer Perceptron),
the ONNC (Optimized Nearest-Neighbor Classifier) algorithm proposed by Yan [22], and
the initial templates extracted by the neural network of Section III. Sixty-four intensities on the 8 × 8 image, rescaled
from the original 160 × 160 normalized binary image, were used as the inputs of the MLP-
based approach and the ONNC. The size of the MLP is 64-75-10. As for the ONNC algorithm,
it returns both the assigned class j and the distance D_j of the image from the closest
class prototype. The quotient of the distance D_j and M_i, termed the "recognition
measure", was used to estimate the reliability of a classification [43], where M_i
is the mean value of the recognition measures for the correctly classified
numerals in the training samples that belong to class i. Table II shows the test results of
our algorithm together with the MLP, the ONNC, and the approach with the initial templates.
As expected, the performance with the new templates from the evolutionary algorithm
TABLE III
Experimental results on unsure patterns with different classifiers (the threshold used
here is the same as the threshold used in Table II, "With Rejection").
  Techniques              Rejection(%)
  Optimized Templates     90.7
  Previous Templates      87.5
  MLP                     44.5
  ONNC                    69.3
is better than that with the initially extracted templates. However, compared to the
MLP and ONNC classifiers, the performance of our approach is slightly lower in dealing
with such isolated numerals.
To further compare these algorithms, we also used 10,426 unsure patterns (see the
examples shown in Figure 7) to verify the reliability of the classifiers in rejecting illegible
patterns. This ability is very important for connected character recognition. The unsure
patterns used in the experiment were generated by merging two parts of isolated numerals:
the left part of the right numeral and the right part of the left numeral. The choice
of the two numerals, the width and relative position of each part, and the degree of overlap are
set randomly. Table III shows the experimental results of our algorithm together with the
other techniques. The threshold used for each classifier is the same as the setting for
obtaining the results shown in Column 1 of Table II on isolated numerals.
We can see that the rejection rate of the optimized templates is the highest on the unsure
patterns; that is, our approach achieves a more reliable performance in rejecting illegible
numerals, which is a desirable property for connected handwritten numeral recognition.
We can also see from the experimental results that the templates optimized using
the evolutionary algorithm achieve a better performance than the initial templates
extracted by the neural network. It seems that the new approach is one step closer to
achieving global optimization. As in other applications of evolutionary algorithms,
our approach took a long time to find the solution. Two PentiumII-333 machines and one PentiumII-
300, running the Linux operating system, were used for the optimization computation. With 200
iterations, the computation took about 14 days for the ten classes of numerals. However,
since the training is required once only, the approach is still feasible.
For further verification of the performance of the templates, we also applied them to
recognize connected digit strings with a dynamic-programming approach. The
main idea of the dynamic-programming approach is to move rectangular windows of
different widths across the digit string image, and to match the window contents
against the optimized templates. In the experiment, digit string images were pre-processed
in advance by the method presented in [44]. We use W_w and W_h to denote the
width and height of a window, and W_s as the step length; W_w and W_s are determined by
W_h. We use nine windows of different widths {W^i_w}, each a fixed fraction of W_h.
In the matching between the window and the templates, in consideration of parts of
neighboring digits possibly included in the window, we do not simply use equation (8); some
techniques are applied to decrease their effect. A normalization method is first applied:
find the maximal and minimal y values of the foreground in the window region, y_max and y_min,
and at the same time find y^max_j and y^min_j in the template region
where S_j(x, y) >= 0.5; then scale and translate the window so that y_max and y_min match
y^max_j and y^min_j. Besides the normalization, we also make a variation to equation (8): instead of
executing the calculation over the whole image, it is carried out only over the region Ω
where the template or the input exceeds a threshold between 0 and 1. That is, equations (14) and (15) are replaced by
the corresponding integrals over Ω (e.g., the normalization term becomes the integral of I^2(x, y) over Ω).
In the experiment, the threshold was fixed in advance.
The whole approach can be simply descript by several steps:
(a) (b)
Fig. 8. Examples of correctly recognized digit string, (a) original coonected digit strings; (b) separated
digits.
1. starting from the left of the image, apply rectangular windows of different widths to the
image, and match the window contents against the templates;
2. if the output from a window is good enough (>0.5, for example), keep the result and
move the windows to the right; the new starting position is a short distance (three steps in our
experiment) to the left of the right edge of the previous window and, of course, the windows
should again be of various widths;
3. if the outputs of all the windows are not good enough, move the starting position of the
windows one step to the right, and go to step 2;
4. repeat steps 2-3 until the window reaches the right end of the digit string image.
It is highly possible that more than one sequence of digits will be generated by this
approach. A score value is assigned to each sequence by averaging its matching results,
Score = (o_1 + o_2 + ... + o_L) / L, where o_1, ..., o_L are the results in this sequence
and L is the length. We choose the sequence with the maximal score value as the output of the approach.
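To make the scanning procedure concrete, the sketch below enumerates candidate digit sequences and scores each by the average of its matching outputs. It is a simplified rendering of the steps and scoring rule described above, not the authors' implementation; the matcher match(x, w), the set of window widths, and the step length are placeholders.

    def recognize_string(image_width, widths, step, match, threshold=0.5):
        """Enumerate candidate digit sequences, left to right."""
        sequences = []

        def extend(x, outputs):
            if x >= image_width:              # reached the right end (step 4)
                if outputs:
                    sequences.append(outputs)
                return
            advanced = False
            for w in widths:                  # windows of various widths (step 1)
                score = match(x, w)           # match window contents vs. templates
                if score > threshold:         # good enough: keep and move on (step 2)
                    advanced = True
                    extend(max(x + w - 3 * step, x + step), outputs + [score])
            if not advanced:                  # no window qualified (step 3)
                extend(x + step, outputs)

        extend(0, [])
        # choose the sequence with the maximal average output (the score rule)
        return max(sequences, key=lambda s: sum(s) / len(s)) if sequences else None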
Figure 8 is an example of a correctly recognized digit string. The first row is the original
Fig. 9. Examples of digit strings for which recognition failed.
connected digit string image, and the second row shows the separated and recognized digits.
Figure 9 shows some examples for which recognition failed. The reasons are that: 1) the writing
style of the sample is not included in the training sets (left sample); 2) degradation (right
sample).
Recognizing these samples took about 2-7 hours on our Pentium II-333 under the Linux operating
system. Obviously, this does not make for a practical approach to recognizing
connected digit strings. One way to reduce this computational burden would be to pair a
simple classifier with it: windows are first classified by the simple classifier, and those that cannot
be satisfactorily assigned to a class are passed to the optimized-templates classifier. Another
way is to redesign the dynamic-programming scheme, since the current one is obviously not an
efficient method.
VI. Conclusion
In this paper we first propose a new representation of handwritten numeral templates
using the rational B-spline surfaces of distance distribution maps, in order to develop a
classifier that can reliably reject illegible patterns while achieving a high recognition rate.
A two-step templates extraction algorithm is then presented. First, a multi-layer neural
network approach is adapted to extract templates from a set of training samples, then we
use an evolutionary algorithm to optimize the extracted templates. In the evolutionary
algorithm, instead of making use of a fitness measure, we directly use the function that
measures the difference between a template and a training sample for offspring
selection.
Experimental results on NIST Special Database 3 show that the optimized templates can
achieve better performance than the initial templates. When compared to the MLP and
ONNC classifiers, the recognition rates from our new template representation are slightly
lower in dealing with isolated numerals. However, the classifiers with the new template
representation can achieve a more reliable performance in rejecting illegible patterns, which
is highly desirable in connected handwritten character recognition.
--R
Vertex directed segmentation of handwritten numerals.
Segmentation and recognition of connected handwritten numeral strings.
Separation of single- and double-touching numeral strings
Recognition of handwritten script: A hidden markov model based approach.
Character recognition without segmentation.
Handprinted word recognition on a nist data set.
Modeling and recognition of cursive words with hidden Markov models.
Handwritten word recognition using segmentation-free hidden Markov modeling and segmentation-based dynamic programming technique
A lexicon driven approach to handwritten word recognition for real-time applications
Handwritten word recognition with character and inter-character neural networks
Character recognition - a review
Optical character recognition - a survey
Unconstrained handwritten numeral recognition based on radial basis competitive and cooperative networks with spatio-temporal features representation
Handwritten digit recognition using combined id3-derived fuzzy rules and Markov chains
Multiresolution recognition of unconstrained handwritten numerals with wavelet transform and multilayer cluster neural network.
Recognition of handwritten digits based on contour information.
Feature extraction methods for character recognition - a survey
Design and implementation of optimized nearest neighbor classifiers for handwritten digit recognition.
Shape matching using lat and its application to handwritten numeral recognition.
A structural model of shape deformation.
A unified framework for handwritten character recognition using deformable models.
Using generative models for handwritten digit recognition.
Object matching using deformable templates.
Representation and recognition of handwritten digits using deformable templates.
On modelling
An affine-invariant active contour model (ai-snake) for model-based segmentation
Handwritten digit recognition using optimized prototypes.
Multiobjective optimization and multiple constrain handling with evolutionary algorithms part i: A unified formulation.
An overview of evolutionary algorithms for parameter optimization.
An introduction to simulated evolutionary optimization.
Learned deformable templates for object recognition.
Genetic algorithm and deformable geometric models for anatomical object
Human face location in image sequences using genetic templates.
A clustering algorithm using an evolutionary programming-based approach
The Social System.
Separation of single- and double-touching handwritten numeral strings
Length estimation of digit strings using a neural network with structure based features.
--TR
Context-directed segmentation algorithm for handwritten numeral strings
Character recognition - a review
An overview of evolutionary algorithms for parameter optimization
Using Generative Models for Handwritten Digit Recognition
A Lexicon Driven Approach to Handwritten Word Recognition for Real-Time Applications
Representation and Recognition of Handwritten Digits Using Deformable Templates
A clustering algorithm using an evolutionary programming-based approach
Postprocessing of Recognized Strings Using Nonstationary Markovian Models
Shape Matching Using LAT and its Application to Handwritten Numeral Recognition
Character Recognition Without Segmentation
Computer-aided design applications of the rational b-spline approximation form.
--CTR
Chenn-Jung Huang, Clustered defect detection of high quality chips using self-supervised multilayer perceptron, Expert Systems with Applications: An International Journal, v.33 n.4, p.996-1003, November, 2007 | template optimization;evolutionary algorithm;nearest neighbor classifier;digit templates;pixel-to-boundary distance map;connected handwritten digit recognition;multilayer perceptron classifier;b-spline fitting |
505929 | Local search characteristics of incomplete SAT procedures. | Effective local search methods for finding satisfying assignments of CNF formulae exhibit several systematic characteristics in their search. We identify a series of measurable characteristics of local search behavior that are predictive of problem solving efficiency. These measures are shown to be useful for diagnosing inefficiencies in given search procedures, tuning parameters, and predicting the value of innovations to existing strategies. We then introduce a new local search method, SDF ("smoothed descent and flood"), that builds upon the intuitions gained by our study. SDF works by greedily descending in an informative objective (that considers how strongly clauses are satisfied, in addition to counting the number of unsatisfied clauses) and, once trapped in a local minima, "floods" this minima by re-weighting unsatisfied clauses to create a new descent direction. The resulting procedure exhibits superior local search characteristics under our measures. We show that this method can compete with the state of the art techniques, and significantly reduces the number of search steps relative to many recent methods. Copyright 2001 Elsevier Science B.V. | Introduction
Since the introduction of GSAT (Selman, Levesque, &
Mitchell 1992) there has been considerable research on local
search methods for finding satisfying assignments for
CNF formulae. These methods are surprisingly effective;
they can often find satisfying assignments for large CNF formulae
that are far beyond the capability of current systematic
search methods (however, see (Bayardo & Schrag 1997) for
competitive systematic search
results). Of course, local search is incomplete and cannot
prove that a formula has no satisfying assignment when
none exists. However, despite this limitation, incomplete
methods for solving large satisfiability problems are proving
their worth in applications ranging from planning to circuit
design and diagnosis (Selman, Kautz, & McAllester 1997;
Kautz & Selman 1996; Larrabee 1992).
Significant progress has been made on improving the
speed of these methods since the development of GSAT. In
fact, a series of innovations have led to current search methods
that are now an order of magnitude faster.
Perhaps the most significant early improvement was to
incorporate a "random walk" component where variables
were flipped from within random falsified clauses (Selman
& Kautz 1993). This greatly accelerated search and led to
the development of the very successful WSAT procedure
(Selman, Kautz, & Cohen 1994). A contemporary idea was
to keep a tabu list (Mazure, Saïs, & Grégoire 1997) or break
ties in favor of least recently flipped variables (Gent & Walsh
1993; 1995) to prevent GSAT from repeating earlier moves.
The resulting TSAT and HSAT procedures were also improvements
over GSAT, but to a lesser extent. The culmination
of these ideas was the development of the Novelty
and R Novelty procedures which combined a preference for
least recently flipped variables in a WSAT-type random walk
(McAllester, Selman, & Kautz 1997), yielding methods that
are currently among the fastest known.
A different line of research has considered adding clause-
weights to the basic GSAT objective (which merely counts
the number of unsatisfied clauses) in an attempt to guide the
search from local basins of attraction to other parts of the
search space (Frank 1997; 1996; Cha & Iwama 1996; 1995;
Morris 1993; Selman & Kautz 1993). These methods have
proved harder to control than the above techniques, and it
has only been recent that clause re-weighting has been developed
to a state of the art method. The series of "discrete
Lagrange multiplier" (DLM) systems developed in (Wu &
Wah 1999; Shang & Wah 1998) have demonstrated competitive
results on benchmark challenge problems in the DIMACS
and SATLIB repositories.
Although these developments are impressive, a systematic
understanding of local search methods for satisfiability
problems remains elusive. Research in this area has been
largely empirical and it is still often hard to predict the effects
of a minor change in a procedure, even when this results
in dramatic differences in search times.
In this paper we identify three simple, intuitive measures
of local search effectiveness: depth, mobility, and coverage.
We show that effective local search methods for finding satisfying
assignments exhibit all three characteristics. These,
however, are conflicting demands and successful methods
are primarily characterized by their ability to effectively
manage the tradeoff between these factors (whereas ineffective
methods tend to fail on at least one measure). Our goal
is to be able to distinguish between effective and ineffective
search strategies in a given problem (or diagnose problems
with a given method, or tune parameters) without having to
run exhaustive search experiments to their completion.
To further justify our endeavor, we introduce a new local
search procedure, SDF ("smoothed descent and flood")
that arose from our investigation of the characteristics of effective
local search procedures. We show that SDF exhibits
uniformly good depth, mobility, and coverage values, and
consequently achieves good performance on a large collection
of benchmark SAT problems.
Local search procedures
In this paper we investigate several dominant local search
procedures from the literature. Although many of these
strategies appear to be only superficial variants of one an-
other, they demonstrate dramatically different problem solving
performance and (as we will see) they exhibit distinct
local search characteristics as well.
The local search procedures we consider start with a random
variable assignment and
make local moves by flipping one variable at
a time, until they either find a satisfying assignment or time
out. For any variable assignment there are a total of n possible
variables to consider, and the various strategies differ
in how they make this choice. Current methods uniformly
adopt the original GSAT objective of minimizing the number
of unsatisfied clauses, perhaps with some minor variant
such as introducing clause weights or considering how many
new clauses become unsatisfied by a flip (break count) or
how many new clauses become satisfied (make count). The
specific flip selection strategies we investigate (along with
their references) are as follows.
GSAT() Flip the variable x i that results in the fewest total
number of clauses being unsatisfied. Break ties randomly.
(Selman, Levesque, & Mitchell 1992)
HSAT() Same as GSAT, but break ties in favor of the least
recently flipped variable. (Gent & Walsh 1993)
WSAT-G(p) Pick a random unsatisfied clause c. With
probability p flip a random x i in c. Otherwise flip the
variable in c that results in the smallest total number of
unsatisfied clauses. (McAllester, Selman, & Kautz 1997)
WSAT-B(p) Like WSAT-G except, in the latter case, flip
the variable that would cause the smallest number of new
clauses to become unsatisfied. (McAllester, Selman, &
Kautz 1997)
WSAT(p) Like WSAT-B except first check whether some
variable x i would not falsify any new clauses if flipped,
and always take such a move if available. (Selman, Kautz,
& Cohen 1994)
Novelty(p) Pick a random clause c. Flip the variable x i in
c that would result in the smallest total number of unsatisfied
clauses, unless x i is the most recently flipped variable
in c. In the latter case, flip x i with probability 1 - p and
otherwise flip the variable x j in c that results in the second
smallest total number of unsatisfied clauses. (McAllester,
Selman, & Kautz 1997)
Novelty+(p, h) Same as Novelty except that after the clause c
is selected, flip a random x i in c with probability h, otherwise
continue with Novelty. (Hoos 1999)
Note that, conventionally, these local search procedures
have an outer loop that places an upper bound, F , on the
maximum number of flips allowed before re-starting with a
new random assignment. However, we will not focus on random
restarts in our experiments below because any search
strategy can be improved (or at the very least, not dam-
aged) by choosing an appropriate cutoff value F (Gomes,
Selman, & Kautz 1998). In fact, it is straightforward and
well known how to do this optimally (in principle): For a
given search strategy and problem, let the random variable
f denote the number of flips needed to reach a solution in
a single run, and let f F denote the number of flips needed
when using a random restart after every F flips. Then we
have the straightforward equality (Parkes & Walser 1996)

  E f_F = F * (1 - P(f <= F)) / P(f <= F) + E[f | f <= F]     (1)

Note that this always offers a potential improvement since
min_F E f_F <= E f, and (1) is well defined
for any cutoff F > 0. In particular, one could choose the optimal
cutoff value F* = argmin_F E f_F. We report this optimal
achievable performance quantity for every procedure
below, using the empirical distribution of f over several runs
to estimate E f_F. Thus we will focus on investigating the
single run characteristics of the various variable selection
policies, but be sure to report estimates of what the optimum
achievable performance would be using random restarts.
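For illustration, equality (1) can be applied directly to an empirical sample of single-run flip counts; the following sketch (ours, not from the paper) estimates E f_F for a candidate cutoff and selects the empirically optimal F*.

    def expected_flips_with_cutoff(run_lengths, F):
        """E f_F = F * (1 - q)/q + E[f | f <= F], with q = P(f <= F)."""
        finished = [f for f in run_lengths if f <= F]
        q = len(finished) / len(run_lengths)
        if q == 0:
            return float('inf')      # no observed run finished within F flips
        return F * (1 - q) / q + sum(finished) / len(finished)

    def optimal_cutoff(run_lengths):
        # only observed run lengths need to be considered as candidate cutoffs
        return min(set(run_lengths),
                   key=lambda F: expected_flips_with_cutoff(run_lengths, F))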
Measuring local search performance
In order to tune the parameters of a search strategy, determine
whether a strategic innovation is helpful, or even debug
an implementation, it would be useful to be able to measure
how well a search is progressing without having to run it to
completion on large, difficult problems.
To begin, we consider a simple and obvious measure of
local search performance that has no doubt been used to tune
and debug many search strategies in the past.
Depth measures how many clauses remain unsatisfied as
the search proceeds. Intuitively, this indicates how deep
in the objective the search is remaining. To get an overall
summary, we take a depth average over all search steps.
Note that it is desirable to obtain a small value of depth.
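For concreteness, computing average depth over a recorded search trajectory might look as follows (a sketch; the representation of clauses as lists of (variable index, required value) pairs and assignments as 0/1 lists is our assumption):

    def average_depth(trajectory, clauses):
        """Average number of unsatisfied clauses over all search steps."""
        def num_unsat(assign):
            return sum(not any(assign[v] == s for v, s in c) for c in clauses)
        return sum(num_unsat(x) for x in trajectory) / len(trajectory)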
Although simple minded, and certainly not the complete
story, it is clear that effective search strategies do tend to descend
rapidly in the objective function and remain at good
objective values as the search proceeds. By contrast, strategies
that fail to persistently stay at good objective values
usually have very little chance of finding a satisfying assignment
in a reasonable number of flips (McAllester, Selman,
To demonstrate this rather obvious effect, consider the
problem of tuning the noise parameter p for the WSAT procedure
on a given problem. Here we use the uf100-0953
Figure 1: Depth results (100 runs of WSAT on uf100-0953 at several noise settings; columns: average depth, average flips, estimated optimum with cutoff)
Flips rank vs. depth rank (recoverable entries):

                     Depth rank:  best 1    2      3      worst 4
    Flips rank best 1:            .82       .09    .05    .04
    Flips rank worst 4:           .03       .16    .25    .57

(A companion table cross-tabulates flips rank against mobility rank; its entries are not recoverable here.)
Figure 2: Large scale experiments (2700 uf problems)
problem from the SATLIB repository to demonstrate our
point. 1
Figure 1 shows that higher noise levels cause WSAT
to stay higher in the objective function and significantly increase
the numbers of flips needed to reach a solution. This
result holds both for the raw average number of flips but also
for the optimal expected number of flips using a maximum
flips cutoff with random restarts, ^
EfF . The explanation is
obvious: by repeatedly flipping a random variable in an unsatisfied
clause, WSAT is frequently "kicked out" to higher
objective values-to the extent that it begins to spend significant
time simply re-descending to a lower objective value,
only to be prematurely kicked out again.
Although depth is a simplistic measure, it proves to be
very useful for tuning noise and temperature parameters in
local search procedures. By measuring depth, one can determine
if the search is spending too much time recovering
from large steps up in the objective and not enough time exploring
near the bottom of the objective. More importantly,
maintaining depth appears to be necessary for achieving reasonable
search times. Figure 2 shows the results of a large
experiment conducted over the entire collection of 2700 uf
problems from SATLIB. This comparison ranked four comparable
methods-SDF (introduced below), Novelty, Nov-
elty+, and WSAT-in terms of their search depth and average
flips. For each problem, the methods were ranked in
terms of their average number of flips and average depth.
Each (flips rank, depth rank) pair was then recorded in a ta-
ble. The relative frequencies of these pairs are summarized in
Figure 2. This figure shows that the highest ranked method
in terms of search efficiency was always ranked near the best
(and almost never in the bottom rank) in terms of search
depth.
Although useful, depth alone is clearly not a sufficient criterion
for ensuring good search performance. A local search
1 The uf series of problems are randomly generated 3-CNF formulae
that are generated at the phase transition ratio of 4.3 clauses
to variables. Such formulae have roughly a 50% chance of being
satisfiable, but uf contains only verified satisfiable instances.
Figure 3: Mobility results (panels of average Hamming distance vs. time lag for GSAT, WSAT-G, WSAT, and Novelty, among others; accompanying table, 100 runs on uf100-0953, with columns: average mobility, average depth, average flips, estimated optimum with cutoff)
could easily become stuck at a good objective value, and yet
fail to explore widely. To account for this possibility we introduce
another measure of local search effectiveness.
Mobility measures how rapidly a local search moves in the
search space (while it tries to simultaneously stay deep
in the objective). We measure mobility by calculating
the Hamming distance between variable assignments that
are k steps apart in the search sequence, and average this
quantity over the entire sequence to obtain average distances
at a series of increasing time lags. It is desirable to
obtain a large value of mobility since this indicates that
the search is moving rapidly through the space.
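A sketch of this computation (assuming the trajectory is recorded as a list of 0/1 assignment tuples):

    def mobility(trajectory, k):
        """Average Hamming distance between assignments k steps apart."""
        dists = [sum(a != b for a, b in zip(x, y))
                 for x, y in zip(trajectory, trajectory[k:])]
        return sum(dists) / len(dists)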
Mobility is obviously very important in a local search. In
fact, most of the significant innovations in local search methods
over the last decade appear to have the effect of substantially
improving mobility without damaging depth. This is
demonstrated clearly in Figure 3, again for the uf100-0953
problem. It appears that the dramatic improvements of these
methods could have been predicted from their improved mobility
scores (while maintaining comparable depth scores).
Figure 3 covers several highlights in the development of
local search methods for satisfiability. For example, one of
the first useful innovations over GSAT was to add a preference
for least recently flipped variables, resulting in the
superior HSAT procedure. Figure 3 shows that one benefit
of this change is to increase mobility without damaging
search depth, which clearly corresponds to improved solution
times. Another early innovation was to incorporate
"random walk" in GSAT. Figure 3 shows that WSAT-G also
delivers a noticeable increase in mobility-again resulting
in a dramatic reduction in solution times. It is interesting to
note that the apparently subtle distinction between WSAT-G
and WSAT in terms of their definition is no longer subtle
here: WSAT offers a dramatic improvement in mobility,
along with an accompanying improvement in efficiency.
Figure 4: Coverage results (100 runs on uf100-0953; columns: average coverage rate, average mobility, average depth, average flips, and estimated optimum; one recoverable row reads 26, 4.1, 1,355, 1,355)
Finally, the culmination of novelty and random walk in the
Novelty procedure achieves even a further improvement in
mobility, and, therefore it seems, solution time.
We have observed this effect consistently over the entire
range of problems we have investigated. Thus it appears
that, in addition to depth, mobility also is a necessary characteristic
of an effective local search in SAT problems. To
establish this further, Figure 2 shows the results of a large
experiment on the entire collection of 2700 uf problems in
SATLIB. The same four procedures were tested (SDF, Nov-
elty, Novelty+, WSAT) and ranked in terms of their search
mobility and solution time. The results show that the highest
ranked in terms of mobility is almost always ranked near
the top in problem solving efficiency, and that low mobility
tends to correlate with inferior search efficiency.
A final characteristic of local search behavior that we consider
is easily demonstrated by a simple observation: Hoos
presents a simple satisfiable CNF formula with five
variables and six clauses that causes Novelty to (sometimes)
get stuck in a local basin of attraction that prevents it from
solving an otherwise trivial problem. The significance of
this example is that Novelty exhibits good depth and mobility
in this case, and yet fails to solve what is otherwise an
easy problem. This concern led Hoos to develop the slightly
modified procedure Novelty+ in (Hoos 1999). The characteristic
that Novelty is missing in this case is coverage.
Coverage measures how systematically the search explores
the entire space. We compute a rate of coverage by first
estimating the size of the largest "gap" in the search space
(given by the maximum Hamming distance between any
unexplored assignment and the nearest evaluated assign-
ment) and measuring how rapidly the largest gap size is
being reduced. In particular, we define the coverage rate
to be (n max gap)=search steps. Note that it is desirable
to have a high rate of coverage as this indicates that
the search is systematically exploring new regions of the
space as it proceeds.
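Since computing the exact maximal gap is intractable in general, a practical estimate can sample random assignments and take their distance to the nearest visited point. The following sketch illustrates the coverage-rate computation; the sampling approximation is our assumption (the text only says the gap is estimated).

    import random

    def coverage_rate(visited, n, steps, samples=1000):
        """(n - estimated max gap) / number of search steps."""
        def hamming(a, b):
            return sum(u != v for u, v in zip(a, b))
        max_gap = 0
        for _ in range(samples):
            probe = tuple(random.randint(0, 1) for _ in range(n))
            max_gap = max(max_gap, min(hamming(probe, x) for x in visited))
        return (n - max_gap) / steps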
Figure 4 shows that Hoos's modified Novelty+ procedure
improves the coverage rate of Novelty on the uf100-0953
problem. Space limitations do not allow a full description,
but Novelty+ demonstrates uniformly better coverage than
Novelty while maintaining similar values on other measures,
and thus achieves better performance on nearly every problem
in the benchmark collections.
Taken together, our results lead us to hypothesize that local
search procedures work effectively because they descend
quickly in the objective, persistently explore variable assignments
with good objective values, and do so while moving
rapidly through the search space and visiting very different
variable assignments without returning to previously
explored regions. That is, we surmise that good local search
methods do not possess any special ability to predict whether
a local basin of attraction contains a solution or not-rather
they simply descend to promising regions and explore near
the bottom of the objective as rapidly, broadly, and systematically
as possible, until they stumble across a solution. Although
this is a rather simplistic view, it seems supported
by our data and moreover it has led to the development of
a new local search technique. Our new procedure achieves
good characteristics under these measures and, more impor-
tantly, exhibits good search performance in comparison to
existing methods.
A new local search strategy: SDF
Although the previous measures provide useful diagnostic
information about local search performance, the main contribution
of this paper is a new local search procedure, which
we call SDF for "smoothed descent and flood." Our procedure
has two main components that distinguish it from previous
approaches. First, we perform steepest descent in a
more informative objective function than earlier methods.
Second, we use multiplicative clause re-weighting to rapidly
move out of local minima and efficiently travel to promising
new regions of the search space.
Recall that the standard GSAT objective simply counts
the number of unsatisfied clauses for a given variable as-
signment. We instead consider an objective that takes into
account how many variables satisfy each clause. Here it will
be more convenient to think of a reversed objective where we
seek to maximize the number of satisfied clauses instead of
minimize the number of unsatisfied clauses. Our enriched
objective works by always favoring a variable assignment
that satisfies more clauses, but all things being equal, favoring
assignments that satisfy more clauses twice (subject
to satisfying the same number of clauses once), and so on.
In effect, we introduce a tie-breaking criterion that decides,
when two assignments satisfy the same number of clauses,
that we should prefer the assignment which satisfies more
clauses on two distinct variables, and if the assignments are
still tied, that we should prefer the assignment that satisfies
more clauses on three distinct variables, etc. This tie-breaking
scheme can be expressed in a scalar objective function
that gives a large increment to the first satisfying vari-
able, and then gives exponentially diminishing increments
for subsequent satisfying variables for a given clause. For
k-CNF formulas with m clauses, such a scoring function is

  f_ABE(x) = sum over clauses c of score(# x_i's that satisfy c).
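The exact increment schedule of score is not recoverable from this copy; the sketch below realizes the described behavior by letting each additional satisfying variable of a clause contribute a factor (m + 1) less than the previous one, which yields the lexicographic tie-breaking for k-CNF.

    def f_abe(clauses, assignment):
        """Enriched objective: clauses are lists of (variable, sign) literals."""
        m = len(clauses)
        def score(z):   # z = number of variables currently satisfying the clause
            return sum((m + 1) ** -(l - 1) for l in range(1, z + 1))
        return sum(score(sum(assignment[v] == s for v, s in c)) for c in clauses)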
Our intuition is that performing steepest ascent in this objective
should help build robustness in the current variable
assignment which the search can later exploit to satisfy new
clauses. In fact, we observe this phenomenon in our exper-
iments.
Figure 5: Descent results (panels of average depth vs. time steps, comparing GSAT with steepest ascent in f_ABE)
Figure 5 shows that following a steepest ascent in
f_ABE descends deeper in the original GSAT objective than
the GSAT procedure itself (before either procedure reaches
a local extremum or plateau). This happens because plateaus
in the GSAT objective are not plateaus in f ABE ; in fact, such
plateaus are usually opportunities to build up robustness in
satisfied clauses which can be later exploited to satisfy new
clauses. This effect is systematic and we have observed it
in every problem we have examined. This gives our first
evidence that the SDF procedure, by descending deeper in
the GSAT objective, has the potential to improve the performance
of existing local search methods.
The main outstanding issue is to cope with local maxima
in the new objective. That is, although f ABE does not contain
many plateaus, the local search procedure now has to deal
with legitimate (and numerous) local maxima in the search
space. While this means that plateau walking is no longer a
significant issue, it creates the difficulty of having to escape
from true traps in the objective function. Our strategy for
coping with local maxima involves the second main idea behind
the SDF procedure: multiplicative clause re-weighting.
Note that when a search is trapped at a local maximum, the
current variable assignment must leave some subset of the
clauses unsatisfied. Many authors have observed that such
local extrema can be "filled in" by increasing the weight
of the unsatisfied clauses to create a new search direction
that allows the procedure to escape (Wu & Wah 1999;
Frank 1996; Morris 1993; Selman & Kautz 1993). How-
ever, previous published re-weighting schemes all use additive
updates to increment the clause weights. Unfortunately,
additive updates do not work very well on difficult search
problems because the clauses develop large weight differences
over time, and this causes the update mechanism to
lose its ability to rapidly adapt the weight profile to new regions
of the search space. Multiplicative updating has the
advantage of maintaining the ability to swiftly change the
weight profile whenever necessary.
One final issue we faced was that persistently satisfied
clauses would often lose their weight to the extent that
they would become frequently falsified, and consequently
the depth of search (as measured in the GSAT objective)
would deteriorate. To cope with this effect, we flattened the
weight profile of the satisfied clauses at each re-weighting by
shrinking them towards their common mean. This increased
the weights of clauses without requiring them to be explicitly
falsified and had the overall effect of restoring search
depth and improving performance. The final SDF procedure
we tested is summarized as follows.
Flip the variable x i that leads to the greatest increase
in the weighted objective

  f_WABE(x) = sum over clauses c of w(c) * score(# x_i's that satisfy c).
If the current variable assignment is a local maximum and
not a solution, then re-weight the clauses to create a new
ascent direction and continue.
Multiplicatively re-weight the unsatisfied
clauses and re-normalize the clause weights so that the
resulting largest difference in the f WABE objective (when
flipping any one variable) is a small constant delta. (That is, create a minimal
greedy search direction.) Then flatten the weight profile
of the satisfied clauses by shrinking them a fraction of the distance
towards their common mean (to prevent the weights
from becoming too small and causing clauses to be falsified
gratuitously).
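A sketch of the re-weighting step is given below. The multiplicative factor, the shrink fraction, and the omission of the delta-renormalization are our simplifications; the exact constants are not recoverable from this copy.

    def reweight(weights, unsat, sat, alpha=1.05, shrink=0.2):
        for c in unsat:              # multiplicative update of unsatisfied clauses
            weights[c] *= alpha
        if sat:                      # flatten satisfied weights toward their mean
            mean = sum(weights[c] for c in sat) / len(sat)
            for c in sat:
                weights[c] += shrink * (mean - weights[c])
        return weights               # (renormalization against delta omitted here)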
One interesting aspect of this procedure is that it is almost
completely deterministic (given that ties are rare in the ob-
jective, without re-starts) and yet seems to perform very well
in comparison to the best current methods, all of which are
highly stochastic. We claim that much of the reason for this
success is that SDF maintains good depth, mobility, and coverage
in its search. This is clearly demonstrated in Figures
1-3, which show that SDF obtains superior measurements on
every criterion.
Evaluation
We have conducted a preliminary evaluation of SDF on several
thousand benchmark SAT problems from the SATLIB
and DIMACS repositories. The early results appear to be
very promising. Comparing SDF to the very effective Nov-
elty+ and WSAT procedures, we find that SDF typically
reduces the number of flips needed to find a solution over
the best of Novelty+ and WSAT by a factor of two to four
on random satisfiable CNF formulae (from the uf, flat, and
aim collections), and by a factor of five to ten on non-random
CNF formulae (from the SATLIB blocks-world and
ais problem sets). These results are consistent across the vast
majority of the problems we have investigated, and hold up
even when considering the mean flips without restart, median
flips, and optimal expected flips using restarts estimated
from (1). However, our current implementation of SDF is
not optimized and does not yet outperform current methods
in terms of CPU time. Details are reported below.
In all experiments, each problem was executed 100 times
and results averaged over these runs. All problems were
tried with SDF(.2), Novelty+, and
WSAT(.5). Furthermore, smaller problems were also tried
with HSAT, GSAT, and simulated annealing. 2 There are
2 We have as yet been unable to replicate the reported results
for the DLM procedures from the published descriptions (Wu &
Wah 1999; Shang & Wah 1998), and so did not include them in our
study. This remains as future work.
Figure 6: Search results. (Three flattened tables: random problem sets with average flips and estimated optimum with cutoff for SDF, Novelty+, and WSAT - one recoverable row: uf125: 1906, 1563, 5160, 2712, 5876; planning problems with average flips and estimated optima for SDF and Novelty+ plus % failed SDF / Nov+ - one recoverable row: huge: 2561, 2560, 11104, 0 / 0, and a row label logistics.c; ais problems with SDF and WSAT columns and % failed SDF / WSAT.)
three sets of experiments reported in Figure 6. The first
set covers a wide array of random SAT problems from both
the SATLIB and DIMACS repositories. The results shown
are averaged over all problems in the respective problem set
and are shown for the runs with SDF, Novelty+ (which was
2nd best), and WSAT. The second set covers large planning
problems. The results are shown for SDF and Novelty+ (2nd
best), and the failure rates of each are compared. The third
set covers the ais (All-Interval Series) problem set and shows
results for SDF and WSAT (2nd best). In all experiments,
the mean flips without restart and optimal expected flips are
reported for SDF, and the optimal expected flips is reported
for the other algorithms (when significantly smaller than the
mean without restarts).
The results for the non-random blocks-world and ais
problems are particularly striking. These problems challenge
state of the art local search methods (verified in Figure 6),
and yet SDF appears to solve them relatively quickly.
This suggests that, although SDF shares many similarities to
other local search methods currently in use, it might offer a
qualitatively different approach that could yield benefits in
real world problems.
The current implementation of SDF is unfortunately not
without its limitations. We are presently using a non-optimized
floating-point implementation, which means that
even though SDF executes significantly fewer search steps
(flips) to solve most problems, each search step is more expensive
to compute. The overhead of our current implementation
is about a factor of six greater than that of Novelty or
WSAT per flip, which means that in terms of CPU time,
SDF is only competitive with the current best methods in
some cases (e.g. bw large.b). However, the inherent algorithmic
complexity of each flip computation in SDF is no
greater than that of GSAT, and we therefore expect that an
optimized implementation in integer arithmetic will speed
SDF up considerably-possibly to the extent that it strongly
outperforms current methods in terms of CPU time as well.
--R
Using CSP look-back techniques to solve real-world SAT instances
Performance test of local search algorithms using new types of random CNF formulas.
Adding new clauses for faster local search.
Weighting for Godot: Learning heuristics for GSAT.
Learning short-term weights for GSAT
Towards an understanding of hill-climbing procedures for SAT
Unsatisfied variables in local search.
In Hallam
Boosting combinatorial search through randomization.
On the run-time behavior of stochastic local search algorithms for SAT
Pushing the envelope: Planning
Test pattern generation using boolean satisfiability
Evidence for invariants in local search.
The breakout method for escaping from local minima.
Tuning local search for satisfiability testing.
Noise strategies for improving local search.
Ten challenges in propositional reasoning and search.
A new method for solving hard satisfiability problems.
A discrete Lagrangian based global search method for solving satisfiability problems.
Trap escaping strategies in discrete Lagrangian methods for solving hard satisfiability and maximum satisfiability problems.
--TR
Noise strategies for improving local search
Boosting combinatorial search through randomization
On the run-time behaviour of stochastic local search algorithms for SAT
Trap escaping strategies in discrete Lagrangian methods for solving hard satisfiability and maximum satisfiability problems
Heavy-Tailed Phenomena in Satisfiability and Constraint Satisfaction Problems
A Discrete Lagrangian-Based Global-Search Method for Solving Satisfiability Problems
An Efficient Global-Search Strategy in Discrete Lagrangian Methods for Solving Hard Satisfiability Problems
Generating Satisfiable Problem Instances
Local Search Characteristics of Incomplete SAT Procedures
--CTR
Monte Lunacek , Darrell Whitley , James N. Knight, Measuring mobility and the performance of global search algorithms, Proceedings of the 2005 conference on Genetic and evolutionary computation, June 25-29, 2005, Washington DC, USA
Alex Fukunaga, Automated discovery of composite SAT variable-selection heuristics, Eighteenth national conference on Artificial intelligence, p.641-648, July 28-August 01, 2002, Edmonton, Alberta, Canada
Holger H. Hoos, A mixture-model for the behaviour of SLS algorithms for SAT, Eighteenth national conference on Artificial intelligence, p.661-667, July 28-August 01, 2002, Edmonton, Alberta, Canada
Benjamin W. Wah, Yixin Chen, Constraint partitioning in penalty formulations for solving temporal planning problems, Artificial Intelligence, v.170 n.3, p.187-231, March 2006 | constraint satisfaction;experimental analysis;satisfiability;local search |
505996 | Improving Latency Tolerance of Multithreading through Decoupling. | AbstractThe increasing hardware complexity of dynamically scheduled superscalar processors may compromise the scalability of this organization to make an efficient use of future increases in transistor budget. SMT processors, designed over a superscalar core, are therefore directly concerned by this problem. This work presents and evaluates a novel processor microarchitecture which combines two paradigms: simultaneous multithreading and access/execute decoupling. Since its decoupled units issue instructions in-order, this architecture is significantly less complex, in terms of critical path delays, than a centralized out-of-order design, and it is more effective for future growth in issue-width and clock speed. We investigate how both techniques complement each other. Since decoupling features an excellent memory latency hiding efficiency, the large amount of parallelism exploited by multithreading may be used to hide the latency of functional units and keep them fully utilized. Our study shows that, by adding decoupling to a multithreaded architecture, fewer threads are needed to achieve maximum throughput. Therefore, in addition to the obvious hardware complexity reduction, it places lower demands on the memory system. Since one of the problems of multithreading is the degradation of the memory system performance, both in terms of miss latency and bandwidth requirements, this improvement becomes critical for high miss latencies, where bandwidth might become a bottleneck. Finally, although it may seem rather surprising, our study reveals that multithreading by itself exhibits little memory latency tolerance. Our results suggest that most of the latency hiding effectiveness of SMT architectures comes from the dynamic scheduling. On the other hand, decoupling is very effective at hiding memory latency. An increase in the cache miss penalty from 1 to cycles reduces the performance of a 4-context multithreaded decoupled processor by less than 2 percent. For the nondecoupled multithreaded processor, the loss of performance is about 23 percent. | Introduction
The gap between the speeds of processors and memories has kept increasing in the past decade
and it is expected to sustain the same trend in the near future. This divergence implies, in terms of
clock cycles, an increasing latency of those memory operations that cross the chip boundaries. In
addition, processors keep growing their capabilities to exploit parallelism by means of greater
issue widths and deeper pipelines, which further increases the negative impact of memory
latencies on the performance. To alleviate this problem, most current processors devote a high
fraction of their transistors to on-chip caches, in order to reduce the average memory access time.
Several prefetching techniques have also been developed, both hardware and software [3].
Some processors, commonly known as out-of-order processors [40, 20, 18, 8, 9], include
dynamic scheduling techniques, most of them based on Tomasulo's algorithm [34] or variations of
it, that allow them to tolerate both memory and functional unit latency, by overlapping it with
useful computations of other independent instructions. To implement it, the processor is capable of
filling issue slots with independent instructions by looking forward in the instruction stream, into a
limited instruction window. This is a general mechanism that aggressively extracts the instruction
parallelism available in the instruction window.
As memory latencies continue to grow in the future, out-of-order processors will need larger
instruction windows to find independent instructions to fill the increasing number of empty issue
slots, and this number will grow even faster with greater issue widths. The increase in the
instruction window size will have an obvious influence on the chip area, but its major negative
impact will strike at the processor clock cycle time. As reported recently [21], the networks
involved in the issue wake-up and bypass mechanisms, and also - although to a lesser extent - those
of the renaming stage, are in the critical path that determines the clock cycle time. In their
analysis, the authors of that study state that the delay function of these networks has a component
that increases quadratically with the window length. And, although linearly, it also depends
strongly on the issue width. Moreover, higher density technologies only accelerate the increase in
these latencies. Their analysis suggests that out-of-order architectures could face in the future a
serious bound on their clock speeds. Different kinds of architectures have been proposed
recently, either in-order or out-of-order, which address the clock cycle problem by partitioning
critical components of the architecture and/or providing less complex scheduling mechanisms [30,
6, 16, 21, 41]. They follow different partitioning strategies. One of them is the access/execute
paradigm, which was first proposed for early scalar architectures to provide them with dual issue
and a limited form of dynamic scheduling that is especially oriented to tolerate memory latency.
We believe that decoupled access/execute architectures can progressively regain interest as
issue widths and memory latencies keep growing and demanding larger instruction windows,
because these trends will make it worth trading issue complexity for clock speed.
Typically, a decoupled access/execute architecture [26, 27, 7, 39, 38, 23, 2, 12] splits, either
statically or dynamically, the instruction stream into two. The access stream is composed of all
those instructions involved in the fetch of data from memory, and it runs asynchronously with
respect to the execute stream, which is formed by the instructions that process these data. Both
streams are executed on independent processing units (called AP and EP respectively, in this
paper). The AP is expected to execute in advance of the EP and to prefetch data from memory into
the appropriate buffering structures, so that the EP can consume them without any delay. This
anticipation, or slippage, may involve multiple conditional branches. However, the amount of
slippage between the AP and the EP highly depends on the program ILP, because data and control
dependences can force both units to synchronize - the so called Loss of Decoupling events [2, 35]
- producing a serious performance degradation.
The decoupling model presented in this paper performs dynamic code partitioning, as in [27,
12], by following a simple scheme which is based on the instruction data types, i.e. integer or fp.
Although this rather simplistic scheme mostly benefits numerical programs, it still provides a
basis for our study which is mainly focused on the latency hiding potential of decoupling and its
synergy with multithreading. Recent studies [22, 24] have proposed other alternative compiler-assisted
partitioning schemes that address the partitioning of integer codes. Since one of the main
arguments for the decoupled approach is the reduced issue logic complexity, we have chosen to
issue instructions in-order within each processing unit. Such a decoupled architecture adapts to
higher memory latencies by scaling much simpler structures than an out-of-order design, i.e. scaling at a
lower hardware cost, or conversely scaling at a higher degree with similar cost. It may be argued
that in-order processors have a limited potential to exploit ILP. However, current compiling
techniques can extract much ILP and thus, the compiler can pass this information to the hardware
instead of using run-time schemes. This is the approach that emerging EPIC (Explicitly Parallel
Instruction Computing) architectures take [10].
We propose a new decoupled architecture which provides both the AP and the EP with a
powerful dynamic scheduling mechanism: simultaneous multithreading [37, 36]. Each processing
unit has several contexts, each issuing instructions in the above mentioned decoupled mode,
which are active simultaneously and compete for the issue slots, so that instructions from different
contexts can be issued in the same cycle. We show in this study that the combination of
decoupling and mulithreading takes advantage of their best features: while decoupling is a simple
but effective technique for hiding high memory latencies with a reduced issue complexity,
multithreading provides enough parallelism to hide functional unit latencies and keep functional
units busy. In addition, multithreading also helps to hide memory latency when a program
decouples badly. However, as far as decoupling succeeds in hiding memory latency, few threads
are needed to keep the functional units busy and achieve a near-peak issue rate. This is an
important result, since having few threads reduces the memory pressure, which has been reported
to be the major bottleneck in multithreading architectures, and reduces the hardware cost and
complexity.
The rest of this paper is organized as follows. Section 2 describes the base decoupled
architecture. It is then analyzed in Section 3, providing justification for multithreading. Section 4
describes and evaluates the proposed multithreaded decoupled architecture. Finally, we
summarize our conclusions in Section 5.
2. The basic decoupled architecture model
The baseline decoupled architecture considered in this paper (Figure 1) consists of two superscalar
decoupled processing units: the Address Processing unit (AP) and the Execute Processing unit
(EP). The decoupled processor executes a single instruction stream, based on the DEC-alpha ISA
[5], by splitting it dynamically and dispatching the instructions to either the AP or the EP. There
are two separate physical register files, one in the AP with 64 integer registers, the other in the EP
with 96 FP registers. Both units share a common fetch and dispatch stage, while they have
separate issue, execute and write-back stage pipelines. Next, there is a brief description of each
stage:
Figure 1: Scheme of the base decoupled processor (block diagram: fetch and decode & rename stages with a register map table, the AP and EP register files, the EP instruction queue, the store address queue, and the memory subsystem)
The fetch stage reads up to 4 consecutive instructions per cycle (but less than 4 if there is a
taken branch among them) from an infinite I-cache. Notice that I-cache miss ratios for SPEC FP95
are usually very low, so this approximation introduces a small perturbation. It is also provided
with a conditional branch prediction scheme based on a 2K entry Branch History Table, with a 2-
bit saturating counter per entry [25].
The dispatch stage decodes and renames up to 4 instructions per cycle and sends them to either
the AP or to the instruction queue IQ (48 entries) of the EP, depending on whether they are integer
or floating point instructions. All memory instructions are dispatched to the AP. The IQ allows the
AP to execute ahead of the EP, providing the necessary slippage between them to hide the memory
latency. Exceptions are kept precise by means of a reorder buffer, a graduation mechanism, and
the register renaming map table [13, 28]. Other decoupled architectures [27] had chosen to steer
memory instructions to both units to allow copying data from the load queue to registers. Since
preliminary studies showed that such code expansion would significantly reduce the performance,
we implemented dynamic register renaming, which avoids any duplication. That is, data fetched
from memory is written into a physical register rather than into a data queue, eliminating the need
for copying. It is also a convenient way to manage the disordered completion of loads when a
lockup-free cache is present. Duplication of conditional branch instructions, also used in [27], may
be avoided by incorporating speculation and recovery mechanisms similar to those the MIPS
R10000 uses to identify the instructions to squash in case of a misprediction.
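The steering rule itself is simple; a sketch is given below (the instruction fields is_memory and dtype are hypothetical names for illustration):

    def dispatch(decoded, ap_queue, ep_iq, width=4):
        """Steer up to `width` renamed instructions per cycle."""
        for instr in decoded[:width]:
            if instr.is_memory or instr.dtype == "int":
                ap_queue.append(instr)   # all memory and integer instructions to the AP
            else:
                ep_iq.append(instr)      # floating point instructions to the EP's IQ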
Both the AP and the EP are provided with 2 general purpose, fully pipelined functional units
whose latencies are 1 cycle (AP) and 4 cycles (EP), respectively. Each processing unit can read
and issue up to 2 instructions per cycle. To better exploit the parallelism between the AP and the
EP, the instructions can issue and execute speculatively beyond up to four unresolved branches (as
the MIPS R10000 [40] or the PowerPC 620 [20]). This feature may sometimes become a key
factor to enable the AP to slip ahead of the EP. Store addresses are held in the SAQ queue (32
entries) until the stores graduate. Loads are issued to the cache after being disambiguated against
all the addresses held in the SAQ. Whenever a dependence is encountered, the data from the
pending store is immediately bypassed to the register if it is available. Otherwise, the load is put
aside until this data is forwarded to it.
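A sketch of this disambiguation-and-forwarding policy follows (field names are hypothetical, and the SAQ is assumed ordered from oldest to youngest store):

    def issue_load(load, saq):
        for store in reversed(saq):          # youngest matching store wins
            if store.addr == load.addr:
                if store.data is not None:
                    return ("forward", store.data)  # bypass the store data
                return ("wait", store)       # put the load aside until data arrives
        return ("cache", None)               # no conflict: issue to the cache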
The primary data cache is on-chip, 2-ported [31], direct-mapped, 64 KB sized, with a
block length, and it implements a write-back policy to minimize off-chip bus traffic. It is a
lockup-free cache [17], modelled similarly to the MAF of the Alpha 21164 [5]. It can hold up to
outstanding (primary) misses to different lines, each capable of merging up to 4 (secondary) misses
per pending line. We assume that L1 cache misses always hit in an infinite multibanked
off-chip L2 cache, and they have a 16 cycle latency plus any penalty due to bus contention. The
L1-L2 interface consists of a fast 128-bit wide data bus, capable of delivering 16 bytes per cycle, like
that of the R10000 (the bus is busy during 2 cycles for each line that is fetched or copied back).
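The miss-merging behavior can be sketched as follows; the maximum number of primary misses is left as a parameter because the figure is not recoverable from this copy, while the limit of 4 secondary misses per line comes from the text.

    def handle_miss(mshr, line, request, max_primary, max_secondary=4):
        if line in mshr:                     # secondary miss on an already pending line
            if len(mshr[line]) < max_secondary:
                mshr[line].append(request)
                return "merged"
            return "stall"                   # merge capacity for this line exceeded
        if len(mshr) < max_primary:          # new primary miss: allocate an entry
            mshr[line] = [request]
            return "fetch"
        return "stall"                       # all entries busy: structural stall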
3. Quantitative evaluation of a decoupled processor
This section first characterizes the major sources of wasted cycles in a typical single-threaded
decoupled processor. Next, the latency hiding effectiveness of this architecture is
evaluated, identifying the main factors that influence the latency tolerance of the architecture.
Other studies on decoupled machines have been carried out before [1, 26, 7, 29, 27, 39, 38, 19,
14], but they did not incorporate techniques like store-load forwarding, control speculation or
lockup-free caches. This section also provides the motivation for the multithreaded decoupled
architecture that is analyzed in Section 4.
3.1. Experimental framework
The experiments were carried out with a trace driven simulator. The binary code was obtained by
compiling the SPEC FP95 benchmark suite [33], for a DEC AlphaStation 600 5/266, with the
DEC compiler applying full optimizations. The trace was generated by running this code
previously instrumented with the ATOM tool [32]. The simulator modelled, cycle-by-cycle, the
architecture described in the previous section, and run the SPEC FP95 benchmarks, fed with their
largest available input data sets. Since it is very slow, due to the detail of the simulations, we run
only a portion of 100M instructions of each benchmark, after skipping an initial start-up phase. To
determine the appropriate initial discarded offset we compared the instruction-type frequencies of
such a fragment starting at different points, with the full run frequencies. We found that this phase
does not have the same length for all the benchmarks: about 5000 M instructions for 101.tomcatv and
1000 M for 104.hydro2d and 146.wave5; and just 100 M for the rest of the
benchmarks.
3.2. Sources of wasted cycles
Figure 2 shows the throughput of the issue stage in terms of the percentage of committed
instructions over the total issue slot count (i.e. percent of issue slots where it is really doing useful
work) for the AP and the EP. The wasted throughput is also characterized by identifying the cause
for each empty issue slot. Four different configurations have been evaluated, which differ in
whether lockup-free cache is included and whether the store-load forwarding mechanism is
enabled. To stress the memory system, in this section we assume an 8 KB L1 data cache.
As shown in Figure 2, when a lockup-free cache is not present (first and second bars), the AP
is stalled by load misses and the EP is starved for most of the time. Miss latency increases the AP
cycle count far above the EP cycle count. The AP execution time becomes the bounding limit of
the global performance, and decoupling can hardly hide memory latencies.
Figure 2: Issue slot breakdown for several decoupled architectures, showing the effects of a lockup-free cache and a store-load forwarding mechanism (8 KB L1 cache size). (Bar charts for the AP and the EP over four configurations - none, forwd, l-free, l-free+forwd - with issue slots broken down into: useful work, wrong-path instr. or idle, wait operand from FU, wait operand from memory, blocking miss, st/ld hazard, and other.)
The nature of these stalls is a structural hazard. When a lockup-free cache is used, this kind of stall is almost
eliminated (third and fourth bars). Of course, this uncovers other overlapped causes, but the
overall improvement in performance achieves an impressive 2.3 speed-up (from 0.98 to 2.32 IPC).
A memory data hazard can occur between a store and a FP load, and it is detected during
memory disambiguation. When store-load forwarding is not enabled (first and third bars), a
memory hazard produces a stall on the AP until the store is issued to the cache. In addition, it
causes a slippage reduction between the two units - we call this event a loss of decoupling, or LOD
[2, 35] - that may expose the EP to be penalized by the memory latency in case of a subsequent
load miss. The amount of slippage reduction between the AP and the EP caused by a memory
hazard depends on how close the load is scheduled after the matching store. The results depicted
in Figure 2 show that the AP stalls (labelled st/ld hazards) are almost completely removed when
the store-load forwarding is enabled. However, the average improvement on the EP performance
is almost negligible (overall IPC increases just by 1.8%). This latter fact suggests that either the
stores are scheduled far enough in advance of the matching loads, or there is little probability of a
subsequent miss.
Finally, for a full featured configuration (fourth bar in the graph), it can be observed that the
major source of wasted slots in the EP is true data dependences between register operands
(labelled wait operand from FU), and that these stalls are less than those caused by misses
(labelled wait operand from memory). Notice that although there are many more loads to the EP
registers than loads to the AP registers, the stalls caused by misses are similar on both processor
units because each integer load miss produces a higher penalty, as this will be more clearly
illustrated in the next section.
3.3. Latency hiding effectiveness
The interest of a decoupled architecture is closely related to its ability to hide high memory
latencies without resorting to other, more complex issue mechanisms. The latency hiding potential
of a decoupled processor depends strongly on the decoupling behaviour of the programs being
run. For some programs, the scheduling ability of the compiler to remove LOD events, which
force the AP and the EP to synchronize, is also a key factor. However, the compiler we have used
(Digital f77) is not especially tailored to a decoupled processor. Since the latency
hiding effectiveness of decoupling provides the basis for our proposed multithreaded decoupled
architecture, we are therefore interested in assessing it on our base architecture, without any
specific compiler support, in order to validate our conclusions. For this purpose, we have run the 10
benchmarks with the external L2 cache latency varying from 1 to 256 cycles. The simulations
assume all the architectural parameters described in Section 2, except that all the architectural
queues and physical register files are scaled up proportionally to the L2 latency. In addition to the
performance, we have also measured separately the average "perceived" latency of integer and FP
load misses. Since we are interested in the particular benefit of decoupling, independently of the
cache miss ratio, this average does not include load hits.
[Figure 3-a: Perceived miss latency of FP loads vs. L2 latency. Figure 3-b: Perceived miss latency of integer loads vs. L2 latency. Figure 3-c: Miss ratios of loads and stores per benchmark, when the L2 latency is 256 cycles. Figure 3-d: Impact of latency on performance (loss relative to the 1-cycle L2 latency case).]
The perceived latency of FP load misses measures the EP stalls caused by misses, and reveals
the "decoupled behavior" of a program, i.e. the amount of slippage of the AP with respect to the
EP. As shown in Figure 3-a, except for fpppp, more than 96% of the FP load miss latency is
always hidden. The perceived latency of integer load misses measures the AP stalls caused by
misses, and it depends on the ability of the compiler to schedule integer loads ahead of their
dependent instructions. As shown in Figure 3-b, fpppp, su2cor, turb3d and wave5 are the
programs that experience the largest integer load miss stalls.
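The perceived-latency metric can be stated operationally. The following Python sketch (our own illustration, not from the paper; the record layout is an assumption) computes it from a per-load event log produced by a simulator, assuming each record carries the stall cycles actually observed by the consuming unit:

def perceived_miss_latency(loads):
    # loads: iterable of (is_miss, stall_cycles) records from a simulation run.
    # Returns the average stall per load miss; hits are excluded, so a value
    # near zero under a high L2 latency means the latency was hidden.
    stalls = [s for miss, s in loads if miss]
    return sum(stalls) / len(stalls) if stalls else 0.0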
Regarding the impact of the L2 latency on performance (see Figure 3-d), although programs
like fpppp or turb3d have quite high perceived load miss latencies, their performance hardly
degrades, owing to their extremely low miss ratios (depicted in Figure 3-c). The most degraded
programs are those with both high perceived miss latencies and significant miss ratios:
hydro2d, wave5 and su2cor.
To summarize, performance is little affected by the L2 latency when either the latency can be
hidden efficiently (tomcatv, swim, mgrid, applu and apsi) or the miss ratio is low (fpppp and
turb3d), but it is seriously degraded for programs that lack both features (su2cor, wave5 and
hydro2d). The hidden miss latency of FP loads depends on the good decoupling behavior of the
programs, while that of integer loads relies exclusively on the static instruction scheduling.
4. A multithreaded decoupled architecture
As shown in the previous section, most of the stalls of a decoupled processor may be removed,
except those caused by true data dependences between register operands in the EP (Figure 2 right,
labelled wait operand from FU), because of the restricted ability of the in-order issue model to
exploit ILP. If both the AP and the EP were provided with some dynamic scheduling capability,
most of these stalls could also be removed. Simultaneous multithreading (SMT) is a dynamic
scheduling technique that increases processor throughput by exploiting thread-level parallelism:
multiple simultaneously active contexts compete for issue slots and functional units. Previous
studies of SMT focused on dynamic instruction scheduling mechanisms [4, 11, 37, 36,
among others] other than decoupling. In this paper, we analyze its potential when implemented on
a decoupled processor. We still refer to it as simultaneous, despite obvious substantial
differences from the original SMT, because it retains the key concept of issuing from different
threads during a single cycle. Since decoupling provides excellent memory latency tolerance, and
multithreading supplies enough parallelism to remove the remaining stalls, we expect
important synergistic effects in a new microarchitecture that combines these two techniques. In
this section we present and evaluate the performance and memory latency tolerance of the
multithreaded decoupled access/execute architecture, and we analyze the mutual benefits of both
techniques, especially when the miss latency is large.
4.1. Architecture overview
Our proposal is a multithreaded decoupled architecture (Figure 4). That is, each thread executes in
a decoupled mode, sharing the functional units and the data cache with other threads.
[Figure 4: Scheme of the multithreaded decoupled processor, showing the memory subsystem, store address queue, instruction queues, per-context register files and map tables, and the fetch and dispatch & rename stages.]
The base multithreaded decoupled architecture is based on the decoupled design of the previous section
with some extensions: it can run up to 6 threads and issue up to 8 instructions per cycle (4 at the
AP and 4 at the EP) to 8 functional units. The L1 lockup-free data cache is augmented to 4 ports.
The fetch and dispatch stages - including branch prediction and register map tables - and the
register files and queues are replicated for each context. The issue logic, functional units and the
data cache are shared by all the threads.
In our model, all the threads are allowed to compete for each of the 8 issue slots each cycle,
and priorities among them are determined in pure round-robin order (similar to the full
simultaneous issue scheme reported in [37]). Each cycle, only two threads have access to the
I-cache, and each of them can fetch up to 8 consecutive instructions (up to the first taken branch).
The chosen threads are those with the fewest instructions pending to be dispatched (similar to the
RR-2.8 with I-COUNT scheme reported in [36]).
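The fetch-thread selection just described can be illustrated with a small sketch. The following Python fragment (our own illustration; the names and data structures are assumptions, not part of the simulator) picks the two threads with the fewest pending-dispatch instructions, in the spirit of an I-COUNT heuristic:

def select_fetch_threads(pending_dispatch, num_fetch=2):
    # pending_dispatch: dict thread_id -> number of fetched-but-not-dispatched
    # instructions. Returns the threads allowed to access the I-cache this
    # cycle, favoring the least-represented threads.
    ranked = sorted(pending_dispatch, key=pending_dispatch.get)
    return ranked[:num_fetch]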
4.2. Experimental evaluation
The multithreaded decoupled simulator is fed with t different traces, corresponding to t
independent threads. The trace of each thread is built by concatenating the first 10 million
instructions of each of the 10 traces used in the previous section - each thread using a different
permutation - thus totalling 100 million instructions per thread. In this way, all threads have
different traces but balanced workloads, similar miss ratios, etc. Figure 5 shows the wasted issue
slots when the number of threads varies from 1 to 6. Since different threads may be candidates
for the same slot, and each can lose it for a different cause, we characterize the loss
of performance by classifying the wasted issue slots proportionally to the causes that prevent
individual threads from issuing.
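As a sketch of this workload construction (ours, with assumed names), each thread's trace is a randomly permuted concatenation of the same 10 benchmark fragments:

import random

def build_thread_traces(fragments, num_threads, seed=0):
    # fragments: list of 10 per-benchmark trace fragments (10 M instructions
    # each). Returns one concatenated trace per thread, each in a randomly
    # permuted order, so all threads see the same total work shuffled differently.
    rng = random.Random(seed)
    traces = []
    for _ in range(num_threads):
        order = fragments[:]
        rng.shuffle(order)
        traces.append([instr for frag in order for instr in frag])
    return traces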
4.3. Wasted issue slots in the multithreaded decoupled architecture
The first column in Figure 5 represents the case with a single thread, and it reveals, as expected,
that the major bottleneck is the EP functional unit latency (caused by the lack of
parallelism of the in-order issue policy, as discussed in Section 3). When two more contexts are
added, the multithreading mechanism drastically reduces these stalls in both units, and produces a
2.31 speed-up (from 2.68 IPC to 6.19 IPC). Since with 3 threads the AP functional units are nearly
saturated (90.7%), negligible additional speed-ups are obtained by adding more contexts (6.65
IPC is achieved with 4 threads).
Notice that although the AP almost achieves its maximum throughput, the EP functional units
do not saturate, due to the load imbalance between the AP and the EP. Therefore, the effective peak
performance is reduced by 17%, from 8 to 6.65 IPC. This problem could be addressed with a
different choice of the number of functional units in each processor unit, but that is beyond the
scope of this study.
Another important remark is that when the number of threads is increased, the combined
working set is larger, and the miss ratios increase progressively, putting greater demands on the
external bus bandwidth. On average, there are more pending misses, which increases the effective
load miss latency and the EP stalls caused by operands awaited from memory (see the
rightmost graph of Figure 5). On the other hand, the AP stalls due to integer load misses (see wait
operand from memory in the leftmost graph of Figure 5) are almost eliminated by multithreading,
since these loads do not benefit from decoupling.
[Figure 5: AP (left) and EP (right) issue slot breakdown for the multithreaded decoupled architecture, for 1 to 6 threads. Slot categories: useful work; wait operand from FU; wait operand from memory; idle (AP) / empty i-queue (EP); other.]
4.4. Latency hiding effectiveness
Multithreading and decoupling are two different approaches to tolerating high memory latencies. We
have run experiments, similar to those of Section 3.3, for a multithreaded decoupled
processor having from 1 to 4 contexts, to quantify its latency tolerance. In addition, some other
experiments were carried out to reveal the contribution of each mechanism to the latency hiding
effect. They consist of a set of identical runs on a degenerate version of our multithreaded
architecture where the instruction queues are disabled (i.e. a non-decoupled multithreaded
architecture).
Figure 6-a shows the average perceived load miss latency from the point of view of each
individual thread, for the 8 configurations mentioned above, with the L2 latency varying from 1 to 256
cycles. This metric expresses the average number of cycles that an instruction of a scheduled thread
cannot issue because an operand depends on a pending load miss. Figure 6-b shows the
corresponding relative performance loss (with respect to the 1-cycle L2 latency) of each of the 8
configurations. Notice that this metric compares the tolerance of these architectures to memory
latency, rather than their absolute performance. Several conclusions can be drawn from these
graphs.
First, we can observe in Figure 6-a that the average load miss latency perceived by an
individual thread is quite low when decoupling is enabled (less than 6 cycles, for an L2 latency of
256 cycles) but much higher when decoupling is disabled. Second, the load miss latency
perceived by an individual thread is slightly longer when more threads are running. Although
having more threads effectively reduces the number of stall cycles of each thread, it also increases
the miss ratio (due to the larger combined working set) and produces longer bus contention delays,
which becomes the - slightly - dominant effect.
Third, Figure 6-b shows that when the L2 memory latency is increased from 1 cycle to
moderate values, the decoupled multithreaded architecture experiences performance drops of less than
3.6% (less than 1.5% with 4 threads), while the performance degradation observed in all
non-decoupled configurations is greater than 23%. Even for a huge memory latency of 256 cycles, the
performance loss of all the decoupled configurations is lower than 39%, while it is greater than
79% for the non-decoupled configurations. Fourth, multithreading provides some additional
latency tolerance, especially in the non-decoupled configurations, but it is much
lower than the latency tolerance provided by decoupling.
[Figure 6-a: Average perceived load miss latency of individual threads vs. L2 latency. Figure 6-b: Latency tolerance: performance loss relative to the 1-cycle L2 latency case. Figure 6-c: Contribution of decoupling and multithreading to performance: IPC vs. L2 latency for 1 to 4 threads, decoupled and non-decoupled.]
Some other conclusions can be drawn from Figure 6-c. While multithreading raises the
performance curves, decoupling makes them flatter. In other words, while the main effect of
multithreading is to provide more throughput by exploiting thread level parallelism, the major
contribution to memory latency tolerance, which is related to the slope of the curves, comes from
decoupling, and this is precisely the specific role that decoupling plays in this hybrid architecture.
4.5. Hardware context reduction and the external bus bandwidth bottleneck
Multithreading is a powerful mechanism that greatly improves processor throughput, but it has
a cost: it needs a considerable amount of hardware resources. We have run some experiments that
illustrate how decoupling reduces the hardware context requirements. We have measured the
performance of several configurations having from 1 to 8 contexts, on both a decoupled
multithreaded architecture and a non-decoupled multithreaded architecture (see Figure 7-a). While
the decoupled configuration achieves the maximum performance with just 3 or 4 threads, the
non-decoupled configuration needs 6 threads to achieve similar IPC ratios.
One of the traditional claims for the multithreading approach is its ability to sustain a high
processor throughput even in systems with a high memory latency. Since hiding a longer latency
may require a larger number of contexts and, as is well known, this has a strong negative impact
on memory performance, the reduction in hardware context requirements obtained by
decoupling may become a key factor when the L2 memory latency is high. To illustrate this, we have
run the previous experiment with an L2 memory latency of 64 cycles.
[Figure 7-a: IPC vs. number of threads (1 to 8), decoupled and non-decoupled: decoupling reduces the number of hardware contexts required. Figure 7-b: With a 64-cycle L2 latency, the maximum performance without decoupling cannot be reached, due to external bus saturation.]
As shown in Figure 7-b,
while the decoupled architecture achieves the maximum performance with just 4 or 5 threads, the
non-decoupled architecture cannot reach similar performance with any number of threads,
because it would need so many that they would saturate the external L2 bus: the average bus
utilization is already 89% with 12 threads, and rises to 98% with additional threads. Moreover,
notice that the decoupled architecture requires just 3 threads to achieve about the same performance
as the non-decoupled architecture with 12 threads. Thus, decoupling significantly reduces the
amount of parallelism required to reach a certain level of performance.
The previous result suggests that the external L2 bus bandwidth is a potential bottleneck in this
kind of architecture. To further describe its impact, we have measured the performance and bus
utilization of several configurations having from 1 to 6 hardware contexts, for three different
external bus bandwidths of 8, 16 and 32 bytes/cycle. Results are shown in Figures 8-a and 8-b.
For an 8 bytes/cycle bandwidth, the bus becomes saturated when more than 3 threads
are running, and performance is degraded beyond this point.
To summarize, decoupling and multithreading complement each other to hide memory latency
and increase ILP with reduced amounts of thread-level parallelism and low issue logic complexity.
[Figure 8-a: IPC vs. number of threads, for external bus bandwidths of 8, 16 and 32 bytes/cycle. Figure 8-b: External L2 bus utilization for the same configurations.]
5. Summary and conclusions
In this paper we have analyzed the synergy of multithreading and access/execute decoupling. A
multithreaded decoupled architecture aims at taking advantage of the latency hiding effectiveness
of decoupling and the potential of multithreading to exploit ILP. We have analyzed the most
important factors that determine its performance and the synergistic effect of both paradigms.
A multithreaded decoupled architecture hides memory latency efficiently: the average load
miss latency perceived by an individual thread is less than 6 cycles in the worst case (with 4
threads and an L2 latency of 256 cycles). We have also found that, for moderate L2 latencies,
the impact on performance is quite low: less than 3.5% IPC loss, relative to the 1-cycle
latency scenario, and quite independent of the number of threads. However, this impact
grows beyond a 23% IPC loss if decoupling is disabled. This latter fact shows that the main
contribution to memory latency tolerance comes from the decoupling mechanism.
The architecture reaches maximum performance with very few threads, significantly fewer than
a non-decoupled architecture. The number of simultaneously active threads supported by the
architecture has a significant impact on the hardware chip area (e.g. number of registers,
instruction queues) and complexity (e.g. the instruction fetch and issue mechanisms), and
consequently on the clock cycle time.
Reducing the number of threads also reduces the cache conflicts and the required memory
bandwidth, which is usually one of the potential bottlenecks of a multithreaded architecture. We
have shown how the external L2 bus becomes a bottleneck when the miss latency is 64
cycles and decoupling is disabled, preventing the architecture from achieving its maximum
performance with any number of threads.
In summary, we can conclude that the decoupling and multithreading techniques complement each
other to exploit instruction-level parallelism and to hide memory latency. This particular
combination obtains its maximum performance with few threads, has a reduced issue logic
complexity, and its performance hardly degrades over a wide range of L2 latencies. All of these
features make it a promising alternative for future increases in clock speed and issue width.
6. References
--R
A Decoupled Access/Execute Architecture for Efficient Access of Structured Data.
The Effectiveness of Decoupling.
A performance study of software and hardware data prefetching schemes.
The Concurrent Execution of Multiple Execution Streams on Superscalar Processors
Alpha 21164 Microprocessor Hardware Reference Manual
The Multicluster Architecture: Reducing Cycle Time Through Partitioning.
PIPE: A VLSI Decoupled Architecture.
Intel's P6 Uses Decoupled Superscalar Design.
Digital 21264 Sets New Standard.
Intel, HP Make EPIC Disclosure.
An Elementary Processor Architecture with Simultaneous Instruction Issuing from Multiple Threads.
Designing the TFP Microprocessor.
Superscalar Microprocessor Design.
A Limitation Study into Access Decoupling.
Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers
PEWs: A Decentralized Dynamic Scheduler for ILP Processing.
Memory Latency Effects in Decoupled Architectures.
The PowerPC 620
Decoupling Integer Execution in Superscalar Processors.
Structured Memory Access Architecture.
Exploiting Idle Floating-Point Resources For Integer Execution
A Study of Branch Prediction Strategies.
Decoupled Access/Execute Computer Architectures.
Implementation of Precise Interrupts in Pipelined Processors.
A Simulation Study of Decoupled Architecture Computers.
Multiscalar Processors.
ATOM: A System for Building Customized Program Analysis Tools.
Standard Performance Evaluation Corporation.
An Efficient Algorithm for Exploiting Multiple Arithmetic Units.
Compiling and Optimizing for Decoupled Architectures.
Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor.
Simultaneous Multithreading: Maximizing On-Chip Parallelism
MISC: A Multiple Instruction Stream Computer.
An Evaluation of the WM Architecture.
The Mips R10000 Superscalar Microprocessor.
--TR
A simulation study of decoupled architecture computers
The ZS-1 central processor
High-bandwidth data memory systems for superscalar processors
An elementary processor architecture with simultaneous instruction issuing from multiple threads
Evaluation of the WM architecture
MISC
The effectiveness of decoupling
ATOM
Designing the TFP Microprocessor
Compiling and optimizing for decoupled architectures
Simultaneous multithreading
Multiscalar processors
Decoupling integer execution in superscalar processors
Exploiting choice
Complexity-effective superscalar processors
Trace processors
The multicluster architecture
Exploiting idle floating-point resources for integer execution
Performance modeling and code partitioning for the DS architecture
Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers
Implementation of precise interrupts in pipelined processors
Decoupled access/execute computer architectures
The MIPS R10000 Superscalar Microprocessor
Memory Latency Effects in Decoupled Architectures
A Limitation Study into Access Decoupling
The PowerPC 620 microprocessor
Lockup-free instruction fetch/prefetch cache organization
A study of branch prediction strategies
A Cost-Effective Clustered Architecture
The Latency Hiding Effectiveness of Decoupled Access/Execute Processors | simultaneous multithreading;instruction-level parallelism;hardware complexity;latency hiding;access/execute decoupling |
506024 | Fault Detection for Byzantine Quorum Systems. | In this paper, we explore techniques to detect Byzantine server failures in asynchronous replicated data services. Our goal is to detect arbitrary failures of data servers in a system where each client accesses the replicated data at only a subset (quorum) of servers in each operation. In such a system, some correct servers can be out-of-date after a write and can therefore return values other than the most up-to-date value in response to a client's read request, thus complicating the task of determining the number of faulty servers in the system at any point in time. We initiate the study of detecting server failures in this context, and propose two statistical approaches for estimating the risk posed by faulty servers based on responses to read requests. | Introduction
Data replication is a well-known means of protecting against data unavailability
or corruption in the face of data server failures. When servers can suffer Byzantine
(i.e., arbitrary) failures, the foremost approach for protecting data is via state
machine replication [Sch90], in which every correct server receives and processes
every request in the same order, thereby producing the same output for each request. If the
client then accepts a value returned by at least t + 1 servers, then up
to t arbitrary server failures can be masked. Numerous systems have been built to
support this approach (e.g., [PG89, SESTT92, Rei94, KMM98]).
To improve the efficiency and availability of data access while still protecting the
integrity of replicated data, the use of quorum systems has been proposed. Quorum
systems are a family of protocols that allow reads and updates of replicated data
to be performed at only a subset (quorum) of the servers. In a t-masking quorum
system, the quorums of servers are defined such that any two quorums intersect in
at least 2t + 1 servers. In a system with a maximum of t faulty
servers, if each read and write operation is performed at a quorum, then the quorum
used in a read operation will intersect the quorum used in the last preceding write
operation in at least t+1 correct servers. With appropriate read and write protocols,
this intersection condition ensures that the client is able to identify the correct,
up-to-date data [MR97a].
A difficulty of using quorum systems for Byzantine fault tolerance is that detecting
responsive but faulty servers is hard. In state machine replication, any server
response that disagrees with the response of the majority immediately exposes the
failure of the disagreeing server to the client. This property is lost, however, with
quorum systems: because some servers remain out of date after any given write, a
contrary response from a server in a read operation does not necessarily suggest the
server's failure. Therefore, we must design specific mechanisms to monitor the existence
of faults in a quorum-replicated system, e.g., to detect whether the number
of failures is approaching t.
In this paper, we initiate the study of Byzantine fault detection methods for quorum
systems by proposing two statistical techniques for estimating the number of
server failures in a service replicated using a t-masking quorum system. Both of
our methods estimate the total number of faulty servers from responses to a client's
read requests executed at a quorum of servers, and are most readily applicable to
the threshold quorum construction of [MR97a], in which a quorum is defined as any
set of size $\lceil \frac{n+2t+1}{2} \rceil$. The first method has the advantage of requiring essentially no
change to the read and write protocols proposed in [MR97a]. The second method
does require an alteration of the read and write protocols, but has the advantages
of improved accuracy and specific identification of a subset of the faulty servers.
Furthermore, the fault identification protocol of the second method is applicable
without alteration to all types of t-masking quorum systems, and indeed to other
types of Byzantine quorum systems as proposed in [MR97a].
Both methods set an alarm line $t_a < t$ and issue a warning whenever the number
of server failures exceeds $t_a$. We show how the system can use information from
each read operation to statistically test the hypothesis that the actual number of
faults f in the system is at most $t_a$. As we will show, if $t_a$ is correctly selected and
read operations are frequent, both methods can be expected to issue warnings in a
timely fashion, i.e., while it is still the case that $f < t$. The service can then be
repaired (or at least disabled) before the integrity of the data set is compromised.
As an initial investigation into the statistical monitoring of replicated data, this
paper adopts a number of simplifying assumptions. First, we perform our statistical
analysis in the context of read operations that are concurrent with no write oper-
ations, as observing partially completed writes during a read substantially complicates
the task of inferring server failures. Second, we assume that clients are
correct; distinguishing a faulty server from a server into which a faulty client has
written incorrect data raises issues that we do not consider here. Third, we restrict
our attention to techniques that modify the read and write protocols only minimally
or not at all and that exploit data gathered from a single read only, without aggregating
data across multiple reads. (As we will show in this paper, a surprising
amount of information can be obtained without such aggregation.) Each of these
assumptions represents an area for possible future research.
The goal of our work is substantially different from that of various recent works
that have adapted failure detectors [CT96] to solve consensus in distributed systems
that can suffer Byzantine failures [MR97b, DS97, KMM97]. These works focus on
the specification of abstract failure detectors that enable consensus to be solved.
Our goal here is to develop techniques for detecting Byzantine failures specifically
in the context of data replicated using quorum systems, without regard to abstract
failure detector specifications or the consensus problem. Lin et al. [LRM98] analyze
the process of gradual infection of a system by malicious entities. Their analysis
attempts to project when failures exceed certain thresholds by extrapolating from
observed failures onto the future, on the basis of certain a priori assumptions about
the communication patterns of processes and the infection rate of the system. Our
methods do not depend on these assumptions, as they do not address the propagation
of failures in the system; rather, they attempt to measure the current number of
failures at any point in time.
To summarize, the contributions of this paper are twofold: we initiate the direction
of fault monitoring and detection in the context of Byzantine quorum sys-
tems; and we propose two statistical techniques for performing this detection for
t-masking quorum systems under the conditions described above. The rest of this
paper is organized as follows. In Section 2 we describe our system model and
necessary background. In Sections 3-4 we present and analyze our two statistical
methods using exact formulae for alarm line placement in relatively small systems.
In Section 5 we present an asymptotic analysis for estimating appropriate alarm line
placement in larger systems for both methods. We conclude in Section 6.
2 Preliminaries
2.1 System model
Our system model is based on a universe U of n data servers. A correct server is
one that behaves according to its specification, whereas a faulty server deviates from
its specification arbitrarily (Byzantine failure). We denote the maximum allowable
number of server failures for the system by t, and the actual number of faulty servers
in the system at a particular moment by f . Because our goal in this paper is to
detect faulty servers, we stipulate that a faulty server does in fact deviate from its
I/O specification, i.e., it returns something other than what its specification would
dictate (or it returns nothing, though unresponsive servers are ignored in this paper
and are not the target of our detection methods). It is hardly fruitful to attempt to
detect "faulty" servers whose visible behavior is consistent with correct execution.
Our system model also includes some number of clients, which we assume to be
correct. Clients communicate with servers over point-to-point channels. Channels
are reliable, in the sense that a message sent between a client and a correct server
is eventually received by its destination. In addition, a client can authenticate the
channel to a correct server; i.e., if the client receives a message from a correct
server, then that server actually sent it.
2.2 Masking quorum systems
We assume that each server holds a copy of some replicated variable Z, on
which clients can execute write and read operations to change or observe its value,
respectively. The protocols for writing and reading Z employ a t-masking quorum
system [MR97a, MRW97], i.e., a set of subsets of servers $\mathcal{Q} \subseteq 2^U$ such that
$|Q_1 \cap Q_2| \geq 2t+1$ for all $Q_1, Q_2 \in \mathcal{Q}$. Intuitively, if each read and write is performed
at a quorum of servers, then the use of a t-masking quorum system ensures that a
read quorum $Q_2$ intersects the last write quorum $Q_1$ in at least t + 1 correct servers,
which suffices to enable the reader to determine the last written value. Specifically,
we base our methods on threshold masking quorum systems [MR97a], defined by
$\mathcal{Q} = \{Q \subseteq U : |Q| = \lceil \frac{n+2t+1}{2} \rceil\}$; i.e., the quorums are all sets of servers of size
$\lceil \frac{n+2t+1}{2} \rceil$. These systems are easily seen to have the t-masking property above.
We consider the following protocols for accessing the replicated variable Z,
which were shown in [MR97a] to give Z the semantics of a safe variable [Lam86].
Each server u maintains a timestamp $T_u$ with its copy $Z_u$ of the variable Z. A
client writes the timestamp when it writes the variable. These protocols require
that different clients choose different timestamps, and thus each client c chooses
its timestamps from some set $T_c$ that does not intersect $T_{c'}$
for any other client $c'$.
Client operations proceed as follows.
Write: For a client c to write the value v to Z, it queries each server in some
quorum Q to obtain a set of value/timestamp pairs $A = \{\langle Z_u, T_u \rangle\}_{u \in Q}$; it then chooses
a timestamp $T \in T_c$ greater than the highest timestamp value in A and greater than
any timestamp it has chosen in the past, and updates $Z_u$ and $T_u$ at each server u in
some quorum $Q'$
to v and T, respectively.
Read: For a client to read a variable Z, it queries each server in some quorum Q to
obtain a set of value/timestamp pairs $A = \{\langle Z_u, T_u \rangle\}_{u \in Q}$. From among all pairs
returned by at least t + 1 servers in Q, the client chooses the pair $\langle v, T \rangle$ with the
highest timestamp T, and then returns v as the result of the read operation. If there
is no pair returned by at least t + 1 servers, the result of the read operation is $\perp$ (a
null value).
In a write operation, each server u updates $Z_u$ and $T_u$ to the received values $\langle v, T \rangle$
only if T is greater than the present value of $T_u$; this convention guarantees the
serializability of concurrent writes. As mentioned in Section 1 we consider only reads
that are not concurrent with writes. In this case, the read operation will never return
$\perp$ (provided that the assumed maximum number of failures t is not exceeded).
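As a concrete illustration of the read-side vote counting, here is a minimal Python sketch (our own; the names and data representation are assumptions) that selects the result of a read from the quorum's replies:

from collections import Counter

def read_result(replies, t):
    # replies: list of (value, timestamp) pairs, one per server in the read
    # quorum. Returns the highest-timestamped pair vouched for by at least
    # t+1 servers, or None (modeling the null value) if no pair has enough votes.
    votes = Counter(replies)
    candidates = [pair for pair, count in votes.items() if count >= t + 1]
    return max(candidates, key=lambda pair: pair[1], default=None)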
2.3 Statistical building blocks
The primary goal of this paper is to draw conclusions about the number f of
faulty servers in the system, specifically whether f exceeds a selected alarm threshold
$t_a$, where $t_a < t$, using the responses obtained in the read protocol of
the previous subsection. To do this, we make extensive use of a statistical technique
called hypothesis testing. To use this technique, we establish two hypotheses
about our universe of servers. The first of these is an experimental hypothesis $H_E$
that represents a condition to be tested for, e.g., that f exceeds the alarm threshold
$t_a$, and the second is a null hypothesis $H_0$ complementing it. The idea behind
hypothesis testing is to examine experimental results (in our case, read operations)
for conditions that suggest the truth of the experimental hypothesis, i.e., conditions
that would be "highly unlikely" if the null hypothesis were true. We define "highly
unlikely" by choosing a rejection level $\alpha$, identifying a corresponding
region of rejection for $H_0$, where the region of rejection is the maximal set of
possible results that suggest the truth of $H_E$ (and thus the falsity of $H_0$) and whose
total probability given $H_0$ is at most $\alpha$. For the purposes of our work, $H_E$ will be
$f > t_a$, and $H_0$ will be $f = t_a$. (Note that although these hypotheses are not
strictly complementary, the region of rejection for $H_0$ encompasses that of every
hypothesis $f = t_a'$ with $t_a' < t_a$; therefore the rejection level of the truly
complementary hypothesis $f \leq t_a$ is bounded by that of $H_0$. This treatment of the
null hypothesis is a standard statistical procedure.)
In this paper we will typically choose t a to be strictly less than the maximum
assumed number t of failures in the system, for the reason
that it is of little use to detect a dangerous condition after the integrity of the
data has been compromised. The "safest" value for t a is 0, but a higher value
may be desirable if small numbers of faults are common and countermeasures are
expensive.
In order for our statistical calculations to be valid, we must be able to treat individual
quorums and the intersection between any two quorums as random samples
of the universe of servers. Given our focus on a quorum system consisting of all
sets of size $\lceil \frac{n+2t+1}{2} \rceil$, this can be accomplished by choosing quorums in such a way
that each quorum (not containing unresponsive servers) is approximately equally
likely to be queried for any given operation.
As in any statistical method, there is some possibility of false positives (i.e.,
alarms sent when the fault level remains below t a ) and false negatives (failure to
detect a dangerous fault level before the threshold is exceeded). As we will show,
however, the former risk can be kept to a reasonable minimum, while the latter can
be made essentially negligible. 1
1 Except in catastrophically unreliable systems. Neither our method nor any other of which we are aware will protect against sudden near-simultaneous Byzantine failures in a sufficiently large number (e.g., greater than t) of servers.
3 Diagnosis using justifying sets
Our first method of fault detection for threshold quorum systems uses the read
and write protocols described in Section 2.2. As the random variable for our statistical
analysis, we use the size of the justifying set for a read operation, which is
the set of servers that return the value/timestamp pair $\langle v, T \rangle$ chosen by the client
in the read operation. The size of the justifying set is at least 2t + 1 if there are no
faulty servers, but can be as small as t + 1. The justifying set may be as
large as $\lceil \frac{n+2t+1}{2} \rceil$ in the case where the read quorum is the same as the quorum used
in the last completed write operation.
Suppose that a read operation is performed on the system, and that the size of
the justifying set for that read operation is x. We would like to discover whether
this evidence supports the hypothesis that the number of faults f in the system
exceeds some value $t_a$, where $t_a < t$. We do so using a formula for the probability
distribution of justifying set sizes; this formula is derived as follows.
Suppose we have a system of n servers, with a quorum size of q. Given f faulty
servers in the system, the probability of exactly j failures in the read quorum can
be expressed by a hypergeometric distribution as follows:
$$P(j \text{ failures in the read quorum}) = \binom{f}{j}\binom{n-f}{q-j} \Big/ \binom{n}{q}$$
Given that the number of failures in the read quorum is j, the probability that there
are exactly x correct servers in the intersection between the read quorum and the
previous write quorum is formulated as follows: the number of ways of choosing x
correct servers from the read quorum is $\binom{q-j}{x}$, and the number of possible previous
write quorums that intersect the read quorum in exactly those correct servers (and
some number of incorrect ones) is $\binom{n-(q-j)}{q-x}$. The probability that the previous write
quorum intersects the read quorum in exactly this way is therefore:
$$\binom{q-j}{x}\binom{n-(q-j)}{q-x} \Big/ \binom{n}{q}$$
To get the overall probability that there are exactly x correct servers in the intersection
between the read and most recent write quorums, i.e., that the justifying
set size (size) is x, we multiply the conditional probability given j failures in the
read quorum by the probability of exactly j failures in the read quorum, and sum
the result for $0 \leq j \leq f$:
$$P(\mathit{size} = x) = \sum_{j=0}^{f} \frac{\binom{f}{j}\binom{n-f}{q-j}}{\binom{n}{q}} \cdot \frac{\binom{q-j}{x}\binom{n-(q-j)}{q-x}}{\binom{n}{q}} \qquad (1)$$
This formula expresses the probability that a particular read operation in a t-masking
quorum system will have a justifying set size of x, given the presence of f faults.
For a given rejection level $\alpha$, then, the region of rejection for the null hypothesis
$H_0 : f = t_a$ is defined as $x \leq \mathit{highreject}$, where highreject is the maximum value such that
$$\sum_{x \leq \mathit{highreject}} \; \sum_{j=0}^{t_a} \frac{\binom{t_a}{j}\binom{n-t_a}{q-j}}{\binom{n}{q}} \cdot \frac{\binom{q-j}{x}\binom{n-(q-j)}{q-x}}{\binom{n}{q}} \leq \alpha$$
The left-hand expression above represents the significance level of the test, i.e.,
the probability of a false positive (false alarm).
If there are in fact $f > t_a$ failures in the system, the probability of detecting this
condition on a single read is:
$$\gamma = \sum_{x \leq \mathit{highreject}} \; \sum_{j=0}^{f} \frac{\binom{f}{j}\binom{n-f}{q-j}}{\binom{n}{q}} \cdot \frac{\binom{q-j}{x}\binom{n-(q-j)}{q-x}}{\binom{n}{q}}$$
If we denote this value by $\gamma$, then the probability that k consecutive reads fail to
detect the condition is $(1-\gamma)^k$. As shown in the following examples, k need not
be very large for this probability to become negligible.
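To make the test concrete, the following self-contained Python sketch (our own illustration, not part of the original paper; all function names are assumptions) evaluates the reconstruction of equation (1) above and derives the associated quantities:

from math import comb

def p_size(x, n, q, f):
    # Equation (1): P(justifying set size = x) for an n-server system with
    # quorum size q and f faulty servers.
    c_nq = comb(n, q)
    total = 0.0
    for j in range(f + 1):
        p_j = comb(f, j) * comb(n - f, q - j) / c_nq            # j faults in the read quorum
        p_x = comb(q - j, x) * comb(n - (q - j), q - x) / c_nq  # x correct servers in the overlap
        total += p_j * p_x
    return total

def high_reject(n, q, t_a, alpha):
    # Largest value such that P(size <= high_reject | f = t_a) <= alpha.
    cum, hr = 0.0, -1
    for x in range(q + 1):
        cum += p_size(x, n, q, t_a)
        if cum > alpha:
            break
        hr = x
    return hr

def detection_prob(n, q, f, hr):
    # Single-read detection probability (gamma) when f faults are present.
    return sum(p_size(x, n, q, f) for x in range(hr + 1))

def detect_within(k, gamma):
    # Probability that at least one of k consecutive reads raises the alarm.
    return 1.0 - (1.0 - gamma) ** k

Under the reconstruction above, high_reject(101, 76, 0, 0.05) should reproduce the $x \leq 53$ region used in Example 1 below.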
Example 1: Consider a system of n = 101 servers, with quorum size q = 76 and
fault tolerance threshold t = 25. In order to test whether there are any faults in the
system, we set $t_a = 0$, so that the null hypothesis $H_0$ is $f = 0$ and the experimental
hypothesis $H_E$ is $f > 0$. Plugging these numbers into (1) over the full range of
x yields the results in Table 1. For all other values of x not shown in Table 1, the
probability of a justifying set of size x given $f = 0$ is negligible. Given $\alpha = 0.05$
(extending the region to x = 54 would raise its total probability to about 0.071),
the region of rejection for $H_0$ is defined as $x \leq 53$; if a read operation
has a justifying set of size 53 or less, the client rejects the null hypothesis and
concludes that there are faults in the system. This test has a significance level of
0.019; that is, there is a probability of 0.019 that the client will detect faults when
there are none. (If this level of risk is unacceptable for a particular system, $\alpha$ can
be set to a lower value, thus creating a smaller region of rejection.)
Suppose that there are actually f failures in the system. The probability that this
experiment will detect the presence of failures during any given read is:
$$\sum_{x=26}^{53} \; \sum_{j=0}^{f} \frac{\binom{f}{j}\binom{101-f}{76-j}}{\binom{101}{76}} \cdot \frac{\binom{76-j}{x}\binom{25+j}{76-x}}{\binom{101}{76}}$$
Table 2 shows these values for $1 \leq f \leq 20$.
Although the probability of detecting faults during a given read in this system is
relatively low for very small values of f, it would appear that this test is reasonably
powerful.
[Table 1: Probability distribution of justifying set sizes for Example 1 (f = 0); e.g., P(size = 54) = .051857 and P(size = 67) = $9.03 \times 10^{-7}$.]
[Table 2: Probability of detecting f > 0 in Example 1, for 1 <= f <= 20; e.g., .739333 for f = 9 and .997720 for f = 19.]
[Table 3: Probability of detecting f > 5 in Example 2, for 8 <= f <= 12; e.g., .130284 for f = 9.]
Even for fault levels as low as 4 or 5, a client can reasonably expect
to detect the presence of failures within a very few reads; e.g., if f = 5, then the
probability of detecting that $f > t_a$ within only 6 reads is already
$1 - (1-\gamma)^6 \approx .921$. As the fault levels rise, the probability of such detection within a single read
approaches near-certainty.
Example 2: Consider a much smaller system, with a quorum size of q = 46 and a
correspondingly smaller fault tolerance threshold t. Furthermore, suppose
that the administrator of this system has decided that no action is called for if only
a few failures occur, so that $t_a$ is set at 5 rather than 0. Given $\alpha = 0.05$, the region
of rejection for the null hypothesis $H_0 : f = 5$ is $x \leq 27$. The probabilities
of detecting this condition for actual values of f between 8 and 12 inclusive are
shown in Table 3.
As one might expect, error conditions are more difficult to detect when they are
more narrowly defined, as the contrast between Examples 1 and 2 shows. Even in
the latter experiment, however, a client can reasonably expect to detect a serious but
non-fatal error condition within a small number of reads. At the high end of the
range shown in Table 3, the probability that the alarm is triggered within six read
operations is approximately 96.5 percent, and the probability that it is triggered
within ten reads is over 99.6 percent. We can therefore reasonably consider this
technique to be a useful diagnostic in systems where read operations are significantly
more frequent than server failures, particularly if the systems are relatively large.
While the ability to detect faulty servers in threshold quorum systems is a step
forward, this method leaves something to be desired: it gives little indication of
the specific number of faults that have occurred, and provides little information toward
identifying which servers are faulty. In the next section we present another
diagnostic method that addresses both these needs.
4 Diagnosis using quorum markers
The diagnostic method presented in this section has two distinct functions. First,
it uses a technique similar to that of the previous section to estimate the fault distribution
over the whole system, but with greater precision. Second, it pinpoints
specific servers that exhibit detectably faulty behavior during a given read. The
diagnostic operates on an enhanced version of the read/write protocol for masking
quorum systems: the write marker protocol, described below.
4.1 The write marker protocol
The write marker protocol uses a simple enhancement to the read/write protocol
of Section 2.2: we introduce a write quorum marker field to all variables. That is,
for a replicated variable Z, each server u maintains, in addition to $Z_u$ and $T_u$, a
third value $W_u$, which is the name of the quorum (e.g., an n-bit vector indicating
the servers in the quorum) used to complete the write operation in which $Z_u$ and $T_u$
were last written. The write protocol proceeds as in Section 2.2, except that in the
last step, in addition to updating $Z_u$ and $T_u$ to v and T at each server u in a quorum
$Q'$, the client also updates $W_u$ with (the name of) $Q'$. Specifically, to update $Z_u$,
$T_u$, and $W_u$ at all (correct) servers in $Q'$, the client sends a message containing
$\langle v, T, Q' \rangle$ to each $u \in Q'$. Because communication is reliable (see Section 2),
the writer knows that $Z_u$, $T_u$ and $W_u$ will be updated at all correct servers in $Q'$.
As before, each server u updates $Z_u$, $T_u$, and $W_u$ to the received values $\langle v, T, Q' \rangle$
only if T is greater than the present value of $T_u$.
The read protocol proceeds essentially as before, except that each server returns
the triple $\langle Z_u, T_u, W_u \rangle$ in response to a read request. From among all triples
returned by at least t + 1 servers, the client chooses the triple with the highest
timestamp.
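The server-side rule is small enough to state in code. A minimal sketch (ours; the tuple representation is an assumption):

def apply_write(state, v, T, W):
    # state: the server's current (Z_u, T_u, W_u) triple. The received triple
    # is applied only if its timestamp exceeds the stored one, which serializes
    # concurrent writes.
    _, T_u, _ = state
    return (v, T, W) if T > T_u else state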
Below we describe two ways of detecting faults by making use of the set of triples
returned by the servers.
4.2 Statistical fault detection
Our revised statistical technique uses the quorum markers to determine the set
S of servers whose returned values would match the accepted triple in the absence
of faults, and the set $S'$ of servers whose returned values actually do match that
triple. Because of the size-based construction of threshold quorum systems and the
random selection of the servers that make up the quorum for a given operation, the
set S can be considered a random sample of the servers, of which $|S \setminus S'|$ are known
to be faulty. Taking a random variable y to be the number of faulty servers in the
sample, we can use calculations similar to those in Section 3 to analyze with greater
precision the probability that f exceeds $t_a$.
As shown in Section 3, the probability of finding y faults in a sample of size s
given a universe of size n containing f faults is expressed by the hypergeometric distribution
$$P(y) = \binom{f}{y}\binom{n-f}{s-y} \Big/ \binom{n}{s}$$
[Table 4: Probability of detecting f > 0 in Example 3, for various values of f < t; e.g., .999660 for f = 9 and .999999 for f = 19.]
For a rejection level $\alpha$, the region of rejection for the hypothesis $H_0 : f = t_a$ is therefore
defined by the lowest value lowreject such that:
$$\sum_{y=\mathit{lowreject}}^{s} \frac{\binom{t_a}{y}\binom{n-t_a}{s-y}}{\binom{n}{s}} \leq \alpha$$
Again, the left-hand expression represents the parameterized probability of a false
alarm.
For this method, experiments in which $t_a = 0$ are a degenerate case: the presence
of any faults in the intersection set is visible and invalidates the null hypothesis,
so the probability of a false positive in such cases is zero, as the formula above
confirms. Likewise, as the number of faults increases, the probability of detecting
faults within one or two reads rapidly approaches certainty.
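A corresponding sketch for this method (again ours, with assumed names) computes lowreject by walking the upper tail of the hypergeometric distribution:

from math import comb

def p_sample_faults(y, n, s, f):
    # Hypergeometric P(y faulty servers in a random sample of s out of n,
    # when f of the n servers are faulty).
    return comb(f, y) * comb(n - f, s - y) / comb(n, s)

def low_reject(n, s, t_a, alpha):
    # Smallest value with P(y >= low_reject | f = t_a) <= alpha.
    tail = 1.0                      # P(y >= 0)
    for y in range(s + 1):
        if tail <= alpha:
            return y
        tail -= p_sample_faults(y, n, s, t_a)
    return s + 1                    # alpha too small for any region

Note that for t_a = 0 this returns 1: any observed fault in the sample triggers the alarm, matching the degenerate case above.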
Example 3: Consider again the system of 101 servers, with a fault tolerance
threshold of t = 25, a quorum size of q = 76, and $t_a = 0$, and suppose that a given
read quorum overlaps the previous write quorum in s = 57 servers (the most likely
overlap, with a probability of about 0.21). The probability of alarm on a single
read operation, for various values of $f < t$, is shown in Table 4.
A comparison of this table with Table 2 illustrates the dramatically higher precision
of the write-marker method over the justifying set method. This precision has
additional advantages when t a is set to a value greater than 0.
Example 4: Consider again the smaller system of Example 2, with a quorum size
of q = 46 and $t_a = 5$, and suppose that a given
read quorum overlaps the previous write quorum in the most common intersection
size. The region of rejection for the null hypothesis, calculated
using the formula above, is $y \geq 5$. The probability of alarm on a single read
operation, for various values of f with $t_a < f < t$, is shown in Table 5.
[Table 5: Probability of detecting f > 5 in Example 4, for various values of f with t_a < f < t.]
Again, the increased strength of the write-marker method is evident (see Table 3).
Like the method presented in Section 3, the write-marker technique also has the
advantage of flexibility. If we wish to minimize the risk of premature alarms (i.e.,
alarms that are sent without the alarm threshold being exceeded) we may choose a
smaller $\alpha$ at the risk of somewhat delayed alarms. In fact, the greater precision of
this method decreases the risks associated with such a course: even delayed alarms
can be expected to be timely.
4.3 Fault identification
The write marker protocol has an even stronger potential as a tool for fault detection:
it allows the client to identify specific servers that are behaving incorrectly.
By keeping a record of this list, the client can thereafter select quorums that do not
contain these servers. This allows the system to behave somewhat more efficiently
than it would otherwise, as well as gathering the information needed to isolate faulty
servers for repair so that the system's integrity is maintained.
The fault identification algorithm accepts as input the triples $\{\langle Z_u, T_u, W_u \rangle\}_{u \in Q}$
that the client obtained from the servers in the read protocol, as well as the triple
$\langle v, T, W \rangle$ that the client chose as the result of the read operation. It then computes
the set $S \setminus S'$, where $S = Q \cap W$ is the set of servers that should have returned
the chosen triple and $S'$ is the set of servers that actually returned it
in the read operation. The servers in $S \setminus S'$ are identified as faulty.
Note that the fault identification protocol does not depend in any way on the specific
characteristics of threshold quorum systems, and is easily seen to be applicable
to masking quorum systems in general.
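The set computation itself is short; a minimal sketch (ours; the data representation is an assumption):

def identify_faulty(replies, chosen):
    # replies: dict mapping server id -> (Z, T, W) triple returned in the read;
    # chosen: the (v, T, W) triple selected as the read's result, where W is the
    # set of servers named by the write marker. Returns the detectably faulty
    # servers: those that should have returned `chosen` but did not.
    v, T, W = chosen
    S = {u for u in replies if u in W}        # queried servers named in the marker
    S_prime = {u for u in replies if replies[u] == chosen}
    return S - S_prime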
5 Choosing alarm lines for large systems
The analysis of the previous two sections is precise but computationally cumbersome
for very large systems. A useful alternative is to estimate the performance
of possible alarm lines by means of bound analysis. In this section we present an
asymptotic analysis of the techniques of Sections 3 and 4 that shows how to choose
an alarm line value for arbitrarily large systems.
Let us denote the read quorum by Q, the write quorum by $Q'$, the set of faulty servers by
B, and the hypothesized size of B (i.e., the alarm line) by $t_a$. We define a random
variable $X = |(Q \cap Q') \setminus B|$, which is the justifying set size. We can compute
the expectation of X directly. For each server $u \notin B$ define an indicator random
variable $I_u$ such that $I_u = 1$ if $u \in Q \cap Q'$, and $I_u = 0$ otherwise. For such u
we have $P(I_u = 1) = (q/n)^2$, since Q and $Q'$ are chosen independently. By linearity
of expectation, $E[X] = (n - t_a)(q/n)^2$.
Intuitively, the distribution on X is centered around its expectation and decreases
exponentially as X moves farther away from that expectation. Thus, we should be
able to show that X grows smaller than its expectation with exponentially decreasing
probability. A tempting approach to analyzing this would be to use Chernoff
bounds, but these do not directly apply because the selections of individual servers
in Q (similarly, $Q'$) are not independent. In the analysis below, we thus use a more
powerful tool, martingales, to derive the anticipated Chernoff-like bound.
We bound the probability $P(X < E[X] - \delta)$ using the method of bounded differences,
by defining a suitable Doob martingale sequence and applying Azuma's inequality
(see [MR95, Ch. 4.4] for a good exposition of this technique; Appendix A provides
a brief introduction). Here, a Doob martingale sequence of conditional random
variables is defined by setting $X_i$, $0 \leq i \leq q$, to be the expected value of X after i
selections are made in each of Q and $Q'$. Then $X_0 = E[X]$, $X_q = X$, and it is
not difficult to see that $|X_{i+1} - X_i| \leq 2$. This yields the following
bound (see Appendix A):
$$P(X < E[X] - \delta) \leq e^{-\delta^2/(8q)}$$
We use this formula and our desired rejection level $\alpha$ to determine a $\delta$ such that
$e^{-\delta^2/(8q)} \leq \alpha$. This probability value is our probability of a false alarm,
and can be diminished by decreasing $\alpha$ and recalculating $\delta$. The value
$E[X] - \delta = (n - t_a)(q/n)^2 - \delta$ defines our region of rejection (see Section 2.3).
In order to analyze the probability that our alarm is triggered when the number
of faults in the system is $t' > t_a$, we define a second random variable $X'$, identical
to X except for the revised failure hypothesis. This gives us $E[X'] = (n - t')(q/n)^2$.
An analysis similar to the above provides the following bound:
$$P(X' > E[X'] + \delta) \leq e^{-\delta^2/(8q)}$$
To summarize, these bounds can now be used as follows. For any given alarm line
$t_a$, and any desired confidence level $\alpha$, we can compute the minimum $\delta$ satisfying
$e^{-\delta^2/(8q)} \leq \alpha$.
We thus derive the following test: an alarm is triggered whenever the
justifying set size is less than $(n - t_a)(q/n)^2 - \delta$. The analysis above guarantees that this
alarm will be triggered with false positive probability at most our computed bound
$\alpha$. If, in fact, f faults occur and f is sufficiently larger than $t_a$, then there
exists a $\delta' > 0$ such that $(n - f)(q/n)^2 + \delta' \leq (n - t_a)(q/n)^2 - \delta$; by the analysis above, the
probability of triggering the alarm is then greater than
$1 - e^{-\delta'^2/(8q)}$.
In the case of the write marker protocol, we can tighten the analysis by using
the (known) intersection size between Q and $Q'$, as follows.
Let $s = |Q \cap Q'|$ and $Y = |(Q \cap Q') \setminus B|$. Y has a hypergeometric distribution
on s, with expectation $E[Y] = s(n - t_a)/n$. The appropriate Doob martingale
sequence in this case defines $Y_i$, $0 \leq i \leq s$, to be the expected value of Y after i
selections are made in the sample. Then $|Y_{i+1} - Y_i| \leq 1$, and so to set the region of rejection
we can use
$$P(Y < E[Y] - \delta) \leq e^{-\delta^2/(2s)}$$
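For large systems, the alarm line can thus be set in closed form. A small sketch (ours), assuming the Azuma-style bounds reconstructed above:

from math import sqrt, log

def alarm_line(n, q, t_a, alpha):
    # Trigger an alarm when the justifying set size falls below this value.
    # delta is the smallest value with exp(-delta^2 / (8q)) <= alpha.
    delta = sqrt(8 * q * log(1 / alpha))
    return (n - t_a) * (q / n) ** 2 - delta

def alarm_line_marker(n, s, t_a, alpha):
    # Write-marker variant: uses the known intersection size s and the
    # tighter exp(-delta^2 / (2s)) bound.
    delta = sqrt(2 * s * log(1 / alpha))
    return s * (n - t_a) / n - delta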
6 Conclusion
In this paper, we have presented two methods for probabilistic fault diagnosis
for services replicated using t-masking quorum systems. Our methods mine server
responses to read operations for evidence of server failures, and if necessary trigger
an alarm to initiate appropriate recovery actions. Both of our methods were demonstrated
in the context of the threshold construction of [MR97a], i.e., in which the
quorums are all sets of size $\lceil \frac{n+2t+1}{2} \rceil$, but our techniques of Section 4 can be generalized
to other masking quorum systems as well. Our first method has the advantage
of requiring no modifications to the read and write protocols proposed in [MR97a].
The second method requires minor modifications to these protocols, but also offers
better diagnosis capabilities and a precise identification of faulty servers. Our methods
are very effective in detecting faulty servers, since faulty servers risk detection
in every read operation to which they return incorrect answers.
Future work will focus on generalizations of these techniques, as well as uses
of these techniques in a larger systems context. In particular, we are presently
exploring approaches to react to server failures once they are detected.
--R
Unreliable failure detectors for reliable distributed systems.
Muteness detectors for consensus with Byzantine processes.
Solving consensus in a Byzantine environment using an unreliable failure detector.
The SecureRing protocols for securing group communication.
On interprocess communication (part II: algorithms).
On the resilience of multicasting strategies in a failure-propagating environment
Randomized algorithms.
Byzantine quorum systems.
Unreliable intrusion detection in distributed computation.
The load and availability of Byzantine quorum systems.
Secure agreement protocols: Reliable and atomic group multicast in Rampart.
Reliable scheduling in a TMR database system.
Implementing fault-tolerant services using the state machine approach: A tutorial.
Principal features of the VOLTAN family of reliable node architectures for distributed systems.
--TR
Randomized algorithms
A $\sqrt{N}$ algorithm for mutual exclusion in decentralized systems
Unreliable failure detectors for reliable distributed systems
Synchronous Byzantine quorum systems
Probabilistic Byzantine quorum systems
The Load and Availability of Byzantine Quorum Systems
Intrusion Detection
An Architecture for Survivable Coordination in Large Distributed Systems
Fault Detection for Byzantine Quorum Systems
Unreliable Intrusion Detection in Distributed Computations
A comparison connection assignment for diagnosis of multiprocessor systems
--CTR
Andreas Haeberlen , Petr Kouznetsov , Peter Druschel, The case for Byzantine fault detection, Proceedings of the 2nd conference on Hot Topics in System Dependability, p.5-5, November 08, 2006, Seattle, WA
Meng Yu , Peng Liu , Wanyu Zang, Specifying and using intrusion masking models to process distributed operations, Journal of Computer Security, v.13 n.4, p.623-658, July 2005 | byzantine fault tolerance;replicated data;fault detection;quorum systems |
506154 | Task assignment with unknown duration. | We consider a distributed server system and ask which policy should be used for assigning jobs (tasks) to hosts. In our server, jobs are not preemptible. Also, the job's service demand is not known a priori. We are particularly concerned with the case where the workload is heavy-tailed, as is characteristic of many empirically measured computer workloads. We analyze several natural task assignment policies and propose a new one, TAGS (Task Assignment based on Guessing Size). The TAGS algorithm is counterintuitive in many respects, including load unbalancing, non-work-conserving, and fairness. We find that under heavy-tailed workloads, TAGS can outperform all task assignment policies known to us by several orders of magnitude with respect to both mean response time and mean slowdown, provided the system load is not too high. We also introduce a new practical performance metric for distributed servers called server expansion. Under the server expansion metric, TAGS significantly outperforms all other task assignment policies, regardless of system load. | Introduction
In recent years, distributed servers have become commonplace because they allow
for increased computing power while being cost-effective and easily scalable.
In a distributed server system, requests for service (tasks) arrive and must
be assigned to one of the host machines for processing. The rule for assigning
tasks to host machines is known as the task assignment policy. The choice of
the task assignment policy has a significant effect on the performance perceived
by users. Designing a distributed server system often comes down to choosing
the "best" task assignment policy for the given model and user requirements.
The question of which task assignment policy is "best" is an age-old question
which still remains open for many models.
In this paper we consider the particular model of a distributed server system
in which tasks are not preemptible - i.e. we are concerned with applications
where context switches are too costly. For example, one such application is batch
computing environments where the hosts themselves are parallel processors and
the tasks are parallel. Context switching between tasks involves reloading all the
processors and memory to return them to the state before the context switch.
Because context switching is so expensive in this environment, tasks are always
simply run to completion. Note, the fact that context switches are too expensive
does not preclude the possibility of killing a job and restarting it from scratch.
We assume furthermore that no a priori information is known about the
task at the time when the task arrives. In particular, the service demand of the
task is not known. We assume all hosts are identical and there is no cost (time
required) for assigning tasks to hosts. Figure 1 is one illustration of a distributed
server. In this illustration, arriving tasks are immediately dispatched by the
central dispatcher to one of the hosts and queue up at the host waiting for
service, where they are served in first-come-first-served (FCFS) order. Observe
however that our model in general does not preclude the possibility of having a
central queue at the dispatcher where tasks might wait before being dispatched.
It also does not preclude the possibility of an alternative scheduling discipline
at the hosts, so long as that scheduling discipline does not require preempting
tasks and does not rely on a priori knowledge about tasks.
Our main performance goal, in choosing a task assignment policy, is to minimize
mean waiting time and more importantly mean slowdown. A task's slowdown
is its waiting time divided by its service demand. All means are per-task
averages. We consider mean slowdown to be more important than mean waiting
time because it is desirable that a task's delay be proportional to its size.
That is, in a system in which task sizes are highly variable, users are likely to
anticipate short delays for short tasks, and are likely to tolerate long delays for
longer tasks. Later in the paper we introduce a new performance metric, called
server expansion which is related to mean slowdown. A secondary performance
goal is fairness. We adopt the standard definition of fairness that says all tasks,
large or small, should experience the same expected slowdown. In particular,
large tasks shouldn't be penalized - slowed down by a greater factor than are
small tasks. 1
Consider some task assignment policies commonly proposed for distributed
server systems: In the Random task assignment policy, an incoming task is sent
to Host i with probability 1/h, where h is the number of hosts. This policy
equalizes the expected number of tasks at each host. In Round-Robin task as-
signment, tasks are assigned to hosts in a cyclical fashion with the ith task
being assigned to Host i mod h. This policy also equalizes the expected number
of tasks at each host, and has slightly less variability in interarrival times than
does Random. In Shortest-Queue task assignment, an incoming task is immediately
dispatched to the host with the fewest number of tasks. This policy has
the benefit of trying to equalize the instantaneous number of tasks at each host,
rather than just the expected number of tasks. All the above policies have the
property that the tasks arriving at each host are serviced in FCFS order.
The literature tells us that Shortest-Queue is in fact the best task assignment
policy in a model where the following conditions are met: (1) there is no
a priori knowledge about tasks, (2) tasks are not preemptible, (3) each host
services tasks in a FCFS order, (4) incoming tasks are immediately dispatched
to a host, and (5) the task size distribution is Exponential (see Section 2).
If one removes restriction (4), it is possible to do even better. What we'd
really like to do is send a task to the host which has the least total outstanding
work (work is the sum of the task sizes at the host) because that host would
afford the task the smallest waiting time. However, we don't know a priori
which host currently has the least work, since we don't know task sizes. It
turns out this is actually easy to get around: we simply hold all tasks at the
dispatcher in a FCFS queue, and only when a host is free does it request the
next task. It is easy to prove that this holding method is exactly equivalent to
immediately dispatching arriving tasks to the host with least outstanding work
(see [6] for a proof and Figure 2 for an illustration). We will refer to this policy
as Least-Work-Remaining since it has the effect of sending each task to the host
with the currently least remaining work. Observe that Least-Work-Remaining
comes closest to obtaining instantaneous load balance.
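To make the equivalence of Figure 2(a) and 2(b) concrete, the following minimal Python sketch (ours, not code from the paper; all function and variable names are our own) computes per-task waiting times under both implementations on the same arrival trace, and can be used to check that they coincide:

import heapq
import random

def waits_least_work_remaining(arrivals, sizes, h):
    # Implementation (a): dispatch each arriving task to the host with the
    # least remaining work.  This version needs the task sizes a priori in
    # order to track free_at[i], the time at which host i drains its queue.
    free_at = [0.0] * h
    waits = []
    for t, x in zip(arrivals, sizes):
        j = min(range(h), key=lambda i: free_at[i])
        start = max(t, free_at[j])
        waits.append(start - t)
        free_at[j] = start + x
    return waits

def waits_central_queue(arrivals, sizes, h):
    # Implementation (b): hold tasks in one FCFS queue; whichever host frees
    # up first takes the next task.  No a priori size knowledge is needed.
    free_at = [0.0] * h
    heapq.heapify(free_at)
    waits = []
    for t, x in zip(arrivals, sizes):
        f = heapq.heappop(free_at)         # earliest-free host
        start = max(t, f)
        waits.append(start - t)
        heapq.heappush(free_at, start + x)
    return waits

if __name__ == "__main__":
    rng = random.Random(0)
    now, arrivals, sizes = 0.0, [], []
    for _ in range(10000):
        now += rng.expovariate(1.0)        # Poisson arrivals
        arrivals.append(now)
        sizes.append(rng.expovariate(2.0))
    a = waits_least_work_remaining(arrivals, sizes, 4)
    b = waits_central_queue(arrivals, sizes, 4)
    assert a == b                          # identical on every trace

At each arrival, the host with least remaining work is exactly the host that will free up earliest, which is why the two traces match.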
It may seem that Least-Work-Remaining is the best possible task assignment
policy. Previous literature shows that Least-Work-Remaining outperforms
all of the above previously-discussed policies under very general conditions
(see Section 2). Previous literature also suggests that Least-Work-Remaining
1 For example, Processor-Sharing (which requires infinitely-many preemptions) is ultimately
fair in that every task experiences the same expected slowdown.
Figure
1: Illustration of a distributed server.
Figure
2: Two equivalent ways of implementing the Least-Work-Remaining
task assignment policy. (a) Shows incoming tasks immediately being dispatched
to the host with the least remaining work, but this requires knowing a priori
the sizes of the tasks at the hosts. (b) Shows incoming tasks pooled at a FCFS
queue at the dispatcher. There are no queues at the individual hosts. Only when
a host is free does it request the next task. This implementation does not require
a priori knowledge of the task sizes, yet achieves the same effect as (a).
may be the optimal (best possible) task assignment policy in the case where the
task size distribution is Exponential (see Section 2 for a detailed statement of
the previous literature).
But what if task size distribution is not Exponential? We are motivated in
this respect by the increasing evidence for high variability in task size distri-
butions, as seen in many measurements of computer workloads. In particular,
measurements of many computer workloads have been shown to fit heavy-tailed
distributions with very high variance, as described in Section 3 - much higher
variance than that of an Exponential distribution. Is there a better task assignment
policy than Least-Work-Remaining when the task size variability is
characteristic of empirical workloads? In evaluating various task assignment
policies, we will be interested in understanding the influence of task size variability
on the decision of which task assignment policy is best. For analytical
tractability, we will assume that the arrival process is Poisson - our simulations
indicate that the variability in the arrival process is much less critical to choosing
a task assignment policy than is the variability in the task size distribution.
In this paper we propose a new algorithm called TAGS - Task Assignment
by Guessing Size which is specifically designed for high variability workloads.
We will prove analytically that when task sizes show the degree of variability
characteristic of empirical (measured) workloads, the TAGS algorithm can out-perform
all the above mentioned algorithms by several orders of magnitude. In
fact, we will show that the more heavy-tailed the task size distribution, the
greater the improvement of TAGS over the other task assignment algorithms.
The above improvements are contingent on the system load not being too
high. 2 In the case where the system load is high, we show that all the task
assignment policies have such poor performance that they become impractical,
and TAGS is especially negatively affected. In practice, if the system load is
too high to achieve reasonable performance, one adds new hosts to the server
(without increasing the outside arrival rate), thus dropping the system load,
until the system behaves as desired. We refer to the "number of new hosts which
must be added" above as the server expansion requirement. We will show that
TAGS outperforms all the previously-mentioned task assignment policies with
respect to the server expansion metric (i.e., given any initial load, TAGS requires
far fewer additional hosts to perform well).
We will describe three flavors of TAGS. The first, called TAGS-opt-meanslowdown
is designed to minimize mean slowdown. The second, called TAGS-opt-meanwaitingtime
2 For a distributed server, system load is defined as follows:
System load = (Outside arrival rate · Mean task size) / Number of hosts.
For example, a system with 2 hosts and system load .5 has the same outside arrival rate as a
system with 4 hosts and system load .25. Observe that a 4 host system with system load ρ
has twice the outside arrival rate of a 2 host system with system load ρ.
is designed to minimize mean waiting time. Although very effective, these
algorithms are not fair in their treatment of tasks. The third flavor, called
TAGS-opt-fairness, is designed to optimize fairness. While managing to be
fair, TAGS-opt-fairness still achieves mean slowdown and mean waiting time
close to the other flavors of TAGS.
Section 2 elaborates in more detail on previous work in this area. Section
3 provides the necessary background on measured task size distributions
and heavy-tails. Section 4 describes the TAGS algorithm and all its flavors. Section
5 shows results of analysis for the case of 2 hosts and Section 6 shows
results of analysis for the multiple-host case. Section 7 explores the effect of
less-variable job size distributions. Lastly, we conclude in Section 8. Details on
the analysis of TAGS are described in the Appendix.
2 Previous Work on Task Assignment
2.1 Task assignment with no preemption
The problem of task assignment in a model like ours (no preemption and no a
priori knowledge) has been extensively studied, but many basic questions remain
open.
One subproblem which has been solved is that of task assignment under the
restriction that all tasks be immediately dispatched to a host upon arrival and
each host services its tasks in FCFS order. Under this restricted model, it has
been shown that when the task size distribution is exponential and the arrival
process is Poisson, then the Shortest-Queue task assignment policy is optimal,
Winston [19]. In this result, optimality is defined as maximizing the discounted
number of tasks which complete by some fixed time t. Ephremides, Varaiya,
and Walrand [5] showed that the Shortest-Queue task assignment policy also
minimizes the expected total time for the completion of all tasks arriving by
some fixed time t, under an exponential task size distribution and arbitrary
arrival process. The actual performance of the Shortest-Queue policy is not
known exactly, but the mean response time is approximated by Nelson and
Phillips [11], [12]. Whitt has shown that as the variability of the task size
distribution grows, the Shortest-Queue policy is no longer optimal [18]. Whitt
does not suggest which policy is optimal.
The scenario has also been considered, under the same restricted model
described in the above paragraph, but where the ages (time in service) of the
tasks currently serving are known, so that it is possible to compute an arriving
task's expected delay at each queue. In this scenario, Weber [17] considers the
Shortest-Expected-Delay rule which sends each task to the host with the
least expected work (note the similarity to the Least-Work-Remaining policy).
Weber shows that this rule is optimal for task size distributions with increasing
failure rate (including Exponential). Whitt [18] shows that there exist task size
distributions for which this rule is not optimal.
Wolff, [20] has proven that Least-Work-Remaining is the best possible task
assignment policy out of all policies which do not make use of task size. This
result holds for any distribution of task sizes and for any arrival process.
Another model which has been considered is the case of no preemption
but where the size of each task is known at the time of arrival of the task.
Within this model, the SITA-E algorithm (see [7]) has been shown to outperform
the Random, Round-Robin, Shortest-Queue, and Least-Work-Remaining algorithms
by several orders of magnitude when the task size distribution is heavy-
tailed. In contrast to SITA-E, the TAGS algorithm does not require knowledge
of task size. Nevertheless, for not-too-high system loads (< .5), TAGS improves
upon the performance of SITA-E by several orders of magnitude for heavy-tailed
workloads.
2.2 When preemption is allowed and other generalizations
Throughout this paper we maintain the assumption that tasks are not pre-
emptible. That is, once a task starts running, it can not be stopped and re-
continued where it left off. By contrast there exists considerable work on the
very different problem where tasks are preemptible (see [8] for many citations).
Other generalizations of the task assignment problem include the scenario
where the hosts are heterogeneous or there are multiple resources under contention.
The idea of purposely unbalancing load has been suggested previously in [3]
and in [1], under different contexts from our paper. In both these papers, it
is assumed that task sizes are known a priori. In [3] a distributed system with
preemptible tasks is considered. It is shown that in the preemptible model,
mean waiting time is minimized by balancing load, however mean slowdown is
minimized by unbalancing load. In [1], real-time scheduling is considered where
tasks have firm deadlines. In this context, the authors propose "load profiling,"
which "distributes load in such a way that the probability of satisfying the
utilization requirements of incoming tasks is maximized."
3 Heavy Tails
As described in Section 1, we are concerned with how the distribution of task
sizes affects the decision of which task assignment policy to use.
Many application environments show a mixture of task sizes spanning many
orders of magnitude. In such environments there are typically many small tasks,
and fewer large tasks. Much previous work has used the exponential distribution
to capture this variability, as described in Section 2. However, recent measurements
indicate that for many applications the exponential distribution is a poor
model and that a heavy-tailed distribution is more accurate. In general a heavy-tailed
distribution is one for which Pr{X > x} ~ x^(-α), where 0 < α < 2. The simplest
heavy-tailed distribution is the Pareto distribution, with probability mass function
f(x) = α k^α x^(-α-1), x ≥ k,
and cumulative distribution function F(x) = Pr{X ≤ x} = 1 - (k/x)^α.
A set of task sizes following a heavy-tailed distribution has the following properties
1. Decreasing failure rate: In particular, the longer a task has run, the longer
it is expected to continue running.
2. Infinite variance (and if α ≤ 1, infinite mean).
3. The property that a very small fraction (< 1%) of the very largest tasks
make up a large fraction (half) of the load. We will refer to this important
property throughout the paper as the heavy-tailed property.
The lower the parameter α, the more variable the distribution, and the more
pronounced is the heavy-tailed property, i.e. the smaller the fraction of large
tasks that comprise half the load.
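The heavy-tailed property is easy to observe numerically. The sketch below (our illustration, not the paper's code) draws Pareto samples by inverse-transform sampling and measures what fraction of the largest tasks accounts for half the total work:

import random

def pareto_sample(alpha, k, n, seed=0):
    # Inverse transform: F(x) = 1 - (k/x)^alpha  =>  x = k * U^(-1/alpha).
    rng = random.Random(seed)
    return [k * rng.random() ** (-1.0 / alpha) for _ in range(n)]

def fraction_making_half_load(sizes):
    # Count the largest tasks until they cover half the total work.
    s = sorted(sizes, reverse=True)
    half, acc = sum(s) / 2.0, 0.0
    for i, x in enumerate(s, start=1):
        acc += x
        if acc >= half:
            return i / len(s)

sizes = pareto_sample(alpha=1.1, k=1.0, n=1_000_000)
print(fraction_making_half_load(sizes))    # typically well under 1% of the tasks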
As a concrete example, Figure 3 depicts graphically on a log-log plot the
measured distribution of CPU requirements of over a million UNIX processes,
taken from paper [8]. This distribution closely fits the curve
Pr{Process Lifetime > T} = 1/T, for T ≥ 1 second.
In [8] it is shown that this distribution is present in a variety of computing en-
vironments, including instructional, research, and administrative environments.
In fact, heavy-tailed distributions appear to fit many recent measurements
of computing systems. These include, for example:
• Unix process CPU requirements measured at Bellcore: 1 ≤ α ≤ 1.25 [10].
• Unix process CPU requirements, measured at UC Berkeley: α ≈ 1 [8].
• Sizes of files transferred through the Web: 1.1 ≤ α ≤ 1.3 [2, 4].
• Sizes of files stored in Unix filesystems: [9].
• I/O times: [14].
• Sizes of FTP transfers in the Internet: 0.9 ≤ α ≤ 1.1 [13].
In most of these cases where estimates of α were made, α tends to be close to
1, which represents very high variability in task service requirements.
In practice, there is some upper bound on the maximum size of a task,
because files only have finite lengths. Throughout this paper, we therefore model
task sizes as being generated i.i.d. from a distribution that follows a power law,
but has an upper bound - a very high one. We refer to this distribution as a
Bounded Pareto. It is characterized by three parameters: α, the exponent of
the power law; k, the smallest possible observation; and p, the largest possible
observation. The probability mass function for the Bounded Pareto B(k; p; α)
is defined as:
f(x) = α k^α x^(-α-1) / (1 - (k/p)^α), k ≤ x ≤ p.
In this paper, we will vary the α-parameter over the range 0 to 2 in order
to observe the effect of changing variability of the distribution. To focus on
the effect of changing variance, we keep the distributional mean fixed (at 3000)
and the maximum value fixed (at p = 10^10), which correspond to typical values
taken from [2]. In order to keep the mean constant, we adjust k slightly as α changes.
Note that the Bounded Pareto distribution has all its moments finite. Thus,
it is not a heavy-tailed distribution in the sense we have defined above. How-
ever, this distribution will still show very high variability if k ≪ p. For example,
Figure 4 (right) shows the second moment E{X^2} of this distribution as a
function of α for p = 10^10, where k is chosen to keep E{X} constant at 3000
(0 < k ≤ 1500). The figure shows that the second moment explodes exponentially
as α declines. Furthermore, the Bounded Pareto distribution also still
exhibits the heavy-tailed property and (to some extent) the decreasing failure
rate property of the unbounded Pareto distribution. We mention these properties
because they are important in determining our choice of the best task
assignment policy.
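The moments of the Bounded Pareto follow by integrating the density above, and the k that yields a given mean can be found by bisection, since the mean is increasing in k. A short Python sketch of both computations (ours; function names are our own):

import math

def bp_moment(j, k, p, alpha):
    # j-th moment of B(k, p, alpha), whose density is
    # f(x) = alpha * k^alpha * x^(-alpha-1) / (1 - (k/p)^alpha) on [k, p].
    c = alpha * k**alpha / (1.0 - (k / p)**alpha)
    if abs(alpha - j) < 1e-12:
        return c * math.log(p / k)        # the integral is logarithmic at j = alpha
    return c * (p**(j - alpha) - k**(j - alpha)) / (j - alpha)

def k_for_mean(mean, p, alpha):
    # E{X} is increasing in k, so bisect for the k hitting the target mean.
    lo, hi = 1e-9, mean
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bp_moment(1, mid, p, alpha) < mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = k_for_mean(3000.0, 1e10, 1.0)
print(k)                              # close to 167, the value quoted in Section 7
print(bp_moment(2, k, 1e10, 1.0))     # the second moment that explodes as alpha drops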
Figure
3: Measured distribution of UNIX process CPU lifetimes, taken from
[HD97]. Data indicates fraction of jobs whose CPU service demands exceed T
seconds, as a function of T .
Figure 4: Parameters of the Bounded Pareto distribution (left); second moment
of B(k; p; α) as a function of α, when E{X} = 3000 (right).
Figure
5: Illustration of the flow of tasks in the TAGS algorithm.
4 The TAGS algorithm
This section describes the TAGS algorithm.
Let h be the number of hosts in the distributed server. Think of the hosts
as being numbered: 1, 2, ..., h. The ith host has a number s_i associated with
it, where s_1 < s_2 < ... < s_h.
TAGS works as shown in Figure 5: All incoming tasks are immediately dispatched
to Host 1. There they are serviced in FCFS order. If they complete
before using up s_1 amount of CPU, they simply leave the system. However, if
a task has used s_1 amount of CPU at Host 1 and still hasn't completed, then
it is killed (remember tasks cannot be preempted because that is too expensive
in our model). The task is then put at the end of the queue at Host 2, where
it is restarted from scratch 3 . Each host services the tasks in its queue in FCFS
order. If a task at Host i uses up s_i amount of CPU and still hasn't completed,
it is killed and put at the end of the queue for Host i + 1. In this way, the TAGS
algorithm "guesses the size" of each task, hence the name.
The TAGS algorithm may sound counterintuitive for a few reasons: First of
all, there's a sense that the higher-numbered hosts will be underutilized and the
3 Note, although the task is restarted, it is still the same task, of course. We are therefore
careful in our analysis not to assign it a new service requirement.
first host overcrowded since all incoming tasks are sent to Host 1. An even more
vital concern is that the TAGS algorithm wastes a large amount of resources by
killing tasks and then restarting them from scratch. 4 There's also the sense that
the big tasks are especially penalized since they're the ones being restarted.
TAGS comes in 3 flavors; these differ only in how the s_i's are chosen. In
TAGS-opt-meanslowdown, the s_i's are chosen so as to optimize mean slowdown.
In TAGS-opt-meanwaitingtime, the s_i's are chosen so as to optimize mean waiting
time. As we'll see, TAGS-opt-meanslowdown and TAGS-opt-meanwaitingtime
are not necessarily fair. In TAGS-opt-fairness the s_i's are chosen so as to optimize
fairness. Specifically, the tasks whose final destination is Host i experience
the same expected slowdown under TAGS-opt-fairness as do the tasks whose
final destination is Host j, for all i and j.
TAGS may seem reminiscent of multi-level feedback queueing, but they are not
related. In multi-level feedback queueing there is only a single host with many
virtual queues. The host is time-shared and tasks are preemptible. When a task
uses some amount of service time it is transferred (not killed and restarted) to
a lower priority queue. Also, in multi-level feedback queueing, the tasks in that
lower priority queue are only allowed to run when there are no tasks in any of
the higher priority queues.
5 Analysis and Results for the Case of 2 Hosts
This section contains the results of our analysis of the TAGS task assignment
policy and other task assignment policies. In order to clearly explain the effect
of the TAGS algorithm, we limit the discussion in this section to the case of 2
hosts. In this case we refer to the tasks whose final destination is Host 1 as
the small tasks and the tasks whose final destination is Host 2 as the big tasks.
Until Section 5.3, we will always assume the system load is 0.5 and there are 2
hosts. In Section 5.3, we will consider other system loads, but still stick to the
case of 2 hosts. Finally, in Section 6 we will consider distributed servers with
multiple hosts.
We evaluate several task assignment policies, all as a function of α, where α
is the variance-parameter for the Bounded Pareto task size distribution, and α
ranges between 0 and 2. Recall from Section 3 that the lower α is, the higher the
variance in the task size distribution. Recall also that empirical measurements
of task size distributions often show α ≈ 1.
4 My dad, Micha Harchol, would add that there's also the psychological concern of what
the angry user might do when he's told his task's been killed to help the general good.
We will evaluate the Random, Least-Work-Remaining, and TAGS policies.
The Round-Robin policy (see Section 1) will not be evaluated directly because
we showed in a previous paper [7] that Random and Round-Robin have
almost identical performance. As we'll explain in Section 5.1, our analysis of
Least-Work-Remaining is only an approximation, however we have confidence
in this approximation because our extensive simulation in paper [7] showed it
to be quite accurate in this setting. As we'll discuss in Section 5.1, our analysis
of TAGS is also an approximation, though to a lesser degree.
Figure
6(a) below shows mean slowdown under TAGS-opt-slowdown as compared
with the other task assignment policies. The y-axis is shown on a log
scale. Observe that for very high α, the performance of all the task assignment
policies is comparable and very good, however as α decreases, the performance
of all the policies degrades. The Least-Work-Remaining policy consistently
outperforms the Random policy by about an order of magnitude, however
the TAGS-opt-slowdown policy offers several orders of magnitude further
improvement: At α = 1.5, the TAGS-opt-slowdown policy outperforms
the Least-Work-Remaining policy by 2 orders of magnitude; at α ≈ 1, the
TAGS-opt-slowdown policy outperforms the Least-Work-Remaining policy by
over 4 orders of magnitude; at α = .4 the TAGS-opt-slowdown policy outperforms
the Least-Work-Remaining policy by over 9 orders of magnitude, and
this increases to 15 orders of magnitude as α decreases further.
Figures
6(b) and (c) show mean slowdown of TAGS-opt-waitingtime and
TAGS-opt-fairness, respectively, as compared with the other task assignment
policies. Since TAGS-opt-waitingtime is optimized for mean waiting time,
rather than mean slowdown, it is understandable that its performance improvements
with respect to mean slowdown are not as dramatic as those of
TAGS-opt-slowdown. However, what's interesting is that the performance of
TAGS-opt-fairness is very close to that of TAGS-opt-slowdown, and yet TAGS-opt-fairness
has the additional benefit of fairness.
Figure
7 is identical to Figure 6 except that in this case the performance
metric is mean waiting time, rather than mean slowdown. Again the TAGS al-
gorithm, especially TAGS-opt-waitingtime, shows several orders of magnitude
improvement over the other task assignment policies.
Why does the TAGS algorithm work so well? Intuitively, it seems that
Least-Work-Remaining should be the best performer, since Least-Work-Remaining
sends each task to where it will individually experience the lowest waiting time.
The reason why TAGS works so well is 2-fold: The first part is variance reduction
(Section 5.1) and the second part is load unbalancing (Section 5.2).
Figure 6: Mean slowdown for a distributed server with 2 hosts and system
load .5, under (a) TAGS-opt-slowdown, (b) TAGS-opt-waitingtime, and (c)
TAGS-opt-fairness, each compared with Random and the Least-Work-Left
approximation.
Figure 7: Mean waiting time for a distributed server with 2 hosts and system
load .5, under (a) TAGS-opt-slowdown, (b) TAGS-opt-waitingtime, and (c)
TAGS-opt-fairness, each compared with Random and the Least-Work-Left
approximation.
5.1 Variance Reduction
Variance reduction refers to reducing the variance of task sizes that share the
same queue. Intuitively, variance reduction is important for improving performance
because it reduces the chance of a small task getting stuck behind a big
task in the same queue. This is stated more formally in Theorem 1 below, which
is derived from the Pollaczek-Khinchin formula.
Theorem 1 Given an M/G/1 FCFS queue, where the arrival process has rate
λ, X denotes the service time distribution, and ρ denotes the utilization
(ρ = λE{X}). Let W be a task's waiting time in queue, S be its slowdown, and Q
be the queue length on its arrival. Then,
E{W} = λE{X^2} / (2(1 - ρ)),   E{S} = E{W} · E{1/X},   E{Q} = λ · E{W}.
Proof: The slowdown formulas follow from the fact that W and X are independent
for a FCFS queue, and the queue size follows from Little's formula.
Observe that every metric for the simple FCFS queue is dependent on
E{X^2}, the second moment of the service time. Recall that if the workload
is heavy-tailed, the second moment of the service time explodes, as shown in
Figure
4.
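Theorem 1 translates directly into code. The sketch below (ours) computes all three metrics from the arrival rate and the service-time moments; for a Bounded Pareto workload, E{X}, E{X^2} and E{1/X} can all be obtained from the bp_moment helper sketched in Section 3 with j = 1, 2, -1:

def mg1_fcfs_metrics(lam, EX, EX2, EinvX):
    # Pollaczek-Khinchin mean wait, then the slowdown and queue-length
    # expressions of Theorem 1 (W independent of X; Little's law).
    rho = lam * EX
    assert rho < 1.0, "the queue must be stable"
    EW = lam * EX2 / (2.0 * (1.0 - rho))
    ES = EW * EinvX
    EQ = lam * EW
    return EW, ES, EQ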
We now discuss the effect of high variability in task sizes on a distributed
server system under the various task assignment policies.
Random Task Assignment The Random policy simply performs Bernoulli
splitting on the input stream, with the result that each host becomes an independent
queue. The load at the ith host, ρ_i, is equal to the
system load, ρ. The arrival rate at the ith host is a 1/h-fraction of the total
outside arrival rate. Theorem 1 applies directly, and all performance metrics
are proportional to the second moment of B(k; p; α). Performance is generally
poor because the second moment of the B(k; p; α) is high.
Round Robin The Round Robin policy splits the incoming stream so each
host sees an E_h/B(k; p; α)/1 queue, with utilization ρ_i = ρ. This system has
performance close to the Random policy since it still sees high variability in
service times, which dominates performance.
Least-Work-Remaining The Least-Work-Remaining policy is equivalent
to an M/G/h queue, for which there exist known approximations, [16],[21]:
E{Q}_M/G/h ≈ E{Q}_M/M/h · (1 + C_X^2) / 2,
where X denotes the service time distribution, and Q denotes queue length.
What's important to observe here is that the mean queue length, and therefore
the mean waiting time and mean slowdown, are all proportional to the second
moment of the service time distribution, as was the case for the Random and
Round-Robin task assignment policies. In fact, the performance metrics are all
proportional to the squared coefficient of variation (C_X^2 = Var(X)/E{X}^2) of the service
time distribution.
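Assuming the approximation takes the standard form above, it can be coded as follows (our sketch): compute the M/M/h mean queue length via the Erlang-C formula, then scale by (1 + C_X^2)/2:

import math

def mmh_mean_queue(lam, mu, h):
    # Mean number waiting (excluding those in service) in M/M/h, via Erlang C.
    a = lam / mu                   # offered load
    rho = a / h
    assert rho < 1.0
    head = sum(a**n / math.factorial(n) for n in range(h))
    tail = a**h / (math.factorial(h) * (1.0 - rho))
    prob_wait = tail / (head + tail)
    return prob_wait * rho / (1.0 - rho)

def mgh_mean_queue_approx(lam, EX, EX2, h):
    # E{Q}_M/G/h ~ E{Q}_M/M/h * (1 + C_X^2) / 2.
    cx2 = (EX2 - EX**2) / EX**2    # squared coefficient of variation
    return mmh_mean_queue(lam, 1.0 / EX, h) * (1.0 + cx2) / 2.0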
TAGS The TAGS policy is the only one which reduces the variance of task sizes
at the individual hosts. Let p_i be the fraction of tasks whose final destination
is Host i. Consider the tasks which queue at Host i: First there are those
tasks which are destined for Host i. Their task size distribution is B(s_{i-1}; s_i; α),
because the original task size distribution is a Bounded Pareto. Then there are
the tasks which are destined for hosts numbered greater than i. These tasks are
all capped at size s_i. Thus the second moment of the task size distribution at
Host i is lower than the second moment of the original B(k; p; α) distribution
(for all hosts except the highest-numbered host, it turns out). The full analysis
of the TAGS policy is presented in the Appendix and is relatively straightforward
except for one point which we have to fudge and which we explain now: For
analytic convenience, we need to be able to assume that the tasks arriving at
each host form a Poisson Process. This is of course true for Host 1. However
the arrivals at Host i are those departures from Host i - 1 which exceed size
s_{i-1}. They form a less bursty process than a Poisson Process since they are
spaced apart by at least s_{i-1}. Throughout our analysis of TAGS, we make the
assumption that the arrival process into Host i is a Poisson Process.
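The variance reduction can be quantified directly, because a Bounded Pareto conditioned on an interval is again a Bounded Pareto on that interval. The sketch below (ours) reuses bp_moment from Section 3; bp_tail(s, ...) is P(X > s) for X ~ B(k, p, α), and hosts are 1-indexed:

def bp_tail(s, k, p, alpha):
    # P(X > s) for X ~ B(k, p, alpha); equals 1 at s = k and 0 at s = p.
    return ((k / s)**alpha - (k / p)**alpha) / (1.0 - (k / p)**alpha)

def host_work_moment(j, i, cutoffs, k, p, alpha):
    # j-th moment of the work Host i does per task that reaches it.
    lo = k if i == 1 else cutoffs[i - 2]   # arrivals at Host i have size > lo
    hi = cutoffs[i - 1]
    q = 1.0 - bp_tail(hi, k, p, alpha) / bp_tail(lo, k, p, alpha)
    # With probability q the task finishes here, with conditional law
    # B(lo, hi, alpha); otherwise it burns exactly hi seconds before the kill.
    return q * bp_moment(j, lo, hi, alpha) + (1.0 - q) * hi**j

Comparing host_work_moment(2, i, ...) against bp_moment(2, k, p, alpha) exhibits the drop in second moment at every host except the highest-numbered one.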
5.2 Load Unbalancing
The second reason why TAGS performs so well has to do with "load unbalancing."
Observe that all the other task assignment policies we described specifically try
to balance load at the hosts. Random and Round-Robin balance the expected
load at the hosts, while Least-Work-Remaining goes even further in trying to
balance the instantaneous load at the hosts. In TAGS we do the opposite.
Figure
8 shows the load at Host 1 and the load at Host 2 for TAGS-opt-slowdown,
TAGS-opt-waitingtime, and TAGS-opt-fairness as a function of α. Observe
that all 3 flavors of TAGS (purposely) severely underload Host 1 when α is low
but for higher α actually overload Host 1 somewhat. In the middle range, α ≈ 1,
the load is balanced in the two hosts.
We first explain why load unbalancing is desirable when optimizing overall
mean slowdown of the system. We will later explain what happens when optimizing
fairness. To understand why it is desirable to operate at unbalanced
loads, we need to go back to the heavy-tailed property. The heavy-tailed property
says that when a distribution is very heavy-tailed (very low α), only a
minuscule fraction of all tasks - the very largest ones - are needed to make up
more than half the total load. As an example, for a very low α it turns out
that less than a 10^-6 fraction of all tasks are needed to make up half the load. In
fact not many more tasks, still less than a 10^-4 fraction of all tasks, are needed
to make up a .99999 fraction of the load. This suggests a load game that can
be played: We choose the cutoff point (s_1) such that almost all tasks
have Host 1 as their final destination, and only a very few tasks (the
very largest ones) have Host 2 as their final destination. Because
of the heavy-tailed property, the load at Host 2 will be extremely high (.99999)
while the load at Host 1 will be very low (.00001). Since most tasks get to run
at such reduced load, the overall mean slowdown is very low.
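Given candidate cutoffs, the per-host loads that drive this game follow from the helpers sketched in Section 5.1: Host i sees arrival rate λ · P(X > s_{i-1}) and mean work host_work_moment(1, i, ...) per arrival. A sketch (ours, assuming unit-speed hosts):

def tags_loads(lam, cutoffs, k, p, alpha):
    # Per-host load under TAGS.
    loads = []
    for i in range(1, len(cutoffs) + 1):
        lo = k if i == 1 else cutoffs[i - 2]
        lam_i = lam * bp_tail(lo, k, p, alpha)   # note bp_tail(k, ...) == 1
        loads.append(lam_i * host_work_moment(1, i, cutoffs, k, p, alpha))
    return loads

Sweeping s_1 with this function is one way to explore the load-unbalancing trade-off plotted in Figure 8.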
When the distribution is a little less heavy-tailed, e.g., α ≈ 1, we can't play
this load unbalancing game as well. Again, we would like to severely underload
Host 1 and send a .999999 fraction of the load to Host 2. Before, we were able
to do this by making only a very small fraction of all tasks (< 10^-4 fraction)
go to Host 2. However now that the distribution is not as heavy-tailed, a larger
fraction of tasks must have Host 2 as its final destination to create very high
load at Host 2. But this in turn means that tasks with destination Host 2 count
more in determining the overall mean slowdown of the system, which is bad
since tasks with destination Host 2 experience larger slowdowns. Thus we can
only afford to go so far in overloading Host 2 before it turns against us.
When we get to α > 1, it turns out that it actually pays to overload Host 1
a little. This seems counter-intuitive, since Host 1 counts more in determining
the overall mean slowdown of the system because the fraction of tasks with
destination Host 1 is greater. However, the point is that now it is impossible to
create the wonderful state where almost all tasks are on Host 1 and yet Host 1 is
underloaded. The tail is just not heavy enough. No matter how we choose the
cutoff, a significant portion of the tasks will have Host 2 as their destination.
Thus Host 2 will inevitably figure into the overall mean slowdown and so we
need to keep the performance on Host 2 in check. To do this, it turns out we
need to slightly underload Host 2, to make up for the fact that the task size
variability is so much greater on Host 2 than on Host 1.
The above has been an explanation for why load unbalancing is important
with respect to optimizing the system mean slowdown. However it is not at all
clear why load unbalancing also optimizes fairness. Under TAGS-opt-fairness,
Figure 8: Load at Host 1 as compared with Host 2 in a distributed
server with 2 hosts and system load .5 under (a) TAGS-opt-slowdown, (b)
TAGS-opt-waitingtime, and (c) TAGS-opt-fairness. Observe that for very
low α, Host 1 is run at load close to zero, and Host 2 is run at load close to 1,
whereas for high α, Host 1 is somewhat overloaded.
Figure 9: Mean slowdown under TAGS-opt-slowdown in a distributed server
with 2 hosts with system load (a) 0.3, (b) 0.5, and (c) 0.7. In each figure the
mean slowdown under TAGS-opt-slowdown is compared with the performance
of Random and Least-Work-Remaining. Observe that in all the figures TAGS
outperforms the other task assignment policies under all α. However TAGS is
most effective at lower system loads.
the mean slowdown experienced by the small tasks is equal to the mean slowdown
experienced by the big tasks. However it seems in fact that we're treating the
big tasks unfairly on 3 counts:
1. The small tasks run on Host 1 which has very low load (for low α).
2. The small tasks run on Host 1 which has very low E{X^2}.
3. The small tasks don't have to be restarted from scratch and wait on a
second line.
So how can it possibly be fair to help the small tasks so much? The answer
is simply that the small tasks are small. Thus they need low waiting times to
keep their slowdown low. Big tasks on the other hand can afford a lot more
waiting time. They are better able to amortize the punishment over their long
lifetimes. It is important to mention, though, that this would not be the case
for all distributions. It is because our task size distribution for low α is so
heavy-tailed that the big tasks are truly elephants (way bigger than the smalls)
and thus can afford to suffer more. 5
5.3 Different Loads
Until now we have studied only the model of a distributed server with two hosts
and system load .5. In this section we consider the effect of system load on the
performance of TAGS. We continue to assume a 2 host model. Figure 9 shows the
performance of TAGS-opt-slowdown on a distributed server with 2 hosts run at
system load (a) 0.3, (b) 0.5, and (c) 0.7. In all three figures TAGS-opt-slowdown
improves upon the performance of Least-Work-Remaining and Random under
the full range of α, however the improvement of TAGS-opt-slowdown is much
better when the system is more lightly loaded. In fact, all the task assignment
policies improve as the system load is dropped, however the improvement
in TAGS is the most dramatic. In the case where the system load is 0.3,
TAGS-opt-slowdown improves upon Least-Work-Remaining by over 4 orders of
magnitude at α ≈ 1, by 6 or 7 orders of magnitude at lower α, and by almost
20 orders of magnitude at the lowest α. When the system load is 0.7, on the other
5 It may interest the reader to understand the degree of unfairness exhibited by
TAGS-opt-slowdown and TAGS-opt-waitingtime. For TAGS-opt-slowdown, our analysis shows
that the expected slowdown of the big tasks always exceeds that of the small tasks, and
the ratio E{Slowdown(bigs)} / E{Slowdown(smalls)} increases exponentially as α drops.
In contrast, for TAGS-opt-waitingtime, the expected slowdown of the big tasks is approximately
equal to that of the small tasks until α drops below 1, at which point the expected
slowdown of the big tasks drops way below that of the small tasks, the ratio of bigs to smalls
decreasing superexponentially as α drops.
hand, TAGS-opt-slowdown behaves comparably to Least-Work-Remaining for
most α and only improves upon Least-Work-Remaining in a narrower range
of α. Observe however that at α ≈ 1, the improvement of TAGS-opt-slowdown
is still about 4 orders of magnitude.
Why is the performance of TAGS so correlated with load? There are 2 reasons,
both of which can be understood by looking at Figure 10 which shows the loads
at the 2 hosts under TAGS-opt-slowdown in the case where the system load is
(a) 0.3, (b) 0.5, and (c) 0.7.
The first reason for the ineffectiveness of TAGS under high loads is that the
higher the load, the less able TAGS is to play the load-unbalancing game described
in Section 5.2. TAGS reaps much of its benefit at the lower α by
moving all the load onto Host 2. When the system load is only 0.5, TAGS is easily
able to pile all the load on Host 2 without exceeding load 1 at Host 2. However
when the system load is 0.7, the restriction that the load at Host 2 must not
exceed 1 becomes a bottleneck for TAGS since it means that Host 1 can not be
as underloaded as TAGS would like. This is seen by comparing Figure 10(b) and
Figure
10(c) where in (c) the load on Host 1 is much higher for the lower α than
it is in (b).
The second reason for the ineffectiveness of TAGS under high loads has to
do with what we call excess. Excess is the extra work created in TAGS by tasks
being killed and restarted. In the 2-host case, the excess is simply equal to
λ · p_2 · s_1, where λ is the outside arrival rate, p_2 is the fraction of tasks whose
final destination is Host 2, and s_1 is the cutoff differentiating small tasks from
big tasks. An equivalent definition of excess is the difference between the actual
sum of the loads on the hosts and h times the system load, where h is the
number of hosts. Notice that the dotted line in Figure 10(a)(b)(c) shows the
sum of the loads on the hosts.
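The two definitions of excess can be checked against each other numerically, reusing the helpers sketched earlier; the parameter values below are placeholders for illustration:

lam, k, p, alpha = 1.0 / 3000.0, 167.0, 1e10, 1.0   # roughly system load .5 on 2 hosts
s1 = 1e6                                            # some cutoff
cutoffs = [s1, p]

p2 = bp_tail(s1, k, p, alpha)                       # fraction of tasks reaching Host 2
excess_direct = lam * p2 * s1
excess_from_loads = (sum(tags_loads(lam, cutoffs, k, p, alpha))
                     - lam * bp_moment(1, k, p, alpha))
assert abs(excess_direct - excess_from_loads) < 1e-6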
Until now we've only considered distributed servers with 2 hosts and
system load 0.5. For this scenario, excess has not been a problem. The reason
is that for low α, where we need to do the severe load unbalancing, excess is
basically non-existent for loads 0.5 and under, since p_2 is so small (due to the
heavy-tailed property) and since s_1 could be forced down. For high α, excess
is present. However all the task assignment policies already do well in the high
α region because of the low task size variability, so the excess is not much of a
handicap.
When we look at the case of system load 0.7, however, excess is much more
of a problem, as is evidenced by the dotted line in Figure 10(c). One reason
that the excess is worse is simply that overall excess increases with load, because
excess is proportional to λ, which is in turn proportional to load. The other
reason that the excess is worse at higher loads has to do with s_1. In the low α
range, although p_2 is still low (due to the heavy-tailed property), s_1 cannot be
forced low because the load at Host 2 is capped at 1. Thus the excess for low
α is very high. To make matters worse, some of this excess must be heaped on
Host 1. In the high α range, excess again is high because p_2 is high.
Fortunately, observe that for higher loads excess is at its lowest point at
α ≈ 1. In fact, it is barely existent in this region. Observe also that the α ≈ 1
region is the region where balancing load is the optimal thing to do (with
respect to minimizing mean slowdown), regardless of the system load. This
"sweet spot" is fortunate because α ≈ 1 is characteristic of many empirically
measured computer workloads, see Section 3.
6 Analytic Results for the Case of Multiple Hosts
Until now we have only considered distributed servers with 2 hosts. For the case
of 2 hosts, we saw that the performance of TAGS-opt-slowdown was amazingly
good if the system load was 0.5 or less, but not nearly as good for system load
> 0.5. In this section we consider the case of more than 2 hosts.
The phrase "adding more hosts" can be ambiguous because it is not clear
whether the arrival rate is increased as well. For example, given a system with
2 hosts and system load 0.7, we could increase the number of hosts to 4 hosts
without changing the arrival rate, and the system load would drop to 0.35. On
the other hand, we could increase the number of hosts to 4 hosts and increase
the arrival rate appropriately (double it) so as to maintain a system load of 0.7.
In our discussions below we will attempt to be clear as to which view we have
in mind.
One claim that can be made straight off is that an h host system (h > 2)
with system load ρ can always be configured to produce performance which is
at least as good as that of a 2 host system with system load ρ. To see why,
observe that we can use the h host system (assuming h is even) to simulate a 2
host system as illustrated in Figure 11: Rename Hosts 1 and 2 as Subsystem 1.
Rename Hosts 3 and 4 as Subsystem 2. Rename Hosts 5 and 6 as Subsystem 3,
etc. Now split the traffic entering the h host system so that 2/h of the tasks
go to each of the h/2 Subsystems. Now apply your favorite task assignment
policy to each Subsystem independently - in our case we choose TAGS. Each
Subsystem will behave like a 2 host system with load ae running TAGS. Since
each Subsystem will have identical performance, the performance of the whole
host system will be equal to the performance of any one subsystem. (Observe
that the above cute argument works for any task assignment policy).
Figure
12 shows the mean slowdown under TAGS-opt-slowdown for the case
of a 4 host distributed server with system load 0.3. Comparing these results to
those for the 2 host system with system load 0.3 (Figure 9(a)), we see that:
Figure 10: Load at Host 1 and Host 2 under TAGS-opt-slowdown shown for
a distributed server with 2 hosts and system load (a) 0.3, (b) 0.5, (c) 0.7. The
dotted line shows the sum of the loads at the 2 hosts. If there were no excess,
the dotted line would be at (a) 0.6, (b) 1.0, and (c) 1.4 in each of the graphs
respectively. In figures (a) and (b) we see excess only at the higher α range.
In figure (c) we see excess in both the low α and high α range, but not around
α ≈ 1.
Figure 11: Illustration of the claim that an h host system (h > 2) with system
load ρ can always be configured to produce performance at least as good as a 2
host system with system load ρ (although the h host system has much higher
arrival rate).
Figure 12: Mean slowdown under TAGS-opt-slowdown compared with other task
assignment policies in the case of a distributed server with 4 hosts and system
load 0.3. The cutoffs for TAGS-opt-slowdown were optimized by hand. In many
cases it is possible to improve upon the results shown here by adjusting the cutoffs
further, so the slight bend in the graph may not be meaningful. Observe that the
mean slowdown of TAGS almost never exceeds 1.
1. The performance of Random stayed the same, as it should.
2. The performance of Least-Work-Remaining improved by a couple orders
of magnitude in the higher α region, but less in the lower α region. The
Least-Work-Remaining task assignment policy is helped by increasing
the number of hosts, although the system load stayed the same, because
having more hosts increases the chances of one of them being free.
3. The performance of TAGS-opt-slowdown improved a lot. So much so,
that the mean slowdown under TAGS-opt-slowdown is never over 6 and
almost always under 1. At α ≈ 1, TAGS-opt-slowdown improves upon
Least-Work-Remaining by 4-5 orders of magnitude. At lower α, it
improves upon Least-Work-Remaining by 8-9 orders of magnitude, and at
the lowest α it improves upon Least-Work-Remaining by
over 25 orders of magnitude!
The enhanced performance of TAGS on more hosts may come from the fact
that more hosts allow for greater flexibility in choosing the cutoffs. However
it is hard to say for sure because it is difficult to compute results for the case
of more than 2 hosts. The cutoffs in the case of 2 hosts were all optimized
by Mathematica, while in the case of 4 hosts it was necessary to perform the
optimizations by hand (and for all we know, it may be possible to do even
better). For the case of system load 0.7 with 4 hosts we ran into the same type
of problems as we did for the 2 host case with system load 0.7.
6.1 The Server Expansion Performance Metric
There is one thing that seems very artificial about our current comparison of
task assignment policies. No one would ever be willing to run a system whose
expected mean slowdown is astronomically high. In practice, if a system was
operating with such a mean slowdown, the number of hosts would be increased,
without increasing the arrival rate (thus dropping the system load), until the
system's performance improved to a reasonable mean slowdown, like 3 or less. Consider the
following example: Suppose we have a 2-host system running at system load .7
and with variability parameter α = .6. For this system the mean slowdown under
TAGS-opt-slowdown is on the order of 10^9, and no other task assignment policy
that we know of does better. Suppose however we desire a system with mean
slowdown of 3 or less. So we double the number of hosts (without increasing the
outside arrival rate). At 4 hosts, with system load 0.35, TAGS-opt-slowdown
now has mean slowdown of around 1, whereas Least-Work-Remaining's slowdown
has improved to around 10^8. It turns out we would have to increase the
number of hosts to 13 for the performance of Least-Work-Remaining to improve
to the point of mean slowdown of under 3. And for Random to reach that
level it would require an additional 10^9 hosts!
The above example suggests a new practical performance metric for distributed
servers, which we call the server expansion metric. The server expansion
metric asks how many additional hosts must be added to the existing
server (without increasing outside arrival rate) to bring mean slowdown down
to a reasonable level (where we'll arbitrarily define "reasonable" as 3 or less).
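Operationally, the server expansion metric is a simple search: keep adding hosts at fixed outside arrival rate (so the system load falls) until predicted mean slowdown drops below the threshold. A sketch (ours; mean_slowdown_model is a placeholder for whatever analysis or simulation predicts mean slowdown for the policy under study):

def server_expansion(mean_slowdown_model, lam, base_hosts, threshold=3.0,
                     max_extra=10**6):
    # mean_slowdown_model(h, lam) -> predicted mean slowdown with h hosts
    # at fixed outside arrival rate lam.
    for extra in range(max_extra + 1):
        if mean_slowdown_model(base_hosts + extra, lam) <= threshold:
            return extra
    return None   # not reached within the search budget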
Figure
13 compares the performance of our task assignment policies according
to the server expansion metric, given that we start with a 2 host system with
system load of 0.7. For TAGS-opt-slowdown, the server expansion is only 3 at
its highest point, and no more than 2 for all the other α. For Least-Work-Remaining, on
the other hand, the number of hosts we need to add ranges from 1 to 27, as α
decreases. Still Least-Work-Remaining is not so bad because at least its performance
improves somewhat quickly as hosts are added and load is decreased,
the reason being that both these effects increase the probability of a task finding
an idle host. By contrast Random, shown in Figure 13(b), is exponentially
worse than the others, requiring as many as 10^5 additional hosts when α ≈ 1.
Although Random does benefit from increasing the number of hosts, the effect
isn't nearly as strong as it is for TAGS and Least-Work-Remaining.
Figure 13: Server expansion requirement for each of the task assignment policies,
given that we start with a 2 host system with system load of 0.7. (a) Shows just
Least-Work-Remaining and TAGS-opt-slowdown on a non-log scale. (b) Shows
Least-Work-Remaining, TAGS-opt-slowdown, and Random on a log scale.
Figure 14: Second moment of the B(k; p; α) distribution, where now the upper
bound, p, is set at 10^7. The mean is held fixed at 3000 as α is varied. Observe
that the coefficient of variation is now far smaller, ranging upward from about 2
at α = 2 as α decreases.
7 The effect of the range of task sizes
The purpose of this section is to investigate what happens when the range of
task sizes (difference between the biggest and smallest possible task sizes) is
smaller than we have heretofore assumed, resulting in a smaller coefficient of
variation in the task size distribution.
Until now we have always assumed that the task sizes are distributed according
to a Bounded Pareto distribution with upper bound fixed at p = 10^10 and
mean 3000. This means, for example, that when α ≈ 1 (as agrees with empirical
data), we need to set the lower bound on task sizes to 167. However this
implies that the range of task sizes spans 8 orders of magnitude!
It is not clear that most applications have task sizes ranging 8 orders in mag-
nitude. In this section we rederive the performance of all the task assignment
policies when the upper bound p is set to 10^7, still holding the mean
of the task size distribution at 3000. This means, for example, that when α ≈ 1
(as agrees with empirical data), we need to set a correspondingly higher lower
bound on task sizes, which implies the range of task sizes spans just 5 orders
of magnitude.
Figure 15: Mean slowdown under TAGS-opt-slowdown in a distributed server
with 2 hosts with system load 0.5, as compared with the performance of Random
and Least-Work-Remaining. In this set of results the task size distribution has
upper bound p = 10^7.
Figure
14 shows the second moment of the Bounded Pareto task size distribution
as a function of α when p = 10^7. Comparing this figure to Figure 4, we
see that the task size variability is far lower when p = 10^7, and therefore so is the
coefficient of variation.
Lower variance in the task size distribution suggests that the improvement
of TAGS over the other task assignment policies will not be as dramatic as in
the higher variability setting (when p = 10^10). This is in fact the case. What is
interesting, however, is that even in this lower variability setting the improvement
of TAGS over the other task assignment policies is still impressive, as shown
in Figure 15. Figure 15 shows the mean slowdown of TAGS-opt-slowdown as
compared with Random and Least-Work-Left for the case of two hosts with
system load 0.5. Observe that for α ≈ 1, TAGS improves upon the other task
assignment policies by over 2 orders of magnitude. As α drops, the improvement
increases. This figure should be contrasted with Figure 9(b), which shows the
same scenario where p = 10^10.
8 Conclusion and Future Work
This paper is interesting not only because it proposes a powerful new task
assignment policy, but more so because it challenges some natural intuitions
which we have come to adopt over time as common knowledge.
Traditionally, the area of task assignment, load balancing and load sharing
has consisted of heuristics which seek to balance the load among the multiple
hosts. TAGS, on the other hand, specifically seeks to unbalance the load, and
sometimes severely unbalance the load. Traditionally, the idea of killing a task
and restarting from scratch on a different machine is viewed with skepticism,
but possibly tolerable if the new host is idle. TAGS, on the other hand, kills tasks
and then restarts them at a target host which is typically operating at extremely
high load, much higher load than the original source host. Furthermore, TAGS
proposes restarting the same task multiple times.
It is interesting to consider further implications of these results, outside the
scope of task assignment. Consider for example the question of scheduling CPU-bound
tasks on a single CPU, where tasks are not preemptible and no a priori
knowledge is given about the tasks. At first it seems that FCFS scheduling is
the only option. However in the face of high task size variability, FCFS may
not be wise. This paper suggests that killing and restarting tasks may be worth
investigating as an alternative, if the load on the CPU is low enough to tolerate
the extra work created.
Task assignment also has applications outside of the context of a distributed
server system described in this paper. A very interesting recent paper by Shaikh,
Rexford, and Shin [15] discusses routing of IP flows (which also have heavy-tailed
size distributions) and recommends routing long flows differently from
short flows.
--R
Load profiling: A methodology for scheduling real-time tasks in a distributed system
Task assignment in a distributed system: Improving performance by unbalancing load.
A simple dynamic routing problem.
Task assignment in a distributed server.
Task assignment in a distributed server.
Exploiting process lifetime distributions for dynamic load balancing.
Unix file size survey - 1993.
An approximation to the response time for shortest queue routing.
An approximation for the mean response time for shortest queue routing with general interarrival and service times.
Fractal patterns in DASD I/O traffic.
Approximations in finite capacity multiserver queues with poisson arrivals.
On the optimal assignment of customers to parallel servers.
Deciding which queue to join: Some counterexamples.
Optimality of the shortest line discipline.
An upper bound for multi-channel queues
Stochastic Modeling and the Theory of Queues.
--TR
Deciding which queue to join: Some counterexamples
An approximation to the response time for shortest queue routing
An approximation for the mean response time for shortest queue routing with general interarrival and service times
Wide-area traffic
Exploiting process lifetime distributions for dynamic load balancing
Self-similarity in World Wide Web traffic
Task assignment in a distributed system (extended abstract)
Heavy-tailed probability distributions in the World Wide Web
Load-sensitive routing of long-lived IP flows
Load-balancing heuristics and process behavior
On choosing a task assignment policy for a distributed server system
Implementing Multiprocessor Scheduling Disciplines
Theory and Practice in Parallel Job Scheduling
Improved Utilization and Responsiveness with Gang Scheduling
Valuation of Ultra-scale Computing Systems
A parallel workload model and its implications for processor allocation
Evaluation of Task Assignment Policies for Supercomputing Servers
Load profiling
--CTR
Jianbin Wei , Cheng-Zhong Xu, Design and implementation of a feedback controller for slowdown differentiation on internet servers, Special interest tracks and posters of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Konstantinos Psounis , Pablo Molinero-Fernndez , Balaji Prabhakar , Fragkiskos Papadopoulos, Systems with multiple servers under heavy-tailed workloads, Performance Evaluation, v.62 n.1-4, p.456-474, October 2005
Victoria Ungureanu , Benjamin Melamed , Phillip G. Bradford , Michael Katehakis, Class-Dependent Assignment in cluster-based servers, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Victoria Ungureanu , Benjamin Melamed , Michael Katehakis , Phillip G. Bradford, Deferred Assignment Scheduling in Cluster-Based Servers, Cluster Computing, v.9 n.1, p.57-65, January 2006
Mor Harchol-Balter , Cuihong Li , Takayuki Osogami , Alan Scheller-Wolf , Mark S. Squillante, Cycle stealing under immediate dispatch task assignment, Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures, June 07-09, 2003, San Diego, California, USA
Jianbin Wei , Xiaobo Zhou , Cheng-Zhong Xu, Robust Processing Rate Allocation for Proportional Slowdown Differentiation on Internet Servers, IEEE Transactions on Computers, v.54 n.8, p.964-977, August 2005
James Broberg , Zahir Tari , Panlop Zeephongsekul, Task assignment with work-conserving migration, Parallel Computing, v.32 n.11-12, p.808-830, December, 2006
T. Madhusudan , Young-Jun Son, A simulation-based approach for dynamic process management at web service platforms, Computers and Industrial Engineering, v.49 n.2, p.287-317, September 2005 | load sharing;fairness;job scheduling;supercomputing;distributed servers;contrary behavior;high variance;load balancing;task assignment;heavy-tailed workloads;clusters |
506161 | Mesh Partitioning for Efficient Use of Distributed Systems. | Mesh partitioning for homogeneous systems has been studied extensively; however, mesh partitioning for distributed systems is a relatively new area of research. To ensure efficient execution on a distributed system, the heterogeneities in the processor and network performance must be taken into consideration in the partitioning process; equal size subdomains and small cut set size, which results from conventional mesh partitioning, are no longer the primary goals. In this paper, we address various issues related to mesh partitioning for distributed systems. These issues include the metric used to compare different partitions, efficiency of the application executing on a distributed system, and the advantage of exploiting heterogeneity in network performance. We present a tool called PART, for automatic mesh partitioning for distributed systems. The novel feature of PART is that it considers heterogeneities in the application and the distributed system. Simulated annealing is used in PART to perform the backtracking search for desired partitions. While it is well-known that simulated annealing is computationally intensive, we describe the parallel version of simulated annealing that is used with PART. The results of the parallelization exhibit superlinear speedup in most cases and nearly perfect speedup for the remaining cases. Experimental results are also presented for partitioning regular and irregular finite element meshes for an explicit, nonlinear finite element application, called WHAMS2D, executing on a distributed system consisting of two IBM SPs with different processors. The results from the regular problems indicate a 33 to 46 percent increase in efficiency when processor performance is considered as compared to the conventional even partitioning. The results indicate a 5 to 15 percent increase in efficiency when network performance is considered as compared to considering only processor performance; this is significant given that the optimal improvement is 15 percent for this application. The results from the irregular problem indicate up to a 36 percent increase in efficiency when processor and network performance are considered as compared to even partitioning. | Introduction
Distributed computing has been regarded as the future of high performance computing. Nation-wide
high speed networks such as vBNS [25] are becoming widely available to interconnect high-speed
computers, virtual environments, scientific instruments and large data sets. Projects such
as Globus [15] and Legion [20] are developing software infrastructure that integrates distributed
computational and informational resources. In this paper, we present a mesh partitioning tool for
distributed systems. This tool, called PART, takes into consideration the heterogeneity in processors
and networks found in distributed systems as well as heterogeneities found in the applications.
Mesh partitioning is required for efficient parallel execution of finite element and finite difference
applications, which are widely used in many disciplines such as biomedical engineering, structural
mechanics, and fluid dynamics. These applications are distinguished by the use of a meshing
procedure to discretize the problem domain. Execution of a mesh-based application on a parallel
or distributed system involves partitioning the mesh into subdomains that are assigned to individual
processors in the parallel or distributed system.
Mesh partitioning for homogeneous systems has been studied extensively [2, 4, 14, 31, 36, 37, 41];
however, mesh partitioning for distributed systems is a relatively new area of research brought about
by the recent availability of such systems. To ensure efficient execution on a distributed system, the
heterogeneities in the processor and network performance must be taken into consideration in the
partitioning process; equal size subdomains and small cut set size, which results from conventional
mesh partitioning, are no longer desirable. PART takes advantage of the following heterogeneous
system features: (1) processor speed; (2) number of processors; (3) local network performance; and
(4) wide area network performance. Further, different finite element applications under consideration
may have different computational complexity, different communication patterns, and different
element types, which also must be taken into consideration when partitioning.
In this paper, we discuss the major issues in mesh partitioning for distributed systems. In
particular, we identify a good metric to be used to compare different partitioning results, present
a measure of efficiency for a distributed system, and discuss the optimal number of cut sets for remote
communication. The metric used with PART to identify good partitions is the estimated execution
time.
We also present a parallel version of PART that significantly improves performance of the
partitioning process. Simulated annealing is used in PART to perform the backtracking search for
desired partitions. However, it is well known that simulated annealing is computationally intensive.
In the parallel PART, we use the asynchronous multiple Markov chain approach of parallel simulated
annealing [21]. PART is used to partition six irregular meshes into 8, 16, and 100 subdomains using
up to 64 client processors on an IBM SP2 machine. The results show superlinear speedup in most
cases and nearly perfect speedup for the rest. The results also indicate that the parallel version of
PART produces partitions consistent with the sequential version of PART.
Using partitions from PART, we ran an explicit, 2-D finite element code on two geographically
distributed IBM SP machines. We used Globus software for communication between the two SPs.
We compared the partitions from PART with that generated using the widely-used partitioning tool,
METIS [26], which considers only processor performance. The results from the regular problems
indicate a 33-46% increase in efficiency when processor performance is considered as compared to
the conventional even partitioning; the results indicate a 5-15% increase in efficiency when network
performance is considered as compared to considering only processor performance; this is significant
given that the optimal improvement is 15% for this application. The results from the irregular problem indicate up
to a 36% increase in efficiency when processor and network performance are considered as compared
to even partitioning.
The remainder of the paper is organized as follows: Section 2 provides background. Section 3
discusses the major issues. Section 4 describes PART in detail, and Section 5 presents the parallel
simulated annealing algorithm. Section 6 presents experimental results, Section 7 discusses previous
work, and Section 8 concludes.
Background
2.1 Mesh-based Applications
The finite element method has been the fundamental numerical analysis technique to solve partial
differential equations in the engineering community for the past three decades [24, 3]. There are
three basic procedures in the finite element method. The problem is first formulated in variational or
weighted residual form. In the second step, the problem domain is discretized into complex shapes
called elements. The last major step is to solve the resulting system of equations. The procedure of
discretizing the problem domain is called meshing. Applications that involve a meshing procedure
are referred to as mesh-based applications.
Mesh-based applications are naturally suitable for parallel or distributed systems. Implementing
the finite element method in parallel involves partitioning the global domain of elements into
connected subdomains that are distributed among P processors; each processor executes the
numerical technique on its assigned subdomain. The communication among processors is dictated
by the types of integration method and solver method. Explicit integration finite element problems
do not require the use of a solver since a lumped matrix (which is a diagonal matrix) is used.
Therefore, communication only occurs among neighboring processors that have common data and
is relatively simple. For implicit integration finite element problems, however, communication is
determined by the type of solver used in the application. The application used in this paper is
an explicit, nonlinear finite element code, called WHAMS2D [6], which is used to analyze elastic plastic
materials. While we focus on the WHAMS2D code, the concept can be generalized to implicit as
well as other mesh-based applications.
2.2 Distributed System
Distributed computing consists of a platform with a network of resources. These resources may be
clusters of workstations, clusters of personal computers, or parallel machines. Further, the resources
may be located at one site or distributed among different sites. Figure 1 shows an example of
a distributed system. Distributed systems provide an economical alternative to costly massively
parallel computers. Researchers are no longer limited by the computing resources at individual sites.
The distributed computing environment also provides researchers opportunities to collaborate and
share ideas through the use of collaboration technologies.
In a distributed system, we define "group" as a set of processors that share one interconnection
network and have the same performance. A group can be an SMP, a parallel computer, or a cluster
of workstations or personal computers. Communication occurs both within a group and between
groups. We refer to communication within a group as local communication; and those between
processors in different groups as remote communication. The number of groups in the distributed
system is represented by the term S.
Figure 1: A distributed system.
2.3 Problem Formulation
Mesh partitioning for homogeneous systems can be viewed as a graph partitioning problem. The
goal of the graph partitioning problem is to find a small vertex separator and equal sized subsets.
Mesh partitioning for distributed system, however, is a variation of the graph partitioning problem;
the goal differs from regular graph partitioning problem in that equal sized subsets may not be
desirable. The distributed system partitioning problem can be stated as follows:
Given a graph G = (V, E) of |V| vertices, find k disjoint subsets V_1, ..., V_k with
V_1 ∪ V_2 ∪ ... ∪ V_k = V such that the maximum of a
cost function f over all V_i is minimized:
min max_{1 ≤ i ≤ k} f(V_i)
In this paper, the cost function f is the estimate of execution time of a given application on a
distributed system. This function is discussed further in Section 4.
Graph partitioning has been proven to be NP-complete. The mesh partitioning problem for
distributed systems is also NP-complete, as proven in Appendix 1. Therefore, we focus on heuristics
to solve this problem.
3 Major Issues
In this section, we discuss the following major issues related to the mesh partitioning problem for
distributed systems: comparison metric, efficiency, and number of cuts between groups.
3.1 Comparison Metric
The de facto metric for comparing the quality of different partitions for homogeneous parallel
systems has been equal subdomains and minimum interface (or cut set) size. Although there have
been objections and counter examples [14], this metric has been used extensively in comparing the
quality of different partitions. It is obvious that equal subdomain size and minimum interface are
not valid criteria for comparing partitions for distributed systems.
One may consider an obvious metric for a distributed system to be unequal subdomains (pro-
portional to processor performance) and small cut set size. The problem with this metric is that
heterogeneity in network performance is not considered. Given that both local and wide area networks
are used in a distributed system, there will be a large difference between local and
remote communication costs, especially in terms of latency.
We argue that the use of an estimate of execution time of the application on the target heterogeneous
system will always lead to a valid comparison of different partitions. The estimate is used for
relative comparison of different partition methods. Hence a coarse approximation of the execution
time is appropriate for the comparison metric. It is important to make the estimate representative of
the application and the system. The estimate should include parameters that correspond to system
heterogeneities such as processor performance, local and remote communication. It should also
reflect the application computational complexity.
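As a concrete illustration (this is our own sketch, not code from PART), the following Python fragment estimates the execution time of one processor under the assumptions used later in this paper: computational complexity linear in the number of elements, as in WHAMS2D, and a per-message plus per-byte communication model. All names and parameters are ours.

def estimate_execution_time(num_elems, proc_perf, local_msgs, remote_msgs,
                            alpha_L, beta_L, alpha_R, beta_R, c=1.0):
    # Computation: c * N_i / F_i, assuming complexity linear in the
    # number of elements (true for explicit codes such as WHAMS2D).
    e_comp = c * num_elems / proc_perf
    # Communication: start-up cost per message plus per-byte cost,
    # summed over the local and remote message sizes (in bytes).
    e_comm = sum(alpha_L + beta_L * m for m in local_msgs)
    e_comm += sum(alpha_R + beta_R * m for m in remote_msgs)
    return e_comp + e_comm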
3.2 Efficiency
The efficiency for the distributed system is equal to the ratio of the relative speedup to the effective
number of processors, V . This ratio is given below:
efficiency = (E(1)/E) / V    (1)
where E(1) is the sequential execution time on one processor and E is the execution time on the
distributed system. The term V is equal to the summation of each processor's performance relative
to the performance of the processor used for sequential execution. This term is as follows:
V = Σ_{i=1}^{P} F_i / F_k    (2)
where k is the processor used for sequential execution. For example, with two processors having
processor performance F_1 and F_2, the efficiency would be (E(1)/E) / ((F_1 + F_2)/F_1)
if processor 1 is used for sequential execution; the efficiency is (E(2)/E) / ((F_1 + F_2)/F_2)
if processor 2 is instead used for sequential execution.
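A minimal Python sketch of Equations 1 and 2 (the names are ours; seq_time is the sequential time measured on processor k):

def effective_processors(perf, k):
    # Equation 2: V = sum_i F_i / F_k, relative to processor k.
    return sum(f / perf[k] for f in perf)

def efficiency(seq_time, dist_time, perf, k):
    # Equation 1: relative speedup divided by V.
    return (seq_time / dist_time) / effective_processors(perf, k)

# Example with F_1 = 1 and F_2 = 2: a job that runs sequentially in
# 30 units on processor 1 runs in 15 units on processor 2, and the
# efficiency is the same either way.
print(efficiency(30.0, 12.0, [1.0, 2.0], k=0))  # V = 3.0 -> 0.833
print(efficiency(15.0, 12.0, [1.0, 2.0], k=1))  # V = 1.5 -> 0.833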
3.3 Network Heterogeneity
It is well-known that heterogeneity of processor performance must be considered with distributed
systems. In this section, we identify conditions for which heterogeneity in network performance
must be considered.
For a distributed system, recall that we define a group to be a collection of processors that
have the same performance and share a local interconnection network. Remote communication
corresponds to communication between two groups. Given that some processors require remote
and local communication, while others only require local communication, there will be a disparity
between the execution times of these processors corresponding to the difference in remote and local
communications (assuming equal computational loads).
3.3.1 Ideal Reduction in Execution Time
A retrofit step is used with the PART tool to reduce the computational load of processors with local
and remote communication to equalize the execution time among the processors in a group. This
step is described in detail in Section 4.2.2. The reduction in execution time that occurs with this
retrofit is demonstrated by considering a simple case, stripe partitioning, for which communication
occurs with at most two neighboring processors. Assume there exists two groups having the same
processor and local network performance; the groups are located at geographically distributed sites
requiring a WAN for interconnection. Figure 2 illustrates one such case.
Figure 2: Communication Pattern for Stripe Partitioning.
Processor i (as well as processor j) requires both local and remote communication. The difference
between the two communication times is:
C_R - C_L = x% · E
where x is the percentage of the difference of C_R and C_L in the total execution time E. Assume
that E represents the execution time taking into consideration only processor performance. Since
it is assumed that all processors have the same performance, this entails an even partition of the
mesh.
Now consider the case of partitioning to take into consideration the heterogeneity in network
performance. This is achieved by decreasing the load assigned to processor i and increasing the
loads of the other processors in group 1. The same applies to processor j in group 2. The amount
of the load to be redistributed is C_R - C_L, or x%E, and this amount is distributed to the G processors.
This is illustrated in Figure 6, which is discussed with the retrofit step of PART. The execution
time is now:
E' = E - x%E + x%E / G
The difference between E and E' is:
E - E' = x%E (1 - 1/G) = x%E · (G - 1)/G
Therefore, by taking the network performance into consideration when partitioning, the
reduction in execution time is approximately x%E (denoted as Δ(1, G)), which includes the following:
(1) the percentage of communication in the application and (2) the difference in the remote and
local communication. Both factors are determined by the application and the partitioning. If the
maximum number of processors among the groups that have remote communication is λ, then the
reduction in execution time is as follows:
Δ(λ, G) = x%E · (G - λ)/G
For example, for the WHAMS2D application in our experiments, we calculated the ideal reduction
to be 15% for the regular meshes executing on 8 processors. For those partitions, only one processor
in each group has local and remote communication; therefore, it is relatively easy to calculate the
ideal performance improvement.
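The reduction estimate can be evaluated directly, as in the sketch below; note that it encodes our reconstruction of the formula, Δ(λ, G) = x%E · (G - λ)/G, and is illustrative rather than PART's actual code.

def ideal_reduction(x, E, lam, G):
    # x: fraction of E attributable to the remote/local gap
    # (C_R - C_L = x * E); lam: processors per group with remote
    # communication; G: number of processors per group.
    return x * E * (G - lam) / G

# With one boundary processor per group, the reduction approaches
# x * E as the group grows.
print(ideal_reduction(0.20, 100.0, 1, 4))   # 15.0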
3.3.2 Number of Processors in a Group with Local and Remote Communication
The major issue to be addressed with the reduction is how to partition the domain assigned to a
group to maximize the reduction. In particular, this issue entails a tradeoff between the following
two scenarios:
1. Many processors in a group having local and remote communication, resulting in small message
sizes for which the execution time without the retrofit step is smaller than that for case 2.
However, given that many processors in a group have remote and local communication, there
are fewer processors that are available for redistribution of the additional load. This is illustrated
in Figure 3 where a mesh of size n × 2n is partitioned into 2P blocks. Each block
Figure 3: A size n × 2n mesh partitioned into 2P blocks.
is (n/√P) × (n/√P),
assuming all processors have equal performance. The mesh is partitioned into
two groups, each group having P processors. Processors on the group boundary incur remote
communication as well as local communication. Part of the computational load of these
processors needs to be moved to processors with only local communication to compensate for
the longer communication times. Assuming there is no overlap in communication messages and
that message aggregation is used for the communication of one node to the diagonal processor, the
communication time for a processor on the group boundary is approximately:
T_local+remote = 3(α_L + (n/√P) β_L) + (α_R + (n/√P) β_R)
For a processor with only local communication, the communication time is approximately
(again, message aggregation and no overlapping is assumed):
T_local = 4(α_L + (n/√P) β_L)
Therefore, the communication time difference between a processor with local and remote
communication and a processor with only local communication is approximately:
T_local+remote - T_local = (α_R - α_L) + (n/√P)(β_R - β_L)
There are a total of √P processors with local and remote communication. Therefore, using
Equation 1, the ideal reduction in execution time in Group 1 (and Group 2) is:
Δ_comm(blocks) = [(α_R - α_L) + (n/√P)(β_R - β_L)] · (P - √P)/P
Figure 4: A size n × 2n mesh partitioned into 2P stripes.
2. Only one processor in a group has local and remote communication, resulting in large message
sizes, which make the execution time without the retrofit step larger than that for case 1.
However, there are more processors that are available for redistribution of the additional load.
This is illustrated in Figure 4 where the same mesh is partitioned into stripes; there is only
one processor in each group that has local and remote communication. Following a similar
analysis as in Figure 3, the communication time difference between a processor with both local
and remote communication and a processor with only local communication is approximately:
T_local+remote - T_local = (α_R - α_L) + n(β_R - β_L)
There is only one processor with remote communication in each group. Hence, using Equation
1, the ideal reduction in execution time is:
Δ_comm(stripes) = [(α_R - α_L) + n(β_R - β_L)] · (P - 1)/P    (15)
Therefore, the total execution times for stripe and block partitioning are:
T(stripes) = T_no-reduction(stripes) - Δ_comm(stripes)
T(blocks) = T_no-reduction(blocks) - Δ_comm(blocks)
The difference in total execution time between block and stripe partitioning is:
ΔT(blocks - stripes) = A + B + C
where A and C are terms in the message start-up costs α_L and α_R, and B is a term in the
per-byte costs that grows with the message size n.
Therefore, the difference in total execution time between block and stripe partitioning is determined
by the sign of A + B + C. The terms A and C are positive since P > 1, while the term B is negative.
If P ≤ 4, the block partition has a higher execution time, i.e., the stripe
partitioning is advantageous. If P > 4, however, block partitioning will still have a higher execution
time unless n is so large that the absolute value of term B is larger than the sum of the absolute
values of A and C. Note that α_L and α_R are one to two orders of magnitude larger than β_L. In our
experiments, we calculated that block partitioning has a lower execution time only if n > 127KB.
In the meshes that we used, however, the largest n is only about 10KB.
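The trade-off between the two cases can be checked numerically with the reconstructed reduction formulas. The closed forms below follow our reconstruction of the derivation above (the coefficients could not be verified against the original equations), and the sample values of α and β are made up for illustration.

import math

def delta_stripes(aL, bL, aR, bR, n, P):
    # One boundary processor per group; message size n bytes.
    return ((aR - aL) + n * (bR - bL)) * (P - 1) / P

def delta_blocks(aL, bL, aR, bR, n, P):
    # sqrt(P) boundary processors per group; message size n/sqrt(P).
    lam = math.sqrt(P)
    return ((aR - aL) + (n / lam) * (bR - bL)) * (P - lam) / P

# Start-up costs one to two orders of magnitude above per-byte costs
# favor stripe partitioning unless the message size n is very large.
print(delta_stripes(1e-4, 1e-7, 1e-2, 1e-6, 10000, 16))
print(delta_blocks(1e-4, 1e-7, 1e-2, 1e-6, 10000, 16))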
4 Description of PART
PART considers heterogeneities in both the application and the system. In particular, PART
takes into consideration that different mesh-based applications may have different computational
complexities and that the mesh may consist of different element types. For distributed systems, PART
takes into consideration heterogeneities in processor and network performance.
Figure 5 shows a flow diagram of PART. PART consists of an interface program and a simulated
annealing program. A finite element mesh is fed into the interface program, which produces the
weighted communication graph; this graph is then fed into a simulated annealing program where the
final partitioning is computed. This partitioned graph is translated to the required input file format
for the application. This section describes the initial interface program and the steps required to
partition the graph.
Figure 5: PART flowchart.
4.1 Mesh Representation
We use a weighted communication graph to represent a finite element mesh. This is a natural
extension of the communication graph. As in the communication graph, vertices represent elements
in the original mesh. A weight is added to each vertex to represent the number of nodes within
the element. As in the communication graph, edges represent the connectivity of the elements in
the weighted communication graph. A weight is also added to each edge to represent the number
of nodes whose information needs to be exchanged between the two neighboring elements.
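A small sketch of constructing this weighted communication graph from element connectivity (plain dictionaries, our own naming; the O(n^2) neighbor scan is for brevity only):

def build_weighted_graph(elements):
    # elements: {elem_id: list of node ids within the element}
    # Vertex weight = number of nodes within the element.
    vweight = {e: len(nodes) for e, nodes in elements.items()}
    # Edge weight = number of nodes shared by two neighboring
    # elements, i.e., the data exchanged between them.
    eweight = {}
    ids = list(elements)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            shared = set(elements[a]) & set(elements[b])
            if shared:
                eweight[(a, b)] = len(shared)
    return vweight, eweight

# Two triangles sharing the edge between nodes 2 and 3:
print(build_weighted_graph({0: [1, 2, 3], 1: [2, 3, 4]}))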
4.2 Partition Method
PART entails three steps to partition a mesh for distributed systems. These steps are:
1. Partition the mesh into S subdomains for the S groups, taking into consideration heterogeneity
in processor performance and element types.
2. Partition each subdomain into G parts for the G processors in a group, taking into consideration
heterogeneity in network performance and element types.
3. If necessary, globally retrofit the partitions among the groups, taking into consideration heterogeneity
in the local networks among the different groups.
Each of the above steps is described in detail in the following subsections. Each subsection
includes a description of the objective function used with simulated annealing.
The key to a good partitioning by simulated annealing is the cost function. The cost function
used by PART is the estimate of execution time. For one particular supercomputer, let E_i be the
execution time for the i-th processor (1 ≤ i ≤ p). The goal here is to minimize the variance of the
execution time for all processors.
While running the simulated annealing program, we found that the best cost function is:
max_{1 ≤ i ≤ p} (E_comp_i + μ · E_comm_i)    (20)
instead of the sum of the squared deviations, Σ_i (E_i - Ē)^2. So (20) is the actual cost function used in the simulated annealing
program. In this cost function, E_comm_i includes the communication cost for the partitions that have
elements that need to communicate with elements on a remote processor. Therefore, the execution
time will be balanced. μ is the parameter that needs to be tuned according to the application and
problem size.
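A one-line sketch of evaluating this cost for a candidate partition; the max-plus-penalty form follows our reconstruction of Equation 20 above:

def cost(e_comp, e_comm, mu):
    # max over processors of E_comp_i + mu * E_comm_i: balances the
    # slowest processor while penalizing remote communication.
    return max(ec + mu * em for ec, em in zip(e_comp, e_comm))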
4.2.1 Step 1: Partition
The first step generates a coarse partitioning for the distributed systems. Each group gets a
subdomain that is proportional to its number of processors, the performance of the processors, and
the computational complexity of the application. Hence computational cost is balanced across all
the groups.
The cost function is given by:
max_{1 ≤ i ≤ S} E_comp_i    (21)
where S is the number of groups in the system.
4.2.2 Step 2: Retrofit
In the second step, the subdomain that is assigned to each group from Step 1 is partitioned among
its processors. Within each group, simulated annealing is used to balance the execution time.
In this step, variance in network performance is considered. Processors that entail inter-group
communication will have reduced computational load to compensate for the longer communication
time.
The step is illustrated in Figure 6 for two supercomputers, SC1 and SC2. In SC1, four processors
are used; and two processors are used in SC2. Computational load is reduced for P3 since it
communicates with a remote processor. The amount of reduced computational load is represented
as δ. This amount is equally distributed to the other three processors. Assuming the cut size
remains unchanged, the communication time will not change, hence the execution time will be
balanced after this shifting of computational load.
Figure 6: An illustration of the retrofit step for two supercomputers assuming only two nearest
neighbor communication.
This step entails generating imbalanced partitions in group i that take into consideration that
some processors communicate locally and remotely and other processors communicate only locally.
The imbalance is represented with the term Δ_i. This term is added to processors that require
local and remote communication. Adding this term results in a decrease in the computational load
assigned to those processors as compared to
processors requiring only local communication. The cost function is given by the following equation:
max_{1 ≤ i ≤ p} (E_comp_i + Δ_i + μ · E_comm_i)    (22)
where p is the number of processors in a given group; Δ_i is the difference in the estimation of local
and remote communication time. For processors that only communicate locally, Δ_i = 0.
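Under the per-message/per-byte model, Δ_i can be estimated as in the following sketch (our own naming; msg_bytes is the size of the boundary message of processor i):

def delta_i(msg_bytes, alpha_L, beta_L, alpha_R, beta_R, remote):
    # Difference between the estimated remote and local communication
    # time for processor i; zero for processors that communicate
    # only locally.
    if not remote:
        return 0.0
    return (alpha_R - alpha_L) + msg_bytes * (beta_R - beta_L)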
4.2.3 Step 3: Global Retrofit
The third step addresses the global optimization, taking into consideration differences in the local
interconnect performance of various groups. Again, the goal is to minimize the variance of the
execution time across all processors. In this step, elements on the boundaries of partitions are
moved according to the execution time variance between neighboring processors. This step is only
executed if there is a large difference in the performance of the different local interconnects. For
the case when a significant number of elements are moved between the groups in Step 3, the second
step is executed again to equalize the execution time in a group given the new computational load.
After Step 2, processors in each group will have a balanced execution time. However, execution
time of the different groups may not be balanced. This may occur when there is a large difference
in the communication time of the different groups. To balance the execution among all the groups,
we take the weighted average of the execution times from all the groups. The weight
for each group equals the computing power of that group versus the total computing power.
The computing power for a particular group is the multiplication of the ratio of the processor
performance with respect to the slowest one among all the groups and the number of processors used
from that group. We denote this weighted average as Ē. Under the assumption that communication
time will not change much (i.e., the separators from Step 1 will not incur a large change in size),
Ē is the optimal execution time that can be achieved. To balance the execution time so that each
group will have an execution time of Ē, we first compute the difference of E_i with Ē:
Γ_i = E_i - Ē
This Γ_i is then added to each E_comp_i in the cost function. The communication cost E_comm_i is
now again the remote communication cost for group i. The cost function is therefore given by:
max_{1 ≤ i ≤ S} (E_comp_i + Γ_i + μ · E_comm_i)    (23)
where S is the number of groups in the system. For groups with Γ_i < 0, the domain will increase;
for groups with Γ_i > 0, the domain will decrease. If Step 3 is necessary, then Step 2 is performed
again to partition within each group.
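The weighted-average target Ē and the per-group offsets Γ_i can be computed as in this sketch (names ours; perf, nprocs, and slowest describe the groups and the slowest processor):

def global_retrofit_offsets(E, perf, nprocs, slowest):
    # Computing power of each group: relative processor speed times
    # the number of processors used from that group.
    power = [(p / slowest) * n for p, n in zip(perf, nprocs)]
    E_bar = sum(w * e for w, e in zip(power, E)) / sum(power)
    # Gamma_i = E_i - E_bar: positive for groups that should shrink
    # their domain, negative for groups that should grow it.
    return [e - E_bar for e in E]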
5 Parallel Simulated Annealing
PART uses simulated annealing to partition the mesh. Figure 7 shows the serial version of the
simulated annealing algorithm. This algorithm uses the Metropolis criteria (line 8 to 13 in Figure 7)
to accept or reject moves. The moves that reduce the cost function are accepted; the moves that
increase the cost function may be accepted with probability e^(-ΔE/T),
avoiding being trapped
in local minima. This probability decreases when the temperature is lowered. Simulated annealing
is computationally intensive; therefore, a parallel version of simulated annealing is used in the
parallel version of PART. There are three major classes of parallel simulated annealing [19]: serial-
like [32, 39], parallel moves [1], and multiple Markov chains [5, 21, 34]. Serial-like algorithms
essentially break up each move into subtasks and parallelize the subtasks (parallelizing lines 6 and
7 in Figure 7). For the parallel moves algorithms, each processor generates and evaluates moves
independently; the cost function calculation may be inaccurate since processors are not aware of moves
by other processors. Periodic updates are normally used to address the effect of cost function error.
Parallel moves algorithms essentially parallelize the for loop in Figure 7 (line 5 to 14). For the
multiple Markov chains algorithm, multiple simulated annealing processes are started on various
processors with different random seeds. Processors periodically exchange solutions and the best is
selected and given to all the processors to continue their annealing processes. In [5], the multiple
Markov chain approach was shown to be most effective for VLSI cell placement. For this reason,
the parallel version of PART uses the multiple Markov chain approach.
Given P processors, a straightforward implementation of the multiple Markov chain approach
would be initiating simulated annealing on each of the P processors with a different seed. Each
processor performs moves independently and then finally the best solution from those computed by
1. Get an initial solution S
2. Get an initial temperature T > 0
3. While stopping criteria not met {
4.   M = number of moves per temperature
5.   for i = 1 to M {
6.     Generate a random move
7.     Evaluate change in cost function: ΔE
8.     if (ΔE < 0) {
9.       accept this move, and update solution S
10.    } else {
11.      accept with probability e^(-ΔE/T)
12.      update solution S if accepted
13.    }
14.  } /* end for loop */
15.  Lower temperature T
16. } /* end while loop */
Figure 7: Simulated annealing.
all processors is selected. In this approach, however, simulated annealing is essentially performed
P times, which may result in a better solution but no speedup.
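For reference, a directly runnable Python transcription of Figure 7; the geometric cooling schedule on line 15 is our own choice, and cost and random_move are supplied by the caller.

import math, random

def simulated_annealing(S, cost, random_move, T, moves,
                        alpha=0.95, T_min=1e-3):
    best = S
    while T > T_min:                      # line 3: stopping criterion
        for _ in range(moves):            # lines 5-14
            S2 = random_move(S)           # line 6
            dE = cost(S2) - cost(S)       # line 7
            if dE < 0 or random.random() < math.exp(-dE / T):
                S = S2                    # lines 8-13: Metropolis accept
                if cost(S) < cost(best):
                    best = S
        T *= alpha                        # line 15: lower the temperature
    return best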
To achieve speedup, P processors perform an independent simulated annealing with a different
seed, but each processor performs only M/P moves (M is the number of moves performed by
the simulated annealing at each temperature). Processors exchange solutions at the end of each
temperature. The exchange of data occurs synchronously or asynchronously. In the synchronous
multiple Markov chain approach, the processors periodically exchange solutions with each other.
In the asynchronous approach, the client processors exchange solutions with a server processor. It
has been reported that the synchronous approach is more easily trapped in a local optimum than
the asynchronous one [21]; therefore, the parallel version of PART uses the asynchronous approach.
During solution exchange, if the client solution is better, the server processor is updated with the
better solution; if the server solution is better, the client gets updated with the better solution and
continues from there. Each processor exchanges solutions with the server processor at the end of
each temperature.
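A sketch of this asynchronous exchange using Python multiprocessing; the server is modeled as a shared dictionary, and one_temperature (which performs the M/P moves at one temperature) and cost are assumed to be supplied by the caller.

import random
import multiprocessing as mp

def client(seed, server, lock, cost, one_temperature,
           T0, alpha=0.95, T_min=1e-3):
    # One Markov chain per client; solutions are exchanged with the
    # server at the end of every temperature, never between clients.
    rng = random.Random(seed)
    S, T = server['best'], T0
    while T > T_min:
        S = one_temperature(S, T, rng)    # M/P moves at temperature T
        with lock:                        # asynchronous exchange
            if cost(S) < cost(server['best']):
                server['best'] = S        # client solution is better
            else:
                S = server['best']        # continue from server copy
        T *= alpha

# Driver sketch: mgr = mp.Manager(); server = mgr.dict(best=S0);
# lock = mgr.Lock(); then start P mp.Process(target=client, ...) workers.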
To ensure that each subdomain is connected, we check for disconnected components at the
end of PART. If any subdomain has disconnected components, the parallel simulated annealing
is repeated with a different random seed. This process continues until there are no disconnected
subdomains or the number of trials exceeds three. A warning message is given in the output
if there are disconnected subdomains.
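The connectivity check amounts to one breadth-first search per subdomain, as in this sketch:

from collections import deque

def subdomain_connected(adj, part, p):
    # adj: {vertex: iterable of neighbors}; part: {vertex: partition}.
    mine = [v for v in adj if part[v] == p]
    if not mine:
        return True
    seen, q = {mine[0]}, deque([mine[0]])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if part[w] == p and w not in seen:
                seen.add(w)
                q.append(w)
    return len(seen) == len(mine)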
6 Experiments
In this section, we present the results from two different experiments. The first experiment focuses
on the speedup of the parallel version of PART. The second experiment focuses on the quality of
the partitions generated with PART.
6.1 Speedup Results
Table 1: Parallel PART execution time (seconds): 8 partitions.
# of proc.   barth4   barth5   inviscid   labarre   spiral   viscous
4            44.4     54.7     46.6       32.4      41.2     53.7
             2.1      2.2      2.4        2.5       1.5      2.9
PART is used to partition six 2D irregular meshes with triangular elements: barth4 (11451
elem.), barth5, inviscid (6928 elem.), labarre (14971 elem.), spiral (1992 elem.), and
viscous (18369 elem.). The running time of partitioning the six irregular meshes into 8 and 100
Figure 8: Parallel PART speedup for 8 partitions.
subdomains are given in Tables 1 and 2, respectively. It is assumed that the subdomains will be
executed on a distributed system consisting of two IBM SPs, with an equal number of processors but
different processor performance. Further, the machines are interconnected via vBNS, for which the
performance of the network is given in Table 5 (discussed in Section 6.2). In each table, column 1 is
the number of client processors used by PART, and columns 2 to 7 are the running times of PART
in seconds for the different meshes. The solution quality of using two or more client processors is
within 5% of that of using one client processor. In this case, the solution quality is the estimate of
the execution time of WHAMS2D.
Figures 8 and 9 are graphical representations of the speedup of the parallel version of PART
relative to one client processor. The figures show that when the meshes are partitioned into 8
subdomains, superlinear speedup occurs in all cases. When the meshes are partitioned into 100
subdomains, superlinear speedup occurs only in the cases of the two smallest meshes, spiral and inviscid.
Other cases show slightly less than perfect speedup. This superlinear speedup is attributed to the
use of multiple client processors conducting a search, for which all the processors benefit from the
results. Once a good solution is found by any one of the clients, this information is given to other
clients quickly, thereby reducing the effort of continuing to search for a solution. The superlinear
Table 2: Parallel PART execution time (seconds): 100 partitions.
# of proc.   barth4   barth5   inviscid   labarre   spiral   viscous
4            3982.2   4666.4   3082.3     4273.5    1304.4   3974.0
             288.3    426.3    192.8      291.2     62.7     391.7
speedup results are consistent with those reported in [33].
6.2 Quality of Partition
6.2.1 Regular Meshes
PART was applied to an explicit, nonlinear finite element code, called WHAMS2D [6], that is used to
analyze elastic plastic materials. The code uses MPI built on top of Nexus for interprocessor
communication within a supercomputer and between supercomputers. Nexus is a runtime system
that allows for multiple protocols within an application. The computational complexity is linear
with the size of the problem.
The code was executed on the IBM SP machines located at Argonne National Laboratory and
the Cornell Theory Center. These two machines were connected by the Internet. Macro benchmarks
were used to determine the network and processor performance. The results of the network
performance analysis are given in Table 3. Further, experiments were conducted to determine that
the Cornell nodes were 1.6 times faster than the Argonne nodes.
The regular problem set consists of 3 regular meshes. The execution time is given for 100 time steps
corresponding to 0.005 seconds of application time. Generally, the application may execute for
10,000 to 100,000 time steps. The recorded execution time represents over 100 runs, taking the
data from the runs with standard deviation less than 3%. The regular problems were executed on
Figure 9: Parallel PART speedup for 100 partitions.
Table 3: Values of α and β for the different networks.
a machine configuration of 8 processors (4 at ANL IBM SP and 4 at CTC IBM SP).
Table 4 presents the results for the regular problems. Column 1 is the mesh configuration.
Column 2 is the execution time resulting from the conventional equal partitioning. In particular,
we used Chaco's spectral bisection. Column 3 is the result from the partitioning taken from the
end of the first step for which the variance in processor performance and computational complexity
are considered. Column 4 is the execution time resulting from the partitioning taken from the end
of the second step for which the variance in network performance is considered. The results in
Table 4 shows that approximately a 33-46% increase in efficiency can be achieved by balancing the
computational cost; another 5-16% efficiency increase can be achieved by considering the variance
in network performance. The small increase in efficiency by considering the network performance
Table 4: Execution time using the Internet on 8 processors: 4 at ANL, 4 at CTC.
Case             Chaco      Proc. Perf.   Local Retrofit
9 × 1152 mesh    102.99 s   78.02 s       68.81 s
  efficiency     0.46       0.61          0.71
  efficiency     0.47       0.61          0.68
36 × 288 mesh    103.88 s   73.21 s       70.22 s
  efficiency     0.46       0.67          0.70
is due to communication being a small component of the WHAMS2D application. However, recall
that the optimal increase in performance is 15% for the regular problem as described earlier.
The global optimization step, which is the last step of PART that balances execution time across
all supercomputers, did not give a significant increase in efficiency (it is not included in Table 4).
This is expected since the two supercomputers we used, the Argonne IBM SP and the Cornell
IBM SP, both have interconnection networks that have very similar performance, as indicated in
Table 3. The results indicate the performance gains achievable with each step in comparison to
conventional methods that evenly partition the mesh. Given that it is obvious that considering
processor performance results in significant gains, the following section on irregular meshes only
considers performance gains resulting from considering network performance.
6.2.2 Irregular Meshes
The experiments on irregular meshes were performed on the GUSTO testbed, which was not available
when we experimented on the regular meshes. This testbed includes two IBM SP machines, one
located at Argonne National Laboratory (ANL) and the other located at the San Diego Supercomputing
Center (SDSC). These two machines are connected by vBNS (very high speed Backbone
Network Service). We used Globus [15, 16] software to allow multimodal communication within the
application. Macro benchmarks were used to determine the network and processor performance.
The results of the network performance analysis are given in Table 5. Further, experiments were
conducted to determine that the SDSC SP nodes were 1.6 times as fast as the ANL
ones.
Table 5: Values of α and β for the different networks.
PART is used to partition five 2D irregular meshes with triangular elements: barth4 (11451
elem.), barth5, labarre (14971 elem.), viscous (18369 elem.), and inviscid (6928 elem.)
(this version is called PART without restriction). A slightly modified version of PART (called PART with
restriction) is used to partition the meshes so that only one processor has remote communication
in each group. METIS 3.0 [26] is used to generate partitions that take into consideration processor
performance (each processor's compute power is used as one of the inputs).
These three partitioners are used to identify the performance impact of considering heterogeneity
of networks in addition to that of processors. Further, the three partitioners highlight the
difference when forcing remote communication to occur on one processor in a group versus having
multiple processors with remote communication in a group. We consider 6 configurations of the
two machines: 4 at ANL and 4 at SDSC, 8 at ANL and 8 at SDSC, and 20 at ANL and 20 at SDSC.
The two groups correspond to the two IBM SPs at ANL and SDSC. We used up to 20 processors
from each SP due to limitations in co-scheduling computing resources. The execution time is given
for 100 time steps. The recorded execution time represents an average of 10 runs, and the standard
deviation is less than 3%.
Tables 6 to 8 show the experimental results from the 3 configurations. Column one
identifies the irregular meshes and the number of elements in each mesh (included in parentheses).
Column two is the execution time resulting from the partitions from PART with the restriction that
only one processor per group entails remote communication. For Columns 2 to 4, the number λ
indicates the number of processors that have remote communication in a group. Column three
is similar to Column two except that the partition does not have the restriction that remote
communication be on one processor. Column four is the execution time resulting from METIS
which takes computing power into consideration (each processor's compute power is used as one of
Table 6: Execution time using the vBNS on 8 processors: 4 at ANL, 4 at SDSC.
Mesh                    PART w/ restriction   PART w/o restriction   Proc. Perf. (METIS)
  efficiency
viscous (18369 elem.)   150.0s (λ=1)          169.0s (λ=3)           170.0s (λ=3)
  efficiency            0.86                  0.75                   0.75
labarre (14971 elem.)   133.0s (λ=1)          142.0s (λ=2)           146.0s (λ=3)
  efficiency            0.79                  0.73                   0.71
  efficiency            0.79                  0.68                   0.68
inviscid (6928 elem.)   73.2s (λ=1)           85.5s (λ=3)            88.5s (λ=3)
  efficiency            0.66                  0.56                   0.55
the inputs to the METIS program).
The results show that by using PART without restrictions, a slight decrease (1-3%) in execution
time is achieved as compared to METIS. By forcing all the remote communication onto one
processor, the retrofit step can achieve a more significant reduction in execution time. The results
in Tables 6 to 8 show that efficiency is increased by up to 36% as compared to METIS, and
the execution time is reduced by up to 30% as compared to METIS. This reduction comes from
the fact that even on a high speed network such as the vBNS, the difference in message start-up
cost between remote and local communication is very large. From Table 5, we see this difference is two
orders of magnitude for message start-up as compared to approximately one order of magnitude
for bandwidth. Restricting remote communication to one processor allows PART to redistribute
the load among more processors, thereby achieving close to the ideal reduction in execution time.
7 Previous Work
The problem of domain partitioning for finite element meshes is equivalent to partitioning the
graph associated with the finite element mesh. Graph partitioning has been proven to be an
Table 7: Execution time using the vBNS on 16 processors: 8 at ANL, 8 at SDSC.
Mesh                    PART w/ restriction   PART w/o restriction   Proc. Perf. (METIS)
  efficiency            0.72                  0.62                   0.59
viscous (18369 elem.)   82.9s (λ=1)           100.8s (λ=4)           106.0s (λ=5)
  efficiency            0.77                  0.64                   0.61
labarre (14971 elem.)   75.8s (λ=1)           83.7s (λ=3)            88.6s (λ=3)
  efficiency            0.69                  0.62                   0.59
  efficiency            0.74                  0.50                   0.48
inviscid (6928 elem.)   42.2s (λ=1)           62.8s (λ=3)            67.2s (λ=4)
  efficiency            0.57                  0.39                   0.36
NP-complete problem [17]. Many good heuristic static partitioning methods have been proposed.
Kernighan-Lin [31] proposed a locally optimized partitioning method. Farhat [13, 14] proposed an
automatic domain decomposer based on Greedy algorithm. Berger and Bokhari [4] proposed Recursive
Coordinate Bisection (RCB) which utilizes spatial nodal coordinate information. Nour-Omid
et al. [35, 40] proposed Recursive Inertial Bisection (RIB). Simon [37] proposed Recursive Spectral
Bisection (RSB) which computes the Fiedler vector for the graph using the Lanczos algorithm and
then sorts vertices according to the size of the entries in the Fiedler vector. Recursive Graph Bisection
(RGB) was proposed by George and Liu [18], which uses the SPARSPAK RCM algorithm to compute
a level structure and then sort vertices according to the RCM level structure. Barnard et al. in
[2] proposed a multilevel version of RSB which is faster. Hendrickson and Leland [23, 22] also
reported a similar multilevel partitioning method. Karypis and Kumar [27, 28, 30] proposed a new
coarsening heuristic to improve the multilevel method.
Most of the aforementioned decomposition methods are available in one of three automated
tools: Chaco [22], METIS [26, 29] and TOP/DOMDEC [38]. Chaco, the most versatile, implements
inertial, spectral, Kernighan-Lin, and multilevel algorithms. These algorithms are used to
Table 8: Execution time using the vBNS on 40 processors: 20 at ANL, 20 at SDSC.
Mesh                    PART w/ restriction   PART w/o restriction   Proc. Perf. (METIS)
  efficiency            0.69                  0.54                   0.45
viscous (18369 elem.)   38.7s (λ=1)           58.6s (λ=5)            64.9s (λ=7)
  efficiency            0.67                  0.44                   0.40
labarre (14971 elem.)   33.8s (λ=1)           51.2s (λ=3)            53.5s (λ=6)
  efficiency            0.62                  0.41                   0.40
  efficiency            0.39                  0.34                   0.32
inviscid (6928 elem.)   33.5s (λ=1)           34.7s (λ=4)            46.8s (λ=5)
  efficiency
recursively bisect the problem into equal sized subproblems. METIS uses the multilevel method for fast partitioning
of sparse matrices, using a coarsening heuristic to provide the speed. TOP/DOMDEC
is an interactive mesh partitioning tool. All these tools produce equal size partitions. These tools
are applicable to systems with the same processors and one interconnection network. Some tools,
such as METIS, can produce partitions with unequal weights. However, none of these tools can
take network performance into consideration in the partitioning process. For this reason, these
tools are not applicable to distributed systems.
Crandall and Quinn [7, 8, 9, 10, 11, 12] developed a partitioning advisory system for networks
of workstations. The advisory system has three built-in partitioning methods (contiguous row,
contiguous point, and block). Given information about the problem space, the machine speed, and
the network, the advisory system provides a ranking of the three partitioning methods. The advisory
system takes into consideration the variance in processor performance among the workstations. The
problem, however, is that linear computational complexity is assumed for the application. This
is not the case with implicit finite element problems, which are widely used. Further, variance in
network performance is not considered.
8 Conclusion
In this paper, we addressed issues in the mesh partitioning problem for distributed systems. These
issues include comparison metric, efficiency, and cut sets. We present a tool, PART, for automatic
mesh partitioning for distributed systems. The novel feature of PART is that it considers heterogeneities
in both the application and the distributed system. The heterogeneities in the distributed
system include processor and network performance; the heterogeneities in the application include
computational complexity. We also demonstrate the use of a parallel version of PART for distributed
systems. The novel part of the parallel PART is that it uses the asynchronous multiple
Markov chain approach of parallel simulated annealing for mesh partitioning. The parallel PART
is used to partition 6 irregular meshes into 8, 16, and 100 subdomains using up to 64 client processors
on an IBM SP2 machine. Results show superlinear speedup in most cases and nearly perfect
speedup for the rest.
We used Globus software to run an explicit, 2-D finite element code using mesh partitions
from the parallel PART. Our testbed includes two geographically distributed IBM SP machines.
Experimental results are presented for 3 regular meshes and 4 irregular finite element meshes for the
WHAMS2D application executing on a distributed system consisting of two IBM SPs. The results
from the regular problems indicate a 33-46% increase in efficiency when processor performance is
considered as compared to even partitioning; the results also indicate an additional 5-16% increase
in efficiency when network performance is considered. The results from the irregular problems indicate
a 38% increase in efficiency when processor and network performance are considered as compared
to even partitioning. Experimental results from the irregular problems also indicate up to a 36%
increase in efficiency compared with using partitions that only take processor performance into
consideration. This improvement comes from the fact that even on a high speed network such as
the vBNS, the message start-up costs of remote and local communication still differ greatly.
Appendix 1: Proof of NP-Completeness of the Mesh Partitioning Problem for Distributed
Systems
Theorem 1 The mesh partitioning problem for distributed systems is NP-complete.
Proof 1 We transform a proven NP-Complete problem, MINIMUM SUM OF SQUARES [17], to
the Partition problem for distributed systems. Let set A = {a_1, ..., a_n}, with size s(a) ∈ Z+
for each a ∈ A, be an arbitrary instance of MINIMUM SUM OF SQUARES. We shall construct a
graph G = (V, E) such that the desired partition exists for G if and only if A has a minimum
sum of squares.
The basic units of the MINIMUM SUM OF SQUARES instance are a_1, ..., a_n. The local
replacement substituted for each a_i ∈ A is the collection E_i of 3 edges shown in Figure 10. Therefore,
G = (V, E) is defined as the following:
V = {a_i, a_i[1], a_i[2] : 1 ≤ i ≤ n}, E = ∪_{i=1}^{n} E_i, where E_i = {(a_i, a_i[1]), (a_i, a_i[2]), (a_i[1], a_i[2])}.
Figure 10: Local replacement for a_i ∈ A for transforming MINIMUM SUM OF SQUARES to the
Partition problem for distributed systems.
It is easy to see this instance of Partition problem for distributed systems can be constructed in
polynomial time from the MINIMUM SUM OF SQUARES instance.
If A_1, ..., A_k are the disjoint k partitions of A such that the sum of squares is minimized,
then the corresponding k disjoint partitions of V is given by taking {a_i[1], a_i[2], a_i} for each a_i in
every subset of A. We also restrict the cost function f_i to be the same as is in MINIMUM SUM
OF SQUARES:
f(V_i) = Σ_{a ∈ A_i} s(a), where {a[1], a[2], a} ⊆ V_i and a ∈ A_i. This ensures that the partition minimizes the sum of squares of the cost
function.
Conversely, if V_1, ..., V_k is a disjoint k partition of G with minimum sum of squares of the
cost function, the corresponding disjoint k partition of set A is given by choosing those vertices a_i
such that {a_i[1], a_i[2], a_i} ⊆ V_j. Hence the minimum sum of squares for the cost
function over k disjoint partitions ensures that the sum of squares of s(a) on k disjoint sets of A is
also minimized. We conclude that the Partition problem for distributed systems is NP-Complete.
Appendix 2: Nomenclature
E_i      - Estimated execution time on processor i.
E_comp_i - Estimated computational time on processor i.
E_comm_i - Estimated communication time on processor i.
F_i      - Performance of processor i as measured by a computation kernel.
α_L      - Per-message cost of local communication.
α_R      - Per-message cost of remote communication.
β_L      - Per-byte cost of local communication.
β_R      - Per-byte cost of remote communication.
n        - Size of message.
C_L      - Local communication time for a processor.
C_R      - Remote communication time for a processor.
Δ        - The difference between C_R and C_L for one processor.
Δ_i      - The difference between C_R and C_L for processor i.
λ        - The maximum number of processors in a group that have
           both local and remote communication.
c        - Coefficient of computational complexity.
μ        - Parameter used to equalize the contribution of the computation
           and communication to execution time.
N_i      - Number of elements in partition i.
P        - Number of processors in the system.
P_i      - Number of processors in group i.
G        - Number of processors in a particular group (same as P_i).
S        - Number of groups in the system.
S_i      - The i-th group in the system.
R_i      - The ratio of the speed of processors in S_i relative to the
           slowest processor in the system.
Zhiling Lan , Valerie E. Taylor , Greg Bryan, Dynamic load balancing of SAMR applications on distributed systems, Proceedings of the 2001 ACM/IEEE conference on Supercomputing (CDROM), p.36-36, November 10-16, 2001, Denver, Colorado | simulated annealing;distributed systems;mesh partitioning |
506198 | Speculative Versioning Cache. | Dependences among loads and stores whose addresses are unknown hinder the extraction of instruction level parallelism during the execution of a sequential program. Such ambiguous memory dependences can be overcome by memory dependence speculation which enables a load or store to be speculatively executed before the addresses of all preceding loads and stores are known. Furthermore, multiple speculative stores to a memory location create multiple speculative versions of the location. Program order among the speculative versions must be tracked to maintain sequential semantics. A previously proposed approach, the Address Resolution Buffer (ARB) uses a centralized buffer to support speculative versions. Our proposal, called the Speculative Versioning Cache (SVC), uses distributed caches to eliminate the latency and bandwidth problems of the ARB. The SVC conceptually unifies cache coherence and speculative versioning by using an organization similar to snooping bus-based coherent caches. Our evaluation for the Multiscalar architecture shows that hit latency is an important factor affecting performance and private cache solutions trade-off hit rate for hit latency. | Introduction
Modern microprocessors extract instruction level parallelism
(ILP) from sequential programs by issuing instructions
from an active instruction window. Data dependences
among instructions, and not the original program order, determine
when an instruction may be issued from the window. Dependences involving register data are detected easily
because register designators are completely specified
within instructions. However, dependences involving memory
data (e.g. between a load and a store or two stores) are
ambiguous until the memory addresses are computed.
A straightforward solution to the problem of ambiguous
memory dependences is to issue loads and stores only after
their addresses are determined. Furthermore, a store is
not allowed to complete and commit its result to memory
until all preceding instructions are known to be free of ex-
ceptions. Each such store to a memory location creates a
speculative version of that location. These speculative versions
are held in buffers until they can be committed. Multiple
speculative stores to the same location create multiple
versions of the location. To improve performance, loads
are allowed to bypass buffered stores, as long as they are
to different addresses. If a load is to the same address as a
buffered store, it can use data bypassed from the store when
the data becomes available. An important constraint of this
approach is that a load instruction cannot be issued until the
addresses of all the preceding stores are determined. This
approach may diminish ILP unnecessarily, especially in the
common case where the load is not dependent on preceding
stores.
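As a rough sketch of this conservative policy (all names here are ours, not from any published design), a load issues only once every older store address is known, and bypasses from the youngest older store to the same address:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     is_store;
    bool     addr_known;
    uint64_t addr;
    uint64_t data;   /* store data, once it becomes available */
} LsqEntry;

/* Entries q[0..load_idx-1] are older than the load at load_idx. */
bool can_issue_load(const LsqEntry q[], int load_idx) {
    for (int i = 0; i < load_idx; i++)
        if (q[i].is_store && !q[i].addr_known)
            return false;   /* an ambiguous older store blocks the load */
    return true;
}

/* Youngest older store to the same address, or -1 (read memory). */
int bypass_source(const LsqEntry q[], int load_idx, uint64_t addr) {
    for (int i = load_idx - 1; i >= 0; i--)
        if (q[i].is_store && q[i].addr == addr)
            return i;       /* forward q[i].data when it is ready */
    return -1;
}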
More aggressive uniprocessor implementations issue
load instructions as soon as their addresses are known, even
if the addresses of all previous stores may not be known.
These implementations employ memory dependence speculation
[8] and predict that a load does not depend on previous
stores. Furthermore, one can also envision issuing and
computing store addresses out of order. Such memory dependence
speculation enables higher levels of ILP, but more
advanced mechanisms are needed to support this specula-
tion. These aggressive uniprocessors dispatch instructions
from a single instruction stream, and issue load and store
instructions from a common set of hardware buffers (e.g.
reservation stations). Using a common set of buffers allows
the hardware to maintain program order of the loads and
stores via simple queue mechanisms, coupled with address
comparison logic. The presence of such queues provides
support for a simple form of speculative versioning.
However, proposed next generation processor designs
use replicated processing units that dispatch and/or issue
instructions in a distributed manner. These future approaches
partition the instruction stream into sub streams
called tasks [11] or traces [10]. Higher level instruction
control units distribute the tasks to the processors for execution
and, the processors execute the instructions within
each task leading to a hierarchical execution model. Proposed
next generation multiprocessors [9, 12] that provide
hardware support for dependence speculation also use such
execution models. A hierarchical execution model naturally
leads to memory address streams with a similar hierarchical
structure. In particular, each individual task generates its
own address stream, which can be properly ordered (dis-
ambiguated) within the processor that generates it, and at
the higher level, the multiple address streams produced by
the processors must also be properly ordered. It is more
challenging to support speculative versioning for this execution
model than a superscalar execution model because a
processor executes loads and stores without knowing those
executed by other processors.
The Address Resolution Buffer [3] (ARB) provides speculative
versioning support for such hierarchical execution
models. Each entry in the ARB buffers all versions of the
same memory location. However, there are two significant
performance limitations of the ARB:
1. The ARB is a single shared buffer connected to the multiple
processors and hence, every load and store incurs the
latency of the interconnection network. Also, the ARB
has to provide sufficient bandwidth for all the processors.
2. When a task completes all its instructions, the ARB commits
its speculative state into the architected storage (or
copies all the versions created by this task to the data
cache). Such write backs generate bursty traffic and can
increase the time to commit a task, which delays the issue
of a new task to that processor and lowers the overall
performance.
We propose a new solution for speculative versioning
called the Speculative Versioning Cache [2, 5] (SVC), for
hierarchical execution models. The SVC comprises a private
cache for each processor, and the system is organized
similar to a snooping bus-based cache coherent Symmetric
Multiprocessor (SMP). Memory references that hit in the
private cache do not use the bus as in an SMP. Task commits
do not write back speculative versions en masse. Each
cache line is individually handled when it is accessed the
next time.
Section 2 introduces the hierarchical execution model
briefly and identifies the issues in providing support for
speculative versioning for such execution models. Section 3
presents the SVC as a progression of designs to ease under-
standing. Section 4 gives a preliminary performance evaluation
of the SVC to highlight the importance of a private
cache solution for speculative versioning. We derive conclusions
in section 5.
2. Speculative versioning
First, we discuss the issues involved in providing support
for speculative versioning for current generation processors.
Second, we describe the hierarchical execution model used
by the proposed next generation processors. Third, we discuss
the issues in providing support for speculative versioning
for this execution model and use examples to illustrate
them. Finally, we present similarities between multiprocessor
cache coherence and speculative versioning for the hierarchical
execution model and use this unification to motivate
our new design, the speculative versioning cache.
Speculative versioning involves tracking the program order
among the multiple buffered versions of a location to
guarantee the following sequential program semantics:
ffl A load must eventually read the value created by the most
recent store to the same location. This requires that (i)
the load must be squashed and re-executed if it executes
before the store and incorrectly reads the previous version
and, (ii) all stores (to the same location) that follow the
load in program order must be buffered until the load is
executed.
ffl A memory location must eventually have the correct version
independent of the order of the creation of the ver-
sions. Consequently, the speculative versions of a location
must be committed to the architected storage in program
order.
2.1. Hierarchical execution model
In this execution model, the dynamic instruction stream
of a program is partitioned into fragments called tasks.
These tasks form a sequence corresponding to their order
in the dynamic instruction stream. A higher level control
unit predicts the next task in the sequence and assigns it
for execution to a free processor. Each processor executes
the instructions in the task assigned to it and buffers the
speculative state created by the task. The Wisconsin Multiscalar
[11] is an example architecture that uses the hierarchical
execution model.
When a task misprediction is detected, the speculative
state of all the tasks in the sequence including and after
the incorrectly predicted task are invalidated 1 and the corresponding
processors are freed. This is called a task squash.
The correct tasks in the sequence are then assigned for exe-
cution. When a task prediction has been validated, it commits
by copying the speculative buffered state to the architected
storage. Tasks commit one by one in the order of the
sequence. Once a task commits, its processor is free to execute
a new task. Since the tasks commit in program order,
tasks are assigned to the processors in program order.
1 An alternative model for recovery invalidates only the dependent chains of instructions by maintaining information at a finer granularity. This paper assumes the simpler model.
Figure 1: Task commits and squashes: example.
Figure 1 illustrates task commits and task squashes. Initially, tasks 0, 1, 99 and 3 are predicted and speculatively executed in parallel by the four processors as shown in Figure 1(a). When the misprediction of task 99 is detected,
tasks 99 and 3 are squashed and their buffered states are
invalidated. New tasks 2 and 3 are then executed by the
processors as show in Figure 1(b). Tasks that are currently
executing are said to be active. When task 0 completes exe-
cution, the corresponding processor is freed and task 4 is assigned
for execution as shown in Figure 1(c). The program
order, represented by the sequence among the tasks, enforces
an implicit total order among the processors; the arrows
show this order. When the tasks are speculatively executed
in parallel, the multiple speculative load/store streams
from the processors are merged in arbitrary order. Providing
support for speculative versioning for such execution
models requires mechanisms that establish program order
among these streams. The following subsections outline
how the order is established using the sequence among the
tasks.
2.1.1. Loads A task executes a load as soon as its address
is available, speculating that stores from previous tasks in
the sequence do not write to the same location. The closest
previous version of the location is supplied to the load; this
version could have been created either by the same task or
by a previous task. A load that is supplied a version from a
previous task is recorded to indicate a use before a potential
definition. If such a definition (a store to the same location
from a previous task) occurs, the load was supplied with an
incorrect version and memory dependence was violated.
2.1.2. Stores When a task executes a store to a memory
location, it is communicated to all later active tasks in the
sequence 2 . When a task receives a new version of a location
from a previous task, it squashes if a use before definition
is recorded for that location - a memory dependence violation
is detected. All tasks after the squashed task are also
squashed as on a task misprediction (simple squash model).
2 In reality, the store has to be communicated only until the task that has created the next version, if any, of the location.
2.1.3. Task commits and squashes The oldest active task
is non-speculative and can commit its speculative memory
state (versions created by stores from this task) to architected storage. Committing a version involves logically
copying the versions from the speculative buffers to the architected
storage (data cache). As we assume the simple
task squash model, the speculative state associated with a
task is invalidated when it is squashed.
2.2. Examples for speculative versioning
Figure
2 illustrates the issues involved in speculative versioning
using an example program and a sample execution
of the program on a four processor hierarchical system. We
use the same example in the later sections to explain the
SVC design. Figure 2(a) shows the loads and stores in the
example program and the task partitioning. Other instructions
are not of direct relevance here. Figure 2(b) shows
two snapshots of the memory system during a sample execution
of the program. Each snapshot contains four boxes,
one for each active task and shows the load or store that has
been executed by the corresponding task. The program order
among the instructions translates to a sequence among
the tasks which imposes a total order among the processors
executing them; solid arrowheads show the program order
and hollow arrowheads show the execution time order in all
the examples.
Figure 2: Speculative versioning example. (a) The example program: a sequence of tasks issuing the stores st 0, A; st 1, A; st 3, A; st 5, A and loads ld r, A to the single address A. (b) Two snapshots of a sample execution in which the load by task 2 is supplied an out-of-order version, causing a dependence violation.
The first snapshot is taken just before task 1 executes
a store to address A. Tasks 0 and 3 have already stored 0 and 3 to A, and task 2 has executed a load to A. The load is supplied the version created and buffered by task 0. But, according to the original program, this load must
be supplied the value 1 created by the store from task 1,
i.e., the store to load dependence has been violated. This
violation is detected when task 1 stores to address A and
all the tasks including and after task 2 are squashed and re-
executed. The second snapshot is taken after the tasks have
been squashed and re-started.
2.3. Coherence and speculative versioning
The actions performed on memory accesses and task
commits and squashes are summarized in Table 1. The
functionality in this table requires the hardware to track the
active tasks or processors that executed a load/store to a location
and the order among the different copies/versions of
this location. Cache coherent Symmetric MultiProcessors
use similar functionality to track the caches that have
a copy of every memory location. SMPs, however, need not
track the order among these copies since all the copies are
of a single version.
Event    Actions
Load     Record use before definition by the task; supply the closest previous version.
Store    Communicate store to later tasks; later tasks look for memory dependence violations.
Commit   Write back buffered versions created by the task to main memory.
Squash   Invalidate buffered versions created by the task.
Table 1: Versioning: events and actions.
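Table 1 can be read as four event handlers over a per-location list of versions kept in program (task) order. The fragment below is our schematic C rendering of the store case, the one that detects violations; all names are invented:

#include <stddef.h>

typedef struct Version {
    int             task;            /* task that created this version */
    int             value;
    int             use_before_def;  /* a load used an earlier version */
    struct Version *next;            /* next version in program order  */
} Version;

/* Store by 'task': any later task that already loaded this location
 * (use before definition) has violated the dependence and squashes. */
void on_store(const Version *versions, int task, void (*squash)(int)) {
    for (const Version *v = versions; v != NULL; v = v->next)
        if (v->task > task && v->use_before_def)
            squash(v->task);
}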
SMPs typically use snooping [4] to implement a Multiple
Reader/Single Writer protocol, which uses a coherence
directory that is a collection of sets, each of which tracks the
sharers of a line. In a snooping bus based SMP, the directory
is typically implemented in a distributed fashion comprising
state bits associated with each cache line. On the
other hand, the Speculative Versioning Cache (SVC) implements
a Multiple Reader/Multiple Writer protocol that
tracks copies of multiple speculative versions of each memory
location. This protocol uses a version directory that
maintains ordered sets for each line, each of which tracks
the program order among the multiple speculative versions
of a line. This ordered set or list, called the Version Ordering
List (VOL), can be implemented in several different
ways - the SVC, proposed in this paper, uses explicit
pointers in each line to implement it as a linked list (like in
SCI [1]). The following sections elaborate on a design that
uses pointers in each cache line to maintain the VOL.
The private cache organization of the SVC makes it
a feasible memory system for proposed next generation
single chip multiprocessors that execute sequential programs
on tightly coupled processors using automatic parallelization
[9, 12]. Previously, ambiguous memory dependences
limited the range of programs chosen for automatic
parallelization. The SVC provides hardware support
to overcome ambiguous memory dependences and enables
more aggressive automatic parallelization of sequential programs
3. SVC design
In this section, we present the Speculative Versioning
Cache (SVC) as a progression of designs to ease under-
standing. Each design improves the performance over the
previous one by tracking more information. We begin with
a brief review of snooping bus-based cache coherence and
then present a base SVC design which provides support for
speculative versioning with minimal modifications to the
cache coherence scheme. We then highlight the performance
bottlenecks in the base design and introduce optimizations
one by one in the rest of the designs.
3.1. Snooping bus based cache coherence
Figure 3 shows a 4-processor SMP with private L1 caches that uses a snooping bus to keep the caches consistent. Each cache line comprises an address tag that identifies
the data that is cached, the data that is cached, and two
bits (valid and store) representing the state of the line. The
valid (V ) bit is set if the line is valid. The store (S) or dirty
bit is set when a processor stores to the line.
Figure 3: SMP coherent cache. Private L1 caches (each line holds Tag, V, S, Data; V: valid, S: store or dirty) connect through a bus arbiter and snooping bus to the next level memory.
A cache line is in one of three states: Invalid, Clean and
Dirty. A request (load or store) from a processor to its L1
cache hits if a valid line with the requested tag is in an appropriate
state; otherwise, it misses. Cache misses issue bus
requests while cache hits do not. More specifically, a load
from a clean or dirty line and a store to a dirty line result in
cache hits. Otherwise, the load(store) misses and the cache
issues a BusRead(BusWrite) request. The L1 caches and
the next level memory snoop the bus on every request. If
a cache has a valid line with the requested tag, it issues an
appropriate response according to a coherence protocol. A
store to a clean line misses and the cache issues a BusWrite
request. An invalidation-based coherence protocol invalidates
copies of this line in all other caches, if any. This
protocol allows a dirty line to be present in only one of
the caches. However, a clean line can be present in multiple
caches simultaneously. The cache with the dirty line
supplies the data on a BusRead request. A cache issues a
BusWback request to cast out a dirty line on a replacement.
This simple protocol can be extended by adding an exclusive
bit to the state of each line to cut down traffic on the
shared bus. If a cache line has the exclusive bit set, then it
has the only valid copy of the line and can perform a store
to that line locally. The SVC designs we discuss in the following
sections also use an invalidation-based protocol.
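A minimal state machine for this three-state protocol might look as follows (our sketch; the exclusive-bit extension is omitted):

typedef enum { INVALID, CLEAN, DIRTY } LineState;
typedef enum { NONE, BUS_READ, BUS_WRITE } BusReq;

/* Bus request a cache must issue for a processor load or store. */
BusReq on_cpu_access(LineState *s, int is_store) {
    if (!is_store && *s != INVALID) return NONE;   /* load hit  */
    if (is_store && *s == DIRTY)    return NONE;   /* store hit */
    *s = is_store ? DIRTY : CLEAN;                 /* state after the fill */
    return is_store ? BUS_WRITE : BUS_READ;
}

/* Snoop side: a BusWrite invalidates every other copy; a BusRead makes
 * a dirty copy supply (flush) the data and revert to clean. */
void on_snoop(LineState *s, BusReq req) {
    if (req == BUS_WRITE)                    *s = INVALID;
    else if (req == BUS_READ && *s == DIRTY) *s = CLEAN;
}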
Figure 4: Cache coherence example. Four snapshots of the line at address A in caches W-Z (state and data shown per cache), illustrating a BusRead with flush on the load ld r, A, a BusWrite with invalidations on the store st 1, A, and a BusWback flush on replacement.
Figure 4 shows snapshots of the cache lines with tag or address A in an SMP with four processors, W, X, Y, and
Z. The state of the cache line is shown in a box corresponding
to that cache. An empty box corresponding to a
cache represents that the line is not present in that cache.
The first snapshot is taken before processor Z issues a load
from A and misses in its private cache. The cache issues a
BusRead request and cache X supplies the data on the bus.
The second snapshot shows the final state of the lines; they
are clean. Later, processor Y issues a BusWrite request to
perform a store to A. The clean copies in caches X and Z
are invalidated and the third snapshot shows the final state.
When cache Y chooses to replace this line, it casts out
the line to memory by issuing a BusWback request; the final
state is shown in the fourth snapshot; only the next level
memory contains a valid copy of the line.
3.2. Base SVC design
The organization of the private L1 caches in the SVC
design is shown in Figure 5; all the SVC designs use the
same organization. The base design minimally modifies the
memory system of the snooping bus-based cache coherent
SMP to support speculative versioning for processors based
on the hierarchical execution model. We assume that memory
dependences among loads and stores executed by an individual
processor are ensured by a conventional load-store
queue; our design guarantees program order among loads
and stores from different processors. The base design also
assumes that the cache line size is one word; a later design
relaxes this assumption. First, we introduce the modifications
to the SMP coherent cache, and then discuss how the
individual operations listed in Table 1 are performed.
1. Each cache line maintains an extra state bit called the load
(L) bit, as shown in Figure 6. The L bit is set when a task
loads from a line before storing to the line - a potential violation of memory dependence in case a previous task stores to the same line.
Figure 5: Speculative versioning cache. Private L1 caches connect through a bus arbiter and snooping bus to the next level memory; the Version Control Logic (VCL) receives the states of snooped lines from each cache together with task assignment information and returns VCL responses to each cache.
Figure 6: Base SVC design: structure of a line (Tag, V, S, L, Pointer, Data; V: valid, S: store, L: load).
2. Each cache line maintains a pointer that identifies the
processor (or L1 cache) that has the next copy/version,
if any, in the Version Ordering List (VOL) for that line.
Thus, the VOL for a line is stored in a distributed fashion
among the private L1 cache lines. It is important to note
that the pointer identifies a processor rather than a task.
Storing the VOL explicitly in the cache lines using pointers
may not be necessary for the base design. However, it
is necessary to explicitly store the VOL for the advanced
designs and we introduce it in the base design to ease the
transition to the advanced designs.
3. The SVC uses combinational logic called the Version
Control Logic (VCL) that provides support for speculative
versioning using the VOL. A processor request that
hits in the private L1 cache does not need to consult the
VOL and hence does not issue a bus request; the VCL
is also not used. Cache misses issue a bus request that
is snooped by the L1 caches and the next level memory.
The states of the requested line in each L1 cache and the
VOL are supplied to the VCL. The VCL uses the bus re-
quest, the program order among the tasks, and the VOL
to compute appropriate responses for each cache. Each
cache line is updated based on its initial state, the bus
request and the VCL response. A block diagram of the
Version Control Logic is shown in Figure 5. For the base
design, the VCL responses are similar to that of the disambiguation
logic in the ARB [3]. The disambiguation
logic searches for previous or succeeding stages in a line
to execute a load or store, respectively.
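Putting the three modifications together, the per-line state of the base design can be pictured as the following C structure (field names are ours):

typedef struct {
    unsigned tag;
    unsigned valid : 1;   /* V */
    unsigned store : 1;   /* S: line holds a speculative version     */
    unsigned load  : 1;   /* L: loaded before stored by this task    */
    int      next;        /* cache id of the next copy/version in the
                             VOL for this address; -1 if none        */
    unsigned data;        /* one word in the base design             */
} SvcLine;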
3.2.1. Loads Loads are handled in the same way as in an
SMP except that the L bit is set if the line was initially in-
valid. On a BusRead request, the VCL locates the closest
previous version by searching the VOL in the reverse order
beginning from the requestor; this version, if any, is supplied
to the requestor. If a previous version is not buffered
in any of the L1 caches, the next level memory supplies the
data. Task assignment information is used to determine the
position of the requestor in the VOL. The VCL can search
the VOL in reverse order because it has the entire list available
and the list is short.
0.3: Tasks
Data
Pointer
State
W.Z: Caches
Z
ld r, A
Execution time order
Program order
Figure
7: Base SVC design: example load.
We illustrate the load executed by task 2 to address A in
the example program. Figure 7 shows two snapshots: one
before the load executes and one after the load completes.
Each box shows the line with tag or address A in an L1
cache (the valid bit is not explicitly shown). The number
adjacent to a box gives a processor/cache identifier and a
task identifier. The processor identifiers are used by the explicit
pointers in each line to represent the VOL, whereas,
the task identifiers serve only to ease the explanation of the
examples. Task 2 executes a load that misses in cache Z and
results in a bus request. The VCL locates cache Z in the
VOL for address A using program order and then searches
the VOL in the reverse order to find the correct version to
supply, which is the version in cache Y (the version created
by task 1).
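Using the SvcLine structure sketched earlier, the VCL's BusRead handling reduces to a reverse walk over the VOL; order[] below is the VOL in program order and pos is the requestor's position in it, both assumptions of this sketch:

/* lines[c] is cache c's line for the requested address.  Returns the
 * id of the supplier cache, or -1 if next level memory must supply. */
int vcl_bus_read(const SvcLine lines[], const int order[], int pos,
                 unsigned *data_out) {
    for (int i = pos - 1; i >= 0; i--) {       /* reverse VOL search */
        const SvcLine *l = &lines[order[i]];
        if (l->valid && l->store) {            /* closest previous version */
            *data_out = l->data;
            return order[i];
        }
    }
    return -1;
}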
3.2.2. Stores The SVC performs more operations on a
store miss as compared to a cache coherent SMP. When a
BusWrite request is issued on a store miss, the VCL sends
invalidation responses to the caches beginning from the re-
questor's immediate successor (in task assignment order) to
the cache that has the next version (including it, if it has
the L bit set). This invalidation response allows for multiple
versions of the same line to exist and also serves to detect
memory dependence violations. A cache sends a task
squash signal to its processor when it receives an invalidation
response from the VCL and the L bit is set in the line.
Figure 8: Base SVC design: example stores. Four snapshots of the lines at address A, before and after the stores st 3, A and st 1, A; the shaded boxes mark the squashed tasks.
We illustrate the stores executed by tasks 1 and 3 in the
example program. Figure 8 shows four snapshots of the
cache lines with address A. The first snapshot is taken before
task 3 executes a store that results in a BusWrite re-
quest. Since task 3 is the most recent in program order, the
store by task 3 does not result in any invalidations. Note
that a store to a line does not invalidate all other cache lines
(unlike an SMP) to allow for multiple versions of the same
line. The second snapshot is taken after the store from task
3 completes and before task 1 executes its store. Based on
task assignment information, the VCL sends an invalidation
response to each cache from the one after cache Y until the
one before cache W , which has the next version of the line
(cache W is not included since it does not have the L bit set); that is, the VCL sends an invalidation response to cache Z. But, the
load executed by task 2, which follows the store by task 1
in program order, has already executed. Cache Z detects a
memory dependence violation since the L bit is set when
it receives an invalidation response from the VCL. Tasks 2
and 3 are squashed as shown in the third snapshot by shaded
boxes. The final snapshot is taken after the store by task 1
has completed.
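Continuing the same sketch, the BusWrite response walks forward from the requestor, invalidating copies up to the next version and reporting a violation wherever an L bit is set:

/* Returns the first violating cache id (its task and all later tasks
 * must be squashed), or -1 if no dependence was violated. */
int vcl_bus_write(SvcLine lines[], const int order[], int pos, int n) {
    int violator = -1;
    for (int i = pos + 1; i < n; i++) {
        SvcLine *l = &lines[order[i]];
        if (!l->valid) continue;
        if (l->store && !l->load) break;     /* next version: excluded   */
        if (l->load && violator < 0)
            violator = order[i];             /* use before definition    */
        l->valid = 0;                        /* invalidation response    */
        if (l->store) break;                 /* next version with L set:
                                                included, then stop      */
    }
    return violator;
}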
3.2.3. Task commits and squashes The base SVC design
handles task commits and squashes in a naive man-
ner. When a processor commits a task, all dirty lines in
its L1 cache are immediately written back to the next level
memory and all other lines are invalidated. To write back
all the dirty lines immediately, a list of the stores executed
by the task is maintained by the processor. When a task
is squashed, all lines in the corresponding cache are invalidated.
3.3. Base design performance drawbacks
The base design just described has two significant performance
limitations that make it less desirable: (i) write
backs performed when a processor commits a task lead to
bursty bus traffic that may increase the time to commit the
task and delay issuing a new task to that processor, (ii) clean
lines are also invalidated when a task commits or squashes
because the buffered versions could be stale for the new task
allocated on the same processor; the correct version may be
present in other caches. Consequently, every task begins
execution with a cold L1 cache, increasing the bandwidth
demand. The following advanced designs eliminate these
problems by tracking additional information.
1. The first advanced design, the ECS design (section 3.5),
makes task commits and squashes more efficient. To ease
the understanding of this design, we first present an intermediate
design, the EC design (section 3.4), that makes
task commits efficient by distributing the write backs of
dirty lines over time. Also, it retains read-only data in the
L1 caches across task commits by careful book-keeping.
However, it assumes that mispredictions do not occur.
Then, we present the ECS design that extends the EC design
to allow task squashes. Task squashes are as simple
as in the base design, but are more efficient as they retain
non-speculative data in the caches across task squashes.
2. The second advanced design (section 3.6) boosts the hit-rate
of the ECS design by allowing requests to snarf [6]
the bus to account for reference spreading. Snarfing involves
copying the data supplied on a bus request issued
by another processor in an attempt to combine bus requests
indirectly.
3. The final design (section 3.7) is realistic and allows the
size of a cache line to be more than one word.
3.4. Implementing efficient task commits (EC)
The EC design avoids expensive cache flushes on task
commits by maintaining an extra state bit, called the commit
bit, in each cache line. Task commits do not stall until all
lines with speculative versions are written back. The EC
design eliminates write back bursts on the bus during task
commits. Also, no extra hardware is necessary to maintain a
list of stores performed by each task. Further, the EC design
improves cache utilization by keeping the L1 caches warm
across tasks.
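In the EC design a task commit is one local pass over the cache; the sketch below (with a line layout anticipating Figure 9) sets the C bit and defers every write back:

typedef struct {
    unsigned tag, data;
    unsigned valid : 1, store : 1, load : 1;
    unsigned commit : 1;    /* C: version belongs to a committed task */
    unsigned stale  : 1;    /* T: copy may be out of date (a hint)    */
    int      next;          /* VOL pointer, as in the base design     */
} EcLine;

/* Task commit: mark all lines; dirty data is written back lazily,
 * the next time each line is touched by a processor or bus request. */
void ec_commit_task(EcLine cache[], int nlines) {
    for (int i = 0; i < nlines; i++)
        cache[i].commit = 1;
}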
Figure 9: EC design: structure of a line (Tag, V, S, L, C, T, Pointer, Data; V: valid, S: store, L: load, C: commit, T: sTale).
The structure of a cache line in the EC design is shown
in Figure 9. When a processor commits a task, the C bit
is set in all its cache lines. This operation is entirely local
to the L1 cache and does not issue a bus request. A dirty
committed line is written back, if necessary, when it is accessed
the next time either on a processor request or on a
bus request. Therefore, committed versions could remain in
the caches until much later in time since the task that created
the version committed. The order among committed
and uncommitted versions is still maintained by the explicit
pointers in the line. This order among the versions is necessary
to write back the correct committed version and to
supply the correct version on a bus request. The EC design
uses an additional state bit, the sTale (T ) bit, to retain
read-only data across tasks. First, we discuss how loads and
stores are handled when caches have both committed and
uncommitted versions and then discuss the stale bit.
3.4.1. Loads and stores Loads to committed lines are
handled like cache misses and issue a bus request. The VCL
searches the VOL in the reverse order beginning from the
requestor for the closest previous uncommitted version; this
version, if any, is supplied to the requestor. If no such version
is found, the VCL supplies the most recent committed
version, if any. This version is the first committed version
that is encountered on the reverse search. All other committed
versions need not be written back and are invalidated.
On a store miss, committed versions are purged in a similar
fashion.
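The purge can be sketched as one reverse walk (again over the EcLine array from the previous sketch): the first uncommitted version wins; otherwise the most recent committed version is supplied and written back, and older committed versions are silently invalidated:

/* Returns supplier cache id, or -1 if memory supplies.  *wb is set
 * when the most recent committed version must be written back. */
int vcl_purge_and_supply(EcLine lines[], const int order[], int pos,
                         unsigned *data_out, int *wb) {
    int supplier = -1;
    *wb = 0;
    for (int i = pos - 1; i >= 0; i--) {
        EcLine *l = &lines[order[i]];
        if (!l->valid || !l->store) continue;
        if (!l->commit) {                     /* uncommitted version wins */
            *data_out = l->data;
            return order[i];
        }
        if (!*wb) {                           /* most recent committed    */
            *data_out = l->data;
            supplier  = order[i];
            *wb       = 1;
        } else {
            l->valid = 0;                     /* older committed: dropped */
        }
    }
    return supplier;
}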
Figure 10: EC design: example load (ld r, A by task 2).
We illustrate the load executed by task 2 in the example
program. Figure 10 shows two snapshots: one before the
load executes and one after the load completes. Versions
0 and 1 have been committed (the C bit is set in the lines
in caches X and Y ). Task 2 executes a load that misses in
cache Z and results in a bus request. The VCL knows that
task 2 is the head task and determines that cache X has the
most recent committed version. Cache X supplies the data
which is also written back to the next level memory. Other
committed versions (version 0) are invalidated and are never
written back to memory. The VCL also inserts the new copy
of version 1 into the VOL by modifying the pointers in the
lines accordingly - the second snapshot shows the modified
VOL.
Figure
11 illustrates the actions performed on a store
miss. The first snapshot is taken before a store is executed
by task 5. Versions 0 and 1 have been committed. Task
5 executes a store that misses in cache Y and results in a
BusWrite request even though the line has a committed ver-
sion. The VCL purges all committed versions of this line
- it determines that version 1 has to be written back to the
next level memory and the other versions (version 0) have to be invalidated. Purging the committed versions also makes
space for the new version (version 5). The modified VOL
shown in the second snapshot contains only the two uncommitted
versions.
Figure 11: EC design: example store (st 5, A by task 5).
3.4.2. Stale copies The EC design makes task commits
efficient by delaying the commit of each cache line until a later
time. Therefore, a cache line could have a stale copy because
versions more recent than the version buffered by the
committed task could be present in other caches. The base
SVC design does not introduce stale copies because it invalidates
all non-dirty lines whenever a task commits. The EC
design uses the stale (T ) bit to distinguish stale copies from
correct copies and avoids issuing a bus request on accesses
to correct copies. This additional information allows the EC
design to retain read-only data (correct copies) across task
commits. First, we illustrate that stale and correct copies
are indistinguishable without the T bit and then show how
the T bit is used.
Figure 12: EC design: correct and stale copies. Two execution time lines for address A (tasks shown per cache); one leaves a correct copy in cache Z and the other leaves a stale copy there.
Figure 12 shows two execution time lines - one that
leaves a correct copy of address A (shown using solid lines)
in cache Z and another that leaves a stale copy of address
A in the same cache (shown using dashed lines). The first
time line shows a sample execution of a modified version of
our example program - task 3 in Figure 2 does not execute
the store. The second time line shows an execution of our
original program. The first snapshot is the same for both
time lines. The second snapshot in the second time line is
taken after tasks 0 and 1 have committed. The C bit is set in
their caches and new tasks 4 and 5 have been allocated. The
final snapshot in both time lines is taken when tasks 4 to 7
are active and before task 6 executes a load. In the first time
line, the data in cache Z is a correct copy, since no versions
were created after version 1; the load can be supplied data
by just resetting the C bit and without issuing a bus request.
In the second time line, the copy in cache Z is stale since the
creation of version 3 and hence the load misses resulting a
bus request. However, cache Z cannot distinguish between
these two scenarios and has to issue a request in both cases
to consult the VOL and obtain a copy of the correct version.
The EC design uses the stale (T ) bit to distinguish between
these two scenarios and avoids the bus request whenever
a copy is not stale. The design maintains the invariant:
the most recent version of an address and its copies have
the T bit reset and the other copies and versions have the T
bit set. This invariant is easily guaranteed by resetting the
T bit in the most recent version, or a copy thereof, when
it is created and setting the T bit in the previous versions,
if any. The T bits are updated on the BusWrite request issued
to create a version or a BusRead request issued to copy
a version and hence do not generate additional bus traffic.
Since stores in different tasks can be executed out of program
order, an active task could execute a store to a copy
that has the T bit set (the copy is not stale for this task, but
is stale for the next task allocated to the same processor).
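The invariant can be restored on every version-creating BusWrite with one pass over the VOL (sketch, same assumptions as before):

/* After the task at position pos creates a version: every earlier
 * version or copy becomes stale; the new line is fresh only if no
 * task after pos has already created a newer version. */
void update_stale_bits(EcLine lines[], const int order[], int pos, int n) {
    int newer_exists = 0;
    for (int i = pos + 1; i < n; i++)
        if (lines[order[i]].valid && lines[order[i]].store)
            newer_exists = 1;
    lines[order[pos]].stale = newer_exists;
    for (int i = 0; i < pos; i++)
        if (lines[order[i]].valid)
            lines[order[i]].stale = 1;
}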
Figure 13 shows the two time lines in our example with the
status of the T bit. Cache Z can distinguish between the
correct copy (T bit is not set) and the stale copy (T bit is
set). The load hits if a correct copy is present and no bus
request is issued.
Figure 13: EC design: using the stale bit. The same two time lines, now with the T bit shown; the correct copy in cache Z has T reset while the stale copy has T set.
The EC design eliminates the serial bottleneck in flushing
the L1 cache on task commits by using the commit (C)
bit. Also, this design retains non-dirty lines after task commits
as long as they are not stale. More generally, read-only
data used by a program is fetched only once into the L1
caches and never invalidated unless chosen to be replaced
on a cache miss. Further a task commits by just setting the
C bit in all lines in its L1 cache.
3.5. Implementing efficient task squashes (ECS)
The ECS design extends the EC design to allow task squashes. Also, the ECS design makes
the task squashes more efficient than in the base design by
retaining non-speculative data in the caches across squashes
using another state bit, the Architectural (A) bit. The structure
of a line in the ECS design is shown in Figure 14.
Figure 14: ECS design: structure of a line (Tag, V, S, L, C, T, A, Pointer, Data; V: valid, S: store, L: load, C: commit, T: sTale, A: architectural).
When a task squashes, all uncommitted lines (lines with
the C bit reset) are invalidated by resetting the valid (V ) bit.
The invalidation makes the pointers in these lines and their
VOLs inexact. The VOL has a (dangling) pointer in the last
valid (or unsquashed) copy or version of the line and the
status of the T bit in the lines is incorrect. The ECS design
repairs the VOL of such a line when the line is accessed later
either on a processor request or on a bus request. Updating
the T bits is not necessary because it is only a hint to avoid a
bus request and a squash would not incorrectly reset a stale
version to be correct. However, the ECS design updates the
T bit on this bus request by consulting the repaired VOL.
Figure 15: ECS design: VOL repair. Three snapshots of the lines at address A showing a squash that leaves a dangling pointer and incorrect T bits, and the repair performed on the next bus request.
Figure 15 illustrates VOL repair with an example time line with three snapshots. The first snapshot is taken just
before the task squash occurs. Tasks 3 and 4 are squashed;
only version 3 is invalidated. The VOL with incorrect T bits
and the dangling pointer are shown in the second snapshot.
A later load misses in cache W and results
in a bus request. The VCL resets the dangling pointer and
the T bit in cache Y . The VCL then determines the version
to supply the load. Also, the most recent committed version
is written back to the next level memory. The
third snapshot is taken after the load has completed.
3.5.1. Squash invalidations The base design invalidates
non-dirty lines in the L1 cache on task squashes. This includes
both speculative data from previous tasks and architectural
data from the next level memory (or the committed
tasks). The base design invalidates these lines because
it does not track the creator of the speculative versions for
each line and hence cannot determine whether the version
in a line has been committed or squashed. Squashing non-speculative
data leads to higher miss rates for tasks that are
squashed and restarted multiple times.
To distinguish between copies of speculative and architectural
versions, we add the architectural (A) bit to each
cache line as shown in Figure 14. The A bit is set in a
copy if either the next level memory or a committed version
supplies data when a bus request issued to obtain the
copy; else the A bit is reset. One of the VCL responses on
a bus request specifies whether the A bit should be set or
reset. Copies of architectural versions are not invalidated
on task squashes, i.e., the ECS design only invalidates lines
that have both the A and C bits reset. Further, a copy of a
speculative version used by a task becomes an architectural
copy when the task commits. However, the A bit is not set
until the line is accessed by a later task, when the C bit is
reset and the A bit is set.
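A squash then needs no bus traffic at all; in the spirit of the design, only lines that are neither architectural nor committed disappear (sketch):

typedef struct {
    unsigned valid : 1, commit : 1, arch : 1;   /* V, C, A bits */
} EcsBits;

/* ECS squash: drop exactly the speculative state of the squashed
 * task; architectural (A) and committed (C) lines survive. */
void ecs_squash(EcsBits cache[], int nlines) {
    for (int i = 0; i < nlines; i++)
        if (!cache[i].arch && !cache[i].commit)
            cache[i].valid = 0;
}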
3.6. Hit rate optimizations
The base and ECS designs incur severe performance
penalties due to reference spreading. When a uniprocessor
program is executed on multiple processors with private
L1 caches, successive accesses to the same line (which would hit after missing only once in a shared L1 cache) could result in a series of misses. This phenomenon is also observed for parallel pro-
where the miss rate for read-only shared data with
private caches is higher than that with a shared cache. We
use snarfing [6] to mitigate this problem. Our SVC implementation
snarfs data on the bus if the corresponding cache
set has a free line available. However, an active task's cache
can only snarf the version that the task can use, unlike an SMP coherent cache. The VCL determines whether a task
can copy a particular version or not and informs caches of
an opportunity to snarf data on a bus request.
3.7. Realistic line size
The base and ECS designs assume that the line size of
the L1 caches is one word. The final SVC design however
allows lines to be longer than a word. Similar to an SMP
coherent cache, we observe effects due to false sharing. In
addition to causing higher bus traffic, false sharing leads to
more squashes when a store to a cache line from a task is
executed out-of-order with a load from a different byte or
word in the same line from a later task. We mitigate the
effects of false sharing by using a technique similar to the
sector cache [7]. Each line is divided into sub-blocks and
the L and S bits are maintained for each sub-block. The
size of a sub-block or versioning block is less than that of
the address block (storage unit for which an address tag is
maintained). Also, when a store miss results in a BusWrite
request, mask bits that indicate the versioning blocks modified
by the store are also made available on the bus.
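With illustrative parameters (32-byte address blocks split into four versioning blocks; neither number is from the paper), the sectored state and the violation test against the bus mask look like this:

#define VBLOCKS 4                 /* versioning blocks per address block */

typedef struct {
    unsigned tag;
    unsigned valid : 1, commit : 1, arch : 1, stale : 1;
    unsigned load  : VBLOCKS;     /* one L bit per versioning block */
    unsigned store : VBLOCKS;     /* one S bit per versioning block */
    unsigned char data[32];
} SectoredLine;

/* A BusWrite carries a mask of the versioning blocks it modifies; a
 * violation exists only if an already-loaded sub-block is overwritten,
 * so false sharing within a line no longer causes squashes. */
int violates(const SectoredLine *l, unsigned store_mask) {
    return (l->load & store_mask) != 0;
}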
4. Performance evaluation
We report preliminary performance results for the SVC
using the SPEC95 benchmarks. The goal of our implementation
and evaluation is to prove the SVC design, not just to
analyze its performance. We underline the importance of
a private cache solution by first showing how performance
degrades rapidly as the hit latency for a shared cache solution
is increased; the Address Resolution Buffer (ARB) is
the shared cache solution we use for this evaluation. We
mitigate the commit time bottlenecks in the ARB (by using
an extra stage that contains architectural data) to isolate the
effects of pure hit latency from other performance bottlenecks.
4.1. Methodology and configuration
All the results in this paper were collected on a simulator
that faithfully models a Multiscalar processor. The
simulator dynamically switches between a functional and a
detailed cycle-by-cycle model to provide accurate and fast
simulation of a program. The memory system model includes
a fair amount of detail including an off chip cache,
DRAM banks and interconnects between the different levels
of memory hierarchy. The Multiscalar processor used in
the experiments has 4 processors each of which can issue 2
instructions out-of-order. Each processor has 2 simple integer
ALUs, 1 complex integer unit, 1 floating point unit, 1
branch unit and 1 address calculation unit, all of which are
assumed to be completely pipelined. Inter-processor register
communication latency is 1 cycle and each processor
can send as many as two registers to its neighbor in every
cycle. Loads and stores from each processor are executed
in program order by using a load/store queue of 16 entries.
The ARB is a fully-associative set of 32-byte lines with
a total of 8KB storage per stage and five stages; the shared
data cache that backs up the ARB is 2-way set associative
and 64KB in size. The off chip cache is 4MB in size with
a total peak bandwidth of 16 bytes per processor clock to
the L1 data, instruction and task caches. Main memory access
time for the first word is 24 processor clocks and has
a RAMBUS-like interface that operates at half the speed
of the processors to provide a peak bandwidth of 8 bytes
every bus clock. All the caches and memory are 4-way in-
terleaved. Both the ARB and the L1 data cache have
MSHRs/writebuffers each; each buffer can combine up to 8
accesses to the same line. Disambiguation is performed at
the byte-level. The base ARB hit time is varied from 1 to 3
cycles in the experiments. Both the tags and data RAMs are
single ported in all the caches.
The private caches that comprise the SVC are connected
together and with the off chip cache by an 8-word split-transaction
snooping bus where a typical transaction requires
3 processor cycles 3 . Each processor has its own private
L1 cache with 16KB of 4-way set-associative storage in
lines. Both loads and stores are non-blocking with
8 MSHRs/writebuffers per cache. Each buffer can combine
up to 4 accesses to the same line. Disambiguation is performed
at the byte-level. L1 cache hit time is fixed at 1 cy-
cle. The tag RAM is dual ported to support snooping while
the data RAM is single ported.
4.2. Benchmarks
We used the following programs from the SPEC95
benchmark suite with train inputs except in the cases
listed: compress, gcc (ref/jump.i), vortex, perl, ijpeg
(test/specmun.ppm), mgrid (test/mgrid.in), apsi, fpppp, and
turb3d. All programs were stopped after executing 1 billion
instructions. From past experience, we know that for these
programs performance change is not significant beyond 1
billion instructions.
4.3. Experiments
Figure 16 presents the instructions per cycle (IPC) for
a Multiscalar processor with either the ARB or the SVC.
The configurations keep total data storage of the SVC and
ARB/cache storage roughly the same. The percentage miss
rates for the ARB and the SVC are shown on top of the
IPC bar clusters (in that order). For the SVC, an access is
counted as a miss if data is supplied by the next level memory; data transfers between the L1 caches are not counted
as misses.
From these preliminary experiments, we make three ob-
servations: (i) the hit latency of data memory significantly
affects ARB performance, (ii) the SVC trades-off hit rate
for hit latency and the ARB trades-off hit latency for hit
rate to achieve performance, and (iii) for the same total data
storage, the SVC performs better than the ARB having a hit
latency of 2 or more cycles as shown in Figure 16. The
graphs in these figures show that performance improves in
the range of 5% to 20% when decreasing the hit latency of
the ARB from 3 cycles to 1 cycle. This improvement indicates
that techniques that use private caches to improve
hit latency are an important factor in increasing overall per-
formance, even for latency tolerant processors like a Multiscalar
processor.
3 Bus arbitration occurs only once for cache to cache data transfers. An
extra cycle is used to flush a committed version to the next level memory.
Figure 16: IPC of the Multiscalar processor with the ARB (hit latency 1 to 3 cycles) and with the SVC on compress, gcc, vortex, perl, ijpeg, mgrid, apsi, and turb3d; the ARB/SVC percentage miss rates shown above the bar clusters include 3.1/7.5, 2.1/3.6, 2.6/2.4, 8.1/9.3, 2.3/3.4, 1.1/2.2, and 6.9/8.1.
The distribution of storage for the SVC produces higher
miss rates than for the ARB. We attribute the increase in
miss rates for the SVC to two factors. First, distributing
the available storage results in reference spreading [6] and
replication of data reduces available storage. Second, the latest version of a line that caches fine-grain shared data
between Multiscalar tasks constantly moves from one L1
cache to another (migratory data). Such fine-grain communication
may increase the number of total misses as well.
5. Conclusion
Speculative versioning is important to overcome limits
on Instruction Level Parallelism (ILP) due to ambiguous
memory dependences in a sequential program. Our pro-
posal, called the Speculative Versioning Cache(SVC), uses
distributed caches to eliminate the latency and bandwidth
problems of a previous solution, the Address Resolution
Buffer, which uses a centralized buffer. The SVC conceptually
unifies cache coherence and speculative versioning by
using an organization similar to snooping bus-based coherent
caches. A preliminary evaluation for the Multiscalar
architecture shows that hit latency is an important factor
affecting performance, and private cache solutions trade-off
hit rate for hit latency. The SVC provides hardware
support to break ambiguous memory dependences allowing
proposed next generation multiprocessors to use aggressive
parallelizing software for sequential programs.
Acknowledgements
We thank Scott Breach, Andreas Moshovos, Subbarao
Palacharla and the anonymous referees for their comments
and valuable suggestions on earlier drafts of the paper.
This work was supported in part by NSF Grants CCR-
9303030 and MIP-9505853, ONR Grant N00014-93-1-
0465, and by U.S. Army Intelligence Center and Fort
Huachuca under Contract DABT63-95-C-0127 and ARPA
order no. D346 and a donation from Intel Corp. The views
and conclusions contained herein are those of the authors
and should not be interpreted as necessarily representing
the official policies or endorsements, either expressed or
implied, of the U. S. Army Intelligence Center and Fort
Huachuca, or the U.S. Government.
--R
IEEE Standard for Scalable Coherent Interface (SCI) 1596-1992.
Data memory alternatives for multiscalar processors.
ARB: A hardware mechanism for dynamic reordering of memory references.
Using cache memory to reduce processor-memory traffic
Speculative Versioning Cache.
Memory reference behavior and cache performance in a shared memory multiprocessor.
Structural aspects of the system/360 model 85 part II: The cache.
Dynamic speculation and synchronization of data dependences.
The case for a single-chip multiprocessor
Trace processors: Moving to fourth-generation microarchitectures
Multiscalar processors.
The potential for thread-level data speculation in tightly-coupled multiprocessors
--TR
The Wisconsin multicube: a new large-scale cache-coherent multiprocessor
The expandable split window paradigm for exploiting fine-grain parallelsim
Boosting the performance of hybrid snooping cache protocols
Multiscalar processors
The case for a single-chip multiprocessor
Improving superscalar instruction dispatch and issue by exploiting dynamic code sequences
Dynamic speculation and synchronization of data dependences
Complexity-effective superscalar processors
Trace processors
Data speculation support for a chip multiprocessor
A scalable approach to thread-level speculation
Architectural support for scalable speculative parallelization in shared-memory multiprocessors
IEEE Standard for Scalable Coherent Interface (SCI)
Using cache memory to reduce processor-memory traffic
The Potential for Using Thread-Level Data Speculation to Facilitate Automatic Parallelization
Speculative Versioning Cache
Hardware for Speculative Parallelization of Partially-Parallel Loops in DSM Multiprocessors
--CTR
Arun Kejariwal , Xinmin Tian , Wei Li , Milind Girkar , Sergey Kozhukhov , Hideki Saito , Utpal Banerjee , Alexandru Nicolau , Alexander V. Veidenbaum , Constantine D. Polychronopoulos, On the performance potential of different types of speculative thread-level parallelism: The DL version of this paper includes corrections that were not made available in the printed proceedings, Proceedings of the 20th annual international conference on Supercomputing, June 28-July 01, 2006, Cairns, Queensland, Australia | speculative versioning;memory disambiguation;snooping cache coherence protocols;speculative memory |
506342 | Strong normalizability of the non-deterministic catch/throw calculi. | The catch/throw mechanism in Common Lisp provides a simple control mechanism for non-local exits. We study typed calculi by Nakano and Sato which formalize the catch/throw mechanism. These calculi correspond to classical logic through the Curry-Howard isomorphism, and one of their characteristic points is that they have non-deterministic reduction rules. These calculi can represent various computational meanings of classical proofs. This paper is mainly concerned with the strong normalizability of these calculi. Namely, we prove the strong normalizability of these calculi, which was an open problem. We first formulate a non-deterministic variant of Parigot's λμ-calculus, and show it is strongly normalizing. We then translate the catch/throw calculi to this variant. Since the translation preserves typing and reduction, we obtain the strong normalization of the catch/throw calculi. We also briefly consider second-order extension of the catch/throw calculi. Copyright 2002 Elsevier Science B.V. | Introduction
The catch and throw mechanism provides a means to implement non-local exits. The
following simple example written in Common Lisp [19] shows how to use the catch and
throw mechanism:
(defun multiply (x)
  (catch 'zero (multiply2 x)))
(defun multiply2 (x)
  (if (null x) 1
      (if (= (car x) 0)
          (throw 'zero 0)
          (* (car x) (multiply2 (cdr x))))))
The first function multiply sets up the catch-point with the tag zero, and immediately
calls the second function. The second one multiply2 performs the actual computation by
recursion. Given a list of integers, it calculates the multiplication of the members in the
list. If 0 is found in the list, then the result must be 0 without computing any further, so
it returns 0 by the throw-expression. The catch/throw mechanism is useful if one wants
to escape from nested function calls at a time, especially in run-time errors.
Nakano [11-14] proposed calculi with inference rules which give logical interpretations
of the catch/throw constructs in Lisp. His calculi differ from the actual catch/throw-
constructs in Common Lisp in the following two ways.
(1) He changed the scope rule of the catch-construct from a dynamic one to a lexical
one. In the above example, the expression (throw 'zero 0) is not lexically in the scope
of the corresponding catch-expression, which indicates that the catch-expression has
dynamic scope in Common Lisp. 1 In Nakano's calculi, tags are variables rather than
constants, and the correspondence between throw and catch is represented as the ordinary
variable binding mechanism, in which the scope of binders is lexical.
(2) He introduced the tag-abstraction and tag-application mechanisms which do not
exist in Common Lisp. 2 The motivation of this was to recover the expressivity which was
lost by changing the scope rule of the catch-construct.
Let us see how the above example can be written in Nakano's style:
(defun multiply (x)
  (catch 'zero (multiply2 x 'zero)))
(defun multiply2 (x u)
  (if (null x) 1
      (if (= (car x) 0)
          (throw u 0)
          (* (car x) (multiply2 (cdr x) u)))))
In this modified program, the catch-construct has lexical scope so that the scope of
the tag zero is (multiply2 x 'zero) only. To throw an object from another function
multiply2, the function is abstracted by the tag variable u. When using the function
multiply2 we must provide the tag zero as the second parameter.
Nakano also introduced a new type constructor ▷ (called "otherwise") for the tag
abstraction mechanism; if a is a term of type A, and u is a tag-variable of type B, then
the abstraction of a by u has type A ▷ B.
The characteristic points in Nakano's formulation were (1) L_c/t has a restriction (side-condition) in the implication-introduction rule, and it excludes terms which correspond to classical proofs. Actually L_c/t corresponds to an intuitionistic calculus through the Curry-Howard isomorphism. (2) L_c/t allows as many reductions as possible, hence it is non-deterministic (not confluent). These two features may look strange, since classical logic is said to be essentially non-confluent, while intuitionistic logic is confluent. 3 We consider that the classical version of L_c/t, which is obtained by removing the restriction, is a more natural calculus, and is suitable for extracting algorithmic meaning from classical proofs. We call L^K_c/t the classical version of L_c/t.
¹ Similarly, the exception mechanism in Standard ML has dynamic scope.
² The exception mechanism in Standard ML has such abstraction/application.
³ We refer to Girard [6] and Parigot [15] for discussion of confluence and classical logic.
A few years later than Nakano, the second author (Sato) proposed another formulation
of the catch/throw mechanism [17]. His motivation was to eliminate the type of the
tag abstraction ("otherwise") from L_{c/t}, since it is equivalent to disjunction. By unifying
the throw-expression and the tag-abstraction mechanism, he obtained a simpler calculus
NJ_{c/t}. He also showed that L_{c/t} can be interpreted in NJ_{c/t}. NJ_{c/t} has essentially the
same restriction on the implication-introduction rule, hence it corresponds to intuitionistic
logic. He also defined NK_{c/t} by throwing away the restriction, and showed that it corresponds
to classical logic. In summary, four calculi have been proposed for the catch/throw
mechanism:
Author   Intuitionistic Logic   Classical Logic
Nakano   L_{c/t}                L^K_{c/t}
Sato     NJ_{c/t}               NK_{c/t}
In this paper, we investigate the strong normalizability (SN) of the above four calculi,
in particular, L^K_{c/t} and NK_{c/t}. The SN of L_{c/t} was proved by Nakano [14], but his proof
was based on complex model-theoretic arguments. In our previous works, we proved the
SN of NJ_{c/t} in [8], and the SN of a large fragment of L^K_{c/t} in [9], but the SN of the full
fragments of the classical calculi L^K_{c/t} and NK_{c/t} was an open problem. This paper solves this
problem in an affirmative way.
We first formulate a non-deterministic variant of Parigot's λμ-calculus by adding several
reduction rules, and prove its strong normalizability using the reducibility method. We
then translate the catch/throw calculi into this variant. Since this translation preserves
typing as well as reduction, we obtain a proof of the strong normalizability of all four
calculi. We finally briefly discuss second-order extensions of them.
2. The Catch/Throw Calculi
2.1. Nakano's Formulation
Nakano proposed several calculi for the catch/throw mechanism. Among them, L_{c/t}
given in [14] is the strongest one. In this paper we also study L^K_{c/t}, an extension of
L_{c/t}. Although Nakano himself did not present L^K_{c/t} in published papers, the latter can be
obtained from L_{c/t} by simply throwing away the restriction on the implication-introduction
rule; therefore we regard L^K_{c/t} as one of Nakano's calculi.
In the following, we shall define L^K_{c/t} and mention the difference between L^K_{c/t} and L_{c/t}.
We assume that there are finitely many atomic types (we use K as a metavariable for
atomic types), including ⊥ (falsity).
Definition 2.1 (Type)  A, B ::= K | A → B | A ∧ B | A ∨ B | A ▷ B
In this definition, →, ∧, ∨ are the types for the function space, product, and sum. By the
Curry-Howard isomorphism, we may identify them with the logical connectives implication,
conjunction, and disjunction. The connective ▷ was introduced to give a type to tag
abstraction. As usual, we abbreviate A → ⊥ as ¬A.
We assume that, for each type A, there are infinitely many individual variables x^A of
type A and infinitely many tag variables u^A of type A. We use x^A, y^A, z^A for individual
variables and u^A, v^A, w^A for tag variables. We regard u^A and u^B as different tag variables
if A ≢ B. This implies that we may sometimes use the same variable-name for different
entities (of different types).
Preterms of L_{c/t} and L^K_{c/t} are defined as follows.
Definition 2.2 (Preterm) Preterms are built from individual variables by λ-abstraction
λx^A.a, application ab, pairing and projections, injections and case-analysis, abort(a),
and the constructs catch(u^A, a), throw(u^A, a), αu^A.a, and tapp(a, u^A).
Among the preterms above, the constructs catch, throw, α (tag-abstraction), and tapp were introduced
by Nakano to represent the catch and throw mechanism. We refer to the following table
for the correspondence to similar constructs in Common Lisp and Standard ML.
L_{c/t}          Common Lisp   Standard ML
catch(u, a)      catch         handle
throw(u, a)      throw         raise
As noted in the introduction, tags in Common Lisp (exception names in Standard ML) are
represented as tag-variables rather than constants. The preterm αu.t is the tag-abstraction
mechanism, like the λ-abstraction λx.t, and the preterm tapp(t, u) is the tag-application
mechanism,⁴ like the functional application apply(t, u).
We sometimes omit the types of variables. We also write apply(a, b) as ab. An individual
variable is bound by the λ-construct and the case-construct, and a tag variable
is bound by the catch-construct and the α-construct. We identify two terms which are
equivalent under renaming of bound individual/tag variables. FV(t) and FTV(t) denote
the set of free individual variables and the set of free tag variables of t, respectively.
The type inference rules are given in the natural deduction style, and are listed in Table 1.
The inference rules are used to derive a judgment of the form Γ ⊢ a : A; Δ, where Γ
is a finite set of the form {x₁^{A₁}, ..., x_m^{A_m}}, and Δ is a finite set of the form
{u₁^{B₁}, ..., u_n^{B_n}}. In both sets we understand that each variable appears only once. Γ is
a context of individual variables, and Δ is a context of tag variables.
In L_{c/t}, the implication-introduction rule (marked (*)) has a restriction on the free tag variables
of b. L^K_{c/t} has no restriction. In the intuitionistic calculus L_{c/t}, a preterm λx^A.b is
well-typed only when x^A does not essentially occur in the scope of any throw-construct
in b. One of Nakano's main results was that this restriction neatly corresponds to intuitionistic
propositional calculus through the Curry-Howard isomorphism. The actual
restriction is complex due to the existence of the case-construct. In this paper we do not
give the precise definition of "essential occurrence"; we refer to [11] and [14] for details.
⁴ Actually, Nakano did not use the word tapp. Rather, he simply wrote tu for tapp(t, u). In this paper,
we use different function symbols for different term-constructions to clarify the syntax.
Table 1: Type Inference Rules of L_{c/t} and L^K_{c/t}
Among the inference rules, the first ten are standard. The rules for throw and catch
reflect their intended semantics: namely, throw(u^B, b) aborts the current context, so that
this term can have any type regardless of the type of b, and the type of catch(u^A, a) is the
same as that of a and also the same as the type of possibly thrown terms. The term αu^B.a is
a tag-abstraction, and it is assigned the new type A ▷ B. Conversely, if a is of type A ▷ B,
then applying a tag variable u^B to it generates a term of type A.
An example of a type inference is the judgment ⊢ λx^A.throw(u^A, x^A) : ¬A ; {u^A}
(a form of double negation). The above is a type inference figure in L^K_{c/t}, but not in L_{c/t}.
This is because, in the formation of λx^A.throw(u^A, x^A), the abstracted variable x^A
occurs free in throw(u^A, x^A), which does not fit Nakano's restriction.
Let a, b, c, ... be metavariables for terms. If Γ ⊢ a : A; Δ is derived by the inference
rules, we say a is a term of type A under the contexts Γ and Δ.
One-step reduction rules of L_{c/t} and L^K_{c/t} are given by Table 2.
Table 2: One-Step Reduction Rules of L_{c/t} and L^K_{c/t}
In this definition, C[ ] represents a context with a hole [ ], defined as usual. Also, the
substitutions a[b/x] and a[v/u] are defined as usual.
As an instance, we have the following reduction:
tapp(αv.throw(v, a), u) →₁ throw(u, a)  (for v ∉ FTV(a))
Instead of having a one-step reduction like catch(u, a[throw(u, b)/x]) →₁ b, the catch/throw
mechanism splits it into two steps as follows: first a[throw(u, b)/x] →₁ throw(u, b)
(provided a ≢ x and x ∈ FV(a)), and then catch(u, throw(u, b)) →₁ b.
Since we did not restrict the evaluation strategy, reduction in L^K_{c/t} is non-deterministic;
moreover, it is not confluent. For instance, putting
t ≡ catch(u^A, (λz^B.throw(u^A, x^A))(throw(u^A, y^A))),
we have both t →* x^A and t →* y^A.
We define a →* b (zero or more reduction steps) and a →⁺ b (one or more reduction steps)
as usual.
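Common Lisp itself fixes a left-to-right, call-by-value strategy, so only one of the two results is observable; the calculi leave the choice open. A sketch:
(catch 'u (funcall (lambda (z) (throw 'u 'x))
                   (throw 'u 'y)))
;; => Y under Common Lisp's strategy: the argument is evaluated and throws first.
;; A call-by-name strategy would discard the unused argument and yield X.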
Theorem 2.1 (Nakano)
The subject reduction property holds for L_{c/t} and L^K_{c/t}.
2.2. Sato's Formulation
In [17], Sato proposed another formulation of the catch/throw mechanism. His primary
motivation was to get rid of the logical connective ▷ from L^K_{c/t}, yet to obtain a system
which is as powerful as L^K_{c/t}. From the logical point of view, ▷ is redundant, since it is
equivalent to disjunction. Sato successfully eliminated ▷ from the calculus by unifying
the two binders of tag variables, catch and α.
We shall give the definition of NK_{c/t} in the following. NJ_{c/t} is obtained from NK_{c/t} by
restricting the →-introduction rule in the same way as L_{c/t} is obtained from L^K_{c/t}. Types are those of
L_{c/t} with ▷ deleted. Preterms are defined as follows.
Definition 2.3 (Preterm) Preterms are those of L_{c/t} with catch, throw, α, and tapp
replaced by the constructs ?u^B.a, !u^B a, and tapply(a, u^B).
Individual variables are bound by the λ- and the case-constructs, and tag variables are
bound by the ?-construct. The ?-construct replaces catch and α of L_{c/t}, the !-construct
replaces throw of L_{c/t}, and the tapply-construct replaces tapp of L_{c/t}.
The type inference rules for the new constructs are given by Table 3.
Table 3: Type Inference Rules of NJ_{c/t} and NK_{c/t}
The inference rule for the !-construct is the same as that of throw in L_{c/t}. The term ?u^B.a
may be constructed even if the type of a differs from B. The meaning of ?u^B.a is that,
if the computation of a ends normally and returns a′, then it returns inj₁(a′), and if a
term b is thrown during the computation of a, then it returns inj₂(b). Hence ?u^B.a has
type A ∨ B if a is of type A. The tapply-construct may be difficult to understand, but
it is an inverse operation of tag abstraction; thus tapply(?u^B.a, v^B) reduces to a[v^B/u^B].
Type inference rules for the other constructs are the same as before. The calculus with
the restriction on the implication-introduction rule is called NJ_{c/t}, and the one without
the restriction is NK_{c/t}. The former corresponds to intuitionistic logic and the latter to
classical logic.
One-step reduction rules for the new constructs are given as follows:
a[!u^B b/x] →₁ !u^B b  (if a ≢ x and x ∈ FV(a))
?u^B.a →₁ inj₁(a)  (if u ∉ FTV(a))
?u^B.(!u^B b) →₁ inj₂(b)
tapply(inj₁(a), u^B) →₁ a
tapply(inj₂(b), u^B) →₁ !u^B b
tapply(?v^B.a, u^B) →₁ a[u^B/v^B]
The last reduction may look strange, but it is useful for writing concise proofs [17], and
necessary to simulate the reduction tapp(αv.a, u) →₁ a[u/v] of L_{c/t}/L^K_{c/t}.
Theorem 2.2 (Sato)
The subject reduction property holds for NJ_{c/t} and NK_{c/t}.
2.3. Non-determinism and Classical Logic
All four calculi for the catch/throw mechanism have non-deterministic reduction
rules, and are not confluent. We do not think that this is a defect, because: (1) as far
as strong normalizability is concerned, it is good to have as many reduction rules
as possible; as a corollary of the strong normalizability of the strongest calculus, we
obtain the strong normalizability of any subcalculus; and (2) classical logic is said to be
inherently non-deterministic. In order to express all possible computations in classical
proofs, our calculi should be non-deterministic. Later we can choose one answer by
fixing an evaluation strategy. Murthy gave examples which show that classical proofs may
contain multiple computational meanings [10]. The second author showed in [18] that
Murthy's example can be expressed in the NK_{c/t}-style calculus.
3. A Non-deterministic Variant of Parigot's λμ-calculus
In this section, we give a non-deterministic variant of Parigot's λμ-calculus as a target of
translation from the catch/throw calculi.
Parigot's λμ-calculus [16] is a second-order propositional calculus for classical logic. It
is a natural-deduction system whose sequents have multiple consequents. The λμ-calculus
is quite a nice formulation of classical logic, and at the same time it is computationally
interesting, since various control structures can be represented by the μ-construct, which
binds a name α in a named term [β]a and yields a term μα.[β]a.
The most important reduction rule for the μ-construct (called structural reduction) is:
(μα.a) b →₁ μα. a{[α]c := [α](c b)}
where a{[α]c := [α](c b)} is the term obtained from a by substituting [α](c b) for every
subterm of the form [α]c in which this α is free in a. We refer to [16] for the full definition of
the λμ-calculus.
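As a small worked instance (our own), take a ≡ [α](λx.x); then
(μα.[α](λx.x)) b →₁ μα.[α]((λx.x) b) →₁ μα.[α]b,
so the argument b is pushed under the μ-binder to every occurrence named by α.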
We can simulate a simplified version of the catch/throw mechanism of L^K_{c/t} by the
μ-construct as follows:
catch(u, a) as μu.[u]a
throw(u, a) as μv.[u]a (where v does not appear in [u]a)
However, the catch/throw calculi we consider are not confluent. Moreover, one term
reduces to the different variables x^A and y^A, as we saw in the previous section. Since the
λμ-calculus is a confluent calculus, a direct simulation of the catch/throw calculi by λμ is not
possible.
A possible solution is to add more reductions to λμ, for instance, the call-by-value
version of the structural reduction (the symmetric structural reduction). However, it is
not known whether a system which has both the structural reduction and the symmetric
structural reduction is strongly normalizing.⁵ Instead of naively adding reduction
rules, we slightly modify the λμ-calculus, and then add non-deterministic reductions. Namely,
we classify the uses of μ into three cases:
(1) μu.[u]a
(2) μu.[v]a with u ∉ FTV([v]a)
(3) μu.[v]a with u ≢ v and u ∈ FTV(a)
We need (1) and (2) to simulate the catch-construct and the throw-construct, respectively.
We only need to extend the reduction rules for (2); the reduction rules for (1)
remain the same. We do not need (3) to simulate the catch/throw calculi, so such a term
construction will be excluded.
Another modification to the λμ-calculus is that we no longer distinguish individual
variables and tag variables. The named term [u]a will be represented by the ordinary
application ua. By this modification, we can directly λ-abstract over variables which
correspond to names such as [α]. This is the key to simulating the tag-abstraction/tag-application
mechanism of L^K_{c/t}. This representation is essentially due to de Groote [3],
who formalized the exception mechanism of ML. Fujita [4] recently studied a similar
calculus for the exception mechanism.
As notational convenience, we write μu.a for the term μu.(u a), and abort(b) for a term
μu.b with u ∉ FV(b) (so that μu.[v]a becomes abort(v a)). We also extend the reduction rules
for the abort-construct to have non-deterministic features. We call the resulting system NDμ.
3.1. A Non-deterministic Calculus NDμ
The types of NDμ are defined as follows:
Definition 3.1 (Type)  A, B ::= X | ⊥ | A → B | ∀X.A
⁵ Recently, Fujita [5] indicated that such a system can be shown to be strongly normalizing by translating it into
Barbanera and Berardi's symmetric λ-calculus if we restrict the system to first order. However, we need
the second-order version in this paper.
Since NDμ is second-order, the type ⊥ is redundant from the logical point of view. We,
however, include ⊥ as a primitive type, since we want to interpret ⊥ differently from
∀X.X. The type variable X is bound by the type abstraction ∀X, and we identify two
types which are identical modulo renaming of bound type variables. We abbreviate A → ⊥
as ¬A.
The preterms are as follows. Note that we adopt the Curry style (implicit typing) for
NDμ, as in the λμ-calculus.⁶ Hence we do not attach types to variables, and the
reduction rules are defined on untyped preterms.
Definition 3.2 (Preterm)  t ::= x | λx.t | t t | μx.t | abort(t)
Contrary to the original λμ, we have only one sort of variables. A variable x may be used
as an ordinary variable and also as a name (a tag-variable in our sense). Also, we have
no distinction between ordinary terms and named terms. Variables are bound by the λ- and
μ-constructs, and we again identify two terms which differ only in their bound variables.
The preterm abort(t) is new to NDμ, as we explained above.
A judgment of NDμ is of the form Γ ⊢ a : A, where Γ is a finite set of the form
{x₁ : A₁, ..., x_n : A_n}. The type inference rules which derive judgments are shown in
Table 4.
Table 4: Type Inference Rules of NDμ
In particular, the μ-introduction rule derives Γ ⊢ μu.a : A from Γ ∪ {u : ¬A} ⊢ a : A,
and the ⊥-elimination rule derives Γ ⊢ abort(a) : A from Γ ⊢ a : ⊥.
In the ∀-introduction rule (marked (*)), X may not occur freely in Γ.
If Γ ⊢ a : A is derived using the above rules, we say a is a (typable) term of type A
(sometimes written as a : A).
The reduction rules are derived from the λμ-calculus, but we add several rules for
abort which make NDμ non-deterministic. Since we shall use substitutions of the
form [λx.u(x b)/u] many times, such a substitution is abbreviated as [b ⋆ u]. When using this notation, we
always assume that x is a fresh variable. We also abbreviate compositions of such substitutions.
⁶ In [16], Parigot also defines a Church-style system.
Table 5: One-Step Reduction Rules of NDμ
We often write b̄ for a sequence ⟨b₁, ..., b_n⟩. Hence
the successive application (...(a b₁)...b_n) is abbreviated as a b̄, and the successive substitution
[b₁ ⋆ u]...[b_n ⋆ u] as [b̄ ⋆ u]. In the last case we assume that b₁, ..., b_n do not contain u
free. We also use the simultaneous substitution [b₁/x₁, ..., b_m/x_m, c̄₁ ⋆ u₁, ..., c̄_n ⋆ u_n],
where the xᵢ's and uⱼ's are mutually distinct and the c̄ⱼ's do not contain the uⱼ's free.
As before, we use the notations a →* b and a →⁺ b (and a →₁ b for one-step reduction).
The following lemma can be proved easily.
Lemma 3.1
Let θ be a substitution of the above form [b₁/x₁, ..., b_m/x_m, c̄₁ ⋆ u₁, ..., c̄_n ⋆ u_n].
If a →* b, then θa →* θb.
3.2. Strong Normalizability of NDμ
In this subsection, we prove the strong normalizability (SN) of NDμ. The proof is a
slight modification of Parigot's original proof of the SN of λμ. Nevertheless, we give the
proof here for completeness.
Let Tμ be the set of preterms of NDμ, and SN be the set of strongly normalizing
preterms of NDμ. Note that, following [16], we do not restrict Tμ and SN to be subsets of
typable terms. ν(a) is the maximum length of reduction sequences starting from a if
a ∈ SN, and is undefined if a ∉ SN.
For F ⊆ Tμ, let F^{<ω} be the set of finite sequences of elements of F. In particular,
F^{<ω} contains the empty sequence ⟨⟩.
Let F and G be subsets of Tμ, and S be a subset of Tμ^{<ω}. Then we introduce the
following notations:
F → G ≡ {a ∈ Tμ | ab ∈ G for every b ∈ F}
S → G ≡ {a ∈ Tμ | a b̄ ∈ G for every b̄ ∈ S}
As a special case, we have {⟨⟩} → G = G.
Definition 3.3 (Reducibility Candidate)
A reducibility candidate is a subset of Tμ, inductively defined as follows:
1. SN is a reducibility candidate.
2. If F and G are reducibility candidates, so is F → G.
3. If {F_i}_{i∈I} is a family of reducibility candidates for a non-empty set I, then ⋂_{i∈I} F_i
is a reducibility candidate. (Note that the index set I may be infinite.)
The set of reducibility candidates is denoted by RC.
Lemma 3.2
For any F ∈ RC, the following four clauses hold.
1. F ⊆ SN.
2. All variables are contained in F.
3. If a ∈ SN, then abort(a) ∈ F.
4. There exists a set S such that S ⊆ SN^{<ω} and F = S → SN.
Clause 3 was added to Parigot's original proof. It means that abort(a), for
a strongly normalizing term a, is contained in every reducibility candidate. The main
difference between our proof and Parigot's is that, in our case, a term of the form C[abort(a)]
may reduce to abort(a), so we must always consider abort(a) as a possible reduct. However,
such a term is contained in every reducibility candidate if a is strongly normalizing, by this
lemma, and therefore we can always handle this term easily.
Proof. We prove all four clauses simultaneously by induction on F ∈ RC.
(Case: F is SN) Clause 4 is proved by taking {⟨⟩} as S; the other clauses are trivial.
(Case: F is G → H) By the induction hypothesis (abbreviated as IH), we have x ∈ G and
H ⊆ SN; hence, for any a ∈ F, ax ∈ H ⊆ SN, so a ∈ SN, which proves Clause 1.
By IH, G ⊆ SN, and there exists a set S′ ⊆ SN^{<ω} such that H = S′ → SN; by taking
S = {⟨b⟩ followed by s | b ∈ G, s ∈ S′},
we have G → H = S → SN, which proves Clause 4.
Let x be a variable, a ∈ SN, and b̄ ∈ S. Then x b̄ ∈ SN,
since all of its reducts are of the form x b̄′ or abort(d) b̄′ (the latter when some bᵢ is of the
form bᵢ′[abort(d)/y]); hence x ∈ S → SN = F, proving Clause 2. We
also have abort(a) b̄ ∈ SN, proving Clause 3.
(Case: F is ⋂_{i∈I} G_i) Clauses 1-3 are easily proved from IH. Also by IH, for each
i ∈ I there is an S_i ⊆ SN^{<ω} such that G_i = S_i → SN. Taking S = ⋃_{i∈I} S_i,
we have ⋂_{i∈I} G_i = S → SN,
which proves Clause 4. □
By Clause 4 of the above lemma, we may put F^⊥ to be the largest such S; namely, for
any F ∈ RC, F = F^⊥ → SN.
A preterm a is neutral if it is either a variable or of the form bc.
Lemma 3.3
For any F ∈ RC, the following two clauses hold.
1. For any a ∈ F, if a →₁ a′ then a′ ∈ F.
2. If a is neutral, and a′ ∈ F for every a′ such that a →₁ a′, then a ∈ F.
Proof. This lemma is proved by induction on F ∈ RC. The key case is F ≡ G → H.
We shall prove Clause 2 only. Suppose a is neutral, and a′ ∈ G → H for every a′ such
that a →₁ a′. Take an arbitrary preterm b ∈ G. We shall prove ab ∈ H by induction on
ν(a) + ν(b) (note that a and b are SN). The preterm ab reduces in one step to one of:
a′b (with a →₁ a′), ab′ (with b →₁ b′), abort(c) (when a ≡ a″[abort(c)/x]), or abort(d)
(when b ≡ b″[abort(d)/x]). We can easily prove that all four terms belong to H, and
hence, by the inner IH, ab ∈ H. Consequently a ∈ G → H. □
Definition 3.4 (Interpretation of Types)
An interpretation ρ is a map from type variables to reducibility candidates.
Note that there exists an interpretation, for instance the one which maps all type variables to SN.
An interpretation ρ is naturally extended to all types in the following way:
ρ(⊥) ≡ SN,  ρ(A → B) ≡ ρ(A) → ρ(B),  ρ(∀X.B) ≡ ⋂_{F∈RC} ρ[F/X](B)
where the interpretation ρ[F/X] is defined as ρ[F/X](X) ≡ F and ρ[F/X](Y) ≡ ρ(Y)
for Y ≢ X.
Lemma 3.4
Let A, B be types, and ρ be an interpretation. Then ρ(A[B/X]) = ρ[ρ(B)/X](A).
This lemma can be proved by induction on the structure of A.
Lemma 3.5
Let F ∈ RC, x, u be variables, a, b be preterms, and c̄ be a sequence of preterms.
1. If a[b/x] ∈ F and b ∈ SN, then (λx.a)b ∈ F.
2. If μu.((a[c̄ ⋆ u]) c̄) ∈ SN, then (μu.a) c̄ ∈ SN.
3. If c̄ ∈ F^⊥, then λx.u(x c̄) ∈ F → SN.
4. If a ∈ SN, then μu.a ∈ SN.
Proof. 1. We can prove this clause by induction on ν(a[b/x]) + ν(b), using Lemma
3.1 and Lemma 3.3. We must take care that a reduct of (λx.a)b may be of the form
abort(c), but this case can be treated using Clause 3 of Lemma 3.2.
2. We can prove this clause by induction on ν(μu.((a[c̄ ⋆ u]) c̄)).
3. By Clause 1 above, all we have to prove is u(b c̄) ∈ SN for every b ∈ F. This holds since
b c̄ ∈ SN.
4. This can be proved by analyzing the reduction rules. □
Theorem 3.1
Assume Γ ⊢ a : A is derived in NDμ, where Γ ≡ {x₁ : C₁, ..., x_m : C_m, u₁ : ¬D₁, ..., u_n : ¬D_n}.
Assume also that ρ is an interpretation, bᵢ ∈ ρ(Cᵢ) (1 ≤ i ≤ m), and c̄ⱼ ∈ ρ(Dⱼ)^⊥ (1 ≤ j ≤ n);
then we have a[b₁/x₁, ..., b_m/x_m, c̄₁ ⋆ u₁, ..., c̄_n ⋆ u_n] ∈ ρ(A).
At first look, the statement of the theorem may seem ambiguous. For instance, given a
proof of x : C, y : ¬D ⊢ a : A, we may split the left-hand side of ⊢ in two ways, each of which
results in a different conclusion:
a[b₁/x, b₂/y] ∈ ρ(A) holds for any b₁ ∈ ρ(C) and b₂ ∈ ρ(¬D);
a[b/x, c̄ ⋆ y] ∈ ρ(A) holds for any b ∈ ρ(C) and c̄ ∈ ρ(D)^⊥.
Actually, the theorem implies that both hold, so no ambiguity arises. We now give the
proof of the theorem.
Proof. The theorem is proved by induction on the type inference of a. Let θ be
the substitution [b₁/x₁, ..., b_m/x_m, c̄₁ ⋆ u₁, ..., c̄_n ⋆ u_n]; we write θa for its application to a.
(Case: Assumption rule) In this case a ≡ x. We have to prove θx ∈ ρ(A). There are two
subcases. (i) x ≡ xᵢ for some i: then A ≡ Cᵢ and θx ≡ bᵢ; by the assumption bᵢ ∈ ρ(Cᵢ),
so θx ∈ ρ(A). (ii) x ≡ uⱼ for some j: then A ≡ ¬Dⱼ and θx ≡ λz.uⱼ(z c̄ⱼ). By Clause 3 of Lemma
3.5, θx ∈ ρ(Dⱼ) → SN = ρ(¬Dⱼ).
(Case: →-introduction) In this case a ≡ λx.c. We have A ≡ B → C and c is a term
of type C. By a suitable renaming, we have θa ≡ λx.θc. Take any d ∈ ρ(B). By IH,
(θc)[d/x] ∈ ρ(C). Hence, by Clause 1 of Lemma 3.5, we have (λx.θc)d ∈ ρ(C); hence
θa ∈ ρ(B → C).
(Case: →-elimination) In this case, we have a ≡ bc with b : B → A and c : B. By IH,
θb ∈ ρ(B → A) and θc ∈ ρ(B). Hence θa ≡ (θb)(θc) ∈ ρ(A).
(Case: ∀-introduction) In this case, A ≡ ∀X.B and a : B is derivable. Let F ∈ RC
and ρ′ ≡ ρ[F/X]. Since the type variable X does not occur freely in Γ, bᵢ ∈ ρ′(Cᵢ) and
c̄ⱼ ∈ ρ′(Dⱼ)^⊥ for all i, j. Hence, by IH, we have θa ∈ ρ′(B). Since F was arbitrary,
θa ∈ ρ(∀X.B).
(Case: ∀-elimination) In this case, a : ∀X.B and A ≡ B[C/X]. We have θa ∈ ρ(∀X.B)
from IH, hence θa ∈ ρ[ρ(C)/X](B). By Lemma 3.4, θa ∈ ρ(B[C/X]).
(Case: μ-introduction) In this case, a ≡ μu.b, and b : A under the additional assumption
u : ¬A. By a suitable renaming, we have θa ≡ μu.θb. Take any c̄ ∈ ρ(A)^⊥ and put
b′ ≡ b[c̄ ⋆ u]. By IH, we have θb′ ∈ ρ(A).
Hence (θb′)c̄ ∈ SN, therefore by Clause 4 of Lemma 3.5,
we have μu.((θb′)c̄) ∈ SN. By Clause 2 of Lemma 3.5, (μu.θb)c̄ ∈ SN. Consequently,
θa ∈ ρ(A)^⊥ → SN = ρ(A).
(Case: ⊥-elimination) In this case, a ≡ abort(b). By IH, θb ∈ ρ(⊥) ⊆ SN. By Clause 3 of Lemma
3.2, we have θa ≡ abort(θb) ∈ ρ(A). □
By choosing bᵢ ≡ xᵢ (1 ≤ i ≤ m) and c̄ⱼ ≡ ⟨⟩ (1 ≤ j ≤ n) in
the theorem above, we obtain a ∈ ρ(A) for any term a of type A and any interpretation ρ.
Since there exists an interpretation ρ, and ρ(A) ⊆ SN, we have the following theorem:
Corollary 3.1
NDμ is strongly normalizing.
4. Translation of the Catch/Throw Calculi into NDμ
This section gives translations from the catch/throw calculi into NDμ. In the following
we give only the translations from the classical catch/throw calculi L^K_{c/t} and NK_{c/t}, but the
translations also work for L_{c/t} and NJ_{c/t}, since these are subcalculi.
4.1. Translation of Nakano's Calculus
We shall translate L^K_{c/t} into NDμ. The translation is the same as the standard encoding
of propositional logic in second-order logic, except for the catch/throw constructs.
First, we translate types: each atomic type K (other than ⊥) of L^K_{c/t} is mapped to a
distinct type variable of NDμ, ⊥ is mapped to ⊥, and ∧, ∨, → are translated as usual.
The point here is that the type A ▷ B is translated to ¬B̄ → Ā. This translation reflects
our intention that the tag-abstraction is translated to the λ-abstraction.
We then translate the preterms of L^K_{c/t} into NDμ. We assume that, for each individual
variable x^A of L^K_{c/t}, x is a variable of NDμ, and for each tag variable u^A of L^K_{c/t}, u is a
variable of NDμ. We also assume that this mapping on variables is injective.
Preterms are translated as follows (writing ā for the translation of a; the clauses for
pairing, injections, and case follow the standard second-order encoding, e.g.
fst(a) ↦ ā(λx.λy. x) and inj₁(a) ↦ λx.λy. x ā):
x^A ↦ x
abort(a) ↦ abort(ā)
λx^A.a ↦ λx.ā
ab ↦ ā b̄
catch(u^A, a) ↦ μu.ā
throw(u^A, a) ↦ abort(u ā)
αu^B.a ↦ λu.ā
tapp(a, u^B) ↦ ā u
The translation is extended to contexts for variables in the following way. Let Γ be
a context of individual variables {x₁^{A₁}, ..., x_m^{A_m}} and Δ be a context of tag
variables {u₁^{B₁}, ..., u_n^{B_n}} of L^K_{c/t}. Then we define:
Γ̄ ≡ {x₁ : Ā₁, ..., x_m : Ā_m}
Δ̄ ≡ {u₁ : ¬B̄₁, ..., u_n : ¬B̄_n}
Note that the types of tag variables are negated through the translation.
The translation preserves typing and reduction, as we shall see.
Lemma 4.1 (Preservation of Type Assignment)
If Γ ⊢ a : A; Δ is derived in L^K_{c/t}, then Γ̄, Δ̄ ⊢ ā : Ā is derivable in NDμ.
Proof. Since the translation of the propositional connectives is standard, we verify the
other cases only.
(catch) From IH, we have Γ̄, Δ̄ ∪ {u : ¬Ā} ⊢ ā : Ā. Since the translation of {u^A} is {u : ¬Ā}, and
catch(u^A, a)‾ is μu.ā, we can derive Γ̄, Δ̄ ⊢ catch(u^A, a)‾ : Ā by the
μ-introduction rule.
(throw) From IH, we have Γ̄, Δ̄ ∪ {u : ¬Ā} ⊢ ā : Ā. Since
throw(u^A, a)‾ is abort(u ā), we can derive Γ̄, Δ̄ ∪ {u : ¬Ā} ⊢ throw(u^A, a)‾ : B̄ for any B, by the
→-elimination rule and the ⊥-elimination rule.
(tag-abstraction) From IH, we have Γ̄, Δ̄ ∪ {u : ¬B̄} ⊢ ā : Ā. Since αu^B.a‾ is λu.ā
and (A ▷ B)‾ is ¬B̄ → Ā, we can derive Γ̄, Δ̄ ⊢ αu^B.a‾ : (A ▷ B)‾ by the
→-introduction rule.
(tapp) From IH, we have Γ̄, Δ̄ ⊢ ā : ¬B̄ → Ā, where u : ¬B̄ ∈ Δ̄;
by the →-elimination rule, we can derive Γ̄, Δ̄ ⊢ ā u : Ā. □
Lemma 4.2
The translation is compatible with substitution. Namely, a[b/x^A]‾ ≡ ā[b̄/x], and a[v^B/u^B]‾ ≡
ā[v/u].
This lemma is proved by straightforward induction on the construction of a, and is
omitted.
Lemma 4.3 (Preservation of Reduction)
If a and b are typable terms and a →₁ b in L^K_{c/t}, then ā →⁺ b̄ in NDμ.
Proof.
This lemma is proved by induction on the structure of the term a. We prove the key
cases only.
1. a[throw(u^A, b)/x] →₁ throw(u^A, b) (where a ≢ x and x ∈ FV(a)):
By Lemma 4.2, we have a[throw(u^A, b)/x]‾ ≡ ā[abort(u b̄)/x]. By induction on the
term a, we have ā[abort(c)/x] →⁺ abort(c) for any c, so we have ā[abort(u b̄)/x] →⁺
abort(u b̄).
2. catch(u^A, a) →₁ a with u^A ∉ FTV(a):
Since u ∉ FV(ā), we have catch(u^A, a)‾ ≡ μu.ā →₁ ā.
3. catch(u^A, throw(u^A, a)) →₁ a with u^A ∉ FTV(a):
We have catch(u^A, throw(u^A, a))‾ ≡ μu.abort(u ā), with u ∉ FV(ā), and this term reduces to ā.
4. tapp(αu.a, v) →₁ a[v/u]:
We have tapp(αu.a, v)‾ ≡ (λu.ā)v, and it reduces to ā[v/u]. By Lemma 4.2, a[v/u]‾ ≡
ā[v/u], hence we are done. □
From the above lemmas, we have the following theorem.
Theorem 4.1
The system L^K_{c/t} is strongly normalizing. Hence L_{c/t} is strongly normalizing.
Remark. The translation from L^K_{c/t} into NDμ does not really need the second-order
quantifier. Namely, if we eliminate ∀, and add ∧ and ∨ to NDμ, then we can translate
L^K_{c/t} into this modified calculus. Since we can prove the SN of this modified calculus by an
elementary method as in [16], we can also prove the SN of L^K_{c/t} by an elementary method.
However, we do need the second-order quantifier ∀ in translating NK_{c/t}, as we shall see in
the next section. We therefore proved the SN of L^K_{c/t} based on the reducibility method.
4.2. Translation of Sato's Calculus
In this subsection we translate NK_{c/t} into NDμ.
Before defining the translation, we try to give a naive translation from NK_{c/t} into L^K_{c/t},
and explain why it fails. A natural candidate for the translation is:
A° ≡ A for any type A
(?u^B.a)° ≡ catch(u^{C∨B}, inj₁(a°))
(!u^B b)° ≡ throw(u^{C∨B}, inj₂(b°))
(tapply(a, u^B))° ≡ case(a°, (x)x, (y)throw(u^{C∨B}, inj₂(y)))
where the type C will be supplied by the type inference of each term. At this moment,
let us ignore how to obtain this C. By the above translation we can interpret all but one
of the reduction rules of NK_{c/t}. The only exception is the rule tapply(?u.a, v) →₁ a[v/u].
Its left-hand side is interpreted as
(tapply(?u.a, v))° ≡ case(catch(u, inj₁(a°)), (x)x, (y)throw(v, inj₂(y))),
which does not necessarily reduce to (a[v/u])°. Hence the naive translation from NK_{c/t} into
L^K_{c/t} fails. Moreover, it seems very difficult to find a suitable extension of L^K_{c/t} which is
strongly normalizing and which can reduce the above term to (a[v/u])°.
However, the situation changes if we consider the second-order calculus, where the disjunctive
type is no longer primitive, but is defined as A ∨ B ≡ ∀X.((A → X) → (B → X) → X).
As we shall see later, the term tapply(?u.a, v) reduces to a[v/u] through this
encoding.
Now let us define the translation from NK_{c/t} into NDμ. The translation of types is
the same as in the translation from L^K_{c/t} into NDμ. The translations of the preterms other
than ?, !, and tapply are also the same. The translation of the new constructs
is defined as follows:
(?u^B.a)‾ ≡ μu.λz.λw. z ā
(!u^B a)‾ ≡ abort(u(λz.λw. w ā))
(tapply(a, u^B))‾ ≡ ā(λx.x)(λy.abort(u(λz.λw. w y)))
where we assume that x, y, z, w are not used in the term a. This translation may look complex, but
it is the result of the second-order encoding of the above naive translation from NK_{c/t} into
L^K_{c/t}.
The translation is extended to contexts of individual variables in the same way as
before.
For a context of tag variables Δ, we need to change the translation, since a tag variable
of type B should be translated to a variable of type ¬(C ∨ B), where C is the type of the
body of the enclosing ?-expression. In other words, we cannot determine the type C until
we reach the enclosing ?-expression. To solve this problem, we introduce a mapping γ
from a set of tag variables (of NK_{c/t}) to the set of types of NDμ, and we make the
translation of contexts of tag variables dependent on γ. Let Δ be a context of tag
variables {u₁^{B₁}, ..., u_n^{B_n}}, and γ be a mapping from {u₁^{B₁}, ..., u_n^{B_n}} to types of
NDμ. We define
Δ̄_γ ≡ {u₁ : ¬(γ(u₁) ∨′ B̄₁), ..., u_n : ¬(γ(u_n) ∨′ B̄_n)}
In this definition, ∨′ is an abbreviation defined as C ∨′ D ≡ ∀X.((C → X) → (D → X) → X)
(which is the same as the result of the translation of ∨).
Lemma 4.4 (Preservation of Type Assignment)
If Γ ⊢ a : A; Δ is derivable in NK_{c/t}, then, for any mapping γ whose domain contains all
the tag variables of Δ, the judgment Γ̄, Δ̄_γ ⊢ ā : Ā is derivable in NDμ.
Proof. This is proved by induction on the derivation of Γ ⊢ a : A; Δ. We only verify
the key cases.
(?) We have to derive Γ̄, Δ̄_γ ⊢ (?u^B.a)‾ : Ā ∨′ B̄ for any γ. Fix a mapping
γ. Suppose u^B ∈ FTV(a) (otherwise the proof is shorter), and put Δ′ ≡ Δ ∪ {u^B}. Let
γ′ be the mapping such that γ′(v) ≡ γ(v) for v ≢ u and γ′(u) ≡ Ā. From IH, we
can derive Γ̄, Δ̄′_{γ′} ⊢ ā : Ā. We have Δ̄′_{γ′} ≡ Δ̄_γ ∪ {u : ¬(Ā ∨′ B̄)}. Also,
(?u^B.a)‾ is μu.λz.λw. z ā. From these
facts, we can derive the desired type inference by the μ-introduction rule.
(!) We have to derive Γ̄, Δ̄_γ ⊢ (!u^B a)‾ : C̄ for any γ and C. Fix any γ. From IH,
we can derive Γ̄, Δ̄_γ ⊢ ā : B̄. We have u : ¬(γ(u) ∨′ B̄) ∈ Δ̄_γ, and (!u^B a)‾
is abort(u(λz.λw. w ā)). We can derive the desired type inference by the ⊥-elimination
rule.
(tapply) We have to derive Γ̄, Δ̄_γ ⊢ (tapply(a, u^B))‾ : Ā for any γ.
From IH, we can derive Γ̄, Δ̄_γ ⊢ ā : Ā ∨′ B̄. We have u : ¬(γ(u) ∨′ B̄) ∈ Δ̄_γ,
and tapply(a, u^B)‾ is ā(λx.x)(λy.abort(u(λz.λw. w y))). By calculating the type
of this term, we can derive the desired type inference. □
The next lemma is used in proving the preservation of reduction through the translation.
Lemma 4.5
Let a be a typable term of NK_{c/t}. Let θ be the substitution [λx.v(x t₁ t₂)/v], where v
is the result of the translation of a tag variable v^B of NK_{c/t}, t₁ is λx.x, and t₂ is
λy.abort(u(λz.λw. w y)). Then we have θā →* (a[u^B/v^B])‾ in NDμ.
Proof. We prove this lemma by induction on the structure of the term a. We state
here the key cases only.
(Case a ≡ !v^B b) We have the following reduction sequence:
θ((!v^B b)‾) ≡ abort((λx.v(x t₁ t₂))(λz.λw. w θb̄)) →⁺ abort(v(t₂ θb̄))
≡ abort(v(abort(u(λz.λw. w θb̄)))) →₁ abort(u(λz.λw. w θb̄)) →* (!u^B b[u/v])‾.
(Case a ≡ tapply(b, v^B)) We have the following reduction sequence:
θ((tapply(b, v^B))‾) ≡ (θb̄) t₁ (λy.abort((λx.v(x t₁ t₂))(λz.λw. w y)))
→⁺ (θb̄) t₁ (λy.abort(u(λz.λw. w y))) →* (tapply(b[u/v], u^B))‾. □
Lemma 4.6 (Preservation of Reduction)
If a and b are typable terms and a →₁ b in NK_{c/t}, then ā →⁺ b̄ in NDμ.
Proof.
We only check the key cases (the tapply-expressions). In the following we put t₁ ≡ λx.x,
and t₂ ≡ λy.abort(u(λz.λw. w y)).
1. tapply(inj₁(a), u^B) →₁ a:
We have tapply(inj₁(a), u^B)‾ ≡ (λz.λw. z ā)t₁t₂ →⁺ t₁ ā →₁ ā.
2. tapply(inj₂(a), u^B) →₁ !u^B a:
We have tapply(inj₂(a), u^B)‾ ≡ (λz.λw. w ā)t₁t₂ →⁺ t₂ ā →₁ abort(u(λz.λw. w ā)). The
last term is (!u^B a)‾.
3. tapply(?v.a, u^B) →₁ a[u/v]:
We have:
tapply(?v.a, u^B)‾ ≡ (μv.λz.λw. z ā)t₁t₂ →⁺ μv.((λz.λw. z θā)t₁t₂) →⁺ μv.θā
→* μv.(a[u/v])‾ →₁ (a[u/v])‾
where θ is the substitution of Lemma 4.5 (the last step uses that v is no longer free).
Hence we have the desired property. □
From these lemmas we have the following theorem.
Theorem 4.2
The system NK_{c/t} (and hence NJ_{c/t}) is strongly normalizing.
Remark. In our proof, the use of the second-order quantifier ∀ is indispensable for
giving a translation of NK_{c/t}. Since NK_{c/t} is a first-order system, one may think that our
proof used too strong a method, and that the SN of NK_{c/t} could be proved by a more
elementary method. At present, we do not have an answer to this question. Our attempts to
apply an elementary method to NK_{c/t} have not been successful.
5. Extension of the Catch/Throw Calculi
Having given the translation into NDμ, it is easy to introduce the second-order quantifier
into the four catch/throw calculi without loss of the nice properties such as strong
normalization. Since the catch/throw calculi are formulated in the Church style (variables
are explicitly labeled with their types), we should introduce term-constructs for type
abstraction/application. As usual, we let ΛX.a denote the former and aX the latter.
The typing rules are given as follows:
from Γ ⊢ a : A; Δ derive Γ ⊢ ΛX.a : ∀X.A; Δ  (*)
from Γ ⊢ a : ∀X.A; Δ derive Γ ⊢ aB : A[B/X]; Δ
In the ∀-introduction rule (marked (*)), X may not occur freely in Γ nor Δ.
Also, the reduction rule (ΛX.a)B →₁ a[B/X] is added. By adding these rules to L^K_{c/t},
we obtain a second-order catch/throw calculus L^{K2}_{c/t}. Similarly we can obtain NK²_{c/t}.
The calculi L^{K2}_{c/t} and NK²_{c/t} enjoy nice properties such as subject reduction and the
strong normalization. Here we briefly mention the expressivity of these calculi.
Since free structures such as the integers and the binary trees can be encoded by the second-order
quantifier [6], we can define functions with the catch/throw mechanism over various data
types in the extended calculi.
For instance, the function multiply mentioned before can be typed in NK²_{c/t} using the
standard iterators
It_Int ≡ ΛU.λu.λf.λt. t U u f
It_IntList ≡ ΛW.λw.λf.λt. t W w f
Here Times(a, b) is the multiplication of two integers a and b, defined as usual. It_Int
and It_IntList are iterators on the types Int and IntList. The term It_Int(a, (z)b, x)
with x ∉ FV(b) is the if-then-else expression; namely, it reduces to a if x is 0, and to b
otherwise. It is easily seen that the above function multiply performs the same computation
as the one given in the introduction.
Since the above representation of free structures is not so good computationally [6], we
may consider another direction of extension; namely, we may add inductive data types.
In [8], the first author already proposed adding inductive data types to NJ_{c/t} without
loss of the SN of the calculus, and showed that higher-order functions which use the
catch/throw mechanism can be represented in the extended calculus. However, we have
not fully studied this direction for the classical catch/throw calculi, so it is left for future
work.
6. Concluding Remarks
We have investigated the four catch/throw calculi by Nakano and Sato, in particular
the calculi which correspond to classical logic through the Curry-Howard isomorphism.
We defined a non-deterministic variant of Parigot's λμ-calculus, and proved the strong
normalizability of this variant. We gave faithful translations from the catch/throw calculi
into this variant, and as a corollary, we obtained the strong normalizability of the four
calculi. We also discussed their extensions briefly.
Recently, Fujita [4] studied λ_exc, a call-by-name variant of de Groote's formulation of the
exception mechanism of Standard ML. His calculus is a subcalculus of the first-order version
of the λμ-calculus. Since the catch/throw mechanism and the exception mechanism are
essentially the same, his motivation and ours are similar. The main differences between his calculus
and our NDμ are that (1) his calculus is confluent, while ours is non-deterministic,
so we have more computations; (2) he uses the first-order version (actually, the implicational
fragment), while we use the second-order version; and (3) his calculus has two sorts
of variables (reminiscent of individual variables and tag variables), while we use one sort
of variables, thus we can directly abstract over tags.
Crolard [2] also studied a confluent calculus for the catch/throw mechanism. Since his
calculus can be translated into Parigot's λμ-calculus, it is similar to Fujita's formulation,
and thus differs from the calculi studied in this paper.
Extracting algorithmic contents from classical proofs is now a quite active research
area. Many researchers in this area aim at obtaining confluent calculi for classical logic.
However, classical logic is said to be inherently non-deterministic; namely, classical proofs
may contain multiple computational meanings. Therefore, if we want to represent as
many computational meanings as possible, it is natural to begin with non-deterministic
calculi. Our approach is to design and study non-deterministic calculi first, and then to study
confluent subcalculi. We believe that the catch/throw calculi presented in this paper can
be a good basis for this approach. Barbanera and Berardi's calculus [1] is another non-deterministic
calculus for classical proofs, so their calculus could also be a good basis.
Further studies on extracting computational meaning from classical proofs are left for
future work.
Acknowledgement
We would like to express our heartfelt thanks to Hiroshi Nakano, Makoto Tatsuta, and
Izumi Takeuti for helpful comments on earlier works. We also thank Ken-etsu Fujita
for pointing out references and errors, and the anonymous referees for valuable comments
for improvement. The author is supported in part by a Grant-in-Aid for Scientific Research
from the Ministry of Education, Science and Culture of Japan, No. 09780266 and No.
10143105.
--R
Proofs and Types (Cambridge University Press
"A Classical Catch/Throw Calculus with Tag Abstractions and its Strong Normalizability"
--TR
Common LISP: the language
Proofs and types
A formulae-as-type notion of control
Intuitionistic and classical natural deduction systems with the catch and the throw rules
Lambda-Mu-Calculus
Extracting Constructive Content from Classical Logic via Control-like Reductions
A Simple Calculus of Exception Handling
Classical Brouwer-Heyting-Kolmogorov Interpretation
--CTR
Emmanuel Beffara , Vincent Danos, Disjunctive normal forms and local exceptions, ACM SIGPLAN Notices, v.38 n.9, p.203-211, September | catch and throw;classical logic;type system;strong normalizability |
506345 | Least and greatest fixed points in intuitionistic natural deduction. | This paper is a comparative study of a number of (intensional-semantically distinct) least and greatest fixed point operators that natural-deduction proof systems for intuitionistic logics can be extended with in a proof-theoretically defendable way. Eight pairs of such operators are analysed. The exposition is centred around a cube-shaped classification where each node stands for an axiomatization of one pair of operators as logical constants by intended proof and reduction rules and each arc for a proof- and reduction-preserving encoding of one pair in terms of another. The three dimensions of the cube reflect three orthogonal binary options: conventional-style vs. Mendler-style, basic ("[co]iterative") vs. enhanced ("primitive-[co]recursive"), simple vs. course-of-value [co]induction. Some of the axiomatizations and encodings are well known; others, however, are novel; the classification into a cube is also new. The differences between the least fixed point operators considered are illustrated on the example of the corresponding natural number types. | Introduction
This paper is a comparative study of a number of least and greatest fixed point
operators, or inductive and coinductive definition operators, that natural-deduction
(n.d.) proof systems for intuitionistic logics (typed lambda calculi
with product and sum types) can be extended with as logical constants
(type-language constants), either by an axiomatization by intended proof and
reduction rules ("implicit definition") or by a proof- and reduction-preserving
encoding in terms of some logical constants already present ("explicit definition").
One of the reasons why such logical or type-language constants are
interesting lies in their useful programming interpretation: inductive types
behave as data types, their introductions as data constructors and their eliminations
as recursors; coinductive types may be viewed as codata types, their
introductions as corecursors and their eliminations as codata destructors. In the
literature, a fairly large number of axiomatizations and encodings of both
particular [co]inductively defined types and general [co]inductive definition
operators can be found, see e.g. [1,14,19,20,24,25,15,7]. The paper grew out
of a wish to better understand their individual properties and their relations
to each other.
The contribution of the paper consists in a coordinated analysis of eight
intensional-semantically distinct pairs of [co]inductive definition operators, arranged
into a cube-shaped taxonomy, which resulted from an attempt to fit
the various known axiomatizations and encodings into a single picture and to
find fillers for the holes. Each node of the cube stands for an axiomatization
by proof and reduction rules of one pair of logical constants, and each arc for
a proof- and reduction-preserving encoding of one pair in terms of another.
Some axiomatizations and encodings rely on the presence in the system of certain
other logical constants (the standard propositional connectives, 2nd-order
quantifiers, or a "retractive" recursive definition operator ρ). The three dimensions
of the cube reflect three orthogonal binary choices: conventional-style vs.
Mendler-style, basic ("[co]iterative") vs. enhanced ("primitive-[co]recursive"),
simple vs. course-of-value [co]induction.
The cube looks as follows:
(Cube diagram: the eight vertices are the pairs μ/ν, m/n, μ^q/ν^q, m^q/n^q and their course-of-value counterparts.)
μ and ν (with optional superscripts) are conventional-style inductive and coinductive
definition operators; m and n (with optional superscripts) are their Mendler-style
counterparts. The superscript 'q' marks the "enhanced" feature; a further
superscript marks the "course-of-value" feature.
The distinctions between basic and enhanced, and between simple and course-of-value
[co]induction, are distinctions between essentially different forms of [co]induction,
with different associated schemes of (total) [co]recursion. Basic [co]induction
gives [co]iteration; enhanced [co]induction gives (full) primitive [co]recursion.
All axiomatizations and encodings we have found in the literature deal with
the simple forms of [co]induction. The axiomatizations and encodings for course-of-value
[co]induction in this paper are, we think, ours.
The difference between conventional- and Mendler-style [co]induction (named
after Mendler [19,20]) is more technical and harder to spell out informally,
but not shallow. A conventional-style [co]inductive definition operator applies
to a proposition-function only if it is positive; the associated reduction rule
then refers to a proof of its monotonicity (all positive proposition-functions
are monotonic wrt the preorder of inclusion). Mendler-style operators apply
to arbitrary proposition-functions. The axiomatizations of the enhanced and course-of-value
conventional-style operators rely on the presence in the system of other
logical constants; those of the Mendler-style operators do not. Thus, in more than
one sense, Mendler-style operators are more uniform than conventional-style
operators; resorting to programming jargon, one might for instance want to
say that the Mendler-style operators are generic, whereas the conventional-style
ones are only polytypic. These uniformity features have a price:
the proof rules of the Mendler-style operators involve implicit ("external")
2nd-order quantification at the level of premisses.
Throughout the paper, the semantics that we keep in mind is intensional, so
we only consider β-reduction, not η-conversion.
By natural deduction, we mean a proof system style where instead of axioms
involving implications and universal quantications we systematically prefer
to have proof rules involving hypothetical and schematic judgements (\exter-
nalized" implications and universal quantications), in sharp contrast to the
Hilbert style of proof systems. For us therefore, natural deduction is really
the \extended" natural deduction of Schroeder-Heister [30,31]: we allow proof
rules to be of order higher than two: not only may conclusions have premisses
and these have premisses in their turn, but even the latter may be hypo-
thetical. This choice makes axiomatizations of dierent logical constants very
compact, but on the expense of certain added complexity in their encodings
in terms of other logical constants.
In order to compactify the notation and to get around the technicalities related
to -conversion and substitution, we use a simple meta-syntax, a higher-oder
abstract syntax derived from logical frameworks such as de Bruijn's AUT-PI
and AUT-QE [5], Martin-Lof's system of arities [22, chapter 3], and Harper,
Honsell, and Plotkin's LF [12]. denotes the schematization of
s wrt. x denotes the instantiation of s with
Schematization and instantiation are stipulated to satisfy the following rules:
are not free
in s, then
We have made an effort to make the paper self-contained; for the omitted
details, we refer to Uustalu [35]. A preliminary report of the present work
appeared as [37]. We also refer to Matthes [17], an in-depth study of extensions
of system F with constructors of basic and enhanced conventional- and
Mendler-style inductive types, which, in regard to the clarification of the relationship
between conventional- and Mendler-style induction, builds partly
upon our work.
The paper is organized as follows. In section 2, we lay down our starting point:
it is given by the systems that we denote NI and NI², the n.d. proof systems
for 1st- and 2nd-order intuitionistic propositional logics, optionally extended
with a "retractive" recursive definition operator ρ. Then, in section 3, we first
present the basic [co]induction operators, both in conventional and Mendler
style, and then continue with their encodings in terms of the 2nd-order quantifiers
and each other. In sections 4 and 5, we describe the enhanced [co]induction
and course-of-value [co]induction operators, respectively, and their encodings
via the operators of the basic kind. In section 6, we give a survey of related
work on inductive and coinductive types. Finally, in section 7, we conclude
and mention some directions for future work.
Preliminaries
In principle, the [co]inductive definition operators described below can be
added to the n.d. proof system of any intuitionistic propositional logic. (They
also admit a straightforward generalization for predicate logics.) The most
natural base system for such extensions, however, is NI, the standard n.d.
proof system for (full) 1st-order intuitionistic propositional logic. The logical
constants of NI are ∧ (conjunction), ∨ (disjunction), ⊤ (verum), ⊥ (falsum),
and → (implication). These propositional connectives are axiomatized by the
proof and reduction rules listed in Figure 1. (To save space, the reduction rules
are given not for proofs, but for (untyped) term codes of proofs; the reduction
rules for proofs are easy to recover. The reduction relation on terms satisfies
subject reduction.)
Figure 1: Proof and reduction rules for the standard propositional connectives.
Another important base system is NI², the n.d. proof system for 2nd-order
intuitionistic propositional logic. This system extends NI with ∀² and ∃², the
standard 2nd-order quantifiers. The proof rules for ∀² and ∃² are presented in
Figure 2.
Figure 2: Proof and reduction rules for ∀² and ∃².
In the encodings of enhanced [co]induction in terms of basic [co]induction,
we shall need a logical constant ρ, a "retractive" recursive definition operator.
This is a proposition-valued operator on proposition-functions that are
positive. The proof and reduction rules for ρ appear in Figure 3. The introduction
and elimination rules for ρ (with proof terms i and o) behave as an embedding-retraction pair. The
extensions of NI and NI² with ρ will be denoted by NI(ρ) and NI²(ρ).
Of importance for us is the fact that NI²(ρ) is strongly normalizing (i.e.,
every proof of NI²(ρ) is strongly normalizing); consult Mendler [19,20] and
Urzyczyn [34].
Figure 3: Proof and reduction rules for ρ: i(c) : ρF for c : F(ρF), o(c) : F(ρF) for c : ρF, with the reduction o(i(c)) → c.
The syntactic concepts of positivity and negativity of proposition-functions
are system-dependent. For any particular system, these concepts are defined
by mutual structural induction on the proposition-functions definable in this system.
In NI and its extensions considered in this paper, a proposition-function
(X)F is defined to be positive [negative] if every occurrence of X in F appears
within an even [odd] number of antecedents of implications. Also, for
any particular system and by a similar induction, explicit definitions can be
given for the derivable proof rules M and M⁺ establishing that positive [negative]
proposition-functions are monotonic [antimonotonic] wrt the preorder
of proposition inclusion. These proof rules appear in Figure 4.
Figure 4: Derivable proof rules M and M⁺ (for (X)F positive and negative, respectively); we write map_F for the proof term of M.
As an example, we shall consider the proposition-function N defined by setting
N ≡ (X)(⊤ ∨ X).
N is obviously positive. The corresponding monotonicity witness map_N is defined
as follows:
map_N(c, e) ≡ case(c, (a)inl(a), (b)inr(e(b)))
3 Basic [co]induction
The logical constants from the two lower front nodes of the cube provide the
most fundamental forms of [co]inductive definition of propositions, viz. the
basic (in other words, "[co]iterative") forms of conventional- and Mendler-style
[co]inductive definition. μ and ν are the operators of conventional-style induction
and coinduction and apply to positive proposition-functions; m and n
are their Mendler-style counterparts, applicable without restrictions to any
proposition-functions. Their proof and reduction rules are given in Figures 5
and 6. The proof rules for m and n are more complex than those for μ and
ν, but their reduction rules, in compensation, are simpler and more uniform:
their right-hand sides do not refer to the M proof rule.
Figure 5: Proof and reduction rules for μ and ν, with the reduction rules
cata_F(wrap_F(c), e) → e(map_F(c, (x)cata_F(x, e)))
open_F(ana_F(c, e)) → map_F(e(c), (x)ana_F(x, e))
Figure 6: Proof and reduction rules for m and n, with the reduction rules
iter(mapwrap(c, d), e) → e(c, (x)iter(d(x), e))
mapopen(coit(c, e), d) → e(c, (x)d(coit(x, e)))
From the algebraic-semantics point of view, μF is a least prefixed point of F
wrt the inclusion preorder of propositions: it is both itself a prefixed point
of F (by the μI-rule) and a lower bound of the set of all prefixed points of F
(by the μE-rule). (Recall that R is said to be a prefixed point of F if F(R)
is less than R.) νF, dually, is a greatest postfixed point of F.³ Since a least
prefixed [greatest postfixed] point of a monotonic function is also its least [greatest]
fixed point, μF and νF are also least and greatest fixed points of F.
In a similar fashion, mF can be thought of as a least robustly prefixed point
of F: it is both itself a robustly prefixed point of F and a lower bound of all
robustly prefixed points of F. Here, R is considered to be a robustly prefixed
point of F if not only is F(R) less than R, but F(Y) is less than R for all
Y's less than R. But mF is also a least (ordinary) prefixed point of the function
F^e [F^e(R) ≡ ∃²((Y)((Y → R) ∧ F(Y)))] sending any R to a supremum of the
set of all F(Y)'s such that Y is less than R. F^e (which is always positive)
appears to be a least monotonic majorant of F wrt the pointwise "lifting" of
the inclusion preorder of propositions to a preorder of proposition-functions.
If F is monotonic, then F and F^e are equivalent (pointwise). The dualization
is obvious: nF is a greatest robustly postfixed point of F and a greatest
(ordinary) postfixed point of the function F^a [F^a(R) ≡ ∀²((Y)((R → Y) → F(Y)))]
sending any R to an infimum of the set of all F(Y)'s such that Y is greater
than R.
Under the programming interpretation, μF is a data type, with wrap_F a data
constructor and cata_F an iterator, and νF is a codata type, with ana_F a
coiterator and open_F a codata destructor, in the most standard sense. mF,
with mapwrap and iter, and nF, with coit and mapopen, are Mendler-style
versions of these things. This is best explained on an example.
The type of standard natural numbers Nat, with zero and succ the constant
zero and the successor function and natcata the iterator, is normally axiomatized
as follows:
zero : Nat,   succ(c) : Nat for c : Nat
natcata(c, e_z, e_s) : C for c : Nat, e_z : C, and e_s(x) : C for x : C
natcata(zero, e_z, e_s) → e_z
natcata(succ(c), e_z, e_s) → e_s(natcata(c, e_z, e_s))
These typing and reduction rules are essentially nothing else than those for
conventional basic induction with N as the underlying proposition-function.
Indeed, making the following definitions ensures the required typing and reduction
properties:
³ Note here that, in a preorder (also in a Heyting algebra), it may turn out that
all monotonic functions have least [greatest] prefixed [postfixed] points; hence allowing
μ and ν to apply to any positive F should not lead to inconsistencies (the
encodability of μ, ν in terms of ∀², ∃² demonstrates that this is indeed the case).
Nat ≡ μ(N)
zero ≡ wrap_N(inl(⟨⟩))
succ(c) ≡ wrap_N(inr(c))
natcata(c, e_z, e_s) ≡ cata_N(c, (y)case(y, (a)e_z, (b)e_s(b)))
This suggests a similar specialization of Mendler-style basic induction for N
by the following definitions:
Nat ≡ m(N)
mapzero(d) ≡ mapwrap(inl(⟨⟩), d)
mapsucc(c, d) ≡ mapwrap(inr(c), d)
The type Nat of Mendler-style natural numbers, with mapzero, mapsucc and
natiter the Mendler-style constant zero, successor function, and iterator, obeys
the following typing and reduction rules:
mapzero(d) : Nat for d(x) : Nat for x : Q
mapsucc(c, d) : Nat for c : Q and d(x) : Nat for x : Q
natiter(mapzero(d), e_z, e_s) → e_z((x)natiter(d(x), e_z, e_s))
natiter(mapsucc(c, d), e_z, e_s) → e_s(c, (x)natiter(d(x), e_z, e_s))
Here, it may be helpful to think of Q as some chosen type of representations
for naturals and d as a method for converting representations of this type into
naturals. A natural, hence, is constructed from nothing or from a representation
(of its predecessor), together with a method for converting representations to
naturals. Using Nat as Q, the standard constructors of naturals are definable
as follows:
zero ≡ mapzero((x)x)
succ(c) ≡ mapsucc(c, (x)x)
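The computational content of Mendler-style iteration can be sketched in Common Lisp on ordinary machine integers (nat-miter and its argument convention are our own model, not part of the calculus): the step function receives the representation of the predecessor together with a conversion function, just as in the typing of natiter.
(defun nat-miter (n e-z e-s)
  (if (zerop n) e-z
      (funcall e-s (1- n)                            ; representation of the predecessor
               (lambda (x) (nat-miter x e-z e-s))))) ; conversion: representation -> result
;; Addition by Mendler-style iteration:
(defun plus (m n)
  (nat-miter m n (lambda (x g) (1+ (funcall g x)))))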
natcata and natiter are iterators. Iteration is a very simple form of total recursion:
the result of an iteration on a given natural only depends on the
result on the predecessor. If the "straightforward" definition of a function follows
some more complex form of recursion, then definitions by iteration can
get clumsy. The factorial of a given natural, for instance, depends not only
on the factorial of its predecessor, but also on the predecessor itself. An iterative
definition of the factorial has to define both the factorial and the identity
function "in parallel" and then project the factorial component out:
fact(c) ≡ fst(natcata(c, ⟨succ(zero), zero⟩, (p)⟨times(succ(snd(p)), fst(p)), succ(snd(p))⟩))
Exactly the same trick of "tupling" is also needed to program the Fibonacci
function: the Fibonacci of a given natural number depends not only on the
Fibonacci of its predecessor, but also on the Fibonacci of its pre-predecessor.
An iterative definition of Fibonacci has to define both Fibonacci and the "one-step-behind
Fibonacci" "in parallel":
fibo(c) ≡ fst(natcata(c, ⟨zero, succ(zero)⟩, (p)⟨snd(p), plus(fst(p), snd(p))⟩))
fibo(c) ≡ fst(natiter(c, (-)⟨zero, succ(zero)⟩, (x, g)⟨snd(g(x)), plus(fst(g(x)), snd(g(x)))⟩))
These examples show how other forms of recursion can be captured by iteration
using "tupling". Such modelling is not without drawbacks, however.
First, it is more transparent to define a function using its "native" form of recursion.
Second, the intensional behavior of iterative definitions is not always
satisfactory. It is well known, for instance, that the predecessor function can
be programmed using iteration, but the programs take linear time to compute
(and only work as desired on numerals, i.e., closed natural number terms):
pred(c) ≡ fst(cata_N(c, (y)case(y, (a)⟨zero, zero⟩, (p)⟨snd(p), succ(snd(p))⟩)))
pred(c) ≡ fst(iter(c, (y, g)case(y, (a)⟨zero, zero⟩, (b)⟨snd(g(b)), succ(snd(g(b)))⟩)))
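The tupling trick and its cost can be observed concretely in Common Lisp (nat-iter is our own stand-in for natcata on numerals):
(defun nat-iter (n z s)
  (if (zerop n) z (funcall s (nat-iter (1- n) z s))))
;; Factorial by tupling <fact k, k>:
(defun fact (n)
  (car (nat-iter n (cons 1 0)
                 (lambda (p) (cons (* (car p) (+ 1 (cdr p)))
                                   (+ 1 (cdr p)))))))
;; Predecessor by tupling <pred k, k>: linear in n, like the terms above.
(defun pred (n)
  (car (nat-iter n (cons 0 0)
                 (lambda (p) (cons (cdr p) (+ 1 (cdr p)))))))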
The more complex forms of induction considered in the following sections
remedy these problems by offering more advanced forms of recursion.
Basic [co]induction vs. 2nd-order quantifiers
Both μ, ν and m, n can be encoded in terms of ∀², ∃² in a proof- and reduction-preserving
manner.
Proposition 1 The following is a proof- and reduction-preserving encoding
of μ, ν in terms of ∀², ∃²:
μF ≡ ∀²((X)((F(X) → X) → X))
wrap_F(c) ≡ (e)e(map_F(c, (x)x(e)))
cata_F(c, e) ≡ c(e)
νF ≡ ∃²((X)((X → F(X)) ∧ X))
ana_F(c, e) ≡ ⟨e, c⟩
open_F(c) ≡ map_F(fst(c)(snd(c)), (x)⟨fst(c), x⟩)
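The pair shape of the coinductive half of this encoding (ana_F(c, e) ≡ ⟨e, c⟩) has a direct computational reading: a codata value is just a seed together with the step that unfolds one observation. A Common Lisp sketch for streams (all names are ours):
(defstruct costream seed step)
(defun co-head (s) (car (funcall (costream-step s) (costream-seed s))))
(defun co-tail (s)
  (make-costream :seed (cdr (funcall (costream-step s) (costream-seed s)))
                 :step (costream-step s)))
;; ana: the whole codata value is the pair <step, seed>.
(defvar *nats* (make-costream :seed 0 :step (lambda (n) (cons n (+ n 1)))))
;; (co-head (co-tail *nats*)) => 1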
This encoding is a proof-theoretic recapitulation of the Knaster-Tarski fixed
point theorem [33], stating that an infimum [supremum] of the set of all prefixed
[postfixed] points of a monotonic function is its least [greatest] prefixed
[postfixed] point. In its general form, the encoding seems to be a piece of folklore.
For the special case of "polynomial" proposition-functions (such as N),
essentially the same encoding was first given by Böhm and Berarducci [1] and
Leivant [14]. For naturals, our encoding specializes to the following:
Nat ≡ ∀²((X)((⊤ ∨ X → X) → X))
zero ≡ (e)e(inl(⟨⟩))
succ(c) ≡ (e)e(inr(c(e)))
natcata(c, e_z, e_s) ≡ c((y)case(y, (a)e_z, (b)e_s(b)))
(In Böhm and Berarducci's encoding, Nat ≡ ∀²((X)(X → (X → X) → X)),
zero ≡ (e_z, e_s)e_z, succ(c) ≡ (e_z, e_s)e_s(c(e_z, e_s)), and natcata(c, e_z, e_s) ≡
c(e_z, e_s).)
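On the programming side, the second-order encoding makes a natural number its own iterator (cata_F(c, e) ≡ c(e)); a Common Lisp sketch in the Böhm-Berarducci style (names are ours):
(defun czero (e-z e-s) (declare (ignore e-s)) e-z)
(defun csucc (c)
  (lambda (e-z e-s) (funcall e-s (funcall c e-z e-s))))
(defun to-int (c) (funcall c 0 #'1+))
;; (to-int (csucc (csucc #'czero))) => 2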
Proposition 2 The following is a proof- and reduction-preserving encoding
of m, n in terms of ∀², ∃²:
mF ≡ ∀²((X)(∀²((Y)(F(Y) → (Y → X) → X)) → X))
mapwrap(c, d) ≡ (e)e(c, (x)d(x)(e))
iter(c, e) ≡ c(e)
nF ≡ ∃²((X)(∀²((Y)(X → (X → Y) → F(Y))) ∧ X))
coit(c, e) ≡ ⟨e, c⟩
mapopen(c, d) ≡ fst(c)(snd(c), (x)d(coit(x, fst(c))))
This encoding builds on the following robust analog of the Knaster-Tarski
fixed point theorem: an infimum [supremum] of the set of all robustly prefixed
[postfixed] points of any function (monotonic or not) is its least [greatest]
robustly prefixed [postfixed] point.
Corollary 3 NI²(ρ) (and also any of its fragments, including NI) extended
with the operators μ, ν or m, n is strongly normalizing and confluent.
Mendler-style vs. conventional [co]induction
It is also possible to encode μ, ν in terms of m, n and vice versa. For the
encoding in the latter direction, ∀², ∃² have to be available.
Proposition 4 The following is a proof- and reduction-preserving encoding
of μ, ν in terms of m, n:
μF ≡ mF
wrap_F(c) ≡ mapwrap(c, (x)x)
cata_F(c, e) ≡ iter(c, (y, g)e(map_F(y, g)))
νF ≡ nF
ana_F(c, e) ≡ coit(c, (x, h)map_F(e(x), h))
open_F(c) ≡ mapopen(c, (x)x)
Proposition 5 The following is a proof- and reduction-preserving encoding
of m, n in terms of μ, ν in the presence of ∃², ∀²:
mF ≡ μ(F^e) where F^e(R) ≡ ∃²((Y)((Y → R) ∧ F(Y)))
mapwrap(c, d) ≡ wrap_{F^e}(⟨d, c⟩)
iter(c, e) ≡ cata_{F^e}(c, (p)e(snd(p), fst(p)))
nF ≡ ν(F^a) where F^a(R) ≡ ∀²((Y)((R → Y) → F(Y)))
coit(c, e) ≡ ana_{F^a}(c, (x)(h)e(x, h))
mapopen(c, d) ≡ open_{F^a}(c)(d)
The encoding of m, n in terms of μ, ν is a proof-theoretic version of the
observation that a least [greatest] prefixed [postfixed] point of F^e [F^a] is a
least [greatest] robustly prefixed [postfixed] point of F.
Enhanced [co]induction
The logical constants from the two upper front nodes of the cube capture the
enhanced (in other words, "primitive-[co]recursive") forms of conventional-
and Mendler-style [co]inductive definition. μ^q and ν^q are the operators of enhanced
induction and coinduction; m^q and n^q are their Mendler-style counterparts.
Their proof and reduction rules are given in Figures 7 and 8. Adding μ^q, ν^q
to a proof system presupposes the presence of ∧, ∨; there is no corresponding
restriction governing the addition of m^q, n^q.
Figure 7: Proof and reduction rules for μ^q and ν^q, with the reduction rules
para_F(wrap^q_F(c), e) → e(map_F(c, (x)⟨para_F(fst(x), e), snd(x)⟩))
open^q_F(apo_F(c, e)) → map_F(e(c), (x)case(x, (a)inl(apo_F(a, e)), (b)inr(b)))
From the algebraic semantics point-of-view, q F is a least \recursive" prexed
point of a given (necessarily monotonic) F , i.e., a least element of the set of
all R's such that F (R ^ q F ) is less than R (note the recurrent occurrence of
q F here!). q F is a greatest \recursive" postxed point of F .
is a least \recursive" robustly prexed point of a given F , ie., a least
element of the set of all R's such that F (Y ) is less than R for all Y 's less
than not only R but also m q F (note again the circularity!). n q F , dually, is a
greatest \recursive" robustly postxed point of F .
For programming, μ q F is a "recursive" data type, with wrap q F a "recursive" data constructor and para F a primitive recursor, and ν q F is a "recursive" codata type, with apo F a primitive corecursor and open q F a "recursive" codata destructor.
Figure 8: Proof and reduction rules for m q and n q . The key reductions are
rec(mapwrap q (c; d; i); e) ≜ e(c; (x) rec(d(x); e); (x) i(x))
mapopen q (cor(c; e); d; i) ≜ e(c; (x) d(cor(x; e)); (x) i(x))
m q F, with mapwrap q and rec, and n q F, with cor and mapopen q , are their Mendler-style equivalents.
Returning to our running example of naturals, specializing enhanced induction for N yields the type Nat q of "recursive" natural numbers, with zero q , succ q and natpara the "recursive" constant zero, "recursive" successor function and primitive recursor.
Nat q ≜ μ q (N)
zero q ≜ wrap q N (inl(⟨⟩))
succ q (c) ≜ wrap q N (inr(c))
The typing and reduction rules for Nat q are the following:
natpara(zero q ; e z ; e s ) ≜ e z
natpara(succ q (⟨c; c′⟩); e z ; e s ) ≜ e s (natpara(c; e z ; e s ); c′)
Note that a non-zero "recursive" natural is constructed from a pair of naturals. In the reduction rule, the first of them is used as the argument for the recurrent applications of the function being defined, while the second one is used directly. In principle, the two naturals can be unrelated, but the normal usage of the construction is that the second natural is equal to the first (the predecessor), so the standard successor function is recovered by duplicating its argument.
zero ≜ zero q
succ(c) ≜ succ q (⟨c; c⟩)
The type Nat q of "recursive" Mendler-style naturals is defined as follows:
Nat q ≜ m q (N)
mapzero q (d; i) ≜ mapwrap q (inl(⟨⟩); d; i)
mapsucc q (c; d; i) ≜ mapwrap q (inr(c); d; i)
Nat q obeys the following typing and reduction rules:
natrec(mapzero q (d; i); e z ; e s ) ≜ e z ((x) natrec(d(x); e z ; e s ); (x) i(x))
natrec(mapsucc q (c; d; i); e z ; e s ) ≜ e s (c; (x) natrec(d(x); e z ; e s ); (x) i(x))
A non-zero \recursive" Mendler-style natural is constructed from a representation
(for the predecessor), a method for converting representations to naturals
and another function from representations to naturals. In the normal usage of
the construction, the second method is also a conversion method. Choosing
Nat q as the type of representations, the standard constructors are obtained
as follows:
zero ≜ mapzero q ((x) x; (x) x)
succ(c) ≜ mapsucc q (c; (x) x; (x) x)
On \recursive" naturals constructed using the standard constructors, natpara
and natrec capture standard primitive recursion. The factorial function, for
instance, can be programmed as follows:
A degenerate application of primitive recursion, which only uses the "direct-access" predecessors of non-zero naturals, gives a fast (constant time) program for the predecessor function:
pred(c) ≜ natpara(c; inl(⟨⟩); (r; n) inr(n))
pred(c) ≜ natrec(c; (–; –) inl(⟨⟩); (x; r; i) inr(i(x)))
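The contrast between iteration and primitive recursion can also be seen over ordinary algebraic naturals; the following Haskell sketch (again my own illustration, with a paramorphism standing in for natpara) shows the constant-time predecessor and the factorial:

data Nat = Zero | Succ Nat

-- Paramorphism: the step function receives both the recursive result and
-- the "direct-access" predecessor, mirroring the two components of natpara.
para :: r -> (r -> Nat -> r) -> Nat -> r
para z _ Zero     = z
para z s (Succ n) = s (para z s n) n

-- Constant time under lazy evaluation: the recursive result is ignored,
-- so only the directly accessible predecessor is ever demanded.
pred' :: Nat -> Maybe Nat
pred' = para Nothing (\_ n -> Just n)

-- Factorial genuinely uses both components: fact (n+1) = (n+1) * fact n.
fact :: Nat -> Integer
fact = para 1 (\r n -> (toInteger' n + 1) * r)

toInteger' :: Nat -> Integer
toInteger' = para 0 (\r _ -> r + 1)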
Enhanced vs. basic [co]induction
Both μ, ν and m, n can be encoded in terms of μ q , ν q and m q , n q . The converse is also true, but only if the retractive recursive definition operator ρ is available.
Proposition 6 The following is a proof- and reduction-preserving encoding of μ, ν in terms of μ q , ν q :
μ(F) ≜ μ q (F)
wrap F (c) ≜ wrap q F (map F (c; (x)⟨x; x⟩))
cata F (c; e) ≜ para F (c; (x) e(map F (x; (p) fst(p))))
ν(F) ≜ ν q (F)
ana F (c; e) ≜ apo F (c; (x) map F (e(x); (y) inl(y)))
open F (c) ≜ map F (open q F (c); (y) case y of inl(z) ⇒ z; inr(z) ⇒ z)
Proposition 7 The following is a proof- and reduction-preserving encoding of m, n in terms of m q , n q :
mapwrap(c; d) ≜ mapwrap q (c; d; d)
iter(c; e) ≜ rec(c; (x; r; –) e(x; r))
coit(c; e) ≜ cor(c; (x; k; –) e(x; k))
mapopen(c; d) ≜ mapopen q (c; d; d)
Proposition 8 The following is a proof- and reduction-preserving encoding of μ q , ν q in terms of μ, ν in the presence of ρ (i and o below denote the coercions into and out of the retractive type):
μ q (F) ≜ μ(F q) where F q (R) ≜ F(R ∧ μ q F)
wrap q F (c) ≜ i(wrap F q (c))
para F (c; e) ≜ cata F q (o(c); e)
ν q (F) ≜ ν(F q) where F q (R) ≜ F(R ∨ ν q F)
apo F (c; e) ≜ i(ana F q (c; e))
open q F (c) ≜ map F q (open F q (o(c)); (x) i(x))
Proposition 9 The following is a proof- and reduction-preserving encoding of m q , n q in terms of m, n in the presence of ρ:
m q (F) ≜ m(F q) where F q (R) ≜ (R → m q F) ∧ F(R)
mapwrap q (c; d; i) ≜ mapwrap(⟨i; c⟩; d)
rec(c; e) ≜ iter(c; (x; r) e(snd(x); r; fst(x)))
n q (F) ≜ n(F q) where F q (R) ≜ (n q F → R) → F(R)
cor(c; e) ≜ coit(c; (x; k)(j) e(x; k; j))
mapopen q (c; d; i) ≜ mapopen(c; d)(i)
In the last two encodings, we would really like to define μ q F ≜ μ(F q) [resp. ν q F ≜ ν(F q)] directly, but cannot (because of the circularity). Resorting to ρ is a way to overcome this obstacle. From the result in [32], it follows that using ρ is a necessity; one cannot possibly do without it.
The first of these encodings is implicit in [25] and [15]. It also appears in [7]. The second seems to be new.
Corollary 10 NI²(∀², ∃², ρ) (and also any fragment of it, including NI) extended with operators μ q , ν q or m q , n q is strongly normalizing and confluent.
5 Course-of-value [co]induction
The logical constants from the two lower rear nodes of the cube capture the course-of-value forms of conventional- and Mendler-style [co]inductive definition. μ ? and ν ? are operators of course-of-value induction and coinduction; m ? and n ? are their Mendler-style counterparts. Their proof and reduction rules are given in Figures 9 and 10. Adding μ ? , ν ? to a proof system presupposes the presence of ∧, ν, ∨, μ; there is no corresponding restriction governing the addition of m ? , n ? .
Figure 9: Proof and reduction rules for μ ? and ν ? , stated using the abbreviations (R ◁ F) ≜ (νZ)(R ∧ F(Z)) and (R ▷ F) ≜ (μZ)(R ∨ F(Z)).
Figure 10: Proof and reduction rules for m ? and n ? .
From the algebraic semantics point-of-view, μ ? F is a least course-of-value prefixed point of a given (necessarily monotonic) F, i.e., a least element of the set of all R's such that F((νZ)(R ∧ F(Z))) is less than R. ν ? F is a greatest course-of-value postfixed point of F.
m ? F is a least course-of-value robustly prefixed point of a given F, i.e., a least element of the set of all R's such that F(Y) is less than R for all Y's less than not only R but also m ? F. n ? F, dually, is a greatest course-of-value robustly postfixed point of F.
For programming, μ ? F is a course-of-value data type, with wrap ? F a course-of-value data constructor and cvcata F a course-of-value iterator, and ν ? F is a course-of-value codata type, with cvana F a course-of-value coiterator and open ? F a course-of-value codata destructor. m ? F, with mapwrap ? and cviter, and n ? F, with cvcoit and mapopen ? , are their Mendler-style equivalents.
Specializing course-of-value induction for N yields the type Nat ? of "course-of-value" natural numbers, with zero ? , succ ? and natcvcata the "course-of-value" versions of the constant zero, successor function and iterator respectively.
Nat ? ≜ μ ? (N)
zero ? ≜ wrap ? N (inl(⟨⟩))
succ ? (c) ≜ wrap ? N (inr(c))
The specialized typing and reduction rules for these constants are the following:
Similarly to the \recursive" case, non-zero \course-of-value" naturals are not
constructed from a single preceding natural. The argument of the \course-of-
value" successor function is a colist-like structure of naturals. The coiteration
in the reduction rule applies the function being dened recurrently to every
element of the colist. In principle, again, the naturals in the colist can be
unrelated. The normal usage, however, is that the tail of the colist is the ancestral
of its head (the predecessor of the natural being constructed). (By the
ancestral of a natural, we mean the colist of all lesser naturals in the descending
order.) The standard successor function for naturals is therefore easily
recovered from the \course-of-value" successor function by rst coiteratively
applying the predecessor function to its argument.
zero ≜ zero ?
succ(c) ≜ succ ? (ana(c; (x)⟨x; pred(x)⟩))
The predecessor function, however, does not admit a very straightforward definition (this is a problem that vanishes in the case of course-of-value primitive recursion). But it is definable in terms of the ancestral function, which itself is definable by course-of-value iteration in the same way as the predecessor function is definable by simple iteration.
pred(c) ≜ map N (pred ? (c); (x) fst(open(x)))
pred ? (c) ≜ cvcata N (c; …)
The specialization of course-of-value Mendler-style induction for N yields the type Nat ? of "course-of-value" Mendler-style naturals.
Nat ? ≜ m ? (N)
mapzero ? (d; p) ≜ mapwrap ? (inl(⟨⟩); d; p)
mapsucc ? (c; d; p) ≜ mapwrap ? (inr(c); d; p)
The derived typing and reduction rules for the above-defined constants are the following:
natcviter(mapzero ? (d; p); e z ; e s ) ≜ e z ((x) natcviter(d(x); e z ; e s ); (x) p(x))
natcviter(mapsucc ? (c; d; p); e z ; e s ) ≜ e s (c; (x) natcviter(d(x); e z ; e s ); (x) p(x))
A non-zero \course-of-value" Mendler-style natural is constructed from three
components. The rst two are the same as in the case of simple Mendler-
style naturals: a representation for a natural (the predecessor) and a method
to convert representations to naturals. The additional third component gives
a method for converting a representation (for some natural) into nothing or
another representation (normally for the predecessor of this natural). So, using
Nat ? as the type of representations, we obtain the standard constructors of
naturals as follows:
zero mapzero ? ((); pred)
succ(c) mapsucc ? (c; (); pred)
To define the predecessor function, we again also need the ancestral function.
pred(c) ≜ map N (pred ? (c); (x) fst(open(x)))
pred ? (c) ≜ cviter(c; …)
On \course-of-value" naturals constructed using the standard constructors,
natcvcata and natcviter capture standard course-of-value iteration. The Fibonacci
function, for instance, can be programmed using natcvcata as follows.
bo(c) natcvcata(c; zero; (
)case@ snd(open(
Using natcviter, the denition of the Fibonacci function becomes even more
straightforward, as, instead of having to manipulate an intermediate colist of
values that Fibonacci returns, we can \roll back" on inputs to it.
bo(c) natcviter(c; (-; )zero; (
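The operational idea behind course-of-value iteration, namely that each step may inspect the whole sequence of earlier results, can be rendered in Haskell as follows (my own illustrative sketch; the list of earlier results plays the role of the colist of predecessor values):

data Nat = Zero | Succ Nat

-- Course-of-value iteration: the step function sees Nothing at zero, and
-- otherwise the nonempty list of all earlier results, most recent first.
cvIter :: (Maybe [r] -> r) -> Nat -> r
cvIter e n = head (go n)
  where
    go Zero     = [e Nothing]
    go (Succ m) = let rs = go m in e (Just rs) : rs

fib :: Nat -> Integer
fib = cvIter step
  where
    step Nothing            = 0       -- fib 0
    step (Just [_])         = 1       -- fib 1: only fib 0 is available
    step (Just (a : b : _)) = a + b   -- fib (n+2) = fib (n+1) + fib n
    step (Just [])          = error "unreachable: go returns nonempty lists"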
Course-of-value vs. basic [co]induction
Encoding μ, ν and m, n in terms of μ ? , ν ? and m ? , n ? is very similar to encoding these constants in terms of μ q , ν q and m q , n q . Also encoding in the opposite direction is analogous and, in fact, even simpler (as ρ is not needed).
Proposition 11 The following is a proof- and reduction-preserving encoding of μ, ν in terms of μ ? , ν ? :
μ(F) ≜ μ ? (F)
wrap F (c) ≜ wrap ? F (map F (c; …))
cata F (c; e) ≜ cvcata F (c; (x) e(map F (x; (y) fst(open ◁F (y)))))
ν(F) ≜ ν ? (F)
ana F (c; e) ≜ cvana F (c; (x) map F (e(x); (y) wrap ▷F (inl(y))))
open F (c) ≜ map F (open ? F (c); …)
Proposition 12 The following is a proof- and reduction-preserving encoding of m, n in terms of m ? , n ? :
mapwrap(c; d) ≜ mapwrap ? (c; d; (x) inl(⟨⟩))
iter(c; e) ≜ cviter(c; (x; r; –) e(x; r))
coit(c; e) ≜ cvcoit(c; (x; k; –) e(x; k))
mapopen(c; d) ≜ mapopen ? (c; d; (x) inl(⟨⟩))
Proposition 13 The following is a proof- and reduction-preserving encoding of μ ? , ν ? in terms of μ, ν:
μ ? (F) ≜ μ(F ?) where F ? (R) ≜ F(R ◁ F)
wrap ? F (c) ≜ wrap F ? (c)
cvcata F (c; e) ≜ cata F ? (c; e)
ν ? (F) ≜ ν(F ?) where F ? (R) ≜ F(R ▷ F)
cvana F (c; e) ≜ ana F ? (c; e)
open ? F (c) ≜ open F ? (c)
Proposition 14 The following is a proof- and reduction-preserving encoding of m ? , n ? in terms of m, n:
m ? (F) ≜ m(F ?) where F ? (R) ≜ (R → ⊤ ∨ R) ∧ F(R)
mapwrap ? (c; d; p) ≜ mapwrap(⟨p; c⟩; d)
cviter(c; e) ≜ iter(c; (x; r) e(snd(x); r; fst(x)))
n ? (F) ≜ n(F ?) where F ? (R) ≜ ((⊤ ∨ R) → R) → F(R)
cvcoit(c; e) ≜ coit(c; (x; k)(j) e(x; k; j))
mapopen ? (c; d; p) ≜ mapopen(c; d)(p)
Corollary 15 NI²(∀², ∃²) (and also any fragment of it, including NI) extended with operators μ ? , ν ? or m ? , n ? is strongly normalizing and confluent.
6 Related work
The first author to extend an intuitionistic n.d. system with (basic conventional-style) inductively defined predicates uniformly by axiomatization was Martin-Löf, with his "theory of iterated inductive definitions" [21].
Böhm and Berarducci [1] and Leivant [14] were the first authors to describe how to encode "polynomial" (basic conventional-style) inductive types in 2nd-order simply typed lambda calculus (Girard and Reynolds' system F; the n.d. proof system for the →, ∀²-fragment of 2nd-order intuitionistic propositional logic). This method is often referred to as the impredicative encoding of inductive types (keeping in mind only basic conventional-style induction).
Mendler [19] described the extension by axiomatization of 2nd-order simply typed lambda calculus with enhanced inductive and coinductive types of his style. Mendler [20] discussed a similar system with basic Mendler-style inductive and coinductive types. Extensions of the n.d. proof systems for 2nd-order intuitionistic predicate logic with constructors of (basic) conventional- and Mendler-style inductive predicates were described in Leivant's [15], a paper on extracting programs in (extensions of) 2nd-order simply typed lambda calculus from proofs in (extensions of) the n.d. proof system for 2nd-order intuitionistic predicate logic. Parigot's work [24,25] on realizability-based "programming with proofs" bears connection to both Leivant's and Mendler's works.
Greiner [10] and Howard [13, chapter 3] considered programming in an extension of 1st-order simply typed lambda calculus with axiomatized constructors of conventional-style (co)inductive types with (co)iteration and data destruction (codata construction). Both had their motivation in Hagino's category-theoretic work cited below and thus studied not merely β-reduction, but full βη-conversion, driven by definite semantic considerations. Howard implemented his system in a programming language Lemon. Geuvers [7] carried out a comparative study of basic vs. enhanced, conventional- vs. Mendler-style inductive and coinductive types in extensions of 2nd-order simply typed lambda calculus.
In the spirit of Leivant, Paulin-Mohring [26] extracted programs in Girard's Fω from proofs in Coquand and Huet's CC (calculus of constructions). The milestone papers on inductive type families in extensions of CC and Luo's ECC (extended calculus of constructions, a combination of CC and Martin-Löf's type theory) are Pfenning and Paulin-Mohring [28], Coquand and Paulin-Mohring [4] and Ore [23]. Paulin-Mohring [27] formulated the calculus of inductive constructions, which extends CC with inductive type families with primitive recursion by axiomatization. The Coq proof development system developed at INRIA-Rocquencourt and ENS-Lyon is an implementation of this last system.
In category theory, (basic conventional-style) inductive and coinductive types
are modelled by initial algebras and terminal coalgebras for covariant functors.
Hagino [11] designed a typed functional language CPL based on distributive
categories with initial algebras and terminal coalgebras for strong covariant
functors. The implemented Charity language by Cockett et al. [3] is a similar
programming language.
The \program calculation" community is rooted in the Bird-Meertens formalism
or Squiggol [2], which, originally, was an equational theory of programming
with the parametric data type of lists. Malcolm [16] made the community
aware of Hagino's work, and studied program calculation based on
bi-Cartesian closed categories with initial algebras and terminal coalgebras
for !-cocontinuous resp. !-continuous covariant functors. Meertens [18] was
the rst author to give a treatment of primitive-recursion in this setting. Some
classic references in the area are Fokkinga's [6] and Sheard and Fegaras' [29].
7 Conclusion and Future Work
In this paper, we studied least and greatest fixed point operators that intuitionistic n.d. systems can be extended with. We described eight pairs of such operators whose eliminations and introductions behave as recursors and corecursors of meaningful kinds.
We intend to continue this research with a study of the perspectives of the utility of intuitionistic n.d. systems with least and greatest fixed point operators in program construction from specifications; this concerns both specification methodology and computer assistance in synthesis. We have also started to study the related categorical deduction systems (typed combinatory logics à la Curien), their utility in "program calculation" and the relevant categorical theory [38,36,39,40]. We also intend to find out the details of the apparent close relationship of enhanced course-of-value Mendler-style (co)recursion to Giménez' new formulation of guarded (co)recursion [9] (for systems with sub- and supertyping and quantification with upper and lower bounds; radically different from the older, very syntactical formulation of [8]).
Acknowledgements
We are thankful to our anonymous referees for a number of helpful comments and suggestions, especially in regards to matters of presentation. The proof figures and diagrams appearing in the paper were typeset using the proof.sty package by Makoto Tatsuta and the XYpic generic diagram macros by Kristoffer C. Rose, respectively.
--R
An introduction to the theory of lists
Yellow Series Report 92/480/18
Inductively defined types
A survey of the project AUTOMATH
Law and order in algorithmics
Inductive and coinductive types with iteration and recursion
A categorical programming language
A framework for defining logics
Fixed points and extensionality in typed functional programming languages
Reasoning about functional programs and complexity classes associated with type disciplines
Contracting proofs to programs
Data structures and program transformation
Extensions of system F by iteration and primitive recursion on monotone inductive types
Recursive types and type constraints in second-order lambda-calculus
Inductive types and type constraints in the second-order lambda-calculus
The extended calculus of constructions (ECC) with inductive types
Programming with proofs: a second order type theory
Recursive programming with proofs
Paulin-Mohring, Extracting Fω's programs from proofs in the calculus of constructions
Paulin-Mohring, Inductive definitions in the system Coq: rules and properties
Pfenning and Paulin-Mohring, Inductively defined types in the Calculus of Constructions
A fold for all seasons
A natural extension of natural deduction
Generalized rules for quantifiers
A lattice-theoretical fixpoint theorem and its applications
Positive recursive type assignment
Natural deduction for intuitionistic least and greatest fixed point logics
A cube of proof systems for the intuitionistic predicate μ,ν-logic
Primitive (co)recursion and course-of-value (co)iteration
Coding recursion à la Mendler
--TR
An introduction to the theory of lists
Extracting Fω's programs from proofs in the calculus of constructions
Inductively defined types
Programming in Martin-Löf's type theory: an introduction
Data structures and program transformation
Inductively defined types in the calculus of constructions
Recursive programming with proofs
A framework for defining logics
The extended calculus of constructions (ECC) with inductive types
A fold for all seasons
Fixed points and extensionality in typed functional programming languages
Type fixpoints
Programming with Proofs
Positive Recursive Type Assignment
Inductive Definitions in the system Coq - Rules and Properties
Structural Recursive Definitions in Type Theory
Codifying Guarded Definitions with Recursive Schemes
Mendler-style inductive types, categorically
Programming with Inductive and Co-Inductive Types
A categorical programming language
--CTR
Gilles Barthe , Tarmo Uustalu, CPS translating inductive and coinductive types, ACM SIGPLAN Notices, v.37 n.3, p.131-142, March 2002
G. Barthe , M. J. Frade , E. Giménez , L. Pinto , T. Uustalu, Type-based termination of recursive definitions, Mathematical Structures in Computer Science, v.14 n.1, p.97-141, February 2004 | coding styles;least and greatest fixed points;coinductive types;schemes of total corecursion;typed lambda calculi;natural deduction |
506696 | Skepticism and floating conclusions. | The purpose of this paper is to question some commonly accepted patterns of reasoning involving nonmonotonic logics that generate multiple extensions. In particular, I argue that the phenomenon of floating conclusions indicates a problem with the view that the skeptical consequences of such theories should be identified with the statements that are supported by each of their various extensions. | Introduction
One of the most striking ways in which nonmonotonic logics can differ from classical logic, and even
from standard philosophical logics, is in allowing for multiple sanctioned conclusion sets, known as
extensions. The term is due to Reiter [12], who thought of default rules as providing a means for
extending the strictly logical conclusions of a knowledge base with plausible information. Multiple
extensions arise when a knowledge base contains conflicting default rules, suggesting different, often
inconsistent ways of supplementing its strictly logical conclusions.
The purpose of this paper is to question some commonly accepted patterns of reasoning involving
theories that generate multiple extensions. In particular, I argue that the phenomenon of floating
conclusions indicates a problem with the view that the skeptical consequences of such theories
should be identified with the statements that are supported by each of their various extensions.
2 Multiple extensions
The canonical example of a knowledge base with multiple extensions is the Nixon Diamond, depicted in Figure 1. Here, the statements Qn, Rn, and Pn represent the propositions that Nixon is a Quaker, a Republican, and a pacifist; statements of the form A ⇒ B and A → B represent ordinary logical implications and "default" implications respectively, with A ⇏ B and A ↛ B abbreviating A ⇒ ¬B and A → ¬B; and the special statement ⊤ represents truth. What the knowledge base tells us, of course, is this: Nixon is both a Quaker and a Republican, the fact that he is a Quaker provides a good reason for concluding that he is a pacifist, and the fact that he is a Republican provides a good reason for concluding that he is not a pacifist.
This example can be coded into default logic as the theory ⟨W, D⟩, with W = {Qn, Rn} representing the basic facts of the situation and D = {Qn : Pn / Pn, Rn : ¬Pn / ¬Pn} representing the two defaults. The theory yields two extensions: E₁ = Th(W ∪ {Pn}) and E₂ = Th(W ∪ {¬Pn}). The first results when the basic facts of the situation are extended by an application of the default concerning Quakers; the second results when the facts are extended by an application of the default concerning Republicans.
In light of these two extensions, what are we to conclude from the initial information: is Nixon a
pacifist or not? More generally, when a default theory leads to more than one extension, what should
we actually infer from that theory-how should we define its set of consequences, or conclusions?
Several proposals have been discussed in the literature. One option is to suppose that we should
arbitrarily select a particular one of the theory's several extensions and endorse the conclusions
contained in it; a second option is to suppose that we should be willing to endorse a conclusion just
in case it is contained in some extension of the theory. These first two options are sometimes said
to reflect a credulous reasoning policy. A third option, now generally described as skeptical, is to
suppose that we should endorse a conclusion just in case it is contained in every extension of the
theory. 1
Of these three options, the first-pick an arbitrary extension-really does seem to embody a
sensible policy, or at least one that is frequently employed. Given conflicting defeasible information,
we often simply adopt some internally coherent point of view in which the conflicts are resolved in
some particular way, regardless of the fact that there are other coherent points of view in which
the conflicts are resolved in different ways. Still, although this reasoning policy may be sensible, it
1 The use of the credulous/skeptical terminology in this context was first introduced by Touretzky et al. [15], but
the distinction itself is older than this; it was already implicit in Reiter's paper on default logic, and was described
explicitly by McDermott [10] as the distinction between brave and cautious reasoning. Makinson [8] refers to the first
of the two credulous options described here as the choice option.
Figure 1: The Nixon Diamond
is hard to see how it could be codified in a formal consequence relation. If the choice of extension
really is arbitrary, different reasoners could easily select different extensions, or the same reasoner
might select different extensions at different times. Which extension, then, would represent the real
conclusion set of the original theory?
The second of our three options-endorse a conclusion whenever it is contained in some extension-
could indeed be codified as a consequence relation, but it would be a peculiar one. According to this
policy, the conclusion set associated with a default theory need not be closed under standard logical
consequence, and might easily be inconsistent, even in cases in which the underlying default theory
itself seems to be consistent. The conclusion set of the theory representing the Nixon Diamond, for
example, would contain both Pn and ¬Pn, since each of these formulas belongs to some extension of the default theory, but it would not contain Pn ∧ ¬Pn, since this formula is not contained in
any extension.
One way of avoiding these peculiar features of the second option is to think of the conclusions
generated by a default theory as being shielded by a kind of modal operator. Where A is a
statement, let B(A) mean there is good reason to believe that A; and suppose a theory provides us
with good reason to believe a statement whenever that statement is included in some extension of
the theory, some internally coherent point of view. Then we could define the initial conclusions of a
default theory Δ = ⟨W, D⟩ as the set that extends W with a formula B(A) whenever A belongs to some extension of Δ, and we could go on to define the theory's conclusion set as the logical closure
of its initial conclusions.
This variant of the second option has some interest. It results in a conclusion set that is
both closed under logical consequence and consistent as long as W itself is consistent. And Reiter's
original paper on default logic [12, Section 4] provides a proof procedure, sound and complete under
certain conditions, that could be used in determining whether B(A) belongs to the conclusion set
as defined here. Unfortunately, however, this variant of the second option also manages to sidestep
our original question. We wanted to know what conclusions we should actually draw from the
information provided by a default theory-whether or not, given the information from the Nixon
Diamond, we should conclude that Nixon is a pacifist, for example. But according to this variant, we
are told only what there is good reason to believe-that both B(Pn) and B(¬Pn) are consequences
of the theory, so that there is good reason to believe that Nixon is a pacifist, but also good reason
to believe that he is not. This may be useful information, but it is still some distance from telling
us whether or not to conclude that Nixon is a pacifist. 2
Of our three options for defining a notion of consequence in the presence of multiple exten-
sions, only the third, skeptical proposal-endorse a conclusion whenever it is contained in every
extension-seems to hold any real promise. This option leads to a single conclusion set, which is
both closed under logical consequence and consistent as long as the initial information is. And it
provides an answer that is at least initially attractive to our original question concerning proper
conclusions. In the case of the Nixon Diamond, for example, since neither Pn nor ¬Pn belongs to every extension, this third option tells us that we should not conclude that Nixon is a pacifist, but that we should not conclude that Nixon is not a pacifist either. Since there is a good reason for each of these conflicting conclusions, we should remain skeptical.
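The three policies can be made concrete with a small Haskell sketch (my own illustrative encoding, not from the literature: extensions are simplified to sets of literals, and the two extensions of the Nixon Diamond arise from the two orders in which the conflicting defaults can be tried):

import Data.List (intersect, nub)

type Lit = String

neg :: Lit -> Lit
neg ('~':p) = p
neg p       = '~' : p

-- A normal default "p : q / q": if p is derived and ~q is not, conclude q.
data Default = Default { prereq :: Lit, conseq :: Lit }

-- Apply the defaults once, in the given order, starting from the facts.
extension :: [Default] -> [Lit] -> [Lit]
extension ds facts = foldl apply facts ds
  where
    apply acc (Default p q)
      | p `elem` acc && neg q `notElem` acc = nub (q : acc)
      | otherwise                           = acc

nixonDefaults :: [Default]
nixonDefaults = [Default "Qn" "Pn", Default "Rn" "~Pn"]

-- The two extensions, and the skeptical intersection ["Qn","Rn"]:
exts :: [[Lit]]
exts = [ extension ds ["Qn", "Rn"] | ds <- [nixonDefaults, reverse nixonDefaults] ]

skeptical :: [Lit]
skeptical = foldr1 intersect exts   -- neither Pn nor ~Pn survives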
3 Floating conclusions
Default logic defines a direct, unmediated relation between a particular default theory and the statement
sets that form its extensions. Another class of formalisms-known as argument systems-takes
a more roundabout approach, analyzing nonmonotonic reasoning through the study of interactions
among competing defeasible arguments. 3
Although the arguments themselves that are studied in these argument systems are often com-
plex, we can restrict our attention here entirely to linear arguments, analogous to the reasoning
paths studied in theories of defeasible inheritance. 4 These linear arguments are formed by starting
with a true statement and then simply stringing together strict and defeasible implications; each
such argument can be said to support the final statement it contains as its conclusion. As an abstract example, the structure ⊤ ⇒ A → B ⇒ ¬C can be taken to represent an argument of the form "A is true, which defeasibly implies B, which strictly implies ¬C," supporting the conclusion ¬C. As a less abstract example, we can see that the Nixon Diamond provides the materials for constructing the two arguments ⊤ ⇒ Qn → Pn and ⊤ ⇒ Rn → ¬Pn supporting the conflicting conclusions Pn and ¬Pn.
Where α is an argument, we will let ᾱ represent the particular conclusion supported by α. Where Φ is a set of arguments, we will let Φ̄ represent the set of conclusions supported by the arguments in Φ-that is, the set containing the statement ᾱ for each argument α belonging to Φ.
The primary technical challenge involved in the development of an argument system is the
specification of the coherent sets of arguments that an ideal reasoner might be willing to accept on
the basis of a given body of initial information. We will refer to these coherent sets of arguments
as argument extensions, to distinguish them from the statement extensions defined by theories such
as default logic. Again, the actual definition of argument extensions is often complicated in ways
that need not concern us here. Without going into detail, however, we can simply note that, just
as theories like default logic allow multiple statement extensions, argument systems often associate
multiple argument extensions with a single body of initial information. In the case of the Nixon
Diamond, for example, an argument system patterned after multiple-extension theories of defeasible
2 Note that this objection is directed only against the use of modal operators to capture the epistemic interpretation
of default logic. Other interpretations, involving other modal operators, are possible; it is shown in [4], for example,
that a deontic interpretation, with default conclusions wrapped inside of deontic operators, generates a logic for
normative reasoning corresponding to that originally suggested by van Fraassen [16].
3 A recent survey of a variety of argument systems can be found in Prakken and Vreeswijk [11].
4 The development of the path-based approach to inheritance reasoning was initiated by Touretzky [14]; a survey
can be found in [5].
inheritance would generate the two extensions
Φ₁ = {⊤ ⇒ Qn, ⊤ ⇒ Rn, ⊤ ⇒ Qn → Pn}
Φ₂ = {⊤ ⇒ Qn, ⊤ ⇒ Rn, ⊤ ⇒ Rn → ¬Pn}.
The first results from supplementing the initial information with the argument that Nixon is a
pacifist because he is a Quaker, the second from supplementing this information with the argument
that Nixon is not a pacifist because he is a Republican.
When a knowledge base leads to multiple argument extensions, there are, as before, several
options for characterizing the appropriate set of conclusions to draw on the basis of the initial infor-
mation. Again, we might adopt a credulous reasoning policy, either endorsing the set of conclusions
supported by an arbitrary one of the several argument extensions, or perhaps endorsing a conclusion
as believable whenever it is supported by some extension or another. In the case of the Nixon
Diamond, this policy would lead us to endorse either Φ̄₁ = {Qn, Rn, Pn} or Φ̄₂ = {Qn, Rn, ¬Pn}
as the conclusion set of the original knowledge base, or perhaps simply to endorse the statements
belonging to the union of these two sets as believable.
As before, however, we might also adopt a kind of skeptical policy in the presence of these
multiple argument extensions, defining the appropriate conclusion set through their intersection.
In this case, though, since these new extensions contain arguments rather than statements, there are
now two alternatives for implementing such a policy. First, we might decide to endorse an argument
just in case it is contained in each argument extension associated with an initial knowledge base, and
then to endorse a conclusion just in case that conclusion is supported by an endorsed argument.
Formally, this alternative leads to the suggestion that the appropriate conclusions of an initial knowledge base Γ should be the statements belonging to the set
(⋂{Φ : Φ is an extension of Γ})‾.
Or second, we might decide to endorse a conclusion just in case that conclusion is itself supported
by each argument extension of the initial knowledge base Γ, leading to the formal suggestion that the appropriate conclusions of the knowledge base should be the statements belonging to the set
⋂{Φ̄ : Φ is an extension of Γ},
where the order of ‾ and ⋂ is reversed.
Of course, these two alternatives for implementing the skeptical policy come to the same thing in
the case of the Nixon Diamond: both lead to {Qn, Rn} as the appropriate conclusion set. But there
are other situations in which the two alternatives yield different results. A well-known example,
due to Ginsberg, appears in Figure 2, where Qn and Rn are interpreted as before, Dn and Hn are
interpreted to mean that Nixon is a dove or a hawk respectively, and En as meaning that Nixon
is politically extreme (regarding the appropriate use of military force). What this diagram tells us
is that Nixon is both a Quaker and a Republican, that there is good reason to suppose that Nixon
is a dove if he is a Quaker, a hawk if he is a Republican, and that he is politically extreme if he is
either a dove or a hawk.
Again, a system patterned after multiple-extension inheritance theories would associate two
argument extensions with this knowledge base, as follows:
Φ₁ = {⊤ ⇒ Qn, ⊤ ⇒ Rn, ⊤ ⇒ Qn → Dn, ⊤ ⇒ Qn → Dn ⇒ En}
Φ₂ = {⊤ ⇒ Qn, ⊤ ⇒ Rn, ⊤ ⇒ Rn → Hn, ⊤ ⇒ Rn → Hn ⇒ En}
Figure 2: Is Nixon politically extreme?
Since no arguments except for the trivial ⊤ ⇒ Qn and ⊤ ⇒ Rn are contained in both of these extensions, the first of our two alternatives for implementing the skeptical policy, which involves intersecting the argument extensions themselves, would lead to {Qn, Rn} as the appropriate conclusion set, telling us nothing more than the initial information that Nixon is a Quaker and a Republican. On the other hand, each of these two argument extensions supports the statement En-one through the argument ⊤ ⇒ Qn → Dn ⇒ En, the other through the argument ⊤ ⇒ Rn → Hn ⇒ En. The second of our two alternatives for implementing the skeptical policy, which involves intersecting supported statements rather than the arguments that support them, would therefore lead to the conclusion set {Qn, Rn, En}, telling us also that Nixon is politically extreme.
Statements like En, which are supported in each extension associated with a knowledge base,
but only by different arguments, are known as floating conclusions. This phrase, coined by Makinson
and Schlechta [9], nicely captures the picture of these conclusions as floating above the different
and conflicting arguments that might be taken to support them.
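The difference between the two alternatives is easy to state computationally; in the following Haskell sketch (my own encoding, with arguments simplified to chains of statements and the two extensions of Ginsberg's example written out by hand), the floating conclusion En survives one operation order but not the other:

import Data.List (intersect, nub)

type Statement = String
type Argument  = [Statement]   -- an argument is a path, e.g. ["Qn","Dn","En"]

conclusion :: Argument -> Statement
conclusion = last

-- Alternative 1: intersect the argument extensions, then take conclusions.
skepticalByArguments :: [[Argument]] -> [Statement]
skepticalByArguments exts = nub (map conclusion (foldr1 intersect exts))

-- Alternative 2: take the conclusions of each extension, then intersect.
skepticalByConclusions :: [[Argument]] -> [Statement]
skepticalByConclusions exts = foldr1 intersect (map (nub . map conclusion) exts)

ext1, ext2 :: [Argument]
ext1 = [["Qn"], ["Rn"], ["Qn","Dn"], ["Qn","Dn","En"]]
ext2 = [["Qn"], ["Rn"], ["Rn","Hn"], ["Rn","Hn","En"]]

-- skepticalByArguments   [ext1, ext2]  ==  ["Qn","Rn"]
-- skepticalByConclusions [ext1, ext2]  ==  ["Qn","Rn","En"]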
The phenomenon of floating conclusions was first investigated in the context of defeasible inheritance
reasoning, particularly in connection with the theory developed by Thomason, Touretzky,
and myself in [6]. In contrast to the multiple-extension accounts considered so far, that theory
first defined a single argument extension that was thought of as containing the "skeptically accept-
able" arguments based on a given inheritance network. The skeptical conclusions were then defined
simply as the statements supported by those skeptically acceptable arguments.
Ginsberg's political extremist example was meant to show that no approach of this sort, relying
on a single argument extension, could correctly represent skeptical reasoning. A single argument
extension could not consistently contain both the arguments ⊤ ⇒ Qn → Dn ⇒ En and ⊤ ⇒ Rn → Hn ⇒ En, since the strict information in the knowledge base shows that each of these
arguments conflicts with an initial segment of the other. The single argument extension could not
contain either of these arguments without the other, since that would involve the kind of arbitrary
decision appropriate only for credulous reasoning. And if the single argument extension were to
contain neither of these two arguments, it would not support the conclusion En, which Ginsberg
considers to be an intuitive consequence of the initial information: "given that both hawks and
doves are politically [extreme], Nixon certainly should be as well" [3, p. 221]. 5
Both Makinson and Schlechta [9] and Stein [13] also consider floating conclusions in the context
of defeasible inheritance reasoning. Makinson and Schlechta share Ginsberg's view that the
appropriate conclusions to derive from a knowledge base are those that are supported by each of
its argument extensions:
It is an oversimplification to take a proposition A as acceptable ... iff it is supported by some [argument] path α in the intersection of all extensions. Instead A must be
taken as acceptable iff it is in the intersection of all outputs of extensions, where the
output of an extension is the set of all propositions supported by some path within it
[9, pp. 203-204].
From this they likewise argue, not only that the particular theory developed in [6] is incorrect,
but more generally, that any theory attempting to define the skeptically acceptable conclusions by
reference to a single set of acceptable arguments will be mistaken. And Stein reaches a similar
judgment, for similar reasons:
The difficulty lies in the fact that some conclusions may be true in every credulous
extension, but supported by different [argument] paths in each. Any path-based theory
must either accept one of these paths-and be unsound, since such a path is not in every
extension-or reject all such paths-and with them the ideally skeptical conclusion-
and be incomplete [13, p. 284].
What lies behind these various criticisms, of course, is the widely-held assumption that the
second, rather than the first, of our two skeptical alternatives is correct-that floating conclusions
should be accepted, and that a system that fails to classify them among the consequences of a
defeasible knowledge base is therefore in error. The purpose of this paper is to question that
assumption.
4 An example
Why not accept floating conclusions? Their precarious status can be illustrated through any number
of examples, but we might as well choose a dramatic one.
Suppose, then, that my parents have a net worth of one million dollars, but that they have
divided their assets in order to avoid the United States inheritance tax, so that each parent currently
possesses half a million dollars apiece. And suppose that, because of their simultaneous exposure to
a fatal disease, it is now settled that both of my parents will die within a month. This is a fact:
medical science is certain.
Imagine also, however, that there is some expensive item-a yacht, say-whose purchase I
believe would help to soften the blow of my impending loss. Although the yacht I want is currently
5 Although, as far as I know, this example was first published in the textbook cited here, it had previously been
part of the oral tradition for many years-I first heard it during the question session after the AAAI-87 presentation
of [6], when Ginsberg raised it as an objection to that theory.
available, the price is good enough that it is sure to be sold by the end of the month. I can now
reserve the yacht for myself by putting down a large deposit, with the balance due in six weeks.
But there is no way I can afford to pay the balance unless I happen to inherit at least half a million
dollars from my parents within that period, and if I fail to pay the balance on time, I will lose my
large deposit. Setting aside any doubts concerning the real depth of my grief, let us suppose that
my utilities determine the following conditional preferences: if I believe I will inherit half a million
dollars from my parents within six weeks, it is very much in my benefit to place a deposit on the
yacht; if I do not believe this, it is very much in my benefit not to place a deposit.
Now suppose I have a brother and a sister, both of whom are extraordinarily reliable as sources
of information. Neither has ever been known to be mistaken, to deceive, or even to misspeak-
although of course, like nearly any source of information, they must be regarded as defeasible.
My brother and sister have both talked with our parents about their wills, and feel that they
understand the situation. I have written to each of them describing my delicate predicament
regarding the yacht, and receive letters back. My brother writes: "Father is going to leave his
money to me, but Mother will leave her money to you, so you're in good shape." My sister writes:
"Mother is going to leave her money to me, but Father will leave his money to you, so you're in
good shape." No further information is now available: the wills are sealed, my brother and sister
are trekking together through the Andes, and our parents, sadly, have slipped into a coma.
Based on my current information, what should I conclude? Should I form the belief that I will
inherit half a million dollars-and therefore place a large deposit on the yacht-or not?
The situation is depicted in Figure 3, where the statement letters are interpreted as follows: F represents the proposition that I will inherit half a million dollars from my father, M represents the proposition that I will inherit half a million dollars from my mother, BA(¬F ∧ M) represents the proposition that my brother asserts that I will inherit my mother's money but not my father's, and SA(F ∧ ¬M) represents the proposition that my sister asserts that I will inherit my father's money but not my mother's. The defeasible links BA(¬F ∧ M) → (¬F ∧ M) and SA(F ∧ ¬M) → (F ∧ ¬M) reflect the fact that any assertion by my brother or sister provides good reason for concluding that the content of that assertion is true. The strict links in the diagram record various implications and inconsistencies. Notice that, although the contents of my brother's and sister's assertions-the statements ¬F ∧ M and F ∧ ¬M-are jointly inconsistent, the truth of either entails the disjunction F ∨ M, which is, of course, all I really care about. As long as I can conclude that I will inherit half a million dollars from either my father or my mother, I should go ahead and place a deposit on the yacht.
A multiple-extension approach would associate the following two argument extensions with this knowledge base:
Φ₁ = {⊤ ⇒ BA(¬F ∧ M), ⊤ ⇒ SA(F ∧ ¬M), ⊤ ⇒ BA(¬F ∧ M) → (¬F ∧ M)}
Φ₂ = {⊤ ⇒ BA(¬F ∧ M), ⊤ ⇒ SA(F ∧ ¬M), ⊤ ⇒ SA(F ∧ ¬M) → (F ∧ ¬M)}
Figure 3: Should I buy the yacht?
Again, the first of our two alternatives for implementing the skeptical reasoning policy, which involves intersecting arguments, would lead to {BA(¬F ∧ M), SA(F ∧ ¬M)} as the appropriate conclusion set, telling me only that my brother and sister asserted what they did. But since each of the two extensions contains some argument supporting the statement F ∨ M, the second alternative, which involves intersecting supported statements, leads to the conclusion set {BA(¬F ∧ M), SA(F ∧ ¬M), F ∨ M}, telling me also-as a floating conclusion-that I will inherit half a million dollars from either my father or my mother.
In this situation, then, there is a vivid practical difference between the two skeptical alternatives.
If I were to reason according to the first, I would not be justified in concluding that I am about to
inherit half a million dollars, and so it would be foolish for me to place a deposit on the yacht. If
I were to reason according to the second, I would be justified in drawing this conclusion, and so it
would be foolish for me not to place a deposit.
Which alternative is correct? I have not done a formal survey, but most of the people to
whom I have presented this example are suspicious of the floating conclusion, and so favor the first
alternative. Most do not feel that the initial information from Figure 3 would provide sufficient
justification for me to conclude, as the basis for an important decision, that I will inherit half
a million dollars. Certainly, this is my own opinion-that the example shows, contrary to the
widely-held assumption, that it is at least coherent for a skeptical reasoner to withhold judgment
from floating conclusions. Although both my brother and sister are reliable, and each supports the
conclusion that I will inherit half a million dollars, the support provided by each of these reliable
sources is undermined by the other; there is no unopposed reason supporting the conclusion. Since
either my brother or sister must be wrong, it is therefore easy to imagine that they might both
be wrong, and wrong in this way: perhaps my father will leave his money to my brother and my
mother will leave her money to my sister, so that I will inherit nothing.
5 Comments on the example
First, in case this example does not yet seem convincing, it might help to modify things a bit.
Suppose, then, that I had written only to my brother, and received his response-that my father
had named him as sole beneficiary, but that my mother would leave her money to me. That is,
suppose my starting point is the information depicted in the left-hand side of Figure 3. In this new
situation, should I conclude that I will inherit half a million dollars, and therefore place a deposit
on the yacht?
Some might say no-that even in this simplified situation I should not make such an important
decision on the basis of my brother's word alone. But this objection misses the point. Most of what
we know, we know through sources of information that are, in fact, defeasible. By hypothesis, we
can suppose that my brother is arbitrarily reliable, as reliable as any defeasible source of information
could possibly be-as reliable as perception, for instance, or the bank officer's word that the money
has actually been deposited in my account. If we were to reject information like this, it is hard
to see how we could get by in the world at all. When a source of defeasible information that is,
by hypothesis, arbitrarily reliable tells me that I will inherit half a million dollars, and there is no
conflicting evidence in sight, it is reasonable for me to accept this statement, and to act on it. Note
that both of the two skeptical alternatives yield this outcome in our simplified situation, since the
initial information, represented by the left-hand side of Figure 3, generates only a single argument
extension, in which the conclusion that I will inherit half a million dollars is supported by a single
argument.
Now suppose that, at this point, I hear from my equally reliable sister with her conflicting
information-that she is my mother's beneficiary, but that my father will leave his money to me.
As a result, I am again in the situation depicted in the full Figure 3, with two argument extensions,
and in which the statement that I will inherit half a million dollars is supported only as a floating
conclusion. Ask yourself: should my confidence in the statement that I will inherit half a million
dollars be diminished in this new situation, now that I have heard from my sister as well as my
brother? If it seems that my confidence can legitimately be diminished-that this new information
casts any additional doubt on the outcome-then it follows that floating conclusions are somewhat
less secure than conclusions that are uniformly supported by a common argument. And that is all
we need. The point is not that floating conclusions might be wrong; any conclusion drawn through
defeasible reasoning might be wrong. The point is that a statement supported only as a floating
conclusion seems to be less secure than the same statement when it is uniformly supported by
a common argument. As long as there is this difference in principle, it is coherent to imagine a
skeptical reasoner whose standards are calibrated so as to accept statements that receive uniform
support, but to reject floating conclusions.
As a second comment, notice that, if floating conclusions pose a problem, it is not just a problem
for argument systems, but also for traditional nonmonotonic formalisms, such as default or model-
preference logics. Indeed, the problem is even more serious for these traditional formalisms. With
argument systems, where the extensions generated are argument extensions, it is at least possible
to avoid floating conclusions by adopting the first of our two skeptical alternatives-endorsing
only those arguments belonging to each extension, and then endorsing only the conclusions of the
endorsed arguments. Since arguments are represented explicitly in these systems, they can be
used to filter out floating conclusions. In most traditional nonmonotonic logics, arguments are
suppressed, and so the materials for carrying out this kind of filtering policy are not even available.
To illustrate, a natural representation of the information from our yacht example in default logic is the theory ⟨W, D⟩, where W = {BA(¬F ∧ M), SA(F ∧ ¬M)} describes what my brother and sister said and D = {BA(¬F ∧ M) : ¬F ∧ M / ¬F ∧ M, SA(F ∧ ¬M) : F ∧ ¬M / F ∧ ¬M} reflects the defaults that whatever my brother and sister say should be taken as true. This theory has two extensions: E₁ = Th(W ∪ {¬F ∧ M}) and E₂ = Th(W ∪ {F ∧ ¬M}).
The extensions of default logic are statement extensions, and so the only possible policy for skeptical
reasoning appears to be: intersect the extensions. Since the statement F ∨ M belongs to both
extensions, skeptical reasoning in default logic tells me, immediately and without ambiguity, that
I will inherit half a million dollars.
Of course, default logic is essentially a proof-theoretic formalism, and it is easy to see how
it could be modified so that the extensions defined would contain proofs rather than statements;
such a modification would then allow for floating conclusions to be filtered out by a treatment
along the lines of our first alternative. 6 It is harder to see how floating conclusions could be
avoided in model-preference logics. In a circumscriptive theory, for instance, the yacht example
could naturally be expressed by supplementing the facts BA(¬F ∧ M) and SA(F ∧ ¬M) with the statements (BA(¬F ∧ M) ∧ ¬Ab b ) ⊃ (¬F ∧ M) and (SA(F ∧ ¬M) ∧ ¬Ab s ) ⊃ (F ∧ ¬M), preferring those models in which as few as possible of the propositions Ab b and Ab s -the abnormalities associated with the rare situations in which my brother or sister is mistaken-are true. Of course, there can be no models in which neither Ab b nor Ab s is true. The most preferred models will therefore be those in which only one of these abnormalities holds. The statement F ∨ M is true in all of these models, and would therefore follow as a circumscriptive consequence. 7
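This circumscriptive reading can be checked by brute force; the Haskell sketch below (my own encoding; BA and SA are held true and only the four remaining atoms vary) enumerates the models, selects those with inclusion-minimal abnormalities, and confirms that F ∨ M holds in every preferred model:

-- Truth values for F, M, Ab_b, Ab_s; the facts BA(...) and SA(...) are fixed.
data Model = Model { fV, mV, abB, abS :: Bool } deriving (Eq, Show)

models :: [Model]
models =
  [ Model f m b s
  | f <- tf, m <- tf, b <- tf, s <- tf
  , not b `implies` (not f && m)      -- (BA(~F & M) & ~Ab_b) -> (~F & M)
  , not s `implies` (f && not m) ]    -- (SA(F & ~M) & ~Ab_s) -> (F & ~M)
  where
    tf = [False, True]
    implies p q = not p || q

-- Prefer models whose abnormality set is inclusion-minimal.
preferred :: [Model]
preferred = [ mo | mo <- models, not (any (`strictlyBelow` mo) models) ]
  where
    strictlyBelow m1 m2 =
      abB m1 <= abB m2 && abS m1 <= abS m2   -- False < True: fewer abnormalities
        && (abB m1, abS m1) /= (abB m2, abS m2)

-- The floating conclusion F \/ M is true in every preferred model.
floatingHolds :: Bool
floatingHolds = all (\mo -> fV mo || mV mo) preferred   -- evaluates to True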
6 Objections to the example
I have heard two objections worth noting to the yacht example as an argument against floating
conclusions.
The first focuses on the underlying methodology of logical formalization. Even though what
my brother literally said is "Father is going to leave his money to me, but Mother will leave her
money to you," one might argue that the real content of his statement-what he really meant-is
better conveyed through the two separate sentences "Father is going to leave his money to me"
and "Mother will leave her money to you." In that case, rather than formalizing my brother's
assertion through the single conjunction ¬F ∧ M, it would be more natural to represent its content through the separate statements ¬F and M; and the content of my sister's assertion could likewise be formalized through the separate statements F and ¬M.
Considered from the standpoint of default logic, the situation could then be represented through the new default theory ⟨W, D⟩, with W = {BA(¬F), BA(M), SA(F), SA(¬M)}
6 One suggested modification of default logic that is particularly relevant, because it bears directly on examples of
the sort considered here, can be found in Brewka and Gottlob [2].
7 This form of circumscription, which involved minimizing the truth of statements rather than the extensions of
predicates, is a special case of the more usual form; see Lifschitz [7, pp. 302-303] for a discussion.
describing what now appear to be the four independent assertions made by my brother and sister, and with D = {BA(¬F) : ¬F / ¬F, BA(M) : M / M, SA(F) : F / F, SA(¬M) : ¬M / ¬M} carrying the defaults that any assertion by my brother or sister should be taken as true, if possible. This new default theory would then have four extensions:
E₁ = Th(W ∪ {¬F, M}), E₂ = Th(W ∪ {¬F, ¬M}), E₃ = Th(W ∪ {F, M}), E₄ = Th(W ∪ {F, ¬M}).
And since not all of these extensions contain the statement F ∨ M, the policy of defining skeptical
conclusions simply by intersecting the statements supported by each extension no longer leads, in
this case, to the conclusion that I will inherit half a million dollars.
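Under the articulated formalization the computation changes accordingly; in the sketch below (again my own simplified Haskell encoding), resolving the two conflicting pairs independently yields four extensions, one of which contains both ~F and ~M, so F ∨ M no longer survives the intersection:

-- Each conflicting pair {F,~F} and {M,~M} is resolved independently,
-- giving the four extensions of the articulated default theory.
extensions :: [[String]]
extensions =
  [ [fLit, mLit, "BA~F", "BAM", "SAF", "SA~M"]
  | fLit <- ["F", "~F"], mLit <- ["M", "~M"] ]

supportsFOrM :: [String] -> Bool
supportsFOrM e = "F" `elem` e || "M" `elem` e

-- False: the extension containing ~F and ~M refutes the floating conclusion.
skepticallySupported :: Bool
skepticallySupported = all supportsFOrM extensions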
The idea behind this objection is that the problems presented by floating conclusions might be
avoided if we were to adopt a different strategy for formalizing the statements taken as inputs by
the logical system, which would involve, among other things, articulating conjunctive inputs into
their conjuncts. This idea is interesting, has some collateral benefits, and bears certain affinities
to proposals that have been suggested in other contexts. 8 Nevertheless, in the present setting,
the strategy of factoring conjunctive statements into their conjuncts in order to avoid undesirable
floating conclusions suggests a procedure that might be described as "wishful formalization"-
carefully tailoring the inputs to a logical system so that the system then yields the desired outputs.
Ideally, a logic should take as its inputs formulas conforming as closely as possible to the natural
language premises provided by a situation, and then the logic itself should tell us what conclusions
follow from those premises. Any time we are forced to adopt a less straightforward representation
of the input premises in order to avoid inappropriate conclusions-replacing conjunctions with their
conjuncts, for example-we are backing away from that ideal. By tailoring the inputs in order to
assure certain outputs, we are doing some work for the logic that, in the ideal case, the logic should
be doing for us.
The second objection to the yacht example as an argument against floating conclusions concerns
the method for evaluating supported statements. Part of what makes this example convincing as a
reason for rejecting the floating conclusion that I will inherit half a million dollars is the fact that it is
developed within the context of an important practical decision, where an error carries significant
consequences: I will lose my large deposit. But what if the consequences were less significant?
Suppose the deposit were trivial: one dollar, say. In that case, many people would then argue that
the support provided for the proposition that I will inherit half a million dollars-even as a floating
8 Imagine, for example, that my brother asserts a statement of the form P ∧ Q, where it turns out that P is a logical contradiction-perhaps a false mathematical statement-but Q expresses a perfectly sensible proposition that just happens to be conjoined with P for reasons of conversational economy. Here, the representation of the situation through the default theory ⟨W, D⟩ with W = {BA(P ∧ Q)} and D = {BA(P ∧ Q) : P ∧ Q / P ∧ Q} would prevent us from drawing either P or Q as a conclusion, since the justification for the default could not be satisfied. But if the situation were represented through the articulated theory ⟨W, D⟩ with W = {BA(P), BA(Q)} and D = {BA(P) : P / P, BA(Q) : Q / Q}, then the second default could still be applied, and we could at least draw the conclusion Q. This idea of articulating
premises into simpler components, in order to draw the maximum amount of information out of a set of input
statements without actually reaching contradictory conclusions, has also been studied in the context of relevance
logic; a carefully formulated proposal can be found in Section 82.4 of Anderson et al. [1].
conclusion-would be sufficient, when balanced against the possibility for gain, to justify the risk
of losing my small deposit. The general idea behind this objection is that the proper notion of
consequence in defeasible reasoning is sensitive to the risk of being wrong. The evaluation of a logic
for defeasible reasoning cannot, therefore, be made outside of some particular decision-theoretic
setting, with particular costs assigned to errors; and there are certain settings in which one might
want to act even on the basis of propositions supported only as floating conclusions.
This is an intriguing objection. I will point out only that, if accepted, it suggests a major revision
in our attitude toward nonmonotonic logics. Traditionally, a logic-unlike a system for probabilistic
or evidential reasoning-is thought to classify statements into only two categories: those that follow
from some set of premises, and those that do not. The force of this objection is that nonmonotonic
logics should be viewed, instead, as placing statements into several categories, depending on the
degree to which they are supported by a set of premises, with floating conclusions then classified,
not necessarily as unsupported, but perhaps only as less firmly supported than statements that are
justified by the same argument in every extension.
7 Other examples
Once the structure of the yacht example is understood, it is easy to construct other examples along
similar lines: just imagine a situation in which two sources of information, or reasons, support a
common conclusion, but also undermine each other, and therefore undermine the support that each
provides for the common conclusion.
Suppose you are a military commander pursuing an enemy that currently holds a strong defensive
position. It is suicide to attack while the enemy occupies this position in force, but you
have orders to press ahead as quickly as possible, and so you send out your reliable spies. After a
week, one spy reports back that there can now be only a skeleton force remaining in the defensive
position; he has seen the main enemy column retreating through the mountains, although he also
noticed that they sent out a diversionary group to make it appear as if they were retreating along
the river. The other spy agrees that only a skeleton force remains in the defensive position; he has
seen the main enemy column retreating along the river, although he notes that they also sent out
a diversionary group to make it appear as if they were retreating through the mountains. Based
on this information, should you assume at least that the main enemy force has retreated from the
defensive position-a floating conclusion that is supported by both spies-and therefore commit
your troops to an attack? Not necessarily. Although they support a common conclusion, each
spy undermines the support provided by the other. Perhaps the enemy sent out two diversionary
groups, one through the mountains and one along the river, and managed to fool both your spies
into believing that a retreat was in progress. Perhaps the main force still occupies the strong
defensive position, awaiting your attack.
Or suppose you attend a macroeconomics conference during a period of economic health, with
low inflation and strong growth, and find that the community of macroeconomic forecasters is
now split right down the middle. One group, working with a model that has been reliable in
the past, predicts that the current strong growth rate will lead to higher inflation, triggering an
economic downturn. By tweaking a few parameters in the same model, the other group arrives at a
prediction according to which the current low inflation rate will actually continue to decline, leading
to a dangerous period of deflation and triggering an economic downturn. Both groups predict an
economic downturn, but for different and conflicting reasons (higher inflation versus deflation), and
so the prediction is supported only as a floating conclusion. Based on this information, should you
accept the prediction, adjusting your investment portfolio accordingly? Not necessarily. Perhaps
the extreme predictions are best seen as undermining each other and the truth lies somewhere in
between: the inflationary and deflationary forces will cancel each other out, the inflation rate will
remain pretty much as it is, and the period of economic health will continue.
There is no need to labor the point by fabricating further examples in which floating conclusions
are suspect. But what about the similar cases, exemplifying the same pattern, that have actually
been advanced as supporting floating conclusions, such as Ginsberg's political extremist example
from the figure?
I have always been surprised that this particular example has seemed so persuasive to so many
people. The example relies on our understanding that individuals adopt a wide spectrum of attitudes
regarding the appropriate use of military force, but that Quakers and Republicans tend to be
doves and hawks, respectively, where doves and hawks take the extreme positions that the use of
military force is either never appropriate, or that it is appropriate in response to any provocation,
even the most insignificant. Of course, Nixon's own position on the matter is well known. But
if I were told of some other individual that he is both a Quaker and a Republican, I would not
be sure what to conclude. It is possible that this individual would adopt an extreme position,
as either a dove or a hawk. But it seems equally reasonable to imagine that such an individual,
rather than being pulled to one extreme or the other, would combine elements of both views into a more balanced, measured position falling toward the center of the political spectrum, perhaps
believing that the use of military force is sometimes appropriate, but only as a response to serious
provocation. Given this real possibility, it might be appropriate to take a skeptical attitude, not
only toward the questions of whether this individual would be a dove or a hawk, but also toward
the question whether he would adopt a politically extreme position at all.
Another example appears in Reiter's original paper on default logic, where he suggests [12,
pp. 86-87] defaults representing the facts that people tend to live in the same cities as their spouses,
but also in the cities in which they work, and then asks us to consider the case of Mary, whose
spouse lives in Toronto but who works in Vancouver. Coded into default logic, this information
leads to a theory with two extensions, in one of which Mary lives in Toronto and in one of which
she lives in Vancouver. Reiter seems to favor the credulous policy of embracing a particular one
of these extensions, either concluding that Mary lives in Toronto or concluding that Mary lives in
Vancouver. But then, in a footnote, he also mentions what amounts to the skeptical possibility of
forming only the belief that Mary lives in either Toronto or Vancouver-where this proposition is
supported, of course, as a floating conclusion.
Given the information from this example, I would, in fact, be likely to conclude that Mary lives
either in Toronto or Vancouver. But I am not sure this conclusion should follow as a matter of
logic, even default logic. In this case, the inference seems to rely on a good deal of knowledge about
the particular domain involved, including the vast distance between Toronto and Vancouver, which
effectively rules out any sort of intermediate solution to Mary's two-body problem.
By contrast, consider the happier situation of Carol, who works in College Park, Maryland, but
whose spouse works in Alexandria, Virginia; and assume the same two defaults-that people tend
to live in the same cities as their spouses, but also tend to live in the cities in which they work.
Represented in default logic, this information would again lead to a theory with multiple extensions,
in each of which, however, Carol would live either in College Park or in Alexandria. Nevertheless,
I would be reluctant to accept the floating conclusion that Carol lives either in College Park or in
Alexandria. Just thinking about the situation, I would consider it equally likely that Carol and her
spouse live together in Washington, DC, within easy commuting distance of both their jobs.
Why is it so widely thought that floating conclusions should be accepted by a skeptical reasoner,
so that a system that fails to generate these conclusions is therefore incorrect? It is hard to be
sure, since this point of view is generally taken as an assumption, rather than argued for, but we
can speculate.
Suppose an agent believes that either the statement B or the statement C holds, that B implies
A, and that C also implies A. Classical logic then allows the agent to draw A as a conclusion;
this is a valid principle of inference, sometimes known as the principle of constructive dilemma.
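Schematically, in standard sequent notation:

    B ∨ C,   B → A,   C → A   ⊢   A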
The inference to a floating conclusion is in some ways similar. Suppose a default theory has two extensions, E1 and E2; that the extension E1 contains the statement A; and that the extension E2 also contains the statement A. The standard view is that a skeptical reasoner should then draw A as a conclusion, even if it is not supported by a common argument in the two extensions.
Notice the difference between these two cases, though. In the first case, the classical reasoning
agent believes both that B and C individually imply A, and also that either B or C holds. In the
second case, we might as well suppose that the skeptical reasoner knows that A belongs to both the extensions E1 and E2, so that both E1 and E2 individually imply A. The reasoner is therefore justified in drawing A as a conclusion by something like the principle of constructive dilemma, as long as it is reasonable to suppose, in addition, that either E1 or E2 is correct. This is the crucial
assumption, which underlies the standard view of skeptical reasoning and the acceptance of floating
conclusions. But is this assumption required? Is it necessary for a skeptical reasoner to assume,
when a theory leads to multiple extensions, that one of those extensions must be correct?
Suppose that each of the theory's multiple extensions is endorsed by some credulous reasoner.
Then the assumption that one of the theory's extensions must be correct is equivalent to the
assumption that one of these credulous reasoners is right. But why should a skeptical reasoner
assume that some credulous reasoner, following an entirely different reasoning policy, must be
right? Of course, there may be situations in which it is appropriate for a skeptical reasoner to
adopt this standard view: that one of the various credulous reasoners must be right, but that it is simply unclear which one. That might be the extent of the skepticism involved. But there also seem to be situations in which a deeper form of skepticism is appropriate, where each of the multiple
extensions is undermined by another to such an extent that it seems like a real possibility that
all of the credulous reasoners could be wrong. The yacht, spy, and economist examples illustrate
situations that might call for this deeper form of skepticism.
As a policy for reasoning with conflicting defaults, the notion of skepticism was originally
introduced into the field of nonmonotonic logic to characterize the particular system presented in
[6], which did not involve the assumption that one of a theory's multiple extensions must be correct,
and did not support floating conclusions. By now, however, the term is used almost uniformly to
describe approaches that do rely on this assumption, so that the "skeptical conclusions" of a theory
are generally identified as the statements supported by each of its multiple extensions, including
the floating conclusions. Of course, there is nothing wrong with this usage of the term, as a
technical description of the statements supported by each extension-except that it might tend to
cut off avenues for research, suggesting that we now know exactly how to characterize the skeptical
conclusions of a theory, so that the only issues remaining are matters concerning the efficient
derivation of these conclusions. On the contrary, if we think of skepticism as the general policy of
withholding judgment in the face of conflicting defaults, rather than arbitrarily favoring one default
or another, there is a complex space of reasoning policies that could legitimately be described as
skeptical, many of which involve focusing on the arguments that support particular conclusions,
not just the conclusions themselves.
Acknowledgments
This paper got its start in a series of conversations with Tamara Horowitz. I am grateful for
valuable comments to Aldo Antonelli, David Makinson, and Richmond Thomason, and to a number
of participants in the Fifth International Symposium on Logical Formalizations of Commonsense
Reasoning, particularly Leora Morgenstern, Rohit Parikh, Ray Reiter, and Mary Anne Williams.
--R
Entailment: The Logic of Relevance and Necessity
Essentials of Artificial Intelligence.
Moral dilemmas and nonmonotonic logic.
Some direct theories of nonmonotonic inheritance.
A skeptical theory of inheritance in nonmonotonic semantic networks.
General patterns in nonmonotonic reasoning.
"directly skeptical"
Logics for defeasible argumentation.
A logic for default reasoning.
Resolving ambiguity in nonmonotonic inheritance hierarchies.
The Mathematics of Inheritance Systems.
A clash of intuitions: the current state of nonmonotonic multiple inheritance systems.
Values and the heart's command.
--TR
The mathematics of inheritance systems
A skeptical theory of inheritance in nonmonotonic semantic networks
Floating conclusions and zombie paths
Resolving ambiguity in nonmonotonic inheritance hierarchies
Essentials of artificial intelligence
General patterns in nonmonotonic reasoning
Some direct theories of nonmonotonic inheritance
Circumscription
Well-founded semantics for default logic
--CTR
Shingo Hagiwara , Satoshi Tojo, Stable legal knowledge with regard to contradictory arguments, Proceedings of the 24th IASTED international conference on Artificial intelligence and applications, p.323-328, February 13-16, 2006, Innsbruck, Austria
Pietro Baroni , Massimiliano Giacomin , Giovanni Guida, SCC-recursiveness: a general schema for argumentation semantics, Artificial Intelligence, v.168 n.1, p.162-210, October 2005
Yoshitaka Suzuki, Additive Consolidation with Maximal Change, Electronic Notes in Theoretical Computer Science (ENTCS), 165, p.177-187, November, 2006
Pietro Baroni , Massimiliano Giacomin , Giovanni Guida, Self-stabilizing defeat status computation: dealing with conflict management in multi-agent systems, Artificial Intelligence, v.165 n.2, p.187-259, July 2005 | default logic;nonmonotonic logic;skeptical reasoning |
506794 | Anomaly Detection in Embedded Systems. | By employing fault tolerance, embedded systems can withstand both intentional and unintentional faults. Many fault-tolerance mechanisms are invoked only after a fault has been detected by whatever fault-detection mechanism is used, hence, the process of fault detection must itself be dependable if the system is expected to be fault tolerant. Many faults are detectable only indirectly as a result of performance disorders that manifest as anomalies in monitored system or sensor data. Anomaly detection, therefore, is often the primary means of providing early indications of faults. As with any other kind of detector, one seeks full coverage of the detection space with the anomaly detector being used. Even if coverage of a particular anomaly detector falls short of 100 percent, detectors can be composed to effect broader coverage, once their respective sweet spots and blind regions are known. This paper provides a framework and a fault-injection methodology for mapping an anomaly detector's effective operating space and shows that two detectors, each designed to detect the same phenomenon, may not perform similarly, even when the event to be detected is unequivocally anomalous and should be detected by either detector. Both synthetic and real-world data are used. | Introduction
As computer systems become more miniaturized and more pervasive, they
will be embedded in everyday devices with increasing frequency, even to the
point at which domestic and industrial consumers may not be aware of their
presence. Some truck tires, for example, will soon have a processor and a
pressure sensor/transponder embedded in the rubber, because this is cheaper
than fitting in-hub pressure sensors in the wheels of old trailers on big rigs.
Some laptop-computer batteries contain an embedded computer to track the
charge remaining, thereby ensuring that battery's memory travels with the
battery even when it is moved to another laptop computer. Disk drives may
contain one or two embedded computers (one controller, one DSP chip). Even
operating systems like Unix are being embedded in television set-top boxes
(enabling pausing a live television broadcast), vending machines, Internet
appliances, and the International Space Station (for controlling vibration
damping) [1].
Many of these devices with embedded computers will be intrinsically
safety-critical or mission-critical, and therefore will require a higher level of
dependability than usual - automobile and aircraft engine controllers are one
example. It is presumed that fault tolerance, which is one way of achieving
high dependability, will be employed in such devices.
Several methods of fault tolerance require that a fault be detected before
bringing fault-tolerance measures to bear on it. One salient example is recovery
blocks [2] [3]. Fault detection, therefore, is an essential first step in
achieving dependability. If the detector is not reliable, the fault-tolerating
mechanisms will not be effective, because they will not be activated.
Faults can be detected either explicitly or implicitly. When a fault is
detected explicitly it is typically through pattern recognition, wherein a signature
is detected that is directly linked to a particular fault. When a fault
is detected implicitly, it is usually due to having detected some indirect indi-
cator, such as an anomaly, that may have been caused by the fault. System-
performance anomalies are often the only indicators of problems, because
some faults have no stable, explicit signature; they're indicated only through
unusual behaviors. One such example is the fault condition known as an
Ethernet broadcast storm, which is indicated indirectly by anomalously-high
packet traffic on a network [4]. Another example is an increasing error rate
reported by software sensors, and observed in system event logs; disk surface
failures can be indicated this way [5] [6]. In the Ethernet example, measures
of packet traffic served as a sensor. In system event logs, many different measures
are available [7]. As noted in [4] there can be many sensors measuring
the state of a network, system or process. These sensors can be hardware or
software, although recent trends have been mainly toward software sensors
[8].
The data produced by such sensors are referred to as sensor data or a
sensor-data stream. The data in the sensor-data stream can be numeric
or categorical. Numeric data are usually continuous, are on a ratio scale,
have a unique zero point, and have mathematical ordering properties (e.g.,
taking differences or ratios of these measures makes sense). Categorical data,
sometimes referred to as nominal data, are discrete, usually consist of a
series of unique labels as categories, and have no mathematical ordering
properties (e.g., an apple is not twice an orange) [9]. It seems likely that as
computing power increases, more of the sensor data will be in the form of
categorical data [8] [10], hence anomaly detectors will be required to operate
primarily on categorical data, presenting a real challenge to developers and
users of such sensors, because categorical data are much more difficult to
handle statistically than numeric data are. This paper focuses on detecting
anomalies in categorical data.
An anomaly occurring in such sensor data is often the indirect or implicit
manifestation of a fault or condition somewhere in the monitored system or
process. Detecting such anomalies, therefore, can be an important aspect of
maintaining the integrity, reliability, safety, security and general dependability
of a system or process. Since anomaly detection is on the front line of
many fault-tolerance and dependability mechanisms, it is essential that it is,
itself, reliable. One way to gauge its reliability is by its coverage.
Coverage is a figure of merit that gauges the effectiveness of a detection
or testing process. Historically, a system's coverage has been said to be the
proportion of faults from which a system recovers automatically; the faults
in this class are said to be covered by the recovery strategy [11].
Coverage can also be viewed as the probability that a particular class of
conditions or events is detected before a system suffers consequences from a
missed or false detection. Another definition of coverage, and the one used
in this paper, is: a specification or enumeration of the types of conditions
against which a particular detection scheme guards [12]. More succinctly, the
coverage of an anomaly detector is the extent to which it detects, correctly,
the events of a particular anomaly class. The motivation for the concern
with coverage is that one needs to know if and when one's anomaly detection
system will experience a Type I error (a true null hypothesis is incorrectly
rejected) or a Type II error (a false null hypothesis fails to be rejected), so
that one can take precautionary measures to compensate for such errors. If
an anomaly detector does not achieve complete coverage, its suitability for
use should be scrutinized carefully. Anomaly classes will be discussed in
Section 6.
This paper addresses the issues of how to assess the coverage of an
anomaly detector, how to acquire ground-truth test data to aid in that as-
sessment, how to inject anomalous events into the test data, and how to map
the coverage of the detector in terms of sweet spots (regions of adequate
detection) and blind regions (regions of inadequate detection). Once a de-
tector's coverage map is ascertained, it can be used to judge the suitability of
the detector for various situations. For example, if the environment in which
the detector is deployed will never experience a condition in the detector's
blind region, then the detector can be used without adverse consequences.
Problem and objective
A critical problem is that there is little clarity in the literature regarding the
conditions under which anomaly detection works well or works poorly. To
gain that clarity it is necessary to understand the details of precisely how an
anomaly detector works, i.e., what the detector sees, and what phenomena
affect its performance as the stream of sensor data passes through the detec-
tor's range of perception. Similarly, one would want to know how a sorting
algorithm views the data it is sorting, as well as which characteristics of the
data, e.g., presortedness, impinge on the algorithm's efficacy.
The objectives of the present work are (1) to understand the details of
how an anomaly detector works, as a stream of sensor data passes through the
detector's purview; and (2) to use that understanding to guide fault-injection
experiments in which anomalies are injected into normal background data,
the outcome of which is a map illustrating the detector's regions of sensitivity
and/or blindness to anomalies. The results will address such questions as:
- What is the coverage of an anomaly detector?
- How does one assess that coverage?
- Do all anomaly detectors have the same coverage, for a given set of anomalies embedded in background data?
- Can anomaly detectors be composed to attain greater coverage than that achieved by a single anomaly detector used alone?
These issues are addressed using fault injection, a well-known technique
for evaluating detection systems [13] [14]. Synthetic data are used to address
the usual problem of determining ground truth. To facilitate simplicity and
clarity, the most basic type of anomaly detection is used, namely that of a
sliding-window detector moving over a univariate stream of categorical data.
Anomalous sequences are considered to be contiguous. Temporal anomalies,
not addressed here, can be treated similarly if they are aggregated using an
appropriate feature-extraction mechanism.
3 What is an anomaly?
According to Webster's dictionary, an anomaly is something different, abnormal
or peculiar; a deviation from the common rule; a pattern or trait
taken to be atypical of the behavior of the phenomenon under scrutiny. This
definition, fitting as it may be for everyday purposes, is too vague to be
scientifically useful. A more apt definition is this: an anomaly is an event
(or object) that differs from some standard or reference event, in excess of
some threshold, in accordance with some similarity or distance metric on
the event. The reference event is what characterizes normal behavior. The
similarity metric measures the distance between normal and abnormal. The
threshold establishes the minimum distance that encompasses the variation
of normalcy; any event exceeding that distance is considered anomalous. The
specifics of establishing the reference event, the metric and the threshold are
often situation-specific and beyond the scope of this paper, although they
will be addressed peripherally in Section 7.1.
Determining what constitutes an anomalous element in a stream of numeric
data is intuitive; a data element that exceeds, say, the mean plus three
standard deviations may be considered anomalous. Determining what constitutes
an anomaly in categorical data, which is the specific problem addressed
here, is less intuitive, since it makes no sense to compute the mean and standard
deviation (or any other numerically-based measure) of categorical values
such as cat or blue, even if these categories are translated into numbers. In
categorical data, anomalous events are typically defined by the probabilities
of encountering particular juxtapositions of symbols or subsequences in the
data stream; i.e., symbols and subsequences in an anomaly are juxtaposed
in unexpected ways.
Categorical data sets are comprised of sequences of symbols. The collection
of unique symbols in a data set is called the alphabet. Typically, a data
set will be characterized in terms of what constitutes normal behavior for the
environment from which the data were drawn. The data set so characterized
is called the training data. Training data may be obtained from some appli-
cation, e.g., a process-control application that is providing monitored data
for consumption by various analysis programs. Training data are obtained
from the process over a period of time during which the process is judged to
be running normally. Within these normal data, the juxtapositions of symbols
and subsequences would be considered normal, provided that no faults
or unusual conditions prevailed during the collection period. Once the training
data are characterized (termed the training phase), characterizations of
new data, monitored while the process is in an unknown state (either normal
or anomalous), are compared to expectations generated by the training data.
Any sufficiently unexpected juxtaposition in the new data would be judged
anomalous, the possible manifestation of a fault.
Anomaly causes and manifestations
What causes an anomaly, and what does an anomaly look like? An example
from a semiconductor fabrication process illustrates. If one attaches a sensor
to an environment, such as the plasma chamber of a reactive ion etcher,
the sensor data will comprise a series of normal categorical values (given a
normally operating etcher). When a fault occurs in the fabrication process,
the fault will be manifested as an event (a series of sensor values) embedded
in otherwise normal sensor data. That event will contain one or more data
values that are related to the normal data values in one of two ways: (1)
the embedded event could contain symbols commonly found and commonly
juxtaposed in normal data; (2) the embedded event could contain symbols
and symbol juxtapositions that are anomalous with respect to those found
in normal data. Thus the fault could manifest itself as an event injected
into a normal stream of data, and that event could be regarded as normal or
it could be regarded as anomalous. There are three phenomena that could
make an event anomalous:
Foreign symbols. A foreign symbol is a symbol not included in the
training-set alphabet. For example, any symbol, such as a Q, not in the
training-set alphabet comprising A B C D E F would be considered a foreign
symbol. Detection of events containing foreign symbols, called foreign-
symbol-sequence anomalies, is straightforward.
Foreign n-grams/sequences. An n-gram (a set of n ordered elements)
not found in the training dataset (and also not containing a foreign symbol)
is considered a foreign n-gram or foreign sequence, because it is foreign to
the training dataset. A foreign n-gram event contains n-grams not present
in the training data. For example, given an alphabet of A B C D E F, the
set of all bigrams would contain AA, AB, AC, ..., FF, for a total of 6^2 = 36 bigrams (in general, for an alphabet of α symbols, the total number of possible n-grams is α^n).
If the training data contained all bigrams except CC, then CC would be a
foreign n-gram. Note that if a foreign symbol (essentially a foreign unigram)
appears in an n-gram, that would be a foreign-symbol event, not a foreign
n-gram event. In real-world, computer-based data it is quite common that
not all possible n-grams are contained in the training data, partly due to the
relatively high regularity with which computers operate, and partly due to
the large alphabets in, for example, kernel-call streams.
Rare n-grams/sequences. A rare n-gram event, also called a rare
sequence, contains n-grams that are infrequent in the training data. In the
example above, if the bigram AA constituted 96% of the bigrams in the
sequence, and the bigrams BB and CC constituted 2% each, then BB and
CC would be rare bigrams. An n-gram whose exact duplicate is found only
rarely in the training dataset is called a rare n-gram. The concept of rare
is determined by a user-specified threshold. A typical threshold might be
.05, which means that a rare n-gram would have a frequency of occurrence
in the training data of not more than 5%. The selection of this threshold is
arbitrary, but should be low enough for "rare" to be meaningful.
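To make the three anomaly types concrete, the following Python sketch (the helper names and the 5% threshold are illustrative choices, not taken from any particular implementation) classifies a single n-gram against a body of training data:

    from collections import Counter

    def ngrams(stream, n):
        # All contiguous, overlapping n-grams of a symbol stream.
        return [tuple(stream[i:i + n]) for i in range(len(stream) - n + 1)]

    def classify_ngram(gram, training, rare_threshold=0.05):
        # Label an n-gram as foreign-symbol, foreign, rare, or common
        # with respect to the training data.
        alphabet = set(training)
        counts = Counter(ngrams(training, len(gram)))
        total = sum(counts.values())
        if any(sym not in alphabet for sym in gram):
            return "foreign-symbol"
        if gram not in counts:
            return "foreign"
        if counts[gram] / total <= rare_threshold:
            return "rare"
        return "common"

    train = list("ABCDEF") * 50
    print(classify_ngram(("Q", "A"), train))  # foreign-symbol: Q is not in the alphabet
    print(classify_ngram(("C", "C"), train))  # foreign: CC never occurs in training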
5 The ken of an anomaly detector
An anomaly detector determines the similarity, or distance, between some
standard event and the possibly-anomalous events in its purview; it can't
make decisions about things it can't see. The purview of a sliding-window
detector is the length of the window. Since not all anomalies are the same
size as the detector window, and such size differentials can affect what the
detector detects, it is useful to pursue the idea of a detector's ken, or range.

Figure 1: The ken of an anomaly detector: different views of an anomaly (depicted by AAAAAA) embedded in a sensor-data stream (depicted by ddddd) from the perspective of a sliding-window anomaly detector; the panels show the background, whole, internal, encompassing, and boundary-condition views. Arrows indicate direction of data flow.
The word ken means the extent or range of one's recognition, comprehen-
sion, perception or understanding; one's horizon or purview. Thus it seems
appropriate to ask, what is the ken of an anomaly detector? The univariate
case is shown in Figure 1 which depicts a stream of sensor data (ddddd)
into which an anomalous event (AAAAAA) has been injected. The right-
directed arrows indicate that the data are moving to the right with respect
to the detector window, as time and events pass.
The width of the window through which the detector apprehends the
anomaly can take on any value, typically based on the constraints of the
environment or situation in which the detector is being used. The extent
to which the detector window overlaps the anomaly can be thought of as
the detector's view of the anomaly. It is natural to focus on the case in
which the window is the same size as the anomaly and the entire anomaly
is captured exactly within the window. This is called the whole view. There
are, however, a number of other cases, illustrated in the figure. When the size
of the detector window is less than the length of the anomaly, the detector
has what is called an internal view. For the case in which the detector
window is larger than the anomaly, both anomalous and normal background
data are seen - this is the encompassing view. Irrespective of the width
of the window, as time passes and an anomalous event moves through the
window, the event presents itself to the detector in different perspectives.
Of particular interest are situations termed external boundary conditions,
used interchangeably here with the term boundary conditions. These arise
at both ends of an injected sequence embedded in normal data, when the
leading or trailing element of the anomaly abuts the normal data. Boundary
conditions occur independently of the relative sizes of the detector window
and anomaly (except in the degenerate case of size one). In a boundary
condition, the detector sees part of the anomaly and part of the background
data. The background view sees only background data, and no anomalies.
It will be shown later that the detector views and conditions just discussed
will be important in determining precisely what an anomaly detector
is capable of detecting, as well as what may cause an anomaly detector to
raise an alarm, even when it should not. Note that these conditions depend
on the size of the injected event relative to the size of the detector window.
Table 1 summarizes the conditions.

    Condition        DW < AS   DW = AS   DW > AS
    Internal            x
    Boundary            x         x         x
    Encompassing                            x

Table 1: Conditions of interest that ensue with respect to detector-window size (DW) and anomaly size (AS).
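The contents of Table 1 can also be stated procedurally; the small function below is a hypothetical helper, written only to make the size relationships explicit:

    def views(dw, anomaly_size):
        # Views a sliding window of size dw can have of a contiguous
        # anomaly of the given size (cf. Table 1; "whole" is the
        # exact-capture case discussed in the text).
        v = ["background", "boundary"]   # arise for any relative sizes
        if dw < anomaly_size:
            v.append("internal")
        elif dw == anomaly_size:
            v.append("whole")
        else:
            v.append("encompassing")
        return v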
6 Anomaly space
It is important to note that anomalies can be composed of subsequences of
various types, three of which were identified in Section IV: foreign symbols,
foreign n-grams and rare n-grams. A fourth type is a common n-gram, an
n-gram that appears commonly (not rarely) in the normal data. Henceforth
the terms n-gram and sequence will be used interchangeably, i.e., foreign
n-gram and foreign sequence refer to the same thing.
That an anomalous sequence can be composed of several different kinds
of subsequences, along with the concept of internal sequences and boundary
sequences, gives rise to the idea of creating a map of the anomaly space
for sliding-window detectors. Given such a map, it should be possible to
determine the extent to which that map is covered by a particular anomaly
detector. It is not the goal of this paper to do that, but rather to show
that two detectors can have unexpectedly different coverages, even when
encountering the same events embedded in the same data; that is, different
detection capabilities will arise from the use of different metrics and different
detectors.
An anomaly-space map is shown in Figure 2. The map is described in
the figure caption and in the paragraph following it. The window size of the
detector, relative to the size of the anomaly, is shown in the three columns
of the figure: detector window size less than anomaly size, detector window
size equal to the anomaly size, and detector window size greater than
the anomaly size. For each of these conditions the figure addresses three
kinds of anomalies: foreign-symbol-sequence anomalies (sequences comprising
only foreign symbols); foreign-sequence anomalies (sequences comprising
only foreign sequences); and rare-sequence anomalies (sequences comprising
only rare sequences).
The following material expands the description of a cell by selecting as
an example the anomaly type FF AI AB, depicted at the upper left of the
figure. This is a sequence of foreign symbols (FF) composed of alien internal
sequences (AI) and having alien external boundaries (AB). The term alien is
an umbrella term used to refer to sequences that do not exist in the normal
(training) data, irrespective of the characteristics that make them foreign,
unlike the more closely-defined terms foreign symbol and foreign sequence.
FF is a foreign-symbol-sequence anomaly composed only of foreign sym-
bols. In this specific case, when the anomalous sequence FF AI AB slides
past a detector window whose size is less than the size of the anomaly, the
detector will first encounter the leading edge of the anomaly. That leading
edge will be alien, i.e., the sequence containing the first element of the
anomaly and the normal element immediately preceding it is not a sequence
that exists in the normal (training) data, and therefore will be anomalous.
As the anomaly moves through the detector window, each internal, detector-
window-sized subsequence of the anomaly will be alien. As the anomaly
Foreign-Symbol-Sequence Anomalies
FF AI AB FF - AB FF AE AB
Foreign-Sequence Anomalies
FS AI AB FS AE AB
FS RI AB FS - AB FS RE AB
FS CI AB FS CE AB
FS AI RB FS AE RB
FS RI RB FS - RB FS RE RB
FS CI RB FS CE RB
FS AI CB FS AE CB
FS RI CB FS - CB FS RE CB
FS CI CB FS CE CB
Rare-Sequence Anomalies
RS AI AB RS AE AB
RS RI AB RS - AB RS RE AB
RS CI AB RS CE AB
RS AI RB RS AE RB
RS RI RB RS - RB RS RE RB
RS CI RB RS CE RB
RS AI CB RS AE CB
RS RI CB RS - CB RS RE CB
RS CI CB RS CE CB
Figure 2: Anomaly space. The first two letters in each cell identify the type of anomalous sequence (FS: foreign-sequence anomaly; RS: rare-sequence anomaly; FF: foreign-symbol-sequence anomaly). The next two letters identify the type of condition (internal (alien, rare or common) or encompassing (alien, rare or common)); the last two letters refer to the boundary conditions (alien, rare or common). DW < AS indicates detector window smaller than anomaly; DW > AS analogously indicates window larger; when DW = AS there are no internal or encompassing conditions, indicated by dashes replacing the middle two letters. Impossible conditions are struck out.
passes out of the window, its trailing edge will form another alien boundary.

Figure 3: Foreign-symbol-sequence anomaly injected into background data. External boundary conditions are shown for a detector window of size 6 and an injected foreign-symbol sequence of size 6; shaded elements mark the anomalous sequence and the background elements incorporated into the sequences comprising the external boundary conditions.
Figure 3 illustrates a sliding-window detector moving over an anomaly
injected (synthetically) into a data stream. The detector window and the
anomaly size are the same: 6. The shaded boxes depict the
injected FF anomaly, which raises an alarm as the detector window is positioned
exactly over it. Ten sequences result from the interaction between
the background and the injected anomaly. These ten sequences comprise
the boundary conditions that may affect the response of the detector as its
window slides over the injected anomaly, depending on whether or not additional
anomalies are caused by the anomaly-background boundary interac-
tions. The composition of an injected anomaly, as well as the position of the
injection in the background data, must be carefully controlled to avoid the
creation of additional anomalies at the boundaries of the injection; Section
7.4 provides details.
Note that the anomalies depicted in Figure 2 reflect the restricted needs
of an experimental regime, and do not express all of the conditions that
might be encountered in a real-world environment. In the FSRIRB anomaly,
for example, all of the subsequences comprising the internal condition are
rare, and all the subsequences that make up the external condition are rare.
In the real world, the subsequences comprising these conditions might be a
mixture of rare, foreign and common. The anomaly space is constructed as it
is in order to effect experimental control and to reduce confounding in which
a detector's response cannot be attributed to any single phenomenon, but
rather is due to the interaction of several phenomena.
7 Mapping the detection regions
Different detectors may cover different parts of the anomaly space depicted in
Figure 2. This section describes an experiment showing how well a selected
portion of the anomaly space is covered by two different detectors, Markov
and Stide, whose detection mechanisms will be explained below. The selected
portion of the space is foreign-sequence anomalies as shown in the fifth row
of the foreign-sequence section of the figure: FS RI RB, FS - RB, and FS RE
RB. Because the last of these is not possible, focus is limited to the first two.
These cells were chosen because they contain events that would unequivocally
be termed anomalous by both of the detectors used in the experiment:
both anomalies are foreign sequences with rare boundary conditions. For the
case in which the detector window size is less than the anomaly size, the
subsequences that make up the internal conditions are all rare, hence FS RI
RB. For the case in which the detector window size is equal to the anomaly
size, no internal conditions will be extant, hence FS - RB. The sequence FS
RE RB is not possible, because the sequences that make up the encompassing
condition will contain the foreign sequence FS. Sequences that contain
foreign subsequences will themselves be foreign sequences; consequently it is
not possible to have a rare sequence that contains a foreign subsequence.
The following subsections describe the detectors used in the coverage-
mapping experiment, the methods for generating the data used to test detector
coverage (background data, anomaly data and anomaly-injected testing
data), and the regime for running the detectors in the experiment.
7.1 Detectors
To illustrate that different detectors may cover different parts of the anomaly
space, two detectors were tested: Markov and Stide. Each of these is described
below.
7.1.1 Markov detector
Most engineered processes, including ones used by or being driven by com-
puters, consist of a series of events or states. While the process is running,
the state of the process will change from time to time, typically in an orderly
fashion that is dictated by some aspect of the process itself. Certain kinds
of orderly behavior are plausible approximations to problems in real-world
anomaly detection and, moreover, they facilitate rigorous statistical treatment
and conclusions. The anomaly-detection work in this paper focuses on
the kind of orderly behavior that corresponds to Markov models.
The Markov anomaly detector determines whether the states (events) in
a sequential data stream, taken from a monitored process, are normal or
anomalous. It calculates the probabilities of transitions between events in
a training set, and uses these probabilities to assess the transitions between
events in a testing set. These states and probabilities can be described by a
Markov model. The key aspect of a Markov model is that the future state
of the modeled process depends only on the current state, and not on any
previous states [15, 16].
A Markov model consists of a collection of all possible states and a set
of probabilities associated with transitioning from one state to another. A
graphical depiction of a Markov model with four states is shown in Figure 4
in which the states are labeled with the letters A, B, C and D. Although the
arcs are not explicitly labeled in the figure, they can be thought of as being
labeled with the probabilities of transitioning from one state to another,
e.g., from state A to state B. The transition probabilities can be written
in a transition matrix, as shown in Figure 5, in which the letters indicate
states and the numbers indicate transition probabilities. The probability of
transitioning from D to A, for example, is 1; from D to any other state is 0.
The transition probabilities are based on a key property, called the Markov
assumption, and can be written formally as follows. If X t is the state of a
system at time t, then:
Figure
4: Four-state Markov model alphabetcomprisedoffoursymbols. Letters
indicate states; arrows indicate transition probabilities. A transition can
be made from any state to any other state, with a given probability.
Hence the probability of being in state X_{t+1} = y at time t+1 depends only on the immediately preceding state X_t = x at time t, and not on any previous state leading to the state at time t. Therefore the transition probability, P_{xy}, denoting the progression of the system from state x to state y, can be defined as:

    P_{xy} = P(X_{t+1} = y | X_t = x)
Readers interested in further details are encouraged to consult the large literature
on Markov models, e.g., [15].
Weather prediction provides a nice illustration of a Markov process. In
general, one can usually predict tomorrow's weather based on how the weather
is today. Over short time periods, tomorrow's noontime temperature depends
only on today's noontime temperature. The previous day's temperature is
correlated, but provides no additional information beyond that contained in
today's measurement. So, there is some reasonably high probability that
tomorrow will be like today. If tomorrow's weather is not like today's, then
one's expectations would be violated, and one would feel surprised. The degree
of surprise can be used in anomaly detection: the more surprised one is
to observe a certain event or outcome, the more anomalous is the event, and
the more it draws one's attention.
          A     B     C     D
    A   0.00  1.00  0.00  0.00
    B   0.00  0.00  1.00  0.00
    C   0.00  0.00  0.00  1.00
    D   1.00  0.00  0.00  0.00

    Transition sequence: ABCDABCD.

Figure 5: Transition matrix for four-state Markov model (alphabet comprised of four symbols: A, B, C and D). Letters indicate states; numbers indicate probabilities of transitioning from one state to another. Example: probability of transitioning from D to A is 1; probability of transitioning from D to any other state is 0.
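A transition matrix such as the one in Figure 5 can be estimated from an observed sequence by counting successive symbol pairs and normalizing each row; a minimal Python sketch:

    from collections import Counter, defaultdict

    def transition_matrix(seq):
        # Estimate first-order transition probabilities by counting
        # successive symbol pairs and normalizing each row.
        counts = defaultdict(Counter)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
        return {a: {b: n / sum(row.values()) for b, n in row.items()}
                for a, row in counts.items()}

    P = transition_matrix("ABCDABCDABCD")
    print(P["D"]["A"])   # 1.0, matching the example in Figure 5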
Basing an anomaly detector on a discrete Markov process requires three
steps. First, a state transition matrix is constructed, using the training data;
the training data represent the conditions that are considered to be normal.
For example, if sensor data are collected from an aircraft turbine that is
running under normal operating conditions, these data would be used as
training data. From these data would be constructed the state transition
matrix that represents normal turbine behavior.
The second step is to establish a metric for surprise. This is generally a
distance (or similarity) measure that determines how dissimilar from normal
a process can be, while remaining within the bounds of what is considered to
be normal operating behavior. If the threshold of dissimilarity is exceeded,
then the observed behavior, as reflected in the sensor data, is judged to be
abnormal, or anomalous. In the case of the Markov-model approach, if a
transition is judged to be highly probable (e.g., has a probability of 0.9),
then its surprise factor is 1 - 0.9 = 0.1. If, in this example, the surprise threshold were set at 0.9, then a surprise factor of 0.1 would not be anomalous. If the surprise factor had been 0.98, for example, then the transition would have been considered anomalous. The more the surprise factor exceeds the surprise threshold, the more anomalous the event will seem. Given a threshold of 0.9, a surprise factor of 0.91 would be deemed anomalous, but a surprise factor of 0.99 would be regarded as fractionally more anomalous.
The third step is to examine the test data to see if they fall within the
expectations established by the training data. As each state transition in the
test data is observed, its probability of occurring in normal data is retrieved
from the transition matrix derived from the training data. If the transition
under scrutiny has a surprise factor that exceeds the surprise threshold, then
the event in the testing data is considered anomalous.
The Markov-based anomaly detector that is used in this paper is based on
the ideas presented in this section. The states in the model do not necessarily
need to correspond to single events or unigrams; a state can be composed of
a series of events, too. In a case where multiple events comprise a state, the
collection of states in the Markov model spans the combinations of unigrams
of a specified length as present in the training data. Consider, for example,
the sequence A B C D E F. A 3-element window is moved through the
sequence one event at a time: the first window position would contain A B
C; the second window position would contain B C D, and so forth. In using
a Markov model to assess the surprise factor of the transition between the
first and second windows, the states would comprise a series of three events
or unigrams, i.e., equivalent to the window size. Notice that in the transition
from the first window to the second window, the event A is eliminated, and
the event D is added. Since the events B C are common to both windows,
it is the addition of event D which drives the surprise factor. Therefore, the
resulting surprise factor reflects on the event D following the series of events
A B C. A formal description of the training and testing processes used in
conjunction with such a Markov model is given next.
Markov training stage Primitives similar to those described in [17] are
defined to facilitate the description of the training procedure. Let Σ denote the set of unique elements (i.e., the alphabet of symbols, or the set of states) in a sequential stream of data. A state in a Markov model is denoted by s and is associated with a sequence (window) of length N over the set Σ. A transition is a pair of states, (s, s'), that denotes a transition from state s to state s'. The primitive operation shift(σ, z) shifts a sequence σ left by one, and appends the element z, where z ∈ Σ, to the end. For instance, if the sequence σ = abc, then shift(σ, z) returns bcz, which becomes the new sequence. The primitive operation next(σ) returns the first symbol of the sequence σ, then left shifts σ by one to the next symbol. This function is
analogous to popping the first element from the top of a stack, where the top
of the stack is the beginning of the sequence. For example, given a sequence
abcde, next(abcde) returns a and updates the sequence to bcde.
The construction of the Markov model for normal behavior based on
training data can be described as follows:
Initialize:
- current state = first N elements of the training data, and
- σ = training-data stream minus the first N elements.

Until all the sequences of size N have been scanned from the training data:

1. Let c = next(σ).
2. Set next state to shift(current state, c).
3. Increment counter for the state current state and for the transition (current state, next state).
4. Set current state to be next state.
After the entire stream of training data has been processed, the probability of the transition (s, s') is computed as P(s, s') = F(s, s') / F(s), where F(s, s') and F(s) are the counters associated with the transition (s, s') and the state s, respectively.
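As a rough illustration, the training stage above might be coded as follows (a sketch under my own naming; states are N-element tuples, and the returned probabilities are the normalized transition counters F(s, s')/F(s)):

    from collections import Counter

    def markov_train(train, N):
        # States are N-element windows; count each state and each
        # transition between successive overlapping windows, then
        # normalize: P(s, s') = F(s, s') / F(s).
        state_count, trans_count = Counter(), Counter()
        current = tuple(train[:N])
        for c in train[N:]:                  # c = next(sigma)
            nxt = current[1:] + (c,)         # shift(current, c)
            state_count[current] += 1
            trans_count[(current, nxt)] += 1
            current = nxt
        return {t: n / state_count[t[0]] for t, n in trans_count.items()}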
Markov testing stage Let 0.00 indicate normal, and let 1.00 indicate
anomalous. The surprise factor (sometimes called an anomaly signal) can be
calculated from test data as follows:
Initialize:
- current state = first N elements of the test data, and
- σ = test-data stream minus the first N elements.

Until all the sequences of size N have been scanned from the test data:

1. Let c = next(σ), and set next state to shift(current state, c).
2. Surprise = 1.00 minus the transition probability of (current state, next state).
3. Set current state to be next state.
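The corresponding testing stage then scores each transition by its surprise factor; a transition absent from the training model has probability 0 and therefore surprise 1.0 (a sketch, paired with the markov_train sketch above):

    def markov_test(test, N, probs):
        # Surprise factor (1 - transition probability) for each transition
        # in the test stream; transitions never seen in training have
        # probability 0 and hence maximum surprise 1.0.
        surprises = []
        current = tuple(test[:N])
        for c in test[N:]:
            nxt = current[1:] + (c,)
            surprises.append(1.0 - probs.get((current, nxt), 0.0))
            current = nxt
        return surprises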
7.1.2 Stide detector
Stide is a sequence time-delay embedding anomaly detector inspired by natural
immune systems that distinguish self (normal) from nonself (anomalous)
[18] [19]. The reference to "time" recognizes the time-series nature of the
categorical data on which the detector is typically deployed. Stide has been
applied to streams of system kernel-call data in which the manifestations of
maliciously altered code are regarded as anomalies [20]. Stide mimics natural
immune systems by constructing templates of "self" and then matching
them against instances of "nonself." It achieves this in several stages.
Stide training stage A database consisting of templates of "self" is constructed
from a stream of data considered to be normal (self); these are the
training data. The stream is broken into contiguous, n-element, overlapping
subsequences, or n-grams. The value of n is typically determined empirically
[21]. Duplicate n-grams are removed from the collection, leaving only the
unique ones. These unique n-grams are stored for future fast access. This
completes the training stage.
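Stide's training stage thus reduces to collecting the set of unique overlapping n-grams of the normal stream; a minimal sketch:

    def stide_train(train, n):
        # Database of the unique, overlapping n-grams of normal (self) data.
        return {tuple(train[i:i + n]) for i in range(len(train) - n + 1)}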
Stide testing stage Stide compares n-grams from an unknown dataset
(testing data) to each of its unique "self" n-grams. Any unknown n-gram
that does not match a "self" n-gram is termed a mismatch.
Finally, a score is calculated on the basis of the number of n-gram comparisons
made within a temporally localized region (termed "locality frame")
[21]. Each comparison in step two (testing) receives a score of either zero
or one. If the comparison is an exact match, the score is zero; if the comparison
is not a match, the score is one. These scores are summed within a
local region to obtain an anomaly signal. An example illustrates. Within a
local region of 20 comparisons made between "self" and unknown, if all 20
are mismatches, the score will be 20 for that particular region; if only 8 are
mismatches, the score will be 8 for that region. There are many overlapping
regions of 20 in any given data stream. Stide calculates which of these 20-
element regions has the highest score, and concludes that that region is the
locus of the anomaly.
The Stide algorithm can be described formally as follows. Let N be the
length of a sequence. The similarity between the sequence X = (x_1, ..., x_N) and the sequence Y = (y_1, ..., y_N) is defined by the function:

    Sim(X, Y) = 0 if x_i = y_i for all i = 1, ..., N; otherwise Sim(X, Y) = 1.

The expression above states that the function Sim(X, Y) returns 0 if two
sequences of the same length are element-by-element identical; otherwise the
function returns 1.
Each sequence of size N in the test data is compared to every sequence of
size N in the normal database. Let Norm be the number of sequences of size
N in the normal database. Given the set of sequences {Y_1, ..., Y_Norm} in the normal database, and the ordered set of sequences {X_1, X_2, ..., X_{Z-N+1}} in the test data, where X_s is the s-th such sequence and Z is the number of elements in the data sample, the final similarity measure assigned to the sequence X_s is:

    Sim_f(X_s) = min_{j = 1, ..., Norm} Sim(X_s, Y_j).

The expression above states that when a sequence, X_s, from the test data is compared against all sequences in the normal database, the function Sim_f(X_s)
returns 1 if no identical sequence can be found (i.e., a mismatch); otherwise
the function returns 0 to indicate the presence of an identical sequence (a
match) in the normal database.
The locality frame count (LFC) for each size N sequence in the test
data is described as follows. Let L be the size of the locality frame and
let Z be the number of elements in a data sample. For the ordered set of sequences {X_1, X_2, ..., X_{Z-N+1}} in the test data, the LFC can be described by:

    LFC(X_s) = Σ_{i = max(1, s-L+1)}^{s} Sim_f(X_i).
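Combining the mismatch test with the locality frame count, and reusing stide_train from the sketch above (the example data are illustrative):

    def stide_test(test, db, n, L):
        # Per-window mismatch flags (1 = no identical n-gram in the normal
        # database) and locality frame counts over the last L windows.
        mism = [0 if tuple(test[i:i + n]) in db else 1
                for i in range(len(test) - n + 1)]
        lfc = [sum(mism[max(0, s - L + 1):s + 1]) for s in range(len(mism))]
        return mism, lfc

    db = stide_train(list("ABCDEFGH") * 100, 6)
    mism, lfc = stide_test(list("ABCDEFGHBAFHECCFABCDEFGH"), db, 6, 20)
    print(max(lfc))   # anomaly signal peaks at windows overlapping BAFHECCF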
7.2 Constructing the synthetic training data
The training data serve as the "normal" data into which anomalous events
are injected. The requirements for the training data are that a large proportion
of the data be comprised of common sequences, that they contain a
small proportion of rare sequences, and that there is a relatively high predictability
from one symbol to another. The common sequences are required
to facilitate the creation of background test data that will contain no noise
in the form of naturally-occurring rare or foreign sequences. This is necessary
so that a detector's response to the injected anomaly can be observed
without confounding by such phenomena. The rare sequences in the training
data are needed so that anomalous events composed of rare sequences can be
drawn from the normal training data, and then injected into the test data;
see Section 7.3 below for details. Finally, a modicum of predictability is convenient
for emulating certain classes of real-world data (e.g., system kernel
calls) for which detectors like Stide are said to be well suited [22].
The alphabet has eight symbols: A B C D E F G and H. A larger alphabet
could have been used, but it would not have demonstrated anything
that an 8-symbol alphabet could not demonstrate; increasing the alphabet
size would not change the outcome. Moreover, substantially more computation
time is required as the alphabet size goes up. It is noted that alphabet
sizes in real-world data are typically much larger than 8; for example, the
number of unique kernel-call commands in BSM [23] audit data exceeds 200.
However, the current goal is to evaluate a detector in terms of its ability to
detect anomalies as higher-level abstract concepts, and while alphabet size
does influence the size of the set of foreign sequences and the set of possible
sequences that populate the normal dataset, foreign sequences and rare
sequences retain their character irrespective of alphabet size. Maintaining a
relatively small alphabet size facilitates a more manageable experiment, yet
permits direct study of detector response.
To accommodate the requirements for predictability and data content, the
training data were generated from an eight-by-eight state transition matrix
with probability 0.9672 in one cell of each row, and 0.004686 in every other
cell, resulting in a sequence of conditional entropy 0.1 (see [24] for details).
One million data elements (symbols) were generated so that there would
be a sufficient variety of rare sequences in the sample to use them in the
construction of anomalous sequences for the test data. Ninety-eight percent
of the training data consisted of repetitions of the sequence A B C D E F G
H, seeding the data set with common sequences. This is the data set used
to train the two detectors used in this study, i.e., to establish a model of
normalcy against which unknown data can be compared.
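The generation procedure can be sketched as follows. The probabilities 0.9672 and 0.004686 are taken from the text; the assumption that each symbol's preferred successor is its cyclic successor (A to B, ..., H to A) is mine, chosen to reproduce the dominant repeating sequence A B C D E F G H described above:

    import random

    ALPHABET = "ABCDEFGH"

    def make_row(preferred):
        # Probability 0.9672 on the preferred successor, 0.004686 on each
        # of the other seven symbols (values from the text; sums to ~1).
        return {s: (0.9672 if s == preferred else 0.004686) for s in ALPHABET}

    # Assumption: each symbol prefers its cyclic successor (A->B, ..., H->A).
    MATRIX = {s: make_row(ALPHABET[(i + 1) % 8]) for i, s in enumerate(ALPHABET)}

    def generate(n, seed=0):
        rng = random.Random(seed)
        out, cur = [], "A"
        for _ in range(n):
            cur = rng.choices(ALPHABET,
                              weights=[MATRIX[cur][s] for s in ALPHABET])[0]
            out.append(cur)
        return "".join(out)

    train = generate(1_000_000)   # one million symbols, as in the text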
7.3 Constructing the synthetic test data
Test data, containing injected anomalies, are used to determine how well the
detector can capture anomalous events and correctly reject events that are
not anomalous. The test data consist of two components: a background, into
which anomalies are injected, and the anomalies themselves. Each is generated
separately, after which the anomalies are injected into the background
under strict experimental control. The background consisted of repeated sequences
of A B C D E F G H, the most common sequence in the training
data. This was done so that the test data would not conflict with the training
data, i.e., would not contain spurious rare or foreign sequences.
7.4 Constructing the anomalous injections
Once the background data are available, anomalous events must be injected
into them to finalize the test data. The anomalies must be chosen carefully
so that when they are injected into the test data they do not introduce unintended
anomalous perturbations, such as external boundary conditions. If
this were to happen, then a detector could react to those conditions, confounding
the outcomes of interest. Hence, scrupulous control is necessary.
The goal is to map the detection capability of both the Stide and the
Markov anomaly detectors, and to show that their detection capabilities
may vary with respect to identical and unequivocally anomalous phenomena.
Given this objective, a single anomaly type that both detectors must be able
to detect is selected from the anomaly space in Figure 2 for the experiments.
The anomaly type selected is a foreign sequence of length AS for which all
subsequences of length less than AS that make up the internal sequences
and the boundary sequences are rare. Rare is defined to be any sequence of
detector-window length that occurs in the training data less than one percent
of the time.
It is within the scope of this study to map out only one region or type in
the anomaly space in order to illustrate what can be learned and gained by the
effort. Once that anomaly type is determined, e.g., FS RI RB, as described
in Section 7, the next step is to inject a foreign sequence composed of rare
sequences into the test data. A catalog of rare n-grams is obtained from the
training data. Rare n-grams are drawn from the catalog, and composed to
form a foreign sequence of the appropriate size. For example, the bigrams
BA, AF, FH, HE, EC, CC and CF each occurred less than 0.06% of the
time in the training data; consequently, these are rare bigrams. Combining
these seven bigrams produces one octagram (BAFHECCF) whose internal
sequences are made up of rare sequences of size two. This octagram was
injected into the background data.
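The cataloging-and-composition step just described can be sketched in Python as follows; this is a simplified illustration (the greedy chaining and all names are assumptions), and the actual procedure additionally verifies that the composed sequence is foreign to the training data.

from collections import Counter

def rare_ngrams(data, n=2, threshold=0.01):
    # Catalog n-grams occurring less often than `threshold`,
    # expressed as a fraction of all n-grams in the training data.
    counts = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    total = sum(counts.values())
    return {g for g, c in counts.items() if c / total < threshold}

def compose_foreign(rare_bigrams, k):
    # Chain k rare bigrams, overlapping by one symbol, into a candidate
    # foreign sequence of length k + 1 (e.g., seven bigrams yield an
    # octagram such as BAFHECCF).
    pool = sorted(rare_bigrams)
    seq = pool[0]
    while len(seq) < k + 1:
        nxt = next((g for g in pool if g[0] == seq[-1]), None)
        if nxt is None:
            break  # dead end; a real injector would retry another start
        seq += nxt[1]
    return seq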
Once the composed foreign sequence is injected into the background data,
its boundary conditions must be checked to ensure that they are all rare (be-
cause rare boundary conditions are consistent with the anomaly class being
examined). If the boundary conditions are satisfied, the procedure is finished;
if not, an attempt is made to handcraft the boundary conditions with the help
of semiautomated tools. If the handcrafting fails, then a different set of rare
sequences is selected from the catalog, and a new foreign sequence is com-
posed. The new foreign sequence is injected into the background data, and
its boundary conditions are checked. This entire procedure is repeated until
a successful injection is obtained. Note that when the size of the detector
window is greater than the size of the injection, an encompassing condition
ensues, not an internal condition; however, care is still required to ensure
that the external boundary conditions remain pertinent, even though the
focus has been moved from internal conditions to encompassing conditions.
Eight injection sizes and fourteen detector-window sizes were tested. The
procedure outlined above for creating the anomalous events, and for injecting
them, is repeated for each combination of injection size and window size,
resulting in 112 total data sets.
7.5 Scoring detector performance
Anomaly detectors are capable of only two kinds of decisions: yes, an anomaly
exists; or, no, an anomaly does not exist. Detectors are usually judged
in terms of hits, misses and false alarms. A hit occurs when the detector
determines that an anomaly is present, and an anomaly actually is present.
A miss occurs when the detector determines that no anomaly is present
when actually there is one present. A false alarm occurs when the detector
determines that there is an anomaly present when in fact there is no anomaly
present. A perfect detector would have a 100% hit rate, no misses and no
false alarms.
To effect proper scoring, ground truth (a statement of undisputed fact
regarding the test data) must be known. That is, it must be determined
exactly where anomalies have been injected into the test data, so that when
the detector issues a decision, the correctness of that decision can always be
assured. The injector creates a key that indicates the exact location of every
injection. Using this key, one can determine whether or not a detector's
decisions are correct. The usual procedure for this is for the detector to
create an output file containing its decisions: 0 for no, and 1 for yes. This
file can be compared against the key, which contains a similar set of zeroes
and ones. If the two files match perfectly, the detector's performance is a
perfect 100%; otherwise, the percent of hits, misses and false alarms can be
calculated easily.
Using the injection key file, however, is not as straightforward as it might
first appear. Due to the interaction of the detector-window size and the
particular composition of the injected event, the detector's responses may
not always be aligned perfectly with the ones and zeroes in the key file. For
example, if the key file contains a one at the leading edge of an injected
event (which seems sensible), the detector's response might not match. In
fact, the detector might make a variety of responses, some of them multiple,
to an injected event, and the key file must be constructed to facilitate correct
scoring for any set of responses the detector might make. For example, the
detector might decide, incorrectly, that an anomaly exists at the leading-edge
boundary of an injected event; in fact, at that boundary there is no
anomaly, so the detector's decision would be wrong. Another example is
that the detector, depending on its window size relative to the size of the
injected event, may respond only to subsequences internal to the injected
event, but not to the event in toto. There is a danger that a detector will
respond several times to a single injected event (because it may encounter
several different views of that event), and thereby be judged mistakenly to
have false-alarmed - to have decided yes in error.
The problems with matching detector decisions against ground-truth keys
can be addressed in a variety of ways. In the present work the primary
concern is to determine whether or not a detector is completely blind to an
injected event, so the most worrisome response would be no response at all
- a miss. If the detector does not decide yes to any part of the injected
event, whether internal, whole, encompassing or boundary, then it is blind
to the event. Alternatively, if the detector does respond, there needs to be
a way to mitigate the problem of key mismatch, as described above. This
problem is addressed by requiring that the detector respond positively at
least once within the span of the injected event and the elements comprising
its boundary conditions (collectively called the incident span) in order for
the event to be judged a hit.
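A sketch of this scoring rule under the incident-span convention is given below; the data layout and names are assumptions made for illustration.

def score_events(decisions, incident_spans):
    # decisions: list of 0/1 detector responses, one per position.
    # incident_spans: (start, end) index pairs, each covering an injected
    # event plus the elements comprising its boundary conditions.
    hits = sum(1 for (s, e) in incident_spans if any(decisions[s:e + 1]))
    misses = len(incident_spans) - hits
    covered = set()
    for s, e in incident_spans:
        covered.update(range(s, e + 1))
    false_alarms = sum(1 for i, d in enumerate(decisions)
                       if d and i not in covered)
    return hits, misses, false_alarms

An event counts as a hit if the detector answers yes anywhere within its incident span; a yes outside every span is counted as a false alarm.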
7.6 Procedure
Each of the two detectors, Markov and Stide, was provided with the same set
of training data. From the training data, the detectors learned their models
of normal behavior. Then each detector was tested, using each of the 112
test data sets described in Section 7.4. The size of the detector window was
varied from two to fifteen, and the size of the injected events was varied
from two to nine. The restrictions on these dimensions were due to resource
limitations, since computation time and memory increase with an increase
in either dimension. Moreover, nothing new would be learned from raising
either parameter. Each detector, for each testing session, produced a decision
file which was compared to the key files for each test data set. Comparisons
that revealed detector blindness (no detection of injected anomalous events)
were charted.
Note that Stide's locality-frame-count feature was ignored, because it
operates only as an amplifier for detected anomalies (which Stide represents
as mismatches). If a foreign-sequence anomaly is not detected by a mismatch,
then applying the locality frame count will not make the anomaly visible.
Since the task at hand is to determine whether or not the injected anomalies
were detected, amplification, which can be viewed as a post-detection process,
is not relevant for mismatch/anomaly detection under present experimental
conditions. As a consequence of this, Stide's maximum anomalous response
is 1.
8 Results
Detection and blind regions for the Markov and Stide detectors are depicted
in Figures 6 and 7, respectively. These decision maps illustrate the detection
capability of both the Markov and the Stide detector with respect to an
injected foreign sequence composed of rare sequences.
The x-axis of each map marks the increasing size of the foreign sequence
injected into the test data; the y-axis marks the size of the detector window
required to detect a foreign sequence of a specified size. Each star indicates
successful detection of the foreign-sequence anomaly whose size is indicated
on the x-axis, using a detector window whose size is marked on the y-axis; detection
specifically means that at least one positive response occurred in the
incident span, where "positive" connotes the most-anomalous value possible
in the detector's range of responses. In both detectors, the most anomalous
response is 1. The areas that are bereft of stars indicate either detection
blindness or an undefined region. Detection blindness means that the detector
was unable to detect the injected foreign sequence whose corresponding
size is marked on the x-axis, i.e., the maximum anomalous response recorded
along the entire incident span was 0 - to the detector, the anomaly appears
as being completely normal. Note that no false alarms occurred, because
background data were constructed from common sequences which do not
raise alarms.
Figure 6: Markov sweet and blind regions. The decision map plots the size of
the foreign-sequence anomaly (x-axis) against the size of the detector window
(y-axis), marking the detection region and the undefined region.
The undefined region is an artifact of the Markov detector and anomaly
type. Since the Markov detector is based on the Markov assumption, i.e.,
the next state is dependent only upon the current state, the smallest possible
window size is 2, or a bigram. This means that the next expected, single,
categorical element is dependent only on the current, single, categorical el-
ement. As a result, the y-axis marking the detector window sizes in Figure
6 begins at 2. Although it is possible to run Stide on a window size of 1,
doing so would produce results that do not include sequential ordering of
events, a property that comes into play in all window sizes larger than 1.
Figure 7: Stide sweet and blind regions. The decision map plots the size of
the foreign-sequence anomaly (x-axis) against the size of the detector window
(y-axis), marking the detection region, the blind region and the undefined
region.
This, together with the fact that there is no equivalent window size of 1 on
the side of the Markov detector, argued against running Stide with a window
of 1.
The x-axis also begins at 2, because the type of anomalous event on
which the detectors are being evaluated requires that a foreign sequence be
composed of rare sequences. A foreign sequence of size one is an event that
contains a single element that must be both foreign and rare at the same time;
this is not possible. As a result, both Figures 6 and 7 show an undefined
region corresponding to a detector window of size two and an anomaly of
size one.
By charting the performance spaces of Stide and the Markov-based detector
with respect to a foreign sequence composed of rare sequences, one is able
to observe the nature of the gain achieved by employing the conditional probabilities
of the Markov detector (that are absent in Stide). The significant
gain in detection capability endowed by the use of conditional probabilities
is illustrated by the blind region depicted in Figure 7. It is interesting to
note that, for the exact same datasets, using a detector window of length 9,
Stide's detection coverage is just 56% of Markov's. As the detector window
size decreases to 2, Stide's coverage decreases to only 12.5% of that of the
Markov detector. At this window size, however, the Markov detector still
has 100% coverage of the space, which is a tremendous difference.
The results show that although the Markov and Stide detectors each
use the concept of a sliding window, and are both expected to be able to
detect foreign sequences, their differing similarity metrics significantly impact
their detection capabilities. In the case of Stide, even though there is a
foreign sequence present in the data stream, it is visible only if the size of
the detector window is at least as large as the foreign sequence composed of
rare subsequences - a requirement that the Markov detector does not have.
Therefore, even if a fault does manifest as a foreign sequence in the data,
it doesn't necessarily mean that Stide, which claims to be able to detect
"unusual" sequences, will detect such a manifestation. It should be noted,
therefore, that the selection of a similarity metric can have critical effects on
the performance of a detector, and these choices should be made with care
and with understanding of the metric.
9 Real-world data
The results shown in previous sections were based on synthetic data that were
generated specifically to test the different anomaly detectors described. This
section provides a link to real-world data, and shows that the manifestations
of live anomalies in system kernel-call data are consistent with the anomaly-
space map of Figure 2.
The live experiment consisted of a cyberattack on a RedHat 6.2 Linux
system. The attack exploited a vulnerability in glibc (standard C library)
through the su program (a program that switches a user from one account to
another, given that the user provides appropriate credentials). The library
allows a user to write an arbitrary format string. The exploiting program
writes a carefully crafted string which interrupts the execution of su, allowing
the user to run a script or program with root privileges. The exploit permits
the user to switch accounts without providing credentials. Running su with
and without the exploit should produce kernel-call data with and without
anomalies due to the exploit itself. Kernel-call data on the victim machine
were logged using the IMMSEC kernel patch, provided by the Computer
Immune Systems Research Group at the University of New Mexico.
The attack was scripted so that it could be repeated reliably and auto-
matically. The following procedure was run three times, using standard su
(with normal user interaction, providing credentials to switch from user to
root) to obtain normal (training) data, and run three more times using the
su exploit to obtain the anomalous (test) data:
1. Start a session as a regular user.
2. Turn on syscall logging for the su program.
3. Run the exploit as the regular user; verify that it was successful in
giving the user a shell with root privileges.
4. Turn off syscall logging for the su program; move the log file to a
permanent location.
5. Clean up the environment and log out.
The monitored kernel-call data were examined by both an experienced
system programmer and a set of automated tools to find all the minimal foreign
sequences that appeared in the attack data, but not in the normal data.
A minimal foreign sequence is a foreign sequence in which no shorter foreign
sequence is embedded. The programmer and tool results were compared and
found to be mutually consistent. The system programmer confirmed, through
systematic analysis, that all of the foreign sequences were direct manifestations
of the attacks. Seventeen foreign-sequence anomalies were discovered
in the su exploit; no foreign symbols were found, and there was no variability
in the kernel-call data for the three attacks. The foreign-sequence anomalies
ranged in length from 2 to 5, with one anomaly of length 5, five anomalies
of length 3, and eleven anomalies of length 2. Some anomalies were unique;
others were repeated in the data. Details are shown in Figure 8.
When using a detector window of size 2, which is the smallest possible
size that covers an anomaly of length two, the real-world foreign-sequence
anomalies in Figure 8 had compositional characteristics like the ones shown
in the anomaly space in Figure 2. All of the anomaly descriptions are the
same as the ones described in the anomaly space, except for the second one,
whose description is eight characters long, instead of six or four like the rest. The anomaly (FS
RI RB FB) shown in the figure is from live data, and its composition reflects
Figure 8: Foreign-sequence anomalies, discovered in real-world information-
warfare attack data, showing the size of each of the 17 anomalies, the events
comprising each anomaly, and the anomaly description in accordance with
the anomaly space of Figure 2. Anomaly contents are numerical encodings
of kernel calls. (Table columns: Anomaly Size, Anomaly Contents, Anomaly
Description.)
the broader set of conditions pertaining to uncontrolled, real-world data, as
opposed to the more compact formulations in the anomaly space which are
for well-behaved anomalies like the ones found in the synthetic data. The
FS, as usual, indicates the base type of the anomaly: foreign sequence. The
RI indicates that all of the internal conditions are rare. The RB indicates
that all of the sequences comprising the left boundary are rare, and the FB
indicates that all the sequences comprising the right boundary are foreign.
10 Discussion and conclusion
This paper has addressed fundamental issues in anomaly detection. The
results are applicable in any domain in which anomaly detection in categorical
data is conducted.
The paper has shown how to assess the coverage of an anomaly detec-
tor, and has also illustrated many subtleties involved in doing so. There
are myriad factors to be considered carefully; the process is not straightfor-
ward. Meticulous attention needs to be paid to the interactions between an
anomalous event and the normal environment in which it is embedded, i.e.,
external boundary conditions, internal conditions, encompassing conditions,
and common, rare and foreign sequences that compose an anomalous event.
Unless all of these factors are accounted for, error may be the biggest enemy
of a correct mapping.
The coverage maps for two different anomaly detectors were shown to be
strikingly different. This might come as a surprise to someone who believes
that applying any anomaly detector to a stream of sensor data would be
satisfactory, or that either of two detectors would detect the same events. One
detector, Stide, which was specifically designed to detect foreign sequences,
was shown to be blind to over half of a region it purports to cover. When
used in its original role as a detector for information-warfare intrusions, Stide
has been operated in a region of the detection space that is about six by six
in terms of window size vs. anomaly size. It is interesting that in that region
Stide is blind to 36% of the space, whereas the Markov detector covers 100%
of that same region.
It is not necessarily bad for an anomaly detector to have less than perfect
coverage, as long as the user knows the limitations of the detector. If a
detector has suboptimal coverage, it may be possible to deploy the detector
in situations where it doesn't need to operate in the part of the space in
which it is blind. It will never be possible to assure this, however, if the
detector's coverage is not mapped.
Can multiple anomaly detectors be composed to attain greater coverage
than that achieved by a single anomaly detector used alone? It seems
clear from the two maps produced here that one detector can be deployed
to compensate for the deficiencies of another. In the present case it may
appear that the Markov detector should simply replace Stide altogether, but
because each detector has a different operational overhead, it may not be
straightforward to determine the best mix for a compositional detection sys-
tem. Also, one should be reminded that in the present work only one cell of
the anomaly space depicted in Figure 2 has been examined; determination
of overall coverage awaits examination of the rest of the cells as well. When
deploying anomaly detectors in embedded or mission-critical systems, it is
essential to understand the precise capabilities of the detectors, as well as
the characteristics of the spaces in which they will operate.
Although the real-world experiment with live systems and data was limited
in scope, it still illustrates two important things. First, the anomaly
types depicted in Figure 2 were demonstrated to exist in real-world data;
they are not mere artifacts of a contrived environment. Second, the response
of a detector to a specified type of anomaly will not change, whether the
anomaly is found in synthetic data or in real-world data; consequently, the results
obtained from having evaluated an anomaly detector on synthetic data
will be preserved faithfully when applied to real data; that is, predictions
made with synthetic data will be sustained when transferred to real-world
environments.
Some important lessons have been learned. A blind region in an anomaly-
space map will always grow as the foreign sequence grows. This means that
longer foreign sequences may constitute vulnerabilities for the detection algorithms
considered here. Nevertheless, it is undoubtedly better to know the
performance boundaries of a detector so that compensations can be made
for whatever its weaknesses may be. Synthetic data have been effective
in mapping anomaly spaces. Synthetic data may be the only avenue for
creating such maps, because they permit running experiments in which all
confounding conditions can be controlled, allowing absolute calibration of
ground truth. Although real-world data is appealing for testing detection
systems, real-world ground truth will always be difficult to obtain, and not
all of the desired conditions for testing will occur in real data in a timely
way. Finally, and most importantly, the anomaly-space framework provides
a mechanism that bridges the gap between the synthetic and real worlds,
allowing evaluation results to transfer to any domain through the anomaly-
space abstraction.
Acknowledgements
The work herein was supported by the U.S. Defense Advanced Research
Projects Agency (DARPA) under contracts F30602-99-2-0537 and F30602-
00-2-0528. Many other people contributed in various ways; the authors are
grateful to Kevin Killourhy, Pat Loring, Bob Olszewski, Sami Saydjari and
Tahlia Townsend for their help. This paper draws on Kymie Tan's forthcoming
dissertation [25].
--R
"Little Linuxes."
Principles and Practice.
"System structure for software fault tolerance."
"A case study of ethernet anomalies in a distributed computing environment."
Trend Analysis and Fault Prediction.
"Symptom based diagnosis."
Computer Event Monitoring and Analysis.
"A survey of intrusion-detection techniques."
"On the theory of scales of measurement."
"On-line monitoring: A tutorial."
"The concept of coverage and its effect on the reliability model of a repairable system."
Reliable Computer Systems.
"Fault injection techniques and tools."
"Fault injection - a method for validating computer-system dependability."
The Theory of Stochastic Processes.
Time Series Analysis.
"Markov chains, classifiers, and intrusion detection."
"Computer immunology."
"A sense of self for unix processes."
"Intrusion detection using sequences of system calls."
"Detecting intrusions using system calls: Alternative data models."
"Self-nonself discrimination in a computer."
"SunSHIELD basic security module guide."
"Benchmarking anomaly-based detection systems."
Defining the Operational Limits of Anomaly-Based Intrusion Detectors.
| coverage;anomaly;dependability;anomaly detection |
506833 | Optimal partition of QoS requirements on unicast paths and multicast trees. | We investigate the problem of optimal resource allocation for end-to-end QoS requirements on unicast paths and multicast trees. Specifically, we consider a framework in which resource allocation is based on local QoS requirements at each network link, and associated with each link is a cost function that increases with the severity of the QoS requirement. Accordingly, the problem that we address is how to partition an end-to-end QoS requirement into local requirements, such that the overall cost is minimized. We establish efficient (polynomial) solutions for both unicast and multicast connections. These results provide the required foundations for the corresponding QoS routing schemes, which identify either paths or trees that lead to minimal overall cost. In addition, we show that our framework provides better tools for coping with other fundamental multicast problems, such as dynamic tree maintenance. | INTRODUCTION
Broadband integrated services networks are expected to support
multiple and diverse applications, with various quality of
service (QoS) requirements. Accordingly, a key issue in the design
of broadband architectures is how to provide the resources
in order to meet the requirements of each connection.
Supporting QoS connections requires the existence of several
network mechanisms. One is a QoS routing mechanism, which
sets the connection's topology, i.e., a unicast path or multicast
tree. A second mechanism is one that provides QoS guarantees
given the connection requirements and its topology. Providing
these guarantees involves allocating resources, e.g., bandwidth
and buffers, on the various network elements. Such a consumption
of resources has an obvious cost in terms of network per-
formance. The cost at each network element inherently depends
on the local availability of resources. For instance, consuming
all the available bandwidth of a link, considerably increases the
blocking probability of future connections. Clearly, the cost
of establishing a connection (and allocating the necessary re-
sources) should be a major consideration of the connection (call)
admission process. Hence, an important network optimization
problem is how to establish QoS connections in a way that minimizes
their implied costs. Addressing this problem impacts both
the routing process and the allocation of resources on the selected
topology. The latter translates into an end-to-end QoS
requirement partition problem, namely local allocation of QoS
requirements along the topology.
The support of QoS connections has been the subject of extensive
research in the past few years. Several studies and proposals
considered the issue of QoS routing, e.g., [2], [8], [9], [21], [24]
and references therein. Mechanisms for providing various QoS
guarantees have been also widely investigated, e.g. [7], [22].
email: {deanh@tx, ariel@ee}.technion.ac.il
Although there are proposals for resource reservation, most notably
RSVP [3], they address only the signaling mechanisms and
do not provide the allocation policy. Indeed, the issue of optimal
resource allocation, from a network perspective, has been
scarcely addressed. Some studies, e.g. [13], consider the spe-
cific, simple case of constant link costs, which are independent
of the QoS (delay) supported by the link. Pricing, as a network
optimization mechanism, has been the subject of recent studies,
however they either considered a basic best effort service envi-
ronment, e.g. [14], [18], or simple, single link [17] and parallel
link [20] topologies.
In this paper, we investigate the problem of optimal resource
allocation for end-to-end QoS requirements on given unicast
paths and multicast trees. Specifically, we consider a frame-work
in which resource allocation is based on the partition of
the end-to-end QoS requirement into local QoS requirements
at each network element (link). We associate with each link a
cost function that increases with the severity of the local QoS
requirement. As will be demonstrated in the next section, this
framework is consistent with the proposals for QoS support on
broadband networks. Accordingly, the problem that we address
is how to partition an end-to-end QoS requirement into local
requirements, such that the overall cost is minimized. This is
shown to be intractable even in the (simpler) case of unicast
connections. Yet, we are able to establish efficient (polynomial)
solutions for both unicast and multicast connections, by imposing
some (weak) assumptions on the costs. These results provide
the required foundations for the corresponding QoS routing
schemes, which identify either paths or trees that lead to minimal
overall cost. Moreover, we indicate how the above frame-work
provides better tools for coping with fundamental multi-cast
problems, such as the dynamic maintenance of multicast
trees.
A similar framework was investigated in [4], [19]. There too,
it was proposed that end-to-end QoS requirements should be
partitioned into local (link) requirements and the motivation for
this approach was extensively discussed. [19] discussed unicast
connections and focused on loss rate guarantees. It considered
a utility function, which is equivalent to (the minus of) our cost
function. However, rather than maximizing the overall utility,
[19] focused on the optimization of the bottleneck utility over
the connection's path. That is, their goal was to partition the
end-to-end loss rate requirement into link requirements over a
given path, so as to maximize the minimal utility value over the
path links. Specifically, [19] investigated the performance of a
heuristic that equally partitioned the loss rate requirements over
the links. By way of simulations, it was indicated that the performance
of that heuristic was reasonable for paths with few (up
to five) links and tight loss rate requirements; this finding was
further supported by analysis. However, it was indicated that
performance deteriorated when either the number of links became
larger or when the connection was more tolerant to packet
loss. It was concluded that for such cases, as well as for alternate
QoS requirements (such as delay), the development of optimal
QoS partition schemes is of interest. [4] considered multicast
trees and a cost function that is a special case of ours. Each
tree link was assigned an upper bound on the cost, and the goal
was to partition the end-to-end QoS into local requirements, so
that no link cost exceeds its bound. Specifically, [4] considered
two heuristics, namely equal and proportional partitions,
and investigated their performance by way of simulations. It
was demonstrated that proportional partition offers better performance
than equal partition, however it is not optimal. [4]
too concluded that more complex (optimal) partition schemes
should be investigated. These two studies provide interesting
insights into our framework, and strongly motivate the optimization
problems that we investigate.
Another sequence of studied that is related to the present one
is [9], [16]. These studies investigated QoS partitioning and
routing for unicast connections, in networks with uncertain pa-
rameters. Their goal was to select a path, and partition the QoS
requirements along it, so as to maximize the probability of meeting
the QoS requirements. As shall be shown, the link probability
distribution functions considered in [9], [16] correspond to a
special case of the cost functions considered in the present pa-
per. The algorithms presented in [16] solve both the routing and
the QoS partition problem for unicast connections, under certain
assumptions. The present study offers an improved, less restric-
tive, solution for unicast, and, more importantly, a generalized
solution for multicast.
The general resource allocation problem is a constraint optimization
problem. Due to its simple structure, this problem
is encountered in a variety of applications and has been studied
extensively [12]. Optimal Partition of end-to-end QoS requirements
over unicast paths is a special case of that problem,
however the multicast version is not. Our main contribution
is in solving the problem for multicast connections. We also
present several algorithms for the unicast problem, emphasizing
network related aspect, such as distributed implementation.
The rest of this paper is structured as follows. Section II formulates
the model and problems, and relates our framework to
QoS network architectures. The optimal QoS partition problem
for unicast connections is investigated in Section III. The optimal
partition problem for multicast connections is discussed in
Section IV. This problem is solved using a similar approach to
that used for unicast, nonetheless the analysis and solution structure
turn out to be much more complex. Section V applies these
findings to unicast and multicast QoS routing. Finally, concluding
remarks are presented in Section VI. Due to space limits,
many technical details and proofs are omitted from this version
and can be found in [15].
II. MODEL AND PROBLEMS
In this section we present our framework and introduce the
QoS partition problem. We assume that the connection topology
is given, i.e., a path p for unicast, or a tree T for multicast.
The problem of finding such topologies, namely QoS routing,
is briefly discussed in Section V. For clarity, we detail here
only the framework for unicast connections. The definitions and
terminology for multicast trees are similar and are presented,
together with the corresponding solution, in Section IV.
A. QoS Requirements
A QoS partition of an end-to-end QoS requirement $Q$, on a path $p$, is a vector $x_p = \{x_l\}_{l\in p}$ of local QoS requirements, which satisfies the end-to-end QoS requirement $Q$.
There are two fundamental classes of QoS parameters: bottleneck
parameters, such as bandwidth, and additive parame-
ters, such as delay and jitter. Each class induces a different
form of our problem, and the complexities of the solutions are
vastly different. For bottleneck parameters, we necessarily have
$Q = \min_{l\in p} x_l$, i.e., the end-to-end QoS is determined by the bottleneck link. Since allocating more than $Q$ on a link induces a higher cost, yet does not improve the overall QoS, the optimal partition is $x^*_l = Q$ for every $l \in p$. For additive QoS requirements, a feasible partition, $x_p$, must satisfy $\sum_{l\in p} x_l \le Q$. In this case, the optimal QoS partition problem is intractable [16]; however, we will show that, by restricting ourselves to convex cost functions, we can achieve an efficient (tractable) solution.
Some QoS parameters, such as loss rate, are multiplicative, i.e., $Q = \prod_{l\in p} x_l$. For instance, for a loss rate QoS requirement $L$, we have $Q = 1 - L$. This case too can be expressed as an additive requirement, by solving for $(-\log Q)$; indeed, the end-to-end requirement becomes $\sum_{l\in p}(-\log x_l) \le (-\log Q)$, an additive requirement.
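As a small worked instance of this logarithmic transformation (the numbers are illustrative and not taken from any particular network):

$$\prod_{l\in p} x_l \;\ge\; Q \quad\Longleftrightarrow\quad \sum_{l\in p}\bigl(-\log x_l\bigr) \;\le\; -\log Q .$$

For example, a loss rate requirement $L = 0.01$ on a three-link path gives $Q = 0.99$ and $-\log Q \approx 0.01005$; the equal partition $x_l = Q^{1/3} \approx 0.99666$ satisfies the additive form with equality.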
There are QoS provision mechanisms, in which the (additive)
delay bounds are determined by a (bottleneck) "rate". A notable
example is the Guaranteed Service architecture for IP [23],
which is based on rate-based schedulers [7], [22], [25], [26].
In some cases, such mechanisms make it possible to translate a delay requirement on a given path into a bottleneck (rate) requirement, hence the partitioning is straightforward. However, such
a translation cannot be applied in general, e.g. due to complications
created by topology aggregation and hierarchical routing. 2
Hence, our study focuses on the general partition problem of
additive QoS requirements.
B. Cost Functions
As mentioned, we associate with each local QoS requirement
value x l , a cost c l (x l ), and make the natural assumption
that c l (x l ) is higher as x l is tighter. For instance, when x l
stands for delay, c l (x l ) is a non-increasing function, whereas
when x l stands for bandwidth, c l (x l ) is non-decreasing. The
overall cost of a partition is the sum of the local costs, i.e.,
The cost may reflect the resources, such as bandwidth, needed
to guarantee the QoS requirement. Alternatively, the cost may
be the price that the user is required to pay to guarantee a specific
QoS. The cost may be associated with either the set-up or
the run-time phase of a connection. Also, it may be used for
1 This is also true for a multicast tree T.
2 Indeed, the ATM hierarchical QoS routing protocol [21], requires local (per
cluster) QoS guarantees.
network management to discourage the use of congested links,
by assigning higher costs to those links.
A particular form of cost evolves in models that consider
uncertainty in the available parameters at the connection setup
phase [9], [16], which we now briefly overview. In such mod-
els, associated with each link is a probability of failure f l (x l ),
when trying to set up a local QoS requirement of x l . The optimal
QoS partition problem is then to find a QoS partition-
that minimizes the probability of failure; that is, it minimizes
the product
we have log
l2p log f l (x l ), we can restate this problem back as a summa-
tion, namely we define a cost function for each link, c l
log f l (x), and solve for these costs.
C. Problem Formulation
The optimal QoS partition problem is then defined as follows.
Problem OPQ (Optimal Partition of QoS): Given a path $p$ and an end-to-end QoS requirement $Q$, find a QoS partition $x^*_p = \{x^*_l\}_{l\in p}$, such that $c(x^*_p) \le c(x'_p)$ for any (other) QoS partition $x'_p$.
This study focuses on the solution of Problem OPQ for additive
QoS parameters, which, as mentioned, is considerably more
complex than its bottleneck version. In Section III we solve the
problem for unicast paths, and in Section IV we generalize the
solution to multicast trees. For clarity, and without loss of gen-
erality, we concretize the presentation on end-to-end delay requirements.
III. SOLUTION TO PROBLEM OPQ
In this section we investigate the properties of optimal solutions
to Problem OPQ for additive QoS parameters and present
efficient algorithms. These results will be used in the next section
to solve Problem MOPQ, i.e., the generalization of Problem
OPQ to multicast trees. As mentioned, Problem OPQ is
a specific case of the resource allocation problem. The fastest
solution to this problem, [5], requires $O(|p|\log(D/|p|))$. 3 In
Section III-B we present a greedy pseudo-polynomial solution.
This solution provides appealing advantages for distributed and
dynamic implementations, as discussed in Section III-C. In
Section III-D we present a polynomial solution that, albeit of
slightly higher complexity than that of [5], provides the foundations
of our solution to Problem MOPQ. Finally, in Section III-E
we discuss special cases with lower complexity.
As mentioned, we assume that the QoS parameter is end-to-
end delay. We further assume that all parameters are integers,
and that the link cost functions are non-increasing with the delay
and (weakly) convex.
A. Notations
$x_p(D)$ is a feasible partition of an end-to-end delay requirement $D$ on the path $p$ if it satisfies $\sum_{l\in p} x_l \le D$. We omit the subscript $p$ and/or the argument $D$ when there is no ambiguity. $x^*_p(D)$ denotes the optimal partition, namely the solution of Problem OPQ for an end-to-end delay requirement $D$ and a path $p$.
3 $|p|\log(D/|p|)$ is also a lower bound for solving Problem OPQ [11]; that is, no fully polynomial algorithm exists.
We denote by $|x_p|$ the norm $\sum_{l\in p}|x_l|$; hence $x$ is feasible if $|x_p| \le D$. The average $\delta$-increment gain for a link $l$ is denoted by $\Delta_l(x;\delta) = (c_l(x+\delta) - c_l(x))/\delta$. The average $\delta$-move gain is denoted by $\Delta_{e\to l}(x;\delta) = \Delta_l(x_l;\delta) + \Delta_e(x_e;-\delta)$, the average change in the overall cost when $\delta$ delay units are moved from link $e$ to link $l$.
B. Pseudo-polynomial Solution
Problem OPQ is a special case of the general resource allocation
problem which has been extensively investigated [11], [12].
With the (weak) convexity assumption on the cost functions, it
is a convex optimization problem with a simple constraint. It
can be proved [12] that a greedy approach is applicable for such
problems, namely it is possible to find an optimal solution by
performing locally optimal decisions.
GREEDY-ADD(D, δ, c(·), p):
1  x_l ← 0 for every l ∈ p
2  for i ← 1 to D/δ do
3      l* ← argmin_{l∈p} Δ_l(x_l; δ)
4      x_{l*} ← x_{l*} + δ
5  return x

Fig. 1. Algorithm GREEDY-ADD
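A possible Python rendering of GREEDY-ADD, using a heap as the accompanying text suggests; the function names and the cost-function interface are our own, not taken from [15].

import heapq

def greedy_add(D, delta, costs, path):
    # costs[l] is the link cost function c_l(x): non-increasing, convex.
    x = {l: 0 for l in path}
    # heap entries: (average delta-increment gain, link index)
    heap = [((costs[l](delta) - costs[l](0)) / delta, i)
            for i, l in enumerate(path)]
    heapq.heapify(heap)
    for _ in range(int(D // delta)):
        _, i = heapq.heappop(heap)
        l = path[i]
        x[l] += delta
        gain = (costs[l](x[l] + delta) - costs[l](x[l])) / delta
        heapq.heappush(heap, (gain, i))
    return x

For instance, with per-link costs of the form c_l(d) = a_l/(d + 1), each unit of delay is allocated where the cost curve currently drops fastest.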
Algorithm GREEDY-ADD (Figure 1) employs such a greedy
approach. It starts from the zero allocation and adds the delay
bit-by-bit, each time augmenting the link where the (negative)
$\delta$-increment gain is minimal, namely where it most affects the cost. Using an efficient data structure (e.g., a heap), each iteration requires $O(\log|p|)$, which leads to an overall complexity of $O((D/\delta)\log|p|)$. In [11] it is shown that the solution is $\delta$-optimal in the following sense: if $x^\delta$ is the output of the algorithm and $x^*$ is the optimal solution, then $|x^\delta - x^*| \le |p|\delta$. 4
Algorithm GREEDY-MOVE (Figure 2) is a modification of Algorithm
GREEDY-ADD that, as shall be explained in Section III-
C, has important practical advantages. The algorithm starts from
any feasible allocation and modifies it until it reaches an optimal
partition. Each iteration performs a greedy move, namely the move with minimal (negative) $\delta$-move gain.

GREEDY-MOVE(x, δ, c(·), p):
1  repeat
2      (e, l) ← argmin_{e,l∈p} Δ_{e→l}(x; δ)
3      if Δ_{e→l}(x; δ) ≥ 0 then return x
4      x_e ← x_e − δ;  x_l ← x_l + δ

Fig. 2. Algorithm GREEDY-MOVE
Let $\varphi(x)$ be the distance of a given partition from the optimal one, namely $\varphi(x) \equiv |x - x^*|$, where $x^*$ is the optimal partition that is nearest to $x$. The next lemma implies that Algorithm GREEDY-MOVE indeed reaches a $\delta$-optimal solution.
Lemma 1: Each iteration of Algorithm GREEDY-MOVE decreases $\varphi(x)$ by at least $\delta$, unless $x$ is a $\delta$-optimal partition.
Lemma 1 implies that Line 3 can be used as a ($\delta$-)optimality check. It also implies that the algorithm terminates with a $\delta$-optimal solution and that the number of iterations is proportional to $\varphi(x)$. Theorem 1 summarizes this discussion.
4 This also establishes the error due to our assumption of integer parameters.
Theorem 1: Algorithm GREEDY-MOVE solves Problem OPQ in $O((\varphi(x)/\delta)\log|p|)$.
Proof: By Lemma 1, there are at most $\varphi(x)/\delta$ iterations and a $\delta$-optimal solution is achieved. Each iteration can be implemented in $O(\log|p|)$ and the result follows.
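For completeness, the greedy-move step can also be sketched in Python; this version scans all link pairs, i.e., O(|p|^2) per iteration rather than the O(log |p|) heap implementation analyzed above, and the names are ours.

def greedy_move(x, delta, costs, path):
    def inc(l, d):
        # average delta-increment gain of changing x[l] by d
        return (costs[l](x[l] + d) - costs[l](x[l])) / delta

    while True:
        # candidate moves e -> l: take delta from e, give delta to l
        candidates = [(inc(l, delta) + inc(e, -delta), e, l)
                      for e in path for l in path
                      if e != l and x[e] >= delta]
        if not candidates:
            return x
        gain, e, l = min(candidates, key=lambda t: t[0])
        if gain >= 0:  # the delta-optimality check of Line 3
            return x
        x[e] -= delta
        x[l] += delta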
C. Distributed and Dynamic Implementation
Algorithm GREEDY-MOVE can be employed in a distributed
fashion. Each iteration can be implemented by a control message
that traverses back and forth between the source and des-
tination. At each traversal, the links $e$ and $l$ of Line 2 are identified and the allocation change of the previous iteration is performed. This requires $O(|p|\varphi/\delta)$ end-to-end messages. Such a distributed
implementation also exempts us from having to advertise the
updated link cost functions.
Algorithm GREEDY-MOVE can be used as a dynamic scheme
that reacts to changes in the cost functions after an optimal partition
has been established. Note that the complexity is proportional
to the allocation modification implied by the cost changes,
meaning that small allocation changes incur a small number of
computations.
D. Polynomial Solution
In this section we present an improved algorithm of polynomial
complexity in the input size (i.e., jpj and log D). In Section
IV, we derive a solution to Problem MOPQ using a similar
technique.
Algorithm BINARY-OPQ (Figure 3) finds optimal solutions for different values of $\delta$. The algorithm consecutively considers smaller values of $\delta$, until the minimal possible value is reached, at which point a (global) optimum is identified.

BINARY-OPQ(D, c(·), p):
1  δ ← D/|p|
2  start from the partition x_l = δ for every l ∈ p
3  repeat
4      x ← GREEDY-MOVE(x, δ, c(·), p)
5      δ ← δ/2
6  until δ < 1
7  return x

Fig. 3. Algorithm BINARY-OPQ
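A direct Python transcription of this scaling loop, reusing greedy_move from the sketch above (assuming integer parameters and D >= |p|):

def binary_opq(D, costs, path):
    delta = max(D // len(path), 1)
    x = {l: delta for l in path}  # coarse feasible start
    while True:
        x = greedy_move(x, delta, costs, path)
        if delta == 1:
            return x
        delta = max(delta // 2, 1)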
Obviously, the algorithm finds an optimal solution, since its last call to GREEDY-MOVE is with $\delta = 1$. The number of iterations is clearly of order $O(\log(D/|p|))$. We need to bound the number of steps required to find the $\delta$-optimal partition at Line 4. Each iteration (except the first, for which $\delta = D/|p|$) starts from a $2\delta$-optimal partition and employs greedy moves until it reaches a $\delta$-optimal partition. This bound is the same for all iterations, since it is a bound on the distance between a $2\delta$-optimal partition and a $\delta$-optimal partition.
Lemma 2: Let $x$ be a $2\delta$-optimal partition, e.g., the output of GREEDY-MOVE with step $2\delta$. Then $\varphi(x) \le |p|\delta$.
This lemma, proven in [15], resembles the proximity theorem presented in [11].
Theorem 2: Algorithm BINARY-OPQ solves Problem OPQ in $O(|p|\log|p|\log(D/|p|))$.
Proof: By Lemma 2 and Theorem 1, each call to GREEDY-MOVE requires $O(|p|\log|p|)$. Since there are $O(\log(D/|p|))$ such calls, the result follows.
E. Faster Solutions
The following lemma, which is a different form of the optimality
test in Algorithm GREEDY-MOVE, provides a useful
threshold property of optimal partitions.
Lemma 3: Let $\Delta^* \equiv \min_{l\in p}\Delta_l(x^*_l; 1)$. For all $l \in p$, $\Delta_l(d;-1)$ is a non-increasing function of $d$ ($c_l$ is convex) and (by definition) $\Delta_l(d;-1) = -\Delta_l(d-1;1)$. The threshold $\Delta^*$ relates to the optimal allocation as follows:
$$x^*_l \ge d \iff \Delta_l(d-1;1) \le \Delta^*. \qquad (1)$$
This implies that an optimal solution to Problem OPQ can be found by selecting the $D$ largest elements from the set $\{-\Delta_l(d;1) \mid l\in p,\ 0 \le d < D\}$.
For certain cost functions, this can be done analytically. For instance, in [16] we provide an $O(|p|)$ solution for cost functions that correspond to delay uncertainty with uniform probability distributions.
More generally, if the cost functions are strictly convex, then, given $\Delta^*$, one can use (1) to find an optimal solution in $O(|p|)$. In [16], a binary search is employed for finding $\Delta^*$. Accordingly, the resulting overall solution is of $O(|p|\log\Delta_{\max})$, where $\Delta_{\max}$ is the maximal (absolute) value of the $\delta$-increment gains over the links of $p$. Note that $\log\Delta_{\max}$ is bounded by the complexity of representing a cost value.
IV. SOLUTION TO MULTICAST OPQ (MOPQ)
In this section we solve Problem OPQ for multicast trees.
Specifically, given a multicast tree, we need to allocate the delay
on each link, such that the end-to-end bound is satisfied on every
path from the source to any member of the multicast group, and
the cost associated with the whole multicast tree is minimized.
We denote the source (root) of the multicast by s and the set
of destinations, i.e., the multicast group, by $M$. A multicast tree is a set of edges $T \subseteq E$ such that for every $v \in M$ there exists a path, $p_T(s,v)$, from $s$ to $v$ on links that belong to the tree $T$. We assume there is only one outgoing link from the source $s$, 5 and denote this link by $r$. $N(l)$ denotes the set of outgoing links emanating from the end node of link $l$, i.e., all of $l$'s neighbors; when $N(l)$ is an empty set we call $l$ a leaf. $T_l$ is the whole sub-tree originating from $l$ (including $l$ itself). The branches of $T$ are denoted by $\bar{T} \equiv T \setminus \{r\}$; observe that $\bar{T} = \bigcup_{l\in N(r)} T_l$. A feasible delay partition for a multicast tree $T$ is a set of link requirements $x_T = \{x_l\}_{l\in T}$ such that $\sum_{l\in p_T(s,v)} x_l \le D$ for every $v \in M$. 6 We can now define Problem OPQ for multicast trees.
Problem MOPQ (Multicast OPQ): Given a multicast tree $T$ and an end-to-end delay requirement $D$, find a feasible partition $x^*_T(D)$, such that $c(x^*_T(D)) \le c(x_T(D))$ for every (other) feasible partition $x_T(D)$.
Remark 1: If there is more than one outgoing link from the
source, then we can simply solve Problem MOPQ independently for each tree $T_{r_i}$ corresponding to an outgoing link $r_i$
5 See Remark 1.
6 Again, when no ambiguity exists, we omit the sub-script T and/or the argument
D.
from $s$. Thus, our assumption, that there is only one outgoing link from $s$, does not limit the solution.
We denote by $\mathrm{MOPQ}(T,d)$ the set of optimal partitions on a tree $T$ with delay $d$. $c_T(d)$ denotes the tree cost function, i.e., the cost of (optimally) allocating a delay $d$ on the tree $T$; in other words, $c_T(d) = c(x^*_T(d))$.
A. Greedy Properties
The general resource allocation problem can be stated with
tree-structured constraints and solved in a greedy fashion [12].
An efficient O(jTj log jTj log D) algorithm is given in [11].
However, that "tree version" of the resource allocation problem
has a different structure than Problem MOPQ. Indeed, the
simple greedy approach, namely repeated augmentation of the
link that most improves the overall cost, fails in our framework.
However, as we show below, some greedy structure is main-
tained, as follows: if at each iteration we augment the sub-tree
that most improves the overall cost, then an optimal solution is
achieved.
The main difference of our framework is that the constraints
are not on sub-trees, but rather on paths. The greedy approach
fails because of the dependencies among paths. On the other
hand, we note that the tree version of the resource allocation
problem may be applicable to other multicast resource allocation
problems, in which the constraints are also on sub-trees.
For example, suppose a feasible allocation must recursively satisfy, for any sub-tree $T_e$, $\sum_{l\in T_e} x_l \le D_{T_e}$, for some arbitrary (sub-tree) constraints $D_{T_e}$.
We proceed to establish the greedy structure of Problem
MOPQ. First, we show that if all link cost functions are
convex, then so is the tree cost function.
Lemma 4: If $\{c_l\}_{l\in T}$ are convex, then so is $c_T(d)$.
By Lemma 4, we can replace T by an equivalent convex link.
Any sub-tree $T_l$ can also be replaced with an equivalent convex link, hence so can $\bar{T}$. However, these results apply only if
the allocation on every sub-tree is optimal for the sub-tree. This
property is sometimes referred to as the "optimal sub-structure"
property [1], and is the hallmark of the applicability of both
dynamic-programming and greedy methods.
Lemma 5: Let $x^*_T(D) \in \mathrm{MOPQ}(T,D)$ and, for $e \in N(r)$, let the sub-partition $x_{T_e}(D_e) \equiv \{x^*_l\}_{l\in T_e}$, where $D_e = D - x^*_r$. Then $x_{T_e}(D_e) \in \mathrm{MOPQ}(T_e, D_e)$.
Lemma 5 implies that, for any optimally partitioned tree, we can apply the greedy properties of Section III. That is, the partition on $r$ and $\bar{T}$ is a solution to Problem OPQ on the 2-link path $(r,\bar{T})$. This suggests that employing greedy moves between $r$ and $\bar{T}$ will solve Problem MOPQ, and this method can be applied recursively for the sub-trees of $T$. Indeed, this scheme is used by the algorithms presented in the next sections.
B. Pseudo-polynomial Solution
We employ greedy moves between $r$ and $\bar{T}$. The major difficulty of this method is the fact that $c_{\bar{T}}(d)$ is unavailable. Computing $c_{\bar{T}}(d)$ for a specific $d$ requires some $x \in \mathrm{MOPQ}(\bar{T}, d)$. Fortunately, we can easily compute $c_{\bar{T}}(d \pm \delta)$ given $x^*(d)$: since the greedy approach is applicable, we may simply perform a greedy augmentation and recompute the cost. Note that adding $\delta$ to $\bar{T}$ amounts to adding $\delta$ to all $\{T_l\}_{l\in N(r)}$. In the worst case, this must be done recursively for all the sub-trees and $O(|T|)$ links are augmented.
Procedure TREE-ADD (Figure 4) performs a $\delta$-augmentation on a tree $T$. We assume that for each sub-tree $T_l$ the value of $\Delta_{T_l}(D_{T_l};\delta)$ for the current allocation is stored in the variable $\Delta_{T_l}(\delta)$. At Line 1 it is decided whether $r$ or $\bar{T}$ is to be augmented; the choice between $\Delta_r(\delta)$ and $\sum_{l\in N(r)}\Delta_{T_l}(\delta)$ can be made by a simple comparison. If $\bar{T}$ should be augmented, then Procedure TREE-ADD is called recursively on its components. Finally, $\Delta_T(\pm\delta)$ is updated at Lines 6-7.

TREE-ADD(T, δ):
1  if Δ_r(δ) ≤ Σ_{l∈N(r)} Δ_{T_l}(δ) then
2      x_r ← x_r + δ
3  else
4      for each l ∈ N(r) do
5          TREE-ADD(T_l, δ)
6  Δ_T(δ) ← min{Δ_r(δ), Σ_{l∈N(r)} Δ_{T_l}(δ)} a
7  Δ_T(−δ) ← min{Δ_r(−δ), Σ_{l∈N(r)} Δ_{T_l}(−δ)}
a If r is a leaf we define the sum to be ∞.

Fig. 4. Procedure TREE-ADD
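The recursion of TREE-ADD can be sketched in Python as follows. For simplicity this version recomputes sub-tree gains on demand instead of caching the Δ values as the procedure above does, so each gain query costs up to O(|T|); the Node layout and all names are our own.

from dataclasses import dataclass, field

@dataclass
class Node:
    cost: object                 # c_l(x): link cost as a function of delay
    x: int = 0                   # delay currently allocated to this link
    children: list = field(default_factory=list)

def link_gain(n, delta):
    return (n.cost(n.x + delta) - n.cost(n.x)) / abs(delta)

def subtree_gain(n, delta):
    # Gain of adding delta to the sub-tree rooted at n: the better of
    # augmenting the root link or all branches (cf. Lines 6-7 above).
    if not n.children:
        return link_gain(n, delta)
    return min(link_gain(n, delta),
               sum(subtree_gain(c, delta) for c in n.children))

def tree_add(n, delta):
    # delta-augmentation of the sub-tree rooted at n.
    if not n.children or link_gain(n, delta) <= \
            sum(subtree_gain(c, delta) for c in n.children):
        n.x += delta             # the root link absorbs the delay
    else:
        for c in n.children:     # forward delta into every branch
            tree_add(c, delta)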
Algorithm BALANCE (Figure 5) is a dynamic algorithm that
solves Problem MOPQ. It starts from any feasible tree partition
and performs greedy moves between $r$ and $\bar{T}$. The while loop at Line 7 computes $\Delta_{r\to\bar{T}}(x;\delta)$; if it is negative, then moving $\delta$ from $r$ to $\bar{T}$ reduces the overall cost. The augmentation of $\bar{T}$ is done by calling TREE-ADD on each of its components. The while loop at Line 11 performs moves from $\bar{T}$ to $r$ in a similar way.
To be able to check the while conditions and to call TREE-ADD, we must have $\Delta_{T_l}(\pm\delta)$ for all $l \in T$. This requires an optimal partition on each sub-tree. Algorithm BALANCE makes sure that this is indeed the case by recursively calling itself (Line 5) on the components of $\bar{T}$. Since any allocation to a leaf is an optimal partition on it, the recursion stops once a leaf is reached. After the tree is balanced, the algorithm updates $\Delta_T(\pm\delta)$, which is used by the calling iteration.
BALANCE(T, x, δ):
1  Δ_T(δ) ← Δ_r(δ)
2  Δ_T(−δ) ← Δ_r(−δ)
3  if T is a leaf then
4      return
5  (else) for each l ∈ N(r) do
6      BALANCE(T_l, x, δ)
7  while Δ_r(−δ) + Σ_{l∈N(r)} Δ_{T_l}(δ) < 0 do
8      x_r ← x_r − δ
9      for each l ∈ N(r) do
10         TREE-ADD(T_l, δ)
11 while Δ_r(δ) + Σ_{l∈N(r)} Δ_{T_l}(−δ) < 0 do
12     x_r ← x_r + δ
13     for each l ∈ N(r) do
14         TREE-ADD(T_l, −δ)
15 update Δ_T(±δ)

Fig. 5. Algorithm BALANCE
We proceed to analyze the complexity of BALANCE. We first define a distance $\varphi_T(x)$, which is the tree version of the path distance defined in Section III-B. Let $\varphi_r(x_T) \equiv |x_r - x^*_r|$, where $x^*_T$ is the optimal partition nearest to $x_T$. Let $x_{T_l} \equiv \{x_e\}_{e\in T_l}$. We define $\varphi(x_T) \equiv \sum_{l\in T}\varphi_r(x_{T_l})$.
Theorem 3: Algorithm BALANCE finds a $\delta$-optimal solution to Problem MOPQ in $O(|T|\cdot\varphi(x)/\delta)$.
Proof: $\varphi(x)/\delta$ bounds the number of calls to Procedure TREE-ADD. In the worst case, TREE-ADD requires $O(|T|)$ for each call. The recursive calls to BALANCE also require $O(|T|)$.
Remark 2: We can apply Algorithm BALANCE on the feasible partition $x_r = D$, $x_l = 0$ for all $l \in T \setminus \{r\}$. Clearly, $\varphi(x) \le D$ in this case. Thus, Problem MOPQ can be solved in $O(|T|D/\delta)$.
C. Distributed and Dynamic Implementation
Algorithm BALANCE can be readily applied in a distributed
fashion. Each augmentation in Procedure TREE-ADD is propagated from the root to the leaves. A straightforward implementation requires $O(t)$ time, 7 where $t$ is the depth of the tree. At most $O(t)$ recursive calls to BALANCE are performed sequentially. Finally, the number of calls to TREE-ADD after the sub-trees are balanced is bounded by $\varphi_r^{\max}(x)/\delta$, where $\varphi_r^{\max}(x) \equiv \max_{l\in T}\varphi_r(x_{T_l})$. The overall complexity is, therefore, $O(t^2\varphi_r^{\max}(x)/\delta)$. Note that for balanced trees $t = O(\log|T|)$.
Algorithm BALANCE (as is the case for Algorithm GREEDY-
MOVE) can be used as a dynamic scheme that reacts to changes
in the cost functions after an optimal partition is established.
The complexity of Algorithm BALANCE is proportional to the
distance from the new optimal allocation. Again, small changes
(i.e., small '(x)) incur a small number of computations.
D. Polynomial Solution
We can now present a polynomial solution. Algorithm
BINARY-MOPQ (Figure 6) uses an approach that is identical to the one used for the solution of Problem OPQ. The algorithm consecutively calls BALANCE with smaller values of $\delta$, until the minimal possible value is reached, at which point an optimal partition is identified.

BINARY-MOPQ(T, D, c(·)):
1  δ ← D/|T|
2  start from the feasible partition x_r ← D; x_l ← 0 for all l ∈ T \ {r}
3  BALANCE(T, x, δ)
4  repeat
5      δ ← δ/2
6      BALANCE(T, x, δ)
7  until δ < 1

Fig. 6. Algorithm BINARY-MOPQ
We will show that, at each iteration of the algorithm, $\varphi_r^{\max}(x)$ is bounded by $t\delta$. Therefore, $\varphi(x)/\delta \le t|T|$ and the overall complexity of this algorithm is $O(|T|^2 t\log(D/|T|))$. Lemma 6 is the equivalent of Lemma 2 for multicast.
Lemma 6: Let $x$ be a $2\delta$-optimal partition on $T$. Then $\varphi_r^{\max}(x) \le t\delta$.
7 assuming that traveling a link requires one time unit.
Let $\Delta_T(d;\delta)$ denote the value of $\Delta_T(\delta)$ at the termination of BALANCE$(T, x, \delta)$. Note that $\Delta_T(d;\delta)$ assumes a $\delta$-optimal partition on the tree, hence it is different from $(c_T(d+\delta) - c_T(d))/\delta$, which assumes the optimal partition.
Theorem 4 states the complexity of Algorithm BINARY-
MOPQ.
Theorem 4: Algorithm BINARY-MOPQ finds a solution to Problem MOPQ in $O(|T|^2 t\log(D/|T|))$.
Comparing this result to the $O(|T|D/\delta)$ complexity of Algorithm BALANCE (see Remark 2) indicates that Algorithm BALANCE is preferable when $D/\delta < |T|\,t\log(D/|T|)$. Again, note that for balanced trees $t = O(\log|T|)$, and we can implement Algorithm BINARY-MOPQ in a distributed fashion (as in Section IV-C), with an overall complexity of $O(t^3\log(D/|T|))$.
Remark 3: Algorithm BINARY-MOPQ starts from a coarse
partition and improves the result by refining the partition at each
iteration; this means that one may halt the computation once the
result is good enough, albeit not optimal.
Remark 4: It is possible to modify the algorithm to cope
with heterogeneity in the QoS requirements of the multicast
group members. In the worst case, the complexity of the solution
grows by a factor of $O(|M|)$, while the complexity of the
pseudo-polynomial solution remains unchanged. The details of
this extension are omitted here.
V. ROUTING ISSUES
In the previous sections, we addressed and solved optimal
QoS partition problems for given topologies. These solutions
have an obvious impact on the route selection process, as the
quality of a route is determined by the cost of the eventual (opti-
mal) QoS partition over it. Hence, the unicast partition problem
OPQ induces a unicast routing problem, OPQ-R, which seeks a
path on which the cost of the solution to Problem OPQ is min-
imal. Similarly, Problem MOPQ induces a multicast routing
problem MOPQ-R. In this section we briefly discuss some current
and future work in the context of these routing problems.
A. OPQ-R
As was the case with Problem OPQ, with OPQ-R too there
is a significant difference between bottleneck QoS requirements
and additive ones. As explained in Section II, for a bottleneck
QoS requirement, the end-to-end requirement determines Q
Q for all links in the path (or tree), and a corresponding link
cost, c l (Q). Therefore, the routing problem OPQ-R boils down
to a "standard" shortest-path problem with link length c l (Q).
As noted, in the context of Problem OPQ, providing delay requirements
through rate-based schedulers [7], [22], [25], [26],
translates the additive requirement into a (simpler) bottleneck
requirement. However, in the context of Problem OPQ-R, such
a translation is not possible anymore, since paths differ also in
terms of constant (rate-independent) link delay components. Efficient
solutions for Problem OPQ-R, under delay requirements
and rate-based schedulers, have been presented in [9].
The general OPQ-R problem, under additive QoS require-
ments, is much more complex, and has been found to be intractable
[16]. Note that, even in a simpler, constant-cost frame-
work, where each link is characterized by a delay-cost pair
(rather than a complete delay-cost function), routing is an intractable
problem [6]. In the latter case, optimal routing can be
achieved through a pseudo-polynomial dynamic-programming
scheme, while ε-optimal solutions can be achieved in polynomial
time [10].
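A sketch of such a pseudo-polynomial dynamic program (the names and the assumption of positive integer delays are ours): it tabulates, for every node and every delay budget up to D, the cheapest reachable cost.

def restricted_shortest_path(nodes, links, source, D):
    # Constant-cost framework: each directed link (u, v) carries one
    # (delay, cost) pair; compute, for every node v and budget d,
    # the cheapest cost best[v][d] of a path from source to v with
    # total delay at most d (delays are positive integers).
    INF = float("inf")
    best = {v: [0.0 if v == source else INF] * (D + 1) for v in nodes}
    for d in range(1, D + 1):              # grow the delay budget
        for v in nodes:
            best[v][d] = best[v][d - 1]    # smaller budgets remain valid
        for (u, v), (delay, cost) in links.items():
            if delay <= d:
                cand = best[u][d - delay] + cost
                if cand < best[v][d]:
                    best[v][d] = cand
    return best                            # best[v][D]: cheapest feasible cost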
The general OPQ-R problem, under the framework of the
present study, was solved in [16]. The solution is based on
dynamic-programming and assumes that the link cost functions
are convex. An exact pseudo-polynomial solution, as well as an
ε-optimal polynomial solution, have been presented. We note
that a single execution of those algorithms finds a unicast route
from the source to every destination and for every end-to-end
delay requirement.
B. MOPQ-R
As could be expected, finding optimal multicast trees under
our framework is much more difficult than finding unicast paths.
Even with bottleneck QoS requirements, MOPQ-R boils down
to finding a Steiner tree, which is known to be an intractable
problem [6].
We are currently working on solving MOPQ-R for additive
QoS requirements. We established an efficient scheme for the
fundamental problem of adding a new member to an existing
multicast tree. This provides a useful building block for constructing
multicast trees. Another important building block is
the optimal sub-structure property (established in Section IV),
which is an essential requirement for the application of greedy
and dynamic programming solution approaches.
Interestingly, the above problem, of adding members to multicast
trees, may serve to illustrate the power of our framework
over the simpler, constant-cost framework. In the latter, there
is a single delay-cost pair for each link (rather than a complete
delay-cost function), and the goal is to find a minimal cost tree
that satisfies an end-to-end delay constraint. 8 Under that frame-
work, it is often impossible to connect a new member to the
"tree top", i.e., the leaves and their neighborhood. This is a consequence
of cost minimization considerations, which usually result
with the consumption of all (or most of) the available delay
at the leaves. For example, consider the network of Figure 7.
The source is S and the multicast group is {A, B}; the end-to-
end delay bound is 10 and the link delay-cost pairs are specified.
Suppose we start with a tree for node A, i.e., the link (S, A).
Since A exhausts all the available end-to-end delay, we cannot
add B to the tree by extending A's branch with the (cheap) link
(A, B); rather, we have to use the (expensive) link (S, B). Note
that we would get the same result even if there were an additional
link from S to A with shorter delay, say 9, and slightly
higher cost, say 11.
Our framework allows a better solution, as it lets the link
(S, A) advertise several delays and costs. For instance, it could
advertise a delay of 10 with a cost of 10 and a delay of 9 with a
cost of 11. When adding B to the tree, we can change the delay
allocation on (S, A) to 9 (thus paying 11 instead of 10),
which allows us to use the link (A, B) for adding B. The cost
of the resulting tree is 12, as opposed to 20 in the previous solution
(i.e., using link (S, B)). Note that, when adding B, one can
8 That framework was the subject of numerous studies on constrained multicast trees.
Fig. 7. Example: extending a multicast tree
consider the "residual" cost for each link, i.e., the cost of tightening
the delay bound on existing allocations. In our example,
the residual cost function of link (S, A) is 0 for a delay of 10
(i.e., the current allocation) and 1 for a delay of 9 (i.e., the added
cost for tightening the requirement). The last observation implies
that adding a new member to an existing tree boils down to
finding an optimal unicast path, with respect to the residual cost
functions, from the new member to the source; i.e., an instance
of Problem OPQ-R, for which efficient solutions have been established
in [16].
VI. CONCLUSIONS
We investigated a framework for allocating QoS resources on
unicast paths and multicast trees, which is based on partitioning
QoS requirements among the network components. The quality
of a partition is quantified by link cost functions, which increase
with the severity of the QoS requirement. We indicated that this
framework is consistent with the major proposals for provisioning
QoS on networks. Indeed, the problem of how to efficiently
partition QoS requirements among path or tree links has been
considered in previous studies, however till now only heuristic
approaches have been addressed. The present study is the first
to provide a general optimal solution, both for unicast paths and
multicast trees.
We demonstrated how the various classes of QoS requirements
can be accommodated within our framework. We
showed that the partitioning problems are simple when dealing
with bottleneck requirements, such as bandwidth, however they
become intractable for additive (or multiplicative) requirements,
such as delay, jitter and loss rate. Yet we established that, by
introducing a mild assumption of weak convexity on the cost
functions, efficient solutions can be derived.
We note that weak convexity essentially means that, as the
QoS requirement weakens, the rate of decrease of the cost function
diminishes. This is a reasonable property, as cost functions
are lower-bounded, e.g. by zero. Moreover, it indeed makes
sense for a cost function to strongly discourage severe QoS
requirements, yet gradually become indifferent to weak (and,
eventually, practically null) requirements. Hence, the scope of
our solutions is broad and general.
Specifically, we presented several greedy algorithms for
the unicast problem (OPQ). Algorithm GREEDY-MOVE is
a pseudo-polynomial solution, which can be implemented in
a distributed fashion. The complexity of this solution is
O(φ(x) log |p|), where φ(x) ≤ D is the distance between the
initial allocation, x, and the optimal one. It can also be applied
as a dynamic scheme to modify an existing allocation.
This is useful in dynamic environments where the cost of resources
changes from time to time. Note that the complexity is
proportional to φ(x), meaning that small cost changes require
a small number of computations to regain optimality. Algorithm
BINARY-OPQ is a polynomial solution from which we
later build our solution to the multicast problem (MOPQ). The
complexity of this solution is O(|p| log |p| log(D/|p|)).
Next, we addressed the multicast problem MOPQ. We began
by showing that the fundamental properties of convexity
and optimal sub-structure generalize to multicast trees. Then,
we established that Problem MOPQ also bears a greedy struc-
ture, although much more complex than its OPQ counterpart.
Again, the greedy structure, together with the other established
properties, provided the foundations for efficient solutions.
Algorithm BALANCE is a pseudo-polynomial algorithm
which can be applied as a dynamic scheme. Its complexity
is proportional to φ(x), which is, again, the distance between
the initial allocation and the optimal one. A distributed
implementation requires O(t^2 φ_r^max(x)), where t is the depth
of the tree and φ_r^max(x) is the maximal distance of any link's
allocation from its optimal one. Note that for balanced trees
t = O(log |T|). Algorithm BINARY-MOPQ is a polynomial solution
with a complexity of O(|T|^2 t log(D/|T|)). A distributed
implementation of this algorithm requires O(t^3 log(D/|T|)). We
note that our solutions are applicable to heterogeneous multicast
members, each with a different delay requirement.
Lastly, we discussed the related routing problems, OPQ-R
and MOPQ-R. Here, the goal is to select either a unicast path
or multicast tree, so that, after the QoS requirements are optimally
partitioned over it, the resulting cost would be minimized.
Again, unicast proves to be much easier than multicast. In par-
ticular, for bottleneck QoS requirements, OPQ-R boils down to a
simple shortest-path problem. For additive requirements, OPQ-R
is intractable, yet an efficient, ε-optimal solution has been
established in [16]. For multicast, all the various versions of
MOPQ-R are intractable. We are currently investigating Problem
MOPQ-R under additive requirements, and have obtained
an efficient scheme for adding new members to a multicast tree.
Several important issues are left for future work. One is
multicast routing, i.e., Problem MOPQ-R, for which just initial
(yet encouraging) results have been obtained thus far. Another
important aspect is the actual implementation of our solutions
within practical network architectures. In this respect, it
is important to note that a compromise with optimality might
be called for. Indeed, while our solutions are of reasonable
complexity, a sub-optimal solution that runs substantially faster
might be preferable in practice. Relatedly, one should consider
the impact of the chosen solution for QoS partitioning on the
routing process. The latter has to consider the quality of a selection
(i.e., path or tree) in terms of the eventual QoS parti-
tion. This means that simpler partitions should result in simpler
routing decisions, which provides further motivation for compromising
optimality for the sake of simplicity. The optimal
solutions established in this study provide the required starting
point in the search of such compromises.
Lastly, we believe that the framework investigated in this
study, where QoS provisioning at network elements is characterized
through cost functions, provides a powerful paradigm for
dealing with QoS networking. We illustrated the potential benefits
through an example of dynamic tree maintenance. Further
study should consider the implications and potential advantages
of our framework, when applied to the various problems and
facets of QoS networking.
--R
Introduction to Algorithms.
A framework for QoS-based routing in the internet - RFC no.
Call admission and resource reservation for multicast sessions.
The complexity of selection and ranking in X
Computers and Intractability.
Efficient network QoS provisioning based on per node traffic shaping.
QoS routing mechanisms and OSPF extensions.
Approximation schemes for the restricted shortest path problem.
Lower and upper bounds for the allocation problem and other nonlinear optimization problems.
Resource Allocation Problems.
Multicast routing for multimedia communication.
Optimal partition of QoS requirements on unicast paths and multicast trees.
QoS routing in networks with uncertain parameters.
A new approach to service provisioning in ATM networks.
Pricing Congestable Network Resources.
Allocation of local quality of service constraints to meet end-to-end requirements.
Incentive pricing in multi-class communication networks.
Private network-network interface specification v1.
A generalized processor sharing approach to flow control in integrated services networks: the multiple node case.
Specification of guaranteed quality of service - RFC no.
Service disciplines for guaranteed performance service in packet-switching networks.
--TR
Introduction to algorithms
Approximation schemes for the restricted shortest path problem
Multicast routing for multimedia communication
A new approach to service provisioning in ATM networks
Lower and upper bounds for the allocation problem and other nonlinear optimization problems
A generalized processor sharing approach to flow control in integrated services networks
Efficient network QoS provisioning based on per node traffic shaping
QoS routing in networks with uncertain parameters
QoS routing in networks with inaccurate information
--CTR
Ariel Orda , Alexander Sprintson, A scalable approach to the partition of QoS requirements in unicast and multicast, IEEE/ACM Transactions on Networking (TON), v.13 n.5, p.1146-1159, October 2005
Wen-Lin Yang, Optimal and heuristic algorithms for quality-of-service routing with multiple constraints, Performance Evaluation, v.57 n.3, p.261-278, July 2004
Wen-Lin Yang, A comparison of two optimal approaches for the MCOP problem, Journal of Network and Computer Applications, v.27 n.3, p.151-162, August 2004
Sun-Jin Kim , Mun-Kee Choi, Evolutionary algorithms for route selection and rate allocation in multirate multicast networks, Applied Intelligence, v.26 n.3, p.197-215, June 2007
Ariel Orda , Alexander Sprintson, Precomputation schemes for QoS routing, IEEE/ACM Transactions on Networking (TON), v.11 n.4, p.578-591, August
Bin Xiao , Jiannong Cao , Zili Shao , Qingfeng Zhuge , Edwin H. -M. Sha, Analysis and algorithms design for the partition of large-scale adaptive mobile wireless networks, Computer Communications, v.30 n.8, p.1899-1912, June, 2007 | routing;broadband networks;convex costs;QoS partitioning;QoS-dependent costs;multicast;unicast |
506837 | Generalized loop-back recovery in optical mesh networks. | Current means of providing loop-back recovery, which is widely used in SONET, rely on ring topologies, or on overlaying logical ring topologies upon physical meshes. Loop-back is desirable to provide rapid preplanned recovery of link or node failures in a bandwidth-efficient distributed manner. We introduce generalized loop-back, a novel scheme for performing loop-back in optical mesh networks. We present an algorithm to perform recovery for link failure and one to perform generalized loop-back recovery for node failure. We illustrate the operation of both algorithms, prove their validity, and present a network management protocol algorithm, which enables distributed operation for link or node failure. We present three different applications of generalized loop-back. First, we present heuristic algorithms for selecting recovery graphs, which maintain short maximum and average lengths of recovery paths. Second, we present WDM-based loop-back recovery for optical networks where wavelengths are used to back up other wavelengths. We compare, for WDM-based loop-back, the operation of generalized loop-back operation with known ring-based ways of providing loop-back recovery over mesh networks. Finally, we introduce the use of generalized loop-back to provide recovery in a way that allows dynamic choice of routes over preplanned directions. | Introduction
For WDM networks to offer reliable high-bandwidth services, automatic self-healing capabilities, similar
to those provided by SONET, are required. In particular, pre-planned, ultrafast restoration of
service after failure of a link or node is required. As WDM networks mature and expand, the need has
emerged for self-healing schemes which operate over a variety of network topologies and in a manner
which is bandwidth efficient. While SONET provides a known and robust means of providing recovery
in high-speed networks, the techniques used for SONET are not always immediately applicable
to WDM systems. Certain issues, such as wavelength assignment and wavelength changing, make
WDM self-healing different from SONET self-healing. Our purpose is to present a method for service
restoration in optical networks which has the following characteristics:
Speed: we want the speed of recovery to be of the order of the speed of switching and require
minimal processing overhead.
Transparency: we seek a method of recovery which can be done at the optical layer, without
regard for whatever protocol(s) may be running over the optical layer.
Flexibility: our method should not constrain primary routings and should provide a large choice
of back-up routes to satisfy such requirements as bounds on average or maximum back-up length.
In this paper, we present an approach which altogether moves away from rings. The rationale
behind our approach is that, while ring recovery makes sense over network topologies which are
composed of interconnected rings, rings are not fundamental to network recovery over mesh networks.
Indeed, even embedding rings over a given topology can have significant implications for hardware costs
([BGSM94]). We present generalized loop-back, a new method of achieving loop-back recovery over
arbitrary two-link-redundant and two-node-redundant networks to restore service after the failure of
a link or a node, respectively. Loop-back recovery over mesh networks without the use of ring covers
was first introduced in [FMB98, MFB99]. We represent each network by a graph, with each node
corresponding to a vertex and each two-fiber link to an undirected edge. The graph corresponding
to a link (node)-redundant network is edge (vertex)-redundant. The principle behind generalized
loop-back is to create primary and secondary digraphs, so that, upon failure of a link or node, the
secondary digraph can be used to carry back-up traffic that provides loop-back to the primary graph.
Each primary or secondary digraph may correspond to a wavelength on a fiber or a full fiber. The
secondary digraph is the conjugate of the primary digraph. Each direction in a link is associated with
a given primary graph. Our algorithms perform the choice of directions to establish our digraphs.
Our approach meets our three goals: speed, transparency and flexibility. Although we use preplanning
of directions, our network management protocol determines, in real time, the back-up route that
will be utilized. We do not, however, require processing as in traditional dynamic recovery schemes. In
effect, our network management protocol provides dynamic real-time discovery of routings
along pre-planned directions determined by our direction selection algorithms. Since
our protocol (see Section 2.3) requires very simple processing and the optical layer remains responsible
for recovery (ensuring transparency), we have speed of recovery combined with flexibility. In
particular, depending on availability of links or wavelengths (which may be affected by congestion or
failures in the network), different back-up routes may be selected, but the selection will be automatic
and will not require active comparison, by the network management, of the different possible routes.
In Section 1.1, we give an overview of relevant work in the area of network protection and
restoration. In Section 2.1, we discuss generalized loop-back recovery for link failure. In Section 2.2,
we present our method for loop-back recovery for node failures in arbitrary vertex-redundant networks.
A simple network protocol, presented in Section 2.3, allows for distributed operation of recovery from
link or node failure. When we consider recovery from node failures, we must contend with the fact
that a node may carry several traffic flows, all of which are disrupted when the node fails.
Section 3 of our paper considers a range of different applications for generalized loop-back:
We address the goal of flexibility in the choice of back-up routings. We present a means of
selecting directions for generalized loop-back so as to avoid excessive path lengths. Our algorithm
allows a choice among a large number of alternative directions. The choice of directions may
greatly affect the length of back-up paths. To avoid excessive loss and jitter along recovery paths,
we present heuristic algorithms that significantly reduce the average and maximum length of
recovery paths over random choices of directions.
We may use generalized loop-back to perform wavelength-based recovery, which we term WDM-based
loop-back recovery, instead of fiber-based recovery in mesh networks. We illustrate why the
method of cover of rings using double-cycle covers is not directly applicable to WDM loop-back
recovery.
Generalized loop-back can yield several choices of backup routes for a given set of directions. We
briefly illustrate how generalized loop-back can be used to avoid the use of certain links.
Finally, in Section 4, we present conclusions and directions for further research.
1.1 Background
Methods commonly employed for link protection in high-speed networks can be classified as either
dynamic or pre-planned, though some hybrid schemes also exist ([SOOH93]). The two types
offer a tradeoff between adaptive use of back-up (or "spare") capacity and speed of restoration.
Dynamic restoration typically involves a search for a free path using back-up
capacity ([HKSM94, GK93, Bak91]) through broadcasting of help messages ([CBMS93, FY94, Gro87,
Wu94]). The performance of several algorithms is given in ([BCS93, CBMS93]).
Overheads due to message passing and software processing render dynamic processing slow. For dynamic
link restoration using digital cross-connect systems, a two second restoration time is a common
goal for SONET ([FY94, Gro87, KA93, Sos94, Wu94, YH88]). Pre-planned methods depend mostly
on look-up tables and switches or add-drop multiplexers. For optical networks, switches may operate
in a matter of microseconds or nanoseconds. Thus, to meet our speed requirement, we consider
pre-planned methods, even though pre-planned methods suffer from poorer capacity utilization than
dynamic systems, which make use of real-time availability of back-up capacity.
Within pre-planned methods, we may distinguish between path and link or node restoration. Path
restoration refers to recovery applied to connections following a particular path across a network.
Link or node restoration refers to recovery of all the traffic across a failed link or node, respectively.
Path restoration may itself be subdivided into two different types: live (dual-fed) back-up and event-triggered
back-up. In the first case, two live flows, a primary and a back-up, are transmitted. The
two flows are link-disjoint if we seek to protect against link failure, or node-disjoint (except for the
end nodes) if we seek to protect against node failure. Upon failure of a link or node on the primary
flow, the receiver switches to receiving on the back-up. Recovery is thus extremely fast, requiring
action only from the receiving node, but back-up capacity is not shared among connections. In the
second case, event-triggered path restoration, the back-up path is only activated when a failure occurs
on a link or node along the primary path. Backup capacity can be shared among different paths
([WLH97]), thus improving capacity utilization for back-up channels and allowing for judicious planning
([BPG92, HBU95, GBV91, HB94, GKS96, SNH90, VGM93, Fri97, NHS97]). However, recovery
involves coordination between the sender and receiver after a failure event and action from nodes
along the back-up path. These coordination efforts may lead to significant delays and management
overhead.
Pre-planned link or node restoration can be viewed as a compromise between live and event-triggered
path restoration. Pre-planned link restoration is not as capacity-efficient as event-triggered
path restoration ([CWD97, RIG88, LZL94]), but is more efficient than live back-up path restoration,
since sharing of back-up bandwidth is allowed. The traffic along a failed link or node is recovered,
without consideration for the end points of the traffic carried by the link or node. Thus, only the
two nodes adjacent to the failure need to engage in recovery. The back-up is not live, but triggered
by a failure. A comparison of the trade-offs between end-to-end recovery and patching of a segment (we
assume a segment to be a single link or node) is given in [DW94]. An overview of the different types
of protection and restoration methods is given in [RM99] and comparisons between path protection
and event-triggered path protection are given in [RM99, RIG88, JVS95, JAH94, XM99].
Link or node restoration also benefits from a further advantage, which makes it very attractive for
pre-planned recovery: since it is not dependent upon specific traffic patterns, it can be pre-planned
once and for all. Thus, link or node restoration is particularly attractive at lower layers,
where network management may not be aware, at all locations of the network, of the origination and
destination, or of the format ([Wu94]), of all the traffic being carried at that location. Therefore, in this
paper we concentrate on pre-planned link and node restoration in order to satisfy our transparency
requirement. Moreover, link restoration satisfies the first part of our flexibility goal, since restoration
is done without consideration for primary routings.
For pre-planned link restoration, the main approaches have been through the use of covers of rings
and, more recently, through pre-planned cycles ([GS98]). The most direct approach is to design the
network in terms of rings. The building blocks of SONET networks are generally self-healing rings
(SHRs) and diversity protection (DP) ([WCB91, Was91, WB90, SWC93, SGM93, SF96, STW95,
HT92]). SHRs are unidirectional path-switched
rings (UPSRs) or bi-directional line-switched rings (BLSRs), while DP refers to physical redundancy
where a spare link (node) is assigned to one or several links (nodes) ([Wu92] pp. 315-32). In rings, such
as BLSRs, link or node restoration is simply implemented using loop-back. The waste of bandwidth due
to back-hauling may be remedied by looping back at points other than the failure location ([Mag97,
KTK94]).
Using only DP and SHRs is a constraint which has cost implications for building and expanding
networks ([WKC89]); see [Sto92] for an overview of the design of topologies under certain reliability
constraints. However, rings are not necessary to construct survivable networks ([NV91, WH91]).
Mesh-based topologies can also provide redundancy ([Sto92, JHC93, WKC88]). Ring-based architectures
may be more expensive than meshes ([BGSM94]), and as nodes are added, or networks are
interconnected, ring-based structure may cease to be maintained, thus limiting their scalability. Even
if we constrain ourselves to always use ring-based architectures, such architectures may not easily
bear changes and additions as the network grows ([WKC89, Wu92, WKC88]). For instance, adding
a new node, connected to its two nearest node neighbors, will preserve mesh structure, but may not
preserve ring structure. Our arguments indicate that, for reasons of cost and extensibility, mesh-based
architectures are more promising than interconnected rings.
Covering mesh topologies with rings is a means of providing both mesh topologies and distributed,
ring-based restoration. There are several approaches to covers of rings for networks in order to ensure
link restorability. One approach is to cover nodes in the network by rings ([Was91]). In this manner,
a portion of links are covered by rings. If primary routing is restricted to the covered links, then
link restoration can be effected on each ring in the same manner as in a traditional SHR, by routing
the back-up traffic around the ring in the opposite direction to the primary traffic. Using such an
approach, the uncovered links can be used to carry unprotected traffic, i.e. traffic which may not be
restored if the link which carries it fails.
To allow every link to carry protected traffic, other ring-based approaches ensure every link is
covered by a ring. One approach to selecting such covers is to cover a network with rings so that
every link is part of at least one ring ([Gro92]). This approach suffers from some capacity drawbacks.
With fiber-based restoration, every ring is a four-fiber ring. A link covered by two rings requires eight
fibers; a link covered by n rings requires 4n fibers. Alternatively, the logical fibers can be physically
routed through four physical fibers, but only at the cost of significant network management overhead.
Minimizing the amount of fiber required to obtain redundancy using ring covers is equivalent to finding
the minimum cycle cover of a graph, an NP-complete problem ([Tho97, ILPR81]), although bounds
on the total length of the cycle cover may be found ([Fan92]).
A second approach to ring covers, intended to overcome the difficulties of the first approach, is
to cover every link with exactly two rings, each with two fibers. The ability to perform loop-back
style restoration over mesh topologies was first introduced in [ESH97, ES96]. In particular, [ESH97]
considers link failure restoration in optical networks with arbitrary two-link-redundant mesh
topologies and bi-directional links. The approach is an application of the double-cycle ring cover
([Jae85, Sey79, Sze73]). For planar graphs, the problem can be solved in polynomial time; for non-planar
graphs, it is conjectured that double cycle covers exist, and a counterexample would have to
obey certain properties ([God85]). Node recovery can be effected with double cycle ring covers, but
such restoration requires cumbersome hopping among rings. In subsection 3.2, we consider double-cycle
covers in the context of wavelength-based recovery.
In order to avoid the limitations of ring covers, an approach using pre-configured cycles, or p-cycles,
is given in [GS98]. A p-cycle is a cycle on a redundant mesh network. Links on the p-cycle
are recovered by using the p-cycle as a conventional BLSR. Links not on the p-cycle are recovered
by selecting, along the p-cycle, one of the paths which connect the nodes which are the end-points of
the failed link. We may note that some difficulty arises from the fact that several p-cycles may
be required to cover a network, making management among p-cycles necessary. The fact that a
single p-cycle may be insufficient arises from the fact that a Hamiltonian cycle might not exist, even in a
two-connected graph. Even finding p-cycles which cover a large number of nodes may be difficult.
Some results ([Fou85, Jac80, ZLY85]) and conjectures ([HJ85, Woo75]) exist concerning the length of
maximal cycles in two-connected graphs. The p-cycle approach is in effect a hybrid ring approach,
which mixes path restoration (for links not on the p-cycle) with ring recovery (for links on the p-cycle).
Generalized Loop-back
2.1 Generalized Loop-back for Recovery from Link Failures
The gist of our approach is to eliminate the use of rings. Instead, a primary (secondary) digraph
(corresponding to a set of unidirectional fibers or wavelengths) is backed up by another, secondary
(primary), digraph (corresponding to a set of unidirectional fibers or wavelengths in the reverse direction
of the primary (secondary) digraph). After a failure occurs, we broadcast the stream carried
by the primary (secondary) digraph along the failed link onto the secondary (primary) digraph. We
later show a protocol which ensures that only a single connection arrives at each node on the back-up
path. When the back-up path reaches the node which lost its connection along the primary (secondary)
digraph because of the failure, the traffic is restored onto the primary (secondary) digraph.
To illustrate our method, consider a simple sample network. Our algorithm works by assigning
directions to each of the two fibers on each link. Figure 1.b shows in dashed arrow lines the directions
of the primary digraph for each link and in thin dashed lines the directions of the secondary digraph
for each link. The topology of the network is shown in bold lines without arrows. A break in a link
is shown by discontinued lines. The shortest back-up path runs from node 3 to node 4.
Node 3 eliminates a duplicate connection which arrives to it via node 6 → node 5 → node 4 → node 3.
Node 7 eliminates a duplicate connection which arrives to it via node 2 → node 1 → node 8 → node
7. Note that back-haul need not always occur. For instance, in Figure 1.b, if the original connection
went from node 4 to node 2 via node 3, then after recovery the connection would commence at node
4 and traverse, in order, nodes 3, 6, 7 en route to node 2.
Not every assignment of directions provides the possibility for loop-back recovery. As an example,
consider in Figure 1.a the same network topology as in Figure 1.b with different directions. The
directions are provided in such a way that, when no failures are present, all nodes are reachable from
each other on the primary wavelength on fiber 1 and on the secondary wavelength on fiber 2. However,
the same link failure as in Figure 1.b is not recoverable. This example illustrates the importance of
proper selection of the directions on the links.
We may now formalize our approach. We define an undirected graph G = (N, E) to be a set of
nodes N and edges E. With each edge [x, y] of an undirected graph, we associate two directed arcs
(x, y) and (y, x). We assume that if an edge [x, y] fails, then arcs (x, y) and (y, x) both fail. A directed
graph P = (N, A) is a set of nodes N and a set of directed arcs A. Given a set of directed arcs, A, we
define the reversal of A to be A^R = {(y, x) : (x, y) ∈ A}. Similarly, given any directed graph
P = (N, A), we define P^R = (N, A^R) to be the reversal of P.
Let us consider that we have a two vertex (edge)-connected graph, or redundant graph,
i.e. a graph where removal of a vertex (edge) leaves the graph connected. Our method is based on the construction
of a pair of directed spanning sub-graphs, B and R = B^R, each of which can
be used for primary connections between any pair of nodes in the graph. In the event of a failure,
connections on B are looped back around the failure using R. Similarly, connections on R are looped
back around the failure using B. For instance, if G were a ring, then B and R would be the clockwise
and counter-clockwise cycles around the ring.
To see how loop-back operates in a general mesh network, consider first the case where an edge
[x, y] fails. Assume (w, y) and (x, z) are arcs of R and that the shortest loop-back path around [x, y]
is node x → node z → ... → node w → node y. We create two looping arcs, Bloop_{x,z} and Rloop_{y,w}.
Bloop_{x,z} is created at node x by attaching the tail of (z, x) ∈ A to the head of (x, z) ∈ A so that
signals which arrive for transmission on (x, y) in B are now looped back at x to R. Similarly, Rloop_{y,w}
is created by attaching the tail of (w, y) ∈ A to the head of (y, w) ∈ A, so that any signal which arrives
for transmission on (w, y) in R is looped back to B at y. Figure 2 illustrates our loop-back example.
Edge [x, y] can be successfully bypassed as long as there exists a working path with sufficient capacity
from x to y in R and a working path with sufficient capacity from y to x in B.
Let us consider that we have an edge-redundant undirected graph G = (N, E). We seek a directed
spanning sub-graph B of G, and its associated reversal R = B^R, satisfying the following conditions:
Condition 1: B is connected, i.e. there is a directed path in B from any node to any other
node.
Condition 2: (i, j) ∈ A_B implies (j, i) ∉ A_B, where A_B denotes the arcs of B.
Since R is connected iff B is connected, Condition 1 insures that any connection can be routed on
B or R. Condition 1 also ensures that loop-back can be performed. Suppose edge [x, y] fails. Also
suppose without loss of generality that (x, y) is an arc of B. In order to effect loop-back, we require
that there exist a path from x to y in R\(y, x) and a path from y to x in B\(x, y). Such paths
are guaranteed to exist because B and R are connected, and such paths obviously do not traverse
(x, y) or (y, x). Hence, connectivity is sufficient to guarantee loop-back connectivity in the event of
an edge failure. Since Condition 2 implies that (i, j) cannot be an arc of both B and R, Condition 2 ensures
that loop-back connections on R do not travel over the same arc as primary connections on B, and
vice-versa. Therefore, any algorithm which builds a graph B with properties 1 and 2 will suffice. The
algorithm presented below is one such algorithm.
We start by choosing an arbitrary directed cycle in G with at least 3 nodes (k ≥ 3).
Such a cycle is guaranteed to exist if G is edge-redundant. If this cycle does not include all nodes in
the graph, we then choose a new directed path or cycle that starts and ends on the cycle and passes
through at least one node not on the cycle. If the new graph does not include all nodes of the graph,
we again construct another directed path or cycle, starting on some node already included, passing
through one or more nodes not included, and then ending on another already included node. The
algorithm continues to add new nodes in this way until all nodes are included. We now formally
present the algorithm followed by the proof of its correctness.
ALGORITHM FOR SELECTING DIRECTIONS TO RECOVER FROM LINK
FAILURES
1. j = 1.
2. Choose any cycle (c_0, c_1, ..., c_k = c_0) in the graph with k ≥ 3.
3. Set B_1 = (N_1, A_1), where N_1 = {c_0, ..., c_{k-1}} and A_1 = {(c_0, c_1), ..., (c_{k-1}, c_0)}.
4. If N_j = N, stop: B = B_j.
5. j = j + 1.
6. Choose a path or cycle pc_j = (x_{j,0}, x_{j,1}, ..., x_{j,L_j}) such that x_{j,0}, x_{j,L_j} ∈ N_{j-1} and such that the
other vertices, x_{j,i}, 1 ≤ i < L_j, are chosen outside of N_{j-1}. For a path, we require L_j ≥ 2 and x_{j,0} ≠ x_{j,L_j}.
For a cycle, we require L_j ≥ 3 and x_{j,0} = x_{j,L_j}.
7. Set N_j = N_{j-1} ∪ {x_{j,1}, ..., x_{j,L_j-1}} and A_j = A_{j-1} ∪ {(x_{j,0}, x_{j,1}), ..., (x_{j,L_j-1}, x_{j,L_j})}.
8. Go to step 4.
We first show that the algorithm for the edge-redundant case terminates if the graph is two edge-connected,
i.e. edge-redundant. We shall proceed by contradiction. The algorithm would fail to
terminate correctly iff, at step 6, no new path or cycle pc_j could be found but a vertex in N was
not included in N_{j-1}. We therefore assume, for the sake of contradiction, that such a vertex exists.
Because the graph is connected, there is an edge e = [x, y] which connects some x in N_{j-1} to some
y in N\N_{j-1}. Because the graph is edge-redundant, there exists a path between x and y which does
not traverse e. Let [w, z] be the last edge with which this path exits N_{j-1}, so that w ∈ N_{j-1} and
z ∉ N_{j-1}. Note that w and x can be the same, but then z ≠ y. Similarly, y and z may be the
same, but then x ≠ w. Now, there exists a path from x to w, passing through y, which would
be selected at step 6. Thus, we have a contradiction.
It is easy to see that Condition 2 is satisfied. Note that if (i, j) is already included in the directed
sub-graph, then Step 6 ensures that (j, i) cannot be added.
Therefore, all that remains to be shown is that B is connected. We use induction on the sub-graphs
B_j. B_1 is obviously connected. Indeed, B_1 is a unidirectional ring. Assume B_{j-1} is connected, for
j ≥ 2. We need to show that, for all x, y ∈ N_j, there is a directed path from x to y in B_j. There are 4
cases: (1) x, y ∈ N_{j-1}; (2) x, y ∈ pc_j; (3) x ∈ pc_j, y ∈ N_{j-1}; (4) y ∈ pc_j, x ∈ N_{j-1}.
Case 1 follows from the induction hypothesis and the fact that A_j is a superset of A_{j-1}.
For case 2, we have that x, y ∈ pc_j. Pick vertices l and k such that x = x_{j,l} and y = x_{j,k}.
If l < k, i.e. y comes after x on the path, then (x_{j,l}, x_{j,l+1}, ..., x_{j,k}) is a path from x to y in
B_j. If l > k, i.e. y comes before x on the path, then there exists a path from x to x_{j,L_j}
on pc_j and a path from x_{j,0} to y on pc_j. If x_{j,0} = x_{j,L_j}, then (x, ..., x_{j,L_j} = x_{j,0}, ..., y) is a path
from x to y in B_j. If x_{j,0} ≠ x_{j,L_j}, then, by the induction hypothesis, there exists a path p(x_{j,L_j}, x_{j,0})
from x_{j,L_j} to x_{j,0} in B_{j-1} and hence in B_j. Therefore, (x, ..., x_{j,L_j}, p(x_{j,L_j}, x_{j,0}), x_{j,0}, ..., y) is
a path from x to y.
For case 3, we have x ∈ pc_j, y ∈ N_{j-1}. Pick k such that x = x_{j,k}; by the induction
hypothesis, there exists a path from x_{j,L_j} to y. Vertex x is therefore connected to y since there is a
path from x to x_{j,L_j} on pc_j.
For case 4, we have that y ∈ pc_j, x ∈ N_{j-1}. There is a path from x to x_{j,0} by the induction
hypothesis, and from x_{j,0} to y on pc_j.
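A compact sketch of the construction above follows. The BFS helpers and function names are our own scaffolding, and the input is assumed to be two-edge-connected:

from collections import deque

def find_cycle(adj, start):
    # Walk greedily, never immediately backtracking; in a graph of
    # minimum degree 2 the walk must close a cycle of length >= 3.
    walk, seen, prev = [start], {start: 0}, None
    u = start
    while True:
        v = next(w for w in adj[u] if w != prev)
        if v in seen:
            return walk[seen[v]:] + [v]
        seen[v] = len(walk)
        walk.append(v)
        prev, u = u, v

def find_ear(adj, covered):
    # BFS from the covered set through uncovered vertices until the
    # walk re-enters the covered set: this yields the path or cycle
    # pc_j of step 6 (interior vertices outside N_{j-1}).
    for x in covered:
        for y in adj[x] - covered:
            parent, queue = {y: x}, deque([y])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w in covered and not (u == y and w == x):
                        path = [w, u]
                        while path[-1] != x:
                            path.append(parent[path[-1]])
                        return path[::-1]
                    if w not in covered and w not in parent:
                        parent[w] = u
                        queue.append(w)
    return None

def select_directions(nodes, edges):
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cycle = find_cycle(adj, next(iter(nodes)))      # steps 1-3
    arcs, covered = list(zip(cycle, cycle[1:])), set(cycle)
    while covered != set(nodes):                    # steps 4-8
        ear = find_ear(adj, covered)
        arcs += list(zip(ear, ear[1:]))             # orient the new arcs
        covered |= set(ear)
    return arcs                                     # the arcs of B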
A very simple network management protocol will enable recovery using our choice of directions
created by the above algorithm. When recovering from an arc failure on the primary (secondary)
digraph, the protocol need only broadcast on the secondary (primary) digraph. Each node retains
only the first copy of the broadcast and releases all unnecessary connections created by the broadcast.
This simple concept is embedded in the protocol presented in Section 2.3.
2.2 Generalized Loop-back for Recovery from Node Failures
While the previous section dealt with recovery from link failures, we consider in this section the event
where a node fails. Note that the failure of a node entails the failure of all links incident upon the
failed node. The failure of a node therefore requires different techniques than those used to deal with
link failures.
Let us first overview the operation of loop-back in a mesh network when there is failure of a node.
Each node connected by a link to the failed node, i.e. adjacent to the failed node, independently
performs loop-back in the same manner as if the link connecting the failed node to the looping node
had failed. We assume that only one primary connection per wavelength is incident upon each node
but that there may be several outputs along one wavelength per node. Thus, we allow the use of
multicasting at nodes. The purpose of our restriction on the connections through a node is to ensure
that, after loop-back, there are no collisions in the back-up graph. Multicasting applications are
particularly attractive for WDM networks, because splitting at optical nodes offers a simple and
effective way of performing multicasting. Note that two types of traffic are looped back: traffic
destined for the failed node and traffic which only traversed the failed node. Let us first consider the
first type of traffic in the case where a node, say j, performs loop-back on the link between j and
node k, the failed node. Node j receives on a back-up channel traffic intended for node k. Only two
cases are possible: either link [j, k] failed but node k is still operational, or node k failed. Note that
we have made no assumption regarding the ability of the network management system to distinguish
between the failure of a node and the failure of a link. Indeed, the nodes may only be aware that
links have ceased to function, without knowing whether the cause is a single link failure or a node
failure. Since we have a node-redundant network, our loop-back mechanism can recover from failure
of node j, which entails failure of link [j, k]. Hence, even if there has been failure of link [j, k] only,
node j can eliminate all back-up traffic destined to node k, because the back-up mechanism ensures
that back-up traffic destined for node k arrives at node k even after failure of node j. If node k failed,
then eliminating back-up traffic destined for k will prevent such back-up traffic from recirculating in
the network, since recirculation would cause collisions and congestion. Thus, regardless of whether a
node failure or a link failure occurred, back-up traffic destined for the failed node will be eliminated
when a node adjacent to the failed node receives it. In SONET SHRs, a similar mechanism eliminates
traffic intended for a failed node.
We may now illustrate our mechanism with a specific example applied to the network we have been
considering. Figure 3 shows a sample set of directions which can be selected for generalized loop-back
recovery from node failure. Let us first consider the case where we have a primary connection along
the full line from node 1 to node 3 via node 2 and node 2 fails. The shortest loop-back path then
reconnects node 1 to node 3 around the failed node. Let us now consider the case where we have a primary
connection along the full line from node 1 to node 2 and node 2 fails. Then, the back-up path goes
from node 8 to node 7, which eliminates the connection, because node 7 is adjacent to node 2.
We model our network as a vertex-redundant undirected graph G = (N, E). We seek a directed
spanning sub-graph B of G, and its associated reversal R = B^R, satisfying the following conditions:
Condition 1: B is connected, i.e. there is a directed path in B from any node to any other
node.
Condition 2: (i, j) ∈ A_B implies (j, i) ∉ A_B, where A_B denotes the arcs of B.
Condition 3: For all x, n, y ∈ N such that (x, n), (n, y) are arcs of B, there exists a directed
path from x to y in R which does not pass through n.
As in the edge-redundant case, Condition 1 insures that any connection can be routed on B or R.
However, unlike the edge-redundant case, connectivity is insufficient to guarantee loop-back connectivity
after failure. Also as in the edge-redundant case, Condition 2 insures that loop-back connections
on R do not travel over the same arc as primary connections on B, and vice-versa. Condition 3 insures
that loop-back can be successfully performed and is equivalent to the statement that all 3 adjacent
nodes in B, x, n, y, are contained in a cycle of B.
We perform loop-back for node failures in the same manner as described above for link failures.
For instance, let us select two distinct nodes w and z. Let p_1 be the path in B, i.e. the path traversed
over λ_1 on fiber 1, from w to z, and let n be a node other than w or z traversed by p_1. We consider the
nodes x and y such that (x, n) and (n, y) are traversed in that order in p_1. Thus, (x, n), (n, y) are in
A_B. Let p_2 be a path in R which does not include vertex n and which goes from vertex x to vertex y.
We perform loop-back from w to z using paths p_1, p_2 at node n by traversing the following directed
circuit:
from w to x, we use path p_1;
at x, we loop-back from primary to secondary;
from x to y, we use path p_2;
at y, we loop-back from secondary to primary;
from y to z, we use path p_1.
As discussed previously, this loop-back is more general than the type of loop-back used in a
ring. In particular, the loop-back is not restricted to use a back-haul route which retraces the
primary path. In order to guarantee loop-back, it is sufficient to select B and R so that, in the event
of any vertex (edge) failure affecting B or R, there exists a working path around the failure on the
other sub-graph.
Any sub-graph with Conditions 1-3 is sufficient to perform loop-back as described above. The
algorithm below guarantees these conditions by amending the algorithm for the edge-redundant case.
The edge-redundant algorithm fails to insure Condition 3 for two reasons. The first reason is that
cycles are allowed in Step 6, i.e. pc_j with x_{j,0} = x_{j,L_j} is possible in iteration j, and hence failure
of node x_{j,0} would leave both B and R disconnected. The second and more fundamental reason is
that the ordering of the nodes on the added paths in steps 6 and 7 is very unrestrictive.
Our algorithm starts by choosing a directed cycle of at least 3 vertices containing some arbitrary
edge [t, s]. If this cycle does not include all nodes in the graph, we then choose a directed path
that starts on some node in the cycle, passes through some set of nodes not on the cycle, and ends
on another node on the cycle. If the cycle and path above do not include all vertices of the graph,
we again construct another directed path, starting on some node already included, passing through
one or more nodes not included, and then ending on another already included node. The algorithm
continues to add new nodes in this way until all nodes are included.
It is simple to show that, in a vertex-redundant graph, for any edge e, a cycle with at least 3 vertices must
exist containing e. It can also be seen that, for any such cycle, a path can be added as above, and
subsequent paths can be added, in arbitrary ways, until all nodes are included. It is less simple to
choose the direction of the added paths, and hence the B and R directed sub-graphs. The technique
we present relies in part on results presented in [MFBG99, MFB97, FMB97, MFGB98]. We now present
the algorithm followed by the proof of its correctness.
ALGORITHM FOR SELECTING DIRECTIONS TO RECOVER FROM NODE
FAILURES
1. j = 1. Pick an arbitrary edge [t, s] ∈ E.
2. (a) Choose any cycle (s, c_1, ..., c_k, s), with c_k = t and k ≥ 2, in the graph.
(b) Order these nodes by assigning values such that v(s) > v(c_1) > ... > v(c_k).
3. Set B_1 = (N_1, A_1), where N_1 = {s, c_1, ..., c_k} and A_1 = {(s, c_1), ..., (c_{k-1}, c_k), (c_k, s)}.
4. If N_j = N, stop: B = B_j.
5. j = j + 1.
6. (a) Choose a path pc_j = (x_{j,0}, x_{j,1}, ..., x_{j,L_j}), L_j ≥ 2, in the graph such that x_{j,0}, x_{j,L_j} ∈ N_{j-1}
and v(x_{j,0}) > v(x_{j,L_j}). The other vertices, x_{j,i}, 1 ≤ i < L_j, are chosen outside of N_{j-1}.
(b) Order the new vertices by assigning values such that v(x_{j,0}) > v(x_{j,1}) > ... > v(x_{j,L_j}).
7. Set N_j = N_{j-1} ∪ {x_{j,1}, ..., x_{j,L_j-1}} and A_j = A_{j-1} ∪ {(x_{j,0}, x_{j,1}), ..., (x_{j,L_j-1}, x_{j,L_j})}.
8. Go to step 4.
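One plausible rendering of this construction, under our reading of the ordering scheme above; the helper routines, the names and the numeric value scheme are our own, and node-redundancy of the input is assumed:

from collections import deque

def bfs_path(adj, src, dst, forbidden=frozenset()):
    # Shortest path from src to dst that never traverses a forbidden
    # arc; used to close the initial cycle through the edge [t, s].
    parent, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if (u, w) in forbidden or w in parent:
                continue
            parent[w] = u
            if w == dst:
                path = [w]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            queue.append(w)
    raise ValueError("graph is not redundant")

def find_ear(adj, covered):
    # A path x -> (uncovered interior) -> w with x, w covered and
    # x != w, as required by step 6 of the node-failure algorithm.
    for x in covered:
        for y in adj[x] - covered:
            parent, queue = {y: x}, deque([y])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w in covered and w != x:
                        path = [w, u]
                        while path[-1] != x:
                            path.append(parent[path[-1]])
                        return path[::-1]
                    if w not in covered and w not in parent:
                        parent[w] = u
                        queue.append(w)
    raise ValueError("graph is not vertex-redundant")

def select_directions_node(nodes, edges, t, s):
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Steps 1-3: orient a cycle through [t, s]; values decrease from
    # s around the cycle to t, so (t, s) is the unique increasing arc.
    cycle = bfs_path(adj, s, t, forbidden={(s, t), (t, s)}) + [s]
    val = {u: float(len(cycle) - i) for i, u in enumerate(cycle[:-1])}
    arcs, covered = list(zip(cycle, cycle[1:])), set(cycle)
    # Steps 4-8: attach ears, always directed from higher to lower value.
    while covered != set(nodes):
        ear = find_ear(adj, covered)
        if val[ear[0]] < val[ear[-1]]:
            ear.reverse()
        lo, hi = val[ear[-1]], val[ear[0]]
        for i, u in enumerate(ear[1:-1], start=1):
            val[u] = hi - (hi - lo) * i / len(ear)  # strictly inside (lo, hi)
        arcs += list(zip(ear, ear[1:]))
        covered |= set(ear)
    return arcs, val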
Note in step 6b that the values of the new vertices lie strictly between v(x_{j,L_j}) and v(x_{j,0}).
We first show that the algorithm for the node-redundant
case terminates if the graph is vertex-redundant. We shall proceed by contradiction. The algorithm
would fail to terminate correctly iff, at step 6, no new path p_j could be found but a vertex in N was
not included in N_{j-1}. We assume, for the sake of contradiction, that such a vertex exists. Because
the graph is connected, there is an edge [x, y] which connects some x in N_{j-1} to some y in N\N_{j-1}.
Pick a vertex q ∈ N_{j-1}, such that q ≠ x. Because the graph is node-redundant, there exists a path
between y and q which does not use x. Let [w, z] be the last edge with which this path exits
N\N_{j-1}, so that z ∉ N_{j-1} and w ∈ N_{j-1}. Note that w = q and z = y are both possible. Now, there exists a path
from x to w, passing through y, which would be selected at step 6 in the algorithm. Therefore, we
have a contradiction.
We now prove that B satisfies Conditions 1-3. The fact that B is connected follows by induction
on j using almost identical arguments as used in the proof for the link-redundant case. In particular,
we can see by induction on j that there is a directed path in B_j from any x ∈ N_j to any y ∈ N_j. Since
these properties hold for each j, they also hold for the final directed sub-graph B. We may therefore
state that B is connected. As in the edge-redundant case, Condition 2 is satisfied by the restrictions
on adding new arcs.
Finally, we prove that B satisfies Condition 3. We need to prove that, for all x, n, y ∈ N
such that (x, n), (n, y) are arcs of B, there exists a directed path from x to y in R which does not
pass through n. Since R is the reversal of B, we can prove the equivalent statement that there exists
a directed path from y to x in B which does not pass through n. The key observation is to note that
the arc (t, s) has a special property. In particular, it is the only arc in B for which the value of the
originating node is lower than the value of the terminating node, i.e. v(i) > v(j) for
all (i, j) ∈ A_B \ {(t, s)}. From this property it immediately follows that all
directed cycles in B contain (t, s). To see this, let (x_0, x_1, ..., x_m = x_0) be a cycle and note that, if (t, s)
were not traversed in this cycle, then v(x_0) > v(x_1) > ... > v(x_m) = v(x_0), which is impossible. Also, since B is connected, we also have that (t, s) is the unique arc into s in B for,
otherwise, we could construct a cycle through s which did not pass through t.
Only two cases need to be considered to prove the desired property. First,
consider n = s. Since B is connected, there exists a path from y to x in B, and this path need not
include s, since the only way to reach s is through t. Now consider n ≠ s. There exist paths
p(y, n) from y to n and p(n, x) from n to x, both in B. Since p(y, n) followed by the arc
(n, y) is a cycle, it includes s. Similarly, p(n, x) followed by the arc (x, n) is a cycle and
hence includes s. Therefore, there is a path starting at y, proceeding on p(y, n) until s (which is before
n in p(y, n)), continuing in p(n, x) at s (which is after n in p(n, x)), and ending at x.
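Conditions 1-3 can also be checked mechanically on a candidate arc set; a small sketch (the names are ours):

def check_conditions(nodes, arcs):
    # Verify Conditions 1-3 for B = (nodes, arcs); R is the reversal
    # of B, so Condition 3 is tested as: B connects y to x avoiding n.
    B = set(arcs)
    succ = {v: set() for v in nodes}
    for u, v in B:
        succ[u].add(v)

    def reach(src, banned=None):
        # vertices reachable from src in B without entering `banned`
        seen, stack = {src}, [src]
        while stack:
            for w in succ[stack.pop()]:
                if w != banned and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    cond1 = all(reach(v) == set(nodes) for v in nodes)  # B connected
    cond2 = all((v, u) not in B for (u, v) in B)        # no shared arcs
    cond3 = all(x in reach(y, banned=n)                 # loop-back path
                for (x, n) in B for y in succ[n])
    return cond1, cond2, cond3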
2.3 Protocol
We now overview a protocol which ensures proper recovery, using generalized loop-back for node or link
recovery. Our protocol is more involved than that needed to recover only from a link failure, since we
must contend with the failure of all links adjacent to the failed node. However, our algorithm will also
operate properly for link failures without node failures. Our protocol uses negative acknowledgements
and labels to establish correct rerouting. The signaling for the protocol may be performed over an
out-of-band control channel, or any in-band control channel scheme, such as a subcarrier multiplexed
signal.
Consider the failure of a primary fiber from x to y. Failure of the fiber may be due to failure of
the fiber itself or of node y. When x detects the failure, it writes "y" into the failure label and loops
the primary stream back into the back-up digraph, splitting it across all outgoing arcs in the back-up
digraph. As the traffic enters each new node, the node forwards the traffic, again splitting it over
all outgoing arcs. Backup fibers leaving a node can be pre-configured to split an incoming stream,
shortening the time required to flood failure information across outgoing links. For nodes with only
one incoming stream, the route is fully pre-planned, and no traffic is lost during the decision process.
For nodes with more than one incoming stream, the first of the streams to receive traffic is chosen
for forwarding. A stream that becomes active after the first - typically owing to traffic from the same
failure arriving via a different route - is dropped, and a negative acknowledgement (NACK) is returned
on the reverse back-up arc. A node that receives a NACK on an outgoing link ceases to forward traffic
on that link. If all outgoing links for a node are NACKed, the node propagates a NACK on its own
incoming link, in effect releasing the connection on that link. If all outgoing links at x are NACKed,
recovery has failed (possibly in multi-failure scenarios or scenarios where several connections over the
same wavelength were present at a failed node).
The NACK-based protocol can be extended with hop-count and signal-power (splitting) restrictions
to reduce the area over which a failure propagates, but such restrictions require more careful selection of
the back-up digraph to guarantee recovery from all single failures and to prevent significant degradation
of multi-failure recovery possibilities.
The use of NACKs serves to limit the use of back-up arcs to those necessary for recovery. Another
approach to achieving this goal is to mark the successful route and forward tear-down messages
down all other arcs. The NACK scheme is superior to this approach in two ways. First, tear-down
messages must catch up with the leading edge of the traffic, but cannot travel any faster. In the
worst case, a route is torn down only to allow a cyclic route to recreate itself, resulting in long-term
traffic instabilities in the back-up digraph. To avoid this possibility, tear-down requires that nodes
remember the existence of failure traffic between the time that they tear down the route and the time
that the network settles (a global phenomenon). A second point in favor of the NACK-based scheme
is that it handles multicast (possibly important for node failures) naturally by discarding only unused
routes. A tear-down scheme must know the number of routes to recover in advance or discover it
dynamically; the first option requires a node to know the routes through its downstream neighbors,
while the second option is hard because of timing issues (when have all routes been recovered?).
Meanwhile, y detects the loss of the stream and begins listening for traffic with its name or x's
name on the back-up stream. The second case handles failure of node x, which results in flooding of
traffic destined for x. Note that traffic for x can also be generated owing to failure of a primary arc
ending at x, but, in such a case, y does not detect any failure and does not listen for back-up traffic.
Once a stream with either name is received, it is spliced into the primary traffic stream, completing
recovery. Other paths are torn down through NACKs.
Note that if a stream ends at a failed node x, no node listens for the back-up traffic for node x,
and all connections carrying back-up traffic for node x are eventually torn down.
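The per-node logic of this protocol can be summarized in a few lines of state; the sketch below is illustrative only, and the message formats, method names and callbacks are assumptions:

class BackupNode:
    # NACK-based protocol state at one node of the back-up digraph:
    # keep the first incoming back-up stream, NACK duplicates, and
    # release our own input once every outgoing arc has been NACKed.
    def __init__(self, name, out_arcs):
        self.name = name
        self.out_arcs = set(out_arcs)
        self.chosen_in = None
        self.nacked = set()

    def on_stream(self, in_arc, failure_label, send, nack, splice):
        if failure_label == self.name:
            splice(in_arc)               # we lost this stream: recovery done
        elif self.chosen_in is None:
            self.chosen_in = in_arc
            for arc in self.out_arcs:    # flood over the back-up digraph
                send(arc, failure_label)
        else:
            nack(in_arc)                 # duplicate stream: drop and release

    def on_nack(self, out_arc, nack):
        self.nacked.add(out_arc)         # stop forwarding on this arc
        if self.nacked == self.out_arcs and self.chosen_in is not None:
            nack(self.chosen_in)         # all outputs dead: release our input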
While our protocol for node failure is more complicated than that for link failure, it is still relatively
simple. Node failure in ring-based systems is a very complex operation whenever a node is on more than
one ring. For double cycle covers, node recovery requires hopping among rings, and thus necessitates
a centralized controller with global knowledge of the network. Even for simple double-homed SONET
rings, node recovery involves the use of matched nodes. Proper operation of matched nodes requires
significant inter-ring signaling as well as dual-fed path protection between rings.
Applications
3.1 Choice of Routings
In this section, we present heuristic algorithms for selecting directions in the back-up graph. We
seek to select directions in such a way as to avoid excessive length for back-up paths. We consider three
different algorithms.
The first algorithm, which we term Heuristic 1, first finds, for each link, a shortest loop which
includes that link. A loop is a directed cycle, thus a shortest loop is a directed cycle with the
minimum number of links. We order the shortest loops of all the links in ascending order of length.
Shortest loops with equal lengths are ordered arbitrarily with respect to each other. Beginning with
the first shortest loop, in ascending order, we assign, whenever possible, directions according to the
directions of the arcs along the shortest loop. A sketch of this procedure follows the description of
the three algorithms below.
The second algorithm, Heuristic 2, also relies on considering shortest loops but takes into account
the fact that the choice of direction on a link may affect other links. We create a heuristic measure of
this effect, which we call the associate number (AN). The AN of a link is the number of different shortest
loops which pass through that link. In particular, the AN can help us distinguish among links with
equal-length shortest loops. We order the links in ascending order of AN. We begin, as for Heuristic 1,
by finding, for each link, a shortest loop which includes that link. Links with equal ANs are ordered
arbitrarily with respect to each other. Beginning with the first link and progressing in ascending order,
we assign directions, whenever possible, according to the shortest loop of the link being considered.
The last algorithm we consider is a random assignment of directions. While the number of possible
directions is exponential in the number of links, we may significantly reduce that number by requiring
that the directions be feasible.
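A sketch of the shortest-loop computation at the heart of Heuristic 1 (the helper names are ours, and the graph is assumed two-edge-connected so that a loop always exists):

from collections import deque

def shortest_loop(adj, u, v):
    # Shortest loop through link [u, v]: take the arc (u, v), then a
    # shortest path from v back to u that does not reuse the link.
    parent, queue = {v: None}, deque([v])
    while queue:
        x = queue.popleft()
        for w in adj[x]:
            if (x, w) in ((u, v), (v, u)) or w in parent:
                continue
            parent[w] = x
            if w == u:
                path = [u]               # unwind parents back to v
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return [u] + path[::-1]  # the loop u -> v -> ... -> u
            queue.append(w)

def heuristic1(nodes, edges):
    adj = {a: set() for a in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    loops = sorted((shortest_loop(adj, a, b) for a, b in edges), key=len)
    directions = set()
    for loop in loops:                   # shortest loops first
        for a, b in zip(loop, loop[1:]):
            if (a, b) not in directions and (b, a) not in directions:
                directions.add((a, b))   # assign when still possible
    return directions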
We apply our algorithms to three networks, NJ LATA, LATAX and ARPANET, shown in Figures
4, 5 and 6. We consider the maximum length of a back-up path and the average length. Table 1
shows the results obtained from running the different algorithms for the three networks we consider.
Heuristic 1 was run several times for each network and the best result was kept for the maximum and the
average. Note that the same choice of directions did not always yield both the best maximum and
the best average. Heuristic 2 was run in the same way as Heuristic 1. For the random algorithm, we
limited ourselves to 52 runs for NJ LATA and 128 runs for LATAX and ARPANET. The best
maximum and the best average were chosen in each case. Comparing the running times of the heuristic
algorithms against the above numbers of runs of the random algorithm, we obtain
that Heuristic 1 yielded a run time improvement of 72%, 90%, 88% over random choice of directions
for NJ LATA, LATAX and ARPANET, respectively. Heuristic 2 yielded a run time improvement of 73%,
91%, 90% over random choice of directions for NJ LATA, LATAX and ARPANET, respectively.
From our simulations, Heuristic 2 was slightly better than Heuristic 1 in terms of run time and average
back-up length.
Table 1: Comparison of the best results between the heuristic algorithms and the method of selecting directions randomly. Columns: Random (Max., Avg.), Heuristic 1 (Max., Avg.), Heuristic 2 (Max., Avg.).
3.2 WDM-based Loop-back Recovery
In fiber-based restoration, the entire traffic carried by a fiber is backed up by another fiber. In fiber-based restoration it does not matter whether the system is a WDM system. If traffic is allowed in both directions in a network, fiber-based restoration relies on four fibers, as illustrated in Figure 7. In WDM-based recovery, restoration is performed on a wavelength-by-wavelength basis.
WDM-based recovery requires only two fibers, although it is applicable to a higher number of fibers.
Figure 8 illustrates WDM-based recovery. A two-fiber counter-propagating WDM system can be used for WDM-based restoration, even if traffic is allowed in both directions. Note that WDM restoration as shown in Figure 8 does not require any change of wavelength. Thus, traffic initially carried by λ1 is backed up by the same wavelength. Obviating the need for wavelength changing is economical and efficient in WDM networks. One could, of course, back up traffic from λ1 on fiber 1 with λ2 on fiber 2, if there were advantages to such wavelength changing, for instance in terms of wavelength assignment for particular traffic patterns. We can easily extend the model to a system with more fibers, as long as the back-up for a certain wavelength on a certain fiber is provided by some wavelength on another fiber. Moreover, we may change the fiber and/or wavelengths from one fiber section to another. For instance, the back-up to λ1 on fiber 1 may be λ1 on fiber 2 on a two-fiber section and λ2 on fiber 3 on another section with four fibers. Note, also, that we could elect not to back up λ1 on fiber 1, instead using that capacity for primary traffic. The extension to systems with more fibers, inter-wavelength back-ups and heterogeneous back-ups among fiber sections can be readily done.
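The rule just described, namely that each (fiber, wavelength) pair is protected by some wavelength on a different fiber and that the mapping may change from one fiber section to the next, can be written down as a small table and checked mechanically. A sketch, with identifiers invented purely for illustration:

# backup[section][(fiber, wavelength)] = (fiber', wavelength'), fiber' != fiber
backup = {
    "two-fiber section":  {("fiber 1", "lambda 1"): ("fiber 2", "lambda 1")},
    "four-fiber section": {("fiber 1", "lambda 1"): ("fiber 3", "lambda 2")},
}

def check_backup(backup):
    for section, table in backup.items():
        for (fiber, wl), (bfiber, bwl) in table.items():
            # the one hard constraint: the back-up lives on another fiber
            assert bfiber != fiber, (section, fiber, wl)
    return True

print(check_backup(backup))   # prints True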
There are several advantages of WDM-based recovery systems over fiber-based systems. The first advantage is that, if fibers are loaded with traffic at half of total capacity or less, then only two fibers rather than four are needed to provide recovery. Thus, a user need only lease two fibers, rather than paying for unused bandwidth over four fibers. On existing four-fiber systems, fibers could be leased in pairs rather than in fours, allowing two leases of two fibers each for a single four-fiber system. The second advantage is that, in WDM-based systems, certain wavelengths may be selectively given restoration capability. For instance, half the wavelengths on a fiber may be assigned protection, while the rest may have no protection. Different wavelengths may thus afford different levels of restoration QoS, which can be reflected in pricing. In fiber-based restoration, by contrast, all the traffic carried by a fiber is restored via another fiber.
If each fiber is less than half full, WDM-based loop-back can help avoid the use of counterpropagating wavelengths on the same fiber. Counterpropagating wavelengths on the same fiber are intended to enable duplex operation in situations which do not require a full fiber's worth of capacity in each direction and which have scarce fiber resources. However, counterpropagation on the same fiber is onerous and reduces the number of wavelengths that a fiber can carry with respect to unidirectional propagation. Our WDM-based loop-back may make using two unidirectional fibers, where one fiber is a back-up for the other, preferable to using counterpropagating fibers.
We may now draw a comparison between generalized loop-back and double cycle covers for WDM-based loop-back recovery. The ability to perform restoration over mesh topologies was first introduced in [ESH97, ES96]. In particular, [ESH97] considers link failure restoration in optical networks with arbitrary two-link-redundant mesh topologies and bi-directional links. The scheme relies on applying methods for double cycle covers to restoration.
Let us first discuss how double-cycle ring covers can be used to perform recovery. A double cycle ring cover covers a graph with cycles in such a way that each edge is covered by two cycles. The cycles can then be used as rings to perform restoration. Each cycle corresponds either to a primary or a secondary two-fiber ring. For two-edge-connected planar graphs, a polynomial-time algorithm exists for creating double cycle covers. For two-edge-connected non-planar graphs, the existence of double cycle covers is only a conjecture, thus no algorithm except exhaustive search exists. In this subsection, we consider an example network and its possible double cycle covers. On the basis of these double cycle covers, we discuss whether double cycle covers can be used in the context of WDM loop-back, described in the previous section.
Let us consider a link covered by two rings, rings 1 and 2. If we assign a direction to ring 1 and the opposite direction to ring 2, then ring-based recovery using the double cycle cover uses ring 2 to back up ring 1. In effect, this recovery is similar to recovery in conventional SHRs, except that the two rings which form four-fiber SHRs are no longer co-located over their entire length. Figure 9 shows the two possible double cycle covers, shown in thin lines, for a certain fiber topology, shown in bold lines. In the case of four-fiber systems, with two fibers in the same direction per ring, we have fiber-based recovery, because fibers are backed up by fibers. For the type of WDM-based loop-back we consider in this section, each ring is both primary for certain wavelengths and secondary for the remaining wavelengths. For simplicity, let us again consider just two wavelengths. Figures 10 and 11 show that we cannot use one ring to provide WDM-based loop-back back-up for another unless we perform wavelength changing and add significant network management. We cannot assign primary and secondary wavelengths in such a way that a wavelength is secondary or primary over a whole ring.
We may point out another drawback of the back-up paths afforded by double cycle covers. For both ring covers shown in Figure 9, some links are covered by rings which are not of the same length. For instance, in Figure 10, a break on a link may cause one direction to be backed up on ring 1, while the other direction may be backed up on ring 4. Thus, the back-up may traverse only three links along ring 1 in one direction and seven links along ring 4 in the other direction. The two directions on a link will therefore have different delays in their restoration time and incur different timing jitters. Such asymmetry in the propagation delay of the back-up path does not occur in SHRs or in generalized loop-back, since the back-up paths for both directions traverse the same links.
3.3 Plurality of Back-up Routes for Generalized Loop-back
We have mentioned that our algorithm can be used to perform recovery even when there is a change in the conditions of the network. In this section, we give a brief example of how such flexibility is afforded. Figure 12 illustrates our example. We have a single network, with a recovery sub-graph built for link failure restoration. In case of failure of the link between node 2 and node 3, the recovery back-up path is shown by a curving directed line. The shortest back-up path uses the link between nodes 9 and 10, as shown in Figure 12. Suppose that the link between nodes 9 and 10 becomes unusable for the back-up path, for instance because the link has failed or because all the wavelengths on that link are used to accommodate extra traffic. Then, the back-up path for a failure between nodes 2 and 3 can be the alternative path terminating at node 3 shown by a curved line in Figure 12.b. Thus, the same back-up sub-graph can be used both when the link between nodes 9 and 10 is available and when it is not available.
Note that not all links can be allowed to become unavailable. If the link between nodes 3 and 10 becomes unavailable, restoration after failure of the link between nodes 2 and 3 is not possible. However, it is possible to determine whether certain links are necessary for recovery in the case of failure of other links. Since there are two paths in the back-up sub-graph from node 10 to node 9, the link between nodes 9 and 10 is not necessary, and that link can be freed up to carry extra traffic if the need arises.
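Deciding whether a link is necessary is a plain reachability question on the back-up sub-graph: an arc can be freed for extra traffic if its endpoints stay connected without it. A minimal sketch, with a toy digraph standing in for the sub-graph of Figure 12:

from collections import deque

def reachable(arcs, src, dst, banned=frozenset()):
    # True if dst can be reached from src in the digraph 'arcs',
    # ignoring every arc listed in 'banned'.
    seen, queue = {src}, deque([src])
    while queue:
        x = queue.popleft()
        if x == dst:
            return True
        for (a, b) in arcs:
            if a == x and (a, b) not in banned and b not in seen:
                seen.add(b)
                queue.append(b)
    return False

# Two arc-disjoint routes from node 10 to node 9, as in the discussion above.
arcs = {(10, 7), (7, 8), (8, 9), (10, 9)}
# The direct arc is unnecessary if 9 stays reachable from 10 without it:
print(reachable(arcs, 10, 9, banned={(10, 9)}))   # True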
Recent results [MLT00] have shown that significant savings, of the order of 25%, can be achieved using generalized loop-back over a variety of networks without sacrificing the length of the longest back-up or the ability to recover from double failures.
4 Summary and Conclusions
We have presented generalized loop-back, a novel way of implementing loop-back on mesh networks without using ring-based schemes. We have established routings and protocols to ensure recovery after a link or a node failure. Our method is guaranteed to work in polynomial time regardless of whether the graph representation of the network is planar or not, whereas double cycle covers have a polynomial-time solution only for planar graphs. The gist of our method is to assign directions to fibers and to the wavelengths traveling through them. Our method allows great flexibility in planning the configuration of a network, as long as it has redundancy, while providing the bandwidth utilization advantages typically associated with loop-back protection in SHRs. Recovery, as in SONET BLSR, is performed by the nodes adjacent to the link or node failure. Moreover, our loop-back recovery method does not require the nodes performing loop-back to distinguish between a node and a link failure. We have shown that simple heuristic algorithms yield satisfactory results in terms of average and maximum length of back-up paths. We have compared our method to the previously known method for loop-back for link failure on mesh networks. That method ([ESH97]) is based upon double cycle covers, and we have shown that such a method may not be applied to WDM-based loop-back systems. Moreover, we have shown by a simple example that generalized loop-back allows recovery to be performed in a bandwidth-efficient manner.
There are several areas of further work. One of them is considering the issue of wavelength assignment jointly with back-up considerations, whether the back-up be loop-back, APS or hybrid. Another issue is the determination of the back-up path. Broadcasting, or flooding, in the back-up wavelength causes that wavelength to be unavailable for other uses in parts of the network which are not required to provide back-up. Some methods for determining back-up paths are presented in [FMB98].
Another area for further research is the use of generalized loop-back to perform bandwidth-efficient recovery. As we discussed in Section 2.1, link and node restoration are generally less efficient, in terms of capacity utilization, than event-triggered path restoration. However, our scheme allows recovery of links which are not included in the back-up sub-graph, as long as the end nodes are included in the back-up sub-graph. This operation can be viewed as being akin to p-cycles, but with greater flexibility in the choice of the back-up sub-graph. Eliminating links from the back-up sub-graph is economical in bandwidth but entails some degradation in terms of other performance metrics, such as length of back-up path or recovery from two failures. Preliminary results show that significant savings in terms of bandwidth utilization can be achieved without appreciably affecting other performance metrics ([MLT00]).
--R
A distributed link restoration algorithm with robust preplanning.
Performance analysis of fast distributed network restoration algorithms.
An architecture for efficient survivable networks
Protection planning in transmission networks.
A multi-layer restoration strategy for reconfigurable networks
A fast distributed network restoration algorithm.
Spare capacity assignment for di
Comparison of capacity efficiency of DCS network restoration routing techniques
Automatic protection switching for link failures in optical networks with bi-directional links
Link failure restoration in optical networks with arbitrary mesh topologies and bi-directional links
Covering graphs by cycles.
Longest cycles in 2-connected graphs of independence number
Optimal spare capacity design for various protection switching methods in ATM networks.
Double search self-healing algorithm and its characteristics
Near optimal spare capacity placement in a mesh restorable network.
Techniques for
Dynamic bandwidth-allocation and path-restoration in SONET self-healing networks
A girth requirement for the double cycle cover conjecture.
The self-healing TM network.
Case studies of survivable ring
An optimal spare-capacity assignment model for survivable networks with hop limits
The hop-limit approach for spare-capacity assignment in survivable networks
Dynamic recon
Covering graphs with simple circuits.
Hamilton cycles in regular 2-connected graphs
A survey of the double cycle cover conjecture.
Topological optimization of a communication network subject to a reliability constraint.
Veerasamy and J.
Distributed control algorithms for dynamic restoration in DCS mesh networks: Performance evaluation.
An ATM VP-based self-healing ring
ATM virtual path self-healing based on a new path restoration protocol
A bandwidth efficient self-healing ring for B-ISDN
A dynamic recon
Design of survivable communication networks under performance constraints.
Survivable WDM mesh networks
Sums of circuits.
Interconnection of self-healing rings
An algorithm for survivable network design employing multiple self-healing rings
Distributed self-healing control in SONET
Service application for SONET DCS distributed restoration.
Design of Survivable Networks.
Survivable network planning methods and tools in taiwan.
A capacity comparison for SONET self-healing ring networks
Polyhedral decomposition of cubic graphs.
On the complexity of
Two strategies for spare capacity placement in mesh restorable networks.
An algorithm for designing rings for survivable
Feasibility study of a high-speed SONET self-healing ring architecture in future interoffice networks
A multi-period design model for survivable network architecture selection for SDH/SONET interoffice networks
Strategies and technologies for planning a cost-effective survivable network architecture using optical switches
Survivable network architectures for broad-band fiber optic networks: Model and performance comparison
Backup VP preplanning strategies for survivable multicast ATM.
Maximal circuits of graphs II.
Fiber Network Service Survivability.
A passive protected self-healing mesh network architecture and applications
A novel passive protected SONET bidirectional self-healing ring architecture
Restoration strategies and spare capacity requirements in self-healing ATM networks
Fitness: Failure immunization technology for network service survivability.
An improvement of Jackson's result on Hamilton cycles in 2-connected graphs
--TR
Covering graphs by cycles
A passive protected self-healing mesh network architecture and applications
On the Complexity of Finding a Minimum Cycle Cover of a Graph
Optimal capacity placement for path restoration in STM or ATM mesh-survivable networks
Restoration strategies and spare capacity requirements in self-healing ATM networks
Redundant trees for preplanned recovery in arbitrary vertex-redundant or edge-redundant graphs
Fiber Network Service Survivability
Spare Capacity Assignment in Telecom Networks Using Path Restoration
--CTR
Timothy Y. Chow, Fabian Chudak, Anthony M. Ffrench, Fast optical layer mesh protection using pre-cross-connected trails, IEEE/ACM Transactions on Networking (TON), v.12 n.3, p.539-548, June 2004
Mansoor Alicherry, Randeep Bhatia, Simple pre-provisioning scheme to enable fast restoration, IEEE/ACM Transactions on Networking (TON), v.15 n.2, p.400-412, April 2007
Canhui Ou, Laxman H. Sahasrabuddhe, Keyao Zhu, Charles U. Martel, Biswanath Mukherjee, Survivable virtual concatenation for data over SONET/SDH in optical transport networks, IEEE/ACM Transactions on Networking (TON), v.14 n.1, p.218-231, February 2006 | mesh networks;WDM;loop-back;network restoration
506843 | A mutual exclusion algorithm for ad hoc mobile networks. | A fault-tolerant distributed mutual exclusion algorithm that adjusts to node mobility is presented, along with proof of correctness and simulation results. The algorithm requires nodes to communicate with only their current neighbors, making it well-suited to the ad hoc environment. Experimental results indicate that adaptation to mobility can improve performance over that of similar non-adaptive algorithms when nodes are mobile. | Introduction
A mobile ad hoc network is a network wherein a pair of nodes communicates
by sending messages either over a direct wireless link, or over a sequence of
wireless links including one or more intermediate nodes. Direct communication
is possible only between pairs of nodes that lie within one another's transmission
radius. Wireless link "failures" occur when previously communicating nodes move such that they are no longer within transmission range of each other. Likewise, wireless link "formation" occurs when nodes that were too far separated to communicate move such that they are within transmission range of each other.
(This is an extended version of the paper presented at the Dial M for Mobility Workshop, Dallas TX, Oct. 30, 1998. Supported by GE Faculty of the Future and Dept. of Education GAANN fellowships. Supported in part by NSF PYI grant CCR-9396098 and NSF grant CCR-9972235. Supported in part by Texas Advanced Technology Program grant 010115-248 and NSF grants CDA-9529442 and CCR-9972235.)
Characteristics that distinguish ad hoc networks from existing distributed networks
include frequent and unpredictable topology changes and highly variable
message delays. These characteristics make ad hoc networks challenging environments
in which to implement distributed algorithms.
Past work on modifying existing distributed algorithms for ad hoc networks includes numerous routing protocols (e.g., [8,9,11,13,16,18,19,22-24]), wireless channel allocation algorithms (e.g., [14]), and protocols for broadcasting and multicasting (e.g., [8,12,21,26]). Dynamic networks are fixed wired networks that share some characteristics of ad hoc networks, since failure and repair of nodes and links is unpredictable in both cases. Research on dynamic networks has focused on total ordering [17], end-to-end communication, and routing (e.g., [1,2]).
Existing distributed algorithms will run correctly on top of ad hoc routing protocols, since these protocols are designed to hide the dynamic nature of the network topology from higher layers in the protocol stack (see Figure 1(a)). Routing algorithms on ad hoc networks provide the ability to send messages from any node to any other node. However, our contention is that efficiency can be gained by developing a core set of distributed algorithms, or primitives, that are aware of the underlying mobility in the network, as shown in Figure 1(b). In this paper, we present a mobility-aware distributed mutual exclusion algorithm to illustrate the layering approach in Figure 1(b).
Figure 1. Two possible approaches for implementing distributed primitives: (a) user applications over distributed primitives over a routing protocol over the ad hoc network; (b) user applications over distributed primitives and the routing protocol directly over the ad hoc network.
The mutual exclusion problem involves a group of processes, each of which
intermittently requires access to a resource or a piece of code called the critical
section (CS). At most one process may be in the CS at any given time. Providing
shared access to resources through mutual exclusion is a fundamental problem
in computer science, and is worth considering for the ad hoc environment, where
stripped-down mobile nodes may need to share resources.
Distributed mutual exclusion algorithms that rely on the maintenance of a logical structure to provide order and efficiency (e.g., [20,25]) may be inefficient when run in a mobile environment, where the topology can potentially change with every node movement. Badrinath et al. [3] solve this problem on cellular mobile networks, where the bulk of the computation can be run on wired portions of the network. We present a mutual exclusion algorithm that induces a logical directed acyclic graph (DAG) on the network, dynamically modifying the logical structure to adapt to the changing physical topology in the ad hoc environment. We then present simulation results comparing the performance of this algorithm to a static distributed mutual exclusion algorithm running on top of an ad hoc routing protocol. Simulation results indicate that our algorithm has better average waiting time per CS entry and message complexity per CS entry no greater than the cost incurred by a static mutual exclusion algorithm running on top of an ad hoc routing algorithm.
The next section discusses related work. In Section 3, we describe our system assumptions and define the problem in more detail. Section 4 presents our mutual exclusion algorithm. We present a proof of correctness and discuss the simulation results in Sections 5 and 6, respectively. Section 7 presents our conclusions.
2. Related Work
Token based mutual exclusion algorithms provide access to the CS through
the maintenance of a single token that cannot simultaneously be present at more
than one node in the system. Requests for CS entry are typically directed to
whichever node is the current token holder.
Raymond [25] introduced a token based mutual exclusion algorithm in which requests are sent, over a static spanning tree of the network, toward the token holder; this algorithm is resilient to non-adjacent node crashes and recoveries, but is not resilient to link failures. Chang et al. [7] extend Raymond's algorithm by imposing a logical direction on a sufficient number of links to induce a token oriented DAG in which, for every node i, there exists a directed path originating at i and terminating at the token holder. Allowing request messages to be sent over all links of the DAG provides resilience to link and site failures. However, this algorithm does not consider link recovery, an essential feature in a system of mobile nodes.
Dhamdhere and Kulkarni [10] show that the algorithm of [7] can suffer from deadlock and solve this problem by assigning a dynamically changing sequence number to each node, forming a total ordering of nodes in the system. The token holder always has the highest sequence number, and, by defining links to point from a node with lower to higher sequence number, a token oriented DAG is maintained. Due to link failures, a node i that wants to send a request for the token may find itself with no outgoing links to the token holder. In this situation, i floods the network with messages to build a temporary spanning tree. Once the token holder becomes part of such a spanning tree, the token is passed directly to node i along the tree, bypassing other requests. Since priority is given to nodes that lose a path to the token holder, it seems likely that other requesting nodes could be starved as long as link failures continue. Also, flooding in response to link failures and storing messages for delivery after link recovery make this algorithm ill-suited to the highly dynamic ad hoc environment.
Our token based algorithm combines ideas from several papers. The partial reversal technique from [13], used to maintain a destination oriented DAG in a packet radio network when the destination is static, is used in our algorithm to maintain a token oriented DAG with a dynamic destination. Like the algorithms of [25], [7], and [10], each node in our algorithm maintains a request queue containing the identifiers of neighboring nodes from which it has received requests for the token. Like [10], our algorithm totally orders nodes. The lowest node is always the current token holder, making it a "sink" toward which all requests are sent. Our algorithm also includes some new features. Each node dynamically chooses its lowest neighbor as its preferred link to the token holder. Nodes sense link changes to immediate neighbors and reroute requests based on the status of the previous preferred link to the token holder and the current contents of the local request queue. All requests reaching the token holder are treated symmetrically, so that requests are continually serviced while the DAG is being re-oriented and blocked requests are being rerouted.
3. Definitions
The system contains a set of n independent mobile nodes, communicating by message passing over a wireless network. Each mobile node runs an application process and a mutual exclusion process that communicate with each other to ensure that the node cycles between its REMAINDER section (not interested in the CS), its WAITING section (waiting for access to the CS), and its CRITICAL section. Assumptions on the mobile nodes and network are:
1. the nodes have unique node identifiers,
2. node failures do not occur,
3. communication links are bidirectional and FIFO,
4. a link-level protocol ensures that each node is aware of the set of nodes with which it can currently directly communicate by providing indications of link formations and failures,
5. incipient link failures are detectable, providing reliable communication on a per-hop basis, and
6. partitions of the network do not occur. (See Section 7 for a discussion of relaxing assumption 6.)
The rest of this section contains our formal definitions. We explicitly model only the mutual exclusion process at each node. Constraints on the behavior of the application processes and the network appear as conditions on executions. The system architecture is shown in Figure 2.
We assume the node identifiers are 0, 1, . . . , n − 1. Each node has a (mutual exclusion) process, modeled as a state machine, with the usual set of states, some of which are initial states, and a transition function. Each state contains a local variable that holds the node identifier and a local variable that holds the current neighbors of the node. The transition function is described in more detail shortly.
Figure 2. System architecture: at node i, the application process interacts with the mutual exclusion process through RequestCS and EnterCS; the mutual exclusion process communicates over the network.
A configuration describes the instantaneous state of the whole system; formally, it is a set of n states, one for each process. In an initial configuration, each state is an initial state and the neighbor variables describe a connected undirected graph.
A step of the process at node i is triggered by the occurrence of an input event. Input events are:
RequestCS_i: the application process on node i requests access to the CS, entering its WAITING section.
ReleaseCS_i: the application process on node i releases its access to the CS, entering its REMAINDER section.
Recv_i(j, m): node i receives message m from node j.
LinkUp_i(l): node i receives notification that the link l incident on i is now up.
LinkDown_i(l): node i receives notification that the link l incident on i is now down.
The effect of a step is to apply the process' transition function, taking as input the current state of the process and the input event, and producing as output a (possibly empty) set of output events and a new state for the process. Output events are:
EnterCS_i: the mutual exclusion process on node i informs its application process that it can enter the CRITICAL section.
Send_i(j, m): node i sends message m to node j.
The only constraint on the state produced by the transition function is that the neighbor set variable of i must be properly updated in response to a LinkUp or LinkDown event.
RequestCS, ReleaseCS, and EnterCS are called application events, while Recv, Send, LinkUp, and LinkDown are called network events.
An execution is a sequence of the form C_0, in_1, out_1, C_1, in_2, out_2, C_2, . . ., where the C_k's are configurations, the in_k's are input events, and the out_k's are sets of output events. An execution must end in a configuration if it is finite. A positive real number is associated with each in_k, representing the time at which that event occurs. An execution must satisfy a number of additional conditions, which we now list. The first set of conditions are basic "syntactic" ones.
C_0 is an initial configuration.
If in_k occurs at node i, then out_k and i's state in C_k are correct according to i's transition function operating on in_k and i's state in C_{k−1}.
The times assigned to the steps must be nondecreasing. If the execution is infinite, then the times must increase without bound. At most one step by each process can occur at a given time.
The next set of conditions require the application process to interact properly with the mutual exclusion process.
If in_k is RequestCS_i, then the previous application event at node i (if any) is ReleaseCS_i.
If in_k is ReleaseCS_i, then the previous application event at node i must be EnterCS_i.
informal description given above. First, we consider the mobility notication.
occurs at time t if and only if LinkUp j (l) occurs at time t, where
l joins i and j. Furthermore, LinkUp i (l) only occurs if j is currently not a
neighbor of i (according to i's neighbor variable). The analogous condition
holds for LinkDown.
A LinkDown never disconnects the graph.
Finally, we consider message delivery. There must exist a one-to-one and onto correspondence between the occurrences of Send_j(i, m) and Recv_i(j, m), for all i, j and m. This requirement implies that every message sent is received and that the network does not duplicate or corrupt messages nor deliver spurious messages. Furthermore, the correspondence must satisfy the following:
If Send_i(j, m) occurs at some time t, then the corresponding Recv_j(i, m) occurs at some later time t', and the link connecting i and j is continuously up between t and t'. This implies that a LinkDown event for link l cannot occur if any messages are in transit on l.
Now we can state the problem formally. In every execution, the following must hold:
If in_k is EnterCS_i, then the previous application event at node i must be RequestCS_i. I.e., CS access is only given to requesting nodes.
Mutual Exclusion: If in_k is EnterCS_i, then any previous EnterCS_j event must be followed by a ReleaseCS_j prior to in_k.
No Starvation: If there are only a finite number of LinkUp_i and LinkDown_i events, then if in_k is RequestCS_i, there is a following EnterCS_i.
For the last condition, the hypothesis that link changes cease is needed because an adversarial pattern of link changes can cause starvation.
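On any finite trace, the mutual exclusion condition can be checked mechanically. A small sketch (the trace format is our own, chosen for illustration):

def check_mutual_exclusion(trace):
    # trace: list of (event, node) pairs, event in {"Enter", "Release"}.
    # Verifies that at most one node is ever in the CS at a time.
    in_cs = None
    for event, node in trace:
        if event == "Enter":
            assert in_cs is None, f"node {node} entered while {in_cs} was in the CS"
            in_cs = node
        elif event == "Release":
            assert in_cs == node, f"node {node} released without holding the CS"
            in_cs = None
    return True

print(check_mutual_exclusion([("Enter", 2), ("Release", 2), ("Enter", 0), ("Release", 0)]))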
4. Reverse Link (RL) Mutual Exclusion Algorithm
In this section we first present the data structures maintained at each node in the system, followed by an overview of the algorithm, the algorithm pseudocode, and examples of algorithm operation. Throughout this section, data structures are described for node i, 0 ≤ i ≤ n − 1. Subscripts on data structures to indicate the node are only included when needed.
4.1. Data Structures
status: Indicates whether the node is in the WAITING, CRITICAL, or REMAINDER section. Initially, status = REMAINDER.
N: The set of all nodes in direct wireless contact with node i. Initially, N contains all of node i's neighbors.
myHeight: A three-tuple (h1, h2, i) representing the height of node i. Links are considered to be directed from nodes with higher height toward nodes with lower height, based on lexicographic ordering. E.g., if myHeight_1 > myHeight_2 in this ordering, then the link between these nodes would be directed from node 1 to node 2. Initially, at node 0, myHeight = (0, 0, 0); at every other node, myHeight is initialized so that the directed links form a DAG in which every node has a directed path to node 0.
height[j]: An array of tuples representing node i's view of myHeight_j for all j ∈ N. Initially, height[j] = myHeight_j for all j ∈ N. In node i's viewpoint, the link between i and j is incoming to node i if height[j] > myHeight, and outgoing from node i if height[j] < myHeight.
tokenHolder: Flag set to true if the node holds the token and set to false otherwise. Initially, tokenHolder = true if i = 0, and false otherwise.
next: When node i holds the token, next = i; otherwise next is the node on an outgoing link. Initially, next = 0 at node 0, and next is an outgoing neighbor otherwise.
Q: Queue containing identifiers of requesting neighbors. Operations on Q include Enqueue(), which enqueues an item only if it is not already on Q, Dequeue() with the usual FIFO semantics, and Delete(), which removes a specified item from Q, regardless of its location. Initially, Q is empty.
receivedLI[j]: Boolean array indicating whether a LinkInfo message has been received from node j, to which a Token message was recently sent. Any height information received at node i from a node j for which receivedLI[j] is false will not be recorded in height[j]. Initially, receivedLI[j] = true for all j ∈ N.
forming[j]: Boolean array set to true when the link to node j has been detected as forming and reset to false when the first LinkInfo message arrives from node j. Initially, forming[j] = false for all j ∈ N.
formHeight[j]: An array of tuples storing the value of myHeight when the new link to j was first detected. Initially, formHeight[j] = myHeight for all j ∈ N.
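The state just listed can be summarized in code. The sketch below is our own Python rendering of the initialization, not part of the original description; init_height stands for any assignment of height tuples forming a DAG in which every node has a directed path to node 0:

class RLNode:
    # Per-node state of the Reverse Link algorithm (initialization only).
    def __init__(self, i, neighbors, init_height):
        self.i = i
        self.status = "REMAINDER"
        self.N = set(neighbors)
        self.myHeight = init_height[i]            # (0, 0, 0) at node 0
        self.height = {j: init_height[j] for j in self.N}
        self.tokenHolder = (i == 0)
        # next = i at the token holder; otherwise an outgoing (lower) neighbor
        self.next = i if i == 0 else min(self.N, key=lambda j: init_height[j])
        self.Q = []                               # FIFO request queue
        self.receivedLI = {j: True for j in self.N}
        self.forming = {j: False for j in self.N}
        self.formHeight = {j: self.myHeight for j in self.N}

    def outgoing(self, j):
        # The link to j is outgoing iff j's height tuple is lexicographically
        # lower; Python compares tuples exactly this way.
        return self.height[j] < self.myHeight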
4.2. Overview of the RL Algorithm
The mutual exclusion algorithm is event-driven. An event at a node i consists of receiving a message from another node j ≠ i, or an indication of link failure or formation from the link layer, or an input from the application on node i to request or release the CS. Each message sent includes the current value of myHeight at the sender. Modules are assumed to be executed atomically. First, we describe the pseudocode triggered by events and then we describe the pseudocode for procedures.
Requesting and releasing the CS: When node i requests access to the CS, it enqueues its own identifier on Q and sets status to WAITING. If node i does not currently hold the token and i has a single element on its queue, it calls ForwardRequest() to send a Request message. If node i does hold the token, i can set status to CRITICAL and enter the CS, since it will be at the head of Q. When node i releases the CS, it calls GiveTokenToNext() to send a Token message if Q is non-empty, and sets status to REMAINDER.
Request messages: When a Request message sent by a neighboring node j is received at node i, i ignores the Request if receivedLI[j] is false. Otherwise, i changes height[j], and enqueues j on Q if the link between i and j is incoming at i. If Q is non-empty and status = REMAINDER, i calls GiveTokenToNext() if i holds the token. A non-token-holding node i calls RaiseHeight() if the link to j is now incoming and i has no outgoing links, or i calls ForwardRequest() if Q is non-empty and the link to next has reversed.
Token messages: When node i receives a Token message from some neighbor j, i lowers its height to be lower than that of the last token holder, node j, informs all its outgoing neighbors of its new height by sending LinkInfo messages, and calls GiveTokenToNext(). Node i also informs j of its new height so that j will know that i received the token.
LinkInfo messages: If receivedLI[j] is true when a LinkInfo message is received at node i from node j, j's height is saved in height[j]. If receivedLI[j] is false, i checks if the height of j in the message is what it was when i sent the Token message to j. If so, i sets receivedLI[j] to true. If forming[j] is true, the current value of myHeight is compared to the value of myHeight when the link to j was first detected, formHeight[j]. If myHeight and formHeight[j] are different, then a LinkInfo message is sent to j. Identifier j is added to N and forming[j] is set to false. If j is an element of Q and the link to j is outgoing, then j is deleted from Q. If node i has no outgoing links and is not the token holder, i calls RaiseHeight() so that an outgoing link will be formed. Otherwise, if Q is non-empty and the link to next has reversed, i calls ForwardRequest() since it must send another Request for the token.
Link failures: When node i senses the failure of a link to a neighboring node j, it removes j from N, sets receivedLI[j] to true, and, if j is an element of Q, deletes j from Q. Then, if i is not the token holder and i has no outgoing links, i calls RaiseHeight(). If node i is not the token holder, Q is non-empty, and the link to next has failed, i calls ForwardRequest() since it must send another Request for the token.
Link formation: When node i detects a new link to node j, i sends a LinkInfo message to j with myHeight, sets forming[j] to true, and sets formHeight[j] = myHeight.
Procedure ForwardRequest(): Selects node i's lowest-height neighbor to be next. Sends a Request message to next.
Procedure GiveTokenToNext(): Node i dequeues the first node on Q and sets next equal to this value. If next = i, i enters the CS. If next ≠ i, i lowers height[next] to (myHeight.h1, myHeight.h2 − 1, next), so any incoming Request messages will be sent to next, sets tokenHolder = false, sets receivedLI[next] to false, and then sends a Token message to next. If Q is non-empty after sending a Token message to next, a Request message is sent to next immediately following the Token message so the token will eventually be returned to i.
Procedure RaiseHeight(): Called at non-token-holding node i when i loses its last outgoing link. Node i raises its height (in lines 1-3) using the partial reversal method of [13] and informs all its neighbors of its height change with LinkInfo messages. All nodes on Q to which links are now outgoing are deleted from Q. If Q is not empty at this point, ForwardRequest() is called since i must send another Request for the token.
4.3. The RL Algorithm
When node i requests access to the CS:
1. status := WAITING
2. Enqueue(Q, i)
3. If (not tokenHolder) then
4.   If (|Q| = 1) then ForwardRequest()
5. Else GiveTokenToNext()

When node i releases the CS:
1. If (|Q| > 0) then GiveTokenToNext()
2. status := REMAINDER
When Request(h) received at node i from node j:
// h denotes j's height when the message was sent
1. If (receivedLI[j]) then
2.   height[j] := h // set i's view of j's height
3.   If (myHeight < height[j]) then Enqueue(Q, j)
4.   If (tokenHolder) then
5.     If ((status = REMAINDER) and (|Q| > 0)) then GiveTokenToNext()
6.   Else // not tokenHolder
7.     If (myHeight < height[k] for all k ∈ N) then RaiseHeight()
8.     Else if ((|Q| > 0) and (myHeight < height[next])) then
9.       ForwardRequest() // reroute request
When Token(h) received at node i from node j:
// h denotes j's height when the message was sent
1. tokenHolder := true
2. height[j] := h
3. Send LinkInfo(h.h1, h.h2 − 1, i) to all outgoing k ∈ N and to j
4. myHeight.h1 := h.h1
5. myHeight.h2 := h.h2 − 1 // lower my height
6. If (|Q| > 0) then GiveTokenToNext() Else next := i
When LinkInfo(h) received at node i from node j:
// h denotes j's height when the message was sent
1. N := N ∪ {j}
2. If ((forming[j]) and (myHeight ≠ formHeight[j])) then
3.   Send LinkInfo(myHeight) to j
4. forming[j] := false
5. If (receivedLI[j]) then height[j] := h
6. Else if (h = height[j]) then receivedLI[j] := true
7. If (myHeight > height[j]) then Delete(Q, j)
8. If ((myHeight < height[k] for all k ∈ N) and (not tokenHolder)) then RaiseHeight() // reroute request
9. Else if ((|Q| > 0) and (myHeight < height[next])) then ForwardRequest()
When failure of link to j detected at node i:
1. N := N − {j}
2. Delete(Q, j)
3. receivedLI[j] := true
4. If (not tokenHolder) then
5.   If (myHeight < height[k] for all k ∈ N) then RaiseHeight() // reroute request
6.   Else if ((|Q| > 0) and (next ∉ N)) then ForwardRequest()
When formation of link to j detected at node i:
1. Send LinkInfo(myHeight) to j
2. forming[j] := true
3. formHeight[j] := myHeight

Procedure ForwardRequest():
1. next := l, where height[l] = min{height[k] : k ∈ N}
2. Send Request(myHeight) to next
Procedure GiveTokenToNext(): // only called when |Q| > 0
1. next := Dequeue(Q)
2. If (next ≠ i) then
3.   tokenHolder := false
4.   height[next] := (myHeight.h1, myHeight.h2 − 1, next)
5.   receivedLI[next] := false
6.   Send Token(myHeight) to next
7.   If (|Q| > 0) then Send Request(myHeight) to next
8. Else // next = i
9.   status := CRITICAL
10.  Enter CS
Procedure RaiseHeight():
1. myHeight.h1 := 1 + min{height[k].h1 : k ∈ N}
2. S := {l ∈ N : height[l].h1 = myHeight.h1}
3. If (S ≠ ∅) then myHeight.h2 := min{height[l].h2 : l ∈ S} − 1
4. Send LinkInfo(myHeight) to all k ∈ N
// Raising own height can cause some links to become outgoing
5. For (all k ∈ N such that myHeight > height[k]) do Delete(Q, k)
// Must reroute request if queue non-empty, since node just had no outgoing links
6. If (|Q| > 0) then ForwardRequest()
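As a companion to the pseudocode, the two procedures that drive rerouting can be rendered in Python, continuing the RLNode sketch above (send is an assumed callback delivering a message to a neighbor):

def forward_request(node, send):
    # Select the lowest-height neighbor as next and ask it for the token.
    node.next = min(node.N, key=lambda k: node.height[k])
    send(node.next, ("Request", node.myHeight))

def raise_height(node, send):
    # Partial reversal [13]: rise just above the lowest neighbor level (h1),
    # then tuck under the neighbors already at that level (h2).
    h1 = 1 + min(node.height[k][0] for k in node.N)
    S = [k for k in node.N if node.height[k][0] == h1]
    h2 = min(node.height[k][1] for k in S) - 1 if S else node.myHeight[1]
    node.myHeight = (h1, h2, node.i)
    for k in node.N:
        send(k, ("LinkInfo", node.myHeight))
    # Raising our height may have turned links outgoing; drop those requests.
    node.Q = [k for k in node.Q
              if not (k in node.N and node.height[k] < node.myHeight)]
    if node.Q:
        forward_request(node, send)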
4.4. Examples of Algorithm Operation
We first discuss the case of a static network, followed by a dynamic network. An illustration of the algorithm on a static network (in which links do not fail or form) is depicted in Figure 3. Snapshots of the system configuration during algorithm execution are shown, with time increasing from 3(a) to 3(e). The direct wireless links are shown as dashed lines connecting circular nodes. The arrow on each wireless link points from the higher height node to the lower height node. The request queue at each node is depicted as a rectangle, the height is shown as a 3-tuple, and the token holder as a shaded circle. The next pointers are shown as solid arrows. Note that when a node holds the token, its next pointer is directed towards itself.
In Figure 3(a), nodes 2 and 3 have requested access to the CS (note that nodes 2 and 3 have enqueued themselves on Q_2 and Q_3) and have sent Request messages to node 0, which enqueued them on Q_0 in the order in which the Request messages were received. Part (b) depicts the system at a later time, where node 1 has requested access to the CS, and has sent a Request message to node 3 (note that 1 is enqueued on Q_1 and Q_3). Figure 3(c) shows the system configuration after node 0 has released the CS and has sent a Token message to node 3, followed by a Request sent by node 0 on behalf of node 2. Observe that the logical direction
Figure 3. Operation of the Reverse Link mutual exclusion algorithm on a static network, parts (a)-(e).
of the link between node 0 and node 3 changes from being directed away from node 3 in part (b), to being directed toward node 3 in part (c), when node 3 receives the Token message and lowers its height. Notice also that the next pointers of nodes 0 and 3 change from both nodes having next pointers directed toward node 0 in part (b) to both nodes having next pointers directed toward node 3 in part (c). Part (d) shows the system configuration after node 3 sent a Token message to node 1, followed by a Request message. The Request message was sent because node 3 received the Request message from node 0. Notice that the items at the head of the nodes' request queues in part (d) form a path from the token holder, node 1, to the sole remaining requester, node 2. Part (e) depicts the system configuration after Token messages have been passed from node 1 to node 3 to node 0, and from node 0 to node 2. Observe that the middle element, h2, of each node's myHeight tuple decreases by 1 for every hop the token travels, so that the token holder is always the lowest height node in the system.
We now consider the execution of the RL algorithm on a dynamic network. The height information allows each node i to keep track of the current logical direction of links to neighboring nodes, particularly to the node chosen to be next. If the link to next changes and |Q| > 0, node i must reroute its request by calling ForwardRequest().
Figure 4(a) shows the same snapshot of the system execution as is shown in Figure 3(a), with time increasing from 4(a) to 4(e). Figure 4(b) depicts the system configuration after node 3 has moved in relation to the other nodes in
Figure 4. Operation of the Reverse Link mutual exclusion algorithm on a dynamic network, parts (a)-(e).
the system, resulting in a network that is temporarily not token oriented, since node 3 has no outgoing links. Node 0 has adapted to the lost link to node 3 by removing 3 from its request queue. Node 2 takes no action as a result of the loss of its link to node 3, since the link to next_2 was not affected and node 2 still has one outgoing link. In part (c), node 3 has adapted to the loss of its link to node 0 by raising its height and has sent a Request message to node 1 (that has not yet arrived at node 1). Part (d) shows the system configuration after node 1 has received the Request message from node 3, has enqueued 3 on Q_1, and has raised its height due to the loss of its last outgoing link. In part (e), node 1 has propagated the Request received from node 3 by sending a Request to node 2, also informing node 2 of the change in its height. Node 2 subsequently enqueued 1 on Q_2, but did not raise its own height or send a Request, because node 2 has an intact link to next_2, node 0, to which it already sent an unfulfilled request.
5. Correctness of Reverse Link Algorithm
The following theorem holds because there is only one token in the system
at any time.
Theorem 1. The algorithm ensures mutual exclusion.
To prove no starvation, we first show that, after link changes cease, eventually the system reaches a "good" configuration, and then we apply a variant function argument.
We will show that after link changes cease, the logical directions on the links imparted by height values will eventually form a "token oriented" DAG. Since the height values of the nodes are totally ordered, there cannot be any cycles in the logical graph, and thus it is a DAG. The hard part is showing that this DAG is token oriented, defined next.
Definition 1. A node i is the token holder in a configuration if tokenHolder_i = true or if a Token message is in transit from node i to next_i.
Definition 2. The DAG is token oriented in a configuration if for every node i, 0 ≤ i ≤ n − 1, there exists a directed path originating at node i and terminating at the token holder.
To prove Lemma 3, that the DAG is eventually token oriented, we first show, in Lemma 1, that this condition is equivalent to the absence of "sink" nodes [13], as defined below. We then show, in Lemma 2, that eventually there are no more calls to RaiseHeight(). Throughout, we assume that eventually link changes cease.
Definition 3. A node i is a sink in a configuration if tokenHolder_i = false and myHeight < height[k] for all k ∈ N.
Lemma 1. In every configuration of every execution, the DAG is token oriented if and only if there are no sinks.
Proof: The only-if direction follows from the definition of a token oriented DAG. The if direction is proved by contradiction. Assume, in contradiction, that there exists a node i in a configuration such that tokenHolder_i = false and for which there is no directed path starting at i and ending at the token holder. Since there are no sinks, i must have at least one outgoing link that is incoming at some other node. Since the number of nodes is finite, the network is connected, and all links are logically directed such that no logical path can form a cycle, there must exist a directed path from i to the token holder, a contradiction.
To show that eventually there are no sinks (Lemma 3), we show that there are only a finite number of calls to RaiseHeight().
Lemma 2. In every execution with a finite number of link changes, there exists a finite number of calls to RaiseHeight().
Proof: In contradiction, consider an execution with a finite number of link changes but an infinite number of calls to RaiseHeight(). Then, after link changes cease, some node calls RaiseHeight() infinitely often. We first note that if one node calls RaiseHeight() infinitely often, then every node calls RaiseHeight() infinitely often. To see this, consider that a node i would call RaiseHeight() infinitely often only if it lost all its outgoing links infinitely often. But this would happen infinitely often at node i only if a neighboring node j raised its height infinitely often, and neighboring node j would only call RaiseHeight() infinitely often if its neighbor k raised its height infinitely often, and so on. However, Claim 1 shows that at least one node calls RaiseHeight() only a finite number of times.
Claim 1. No node that holds the token after the last link change ever calls RaiseHeight() subsequently.
Proof: Suppose the claim is false, and some node that holds the token after the last link change calls RaiseHeight() subsequently. Let i be the first node to do so. By the code, node i does not hold the token when it calls RaiseHeight(). Suppose that node i sends the token to neighboring node j at time t_1, setting its view of the link to j to be outgoing, and at a later time, t_3, node i calls RaiseHeight(). The reason i calls RaiseHeight() at time t_3 is that it lost its last outgoing link. Thus, at time t_2 between t_1 and t_3, the link between i and j has reversed direction in i's view from outgoing to incoming. By the code, the direction change at node i must be due to the receipt of a LinkInfo or Request message from node j. We discuss these cases separately below.
Case 1: The direction change at node i is due to the receipt of a LinkInfo message from node j at time t_2. By the code, when i sends the token to j at t_1, it sets receivedLI[j] to false. Therefore, when the LinkInfo message is received at i from j at time t_2, node i must have already reset receivedLI[j] to true, or i would still see the link to j as outgoing and would not call RaiseHeight() at time t_3. Since the direction change occurs upon receiving the LinkInfo message from j at time t_2, i must have received the LinkInfo message node j sent when it received the token from i before time t_2, by the FIFO assumption on message delivery. Then node j must have received the token and sent it to another node, k ≠ i, after which j raised its height and sent the LinkInfo message that node i received at time t_2. However, this violates our assumption that i is the first node to call RaiseHeight() after the last link change, a contradiction.
Case 2: The direction change at node i is due to the receipt of a Request message from node j at time t_2. By a similar argument to Case 1, any Request received from node j would be ignored at node i as long as receivedLI[j] is false. But this means that node j must have called RaiseHeight() after it received the token from node i and subsequently sent the Request received by i at time t_2. Again, this violates the assumption that i is the first node to call RaiseHeight() after the last link change, a contradiction.
Therefore, node i will not call RaiseHeight() at time t_3, and the claim is true.
Therefore, by Claim 1, there is only a finite number of calls to RaiseHeight() in any execution with a finite number of link changes.
Lemma 3 follows from Lemma 2, since if a node becomes a sink, it will eventually be informed via LinkInfo messages and will then call RaiseHeight().
Lemma 3. Once link changes cease, the logical direction on links imparted by height values will eventually always form a token oriented DAG.
Consider a node that is WAITING in an execution at some point after link changes and calls to RaiseHeight() have ceased. We first define the "request chain" of a node to be the path along which its request has propagated. Then we modify the variant function argument in [25] to show that the node eventually gets to enter the CS.
Definition 4. Given a configuration, a request chain for any node l with a non-empty request queue is the maximal length list of node identifiers p_1 = l, p_2, . . . , p_j such that for each i, 1 < i ≤ j:
p_i's request queue is not empty,
the link between p_{i−1} and p_i is outgoing at p_{i−1} and incoming at p_i,
no Request message is in transit from p_{i−1} to p_i, and
no Token message is in transit from p_i to p_{i−1}.
Lemma 4 gives useful information about what is going on at the end of a request chain:
Lemma 4. The following is true in every configuration: Let l be a node with a non-empty request queue and let p_1 = l, p_2, . . . , p_j be l's request chain. Then
(a) l is in Q_l iff l is WAITING,
(b) p_{i−1} is in Q_{p_i} for all i, 1 < i ≤ j, and
(c) either p_j is the token holder,
or a Token message is in transit to p_j,
or a Request message is in transit from p_j to next_{p_j},
or a LinkInfo message is in transit from next_{p_j} to p_j with next_{p_j} higher than p_j,
or next_{p_j} sees the link to p_j as failed.
Proof: By induction on the execution.
Property (a) can easily be shown to hold, since a node enqueues its own identifier when its application requests access to the CS, at which point it changes its status to WAITING. By the code, at no point will a node dequeue its own identifier until just before it enters the CS and sets its status to CRITICAL.
Properties (b) and (c) are vacuously true in the initial configuration, since no node has a non-empty queue.
Suppose (b) and (c) are true in the (t−1)st configuration, C_{t−1}, of the execution. It is possible to show these properties are true in the t-th configuration, C_t, by considering in turn every possibility for the t-th event. Most of the events applied to C_{t−1} are easily shown to yield a configuration C_t in which properties (b) and (c) are true. Here we discuss the events for which the outcome is less clear, by presenting the problematic cases that can appear to disrupt a request chain. We note that, in the following cases, non-token-holding nodes are often required to find an outgoing link due to link reversals or failures. It is not hard to show that a node i that is not the token holder can always find an outgoing link due to the performance of RaiseHeight().
Case 1: Node i receives a Request from node j and does not enqueue j on its request queue. To ensure that j's Request is not overlooked, causing possible starvation, we show that either a LinkInfo or a Token message is sent to j from i when a Request from j is received at i and j is not enqueued.
Case 1.1: receivedLI[j] is false at i. It must be that i sent the token to j in some previous configuration and i has not yet received the LinkInfo message that j must send to i upon receipt of the token. If the token is not in transit from i to j or held by j in C_{t−1}, then earlier j had the token and passed it on. The Request received by i was sent before the LinkInfo message that j must send to i upon receipt of the token. So if j is WAITING in C_{t−1}, it has already sent a newer Request, and properties (b) and (c) hold for this request chain in C_t by the inductive hypothesis.
Case 1.2: receivedLI[j] is true at i. Then if j is not enqueued on i's request queue, it must be that myHeight_i > h. Since j viewed i as outgoing when it sent the Request, either node i must have called RaiseHeight() before j was in N_i, or the relative heights of i and j changed between the time link (i, j) was first detected and the time j was added to N_i. In either case, node j must eventually receive a LinkInfo message from i and see that its link to next_j has reversed, in which case j will take action resulting in the eventual sending of another Request.
Case 2: Node i receives an input causing it to delete identifier j from its request queue. To ensure that j's Request is not forgotten when i calls Delete(Q, j), we show that either node j received a Token message prior to the deletion, in which case j's Request is satisfied, or node j is notified that the link to i failed, in which case j will take the appropriate action to reroute the request chain.
Case 2.1: Node i calls Delete(Q, j) after i receives a LinkInfo message from j indicating that i's link to j has become outgoing at i. Then, since i enqueued j, it must be that in some earlier configuration i saw the link to j as incoming. Since the receipt of the LinkInfo message from j caused the link to change from incoming to outgoing in i's view, it must be that the LinkInfo was sent by j when j received the token and lowered its height. If the token is not held by j in C_{t−1}, then earlier j had the token and passed it on. If j is WAITING in C_{t−1}, it has already sent a newer Request, and properties (b) and (c) hold for this request chain in C_t by the inductive hypothesis.
Case 2.2: Node i calls Delete(Q, j) after i received an indication that the link to j failed. Then node j must receive the same indication, in which case it can take appropriate action to advance any request chains.
Case 3: Node i receives an input which makes it see the link to next i as incoming
or failed. In this case, any request chains including node i in C t 1 end at i in C t .
We show that node i takes the correct action to propagate these request chains
by sending either a new Request or a LinkInfo message.
Case 3.1: Node i receives a LinkInfo message from neighbor j indicating
that i's link to j has become incoming at i. If the link to j was i's last outgoing
link, then in C_t i will call RaiseHeight(). Node i will delete the identifiers
of any nodes on outgoing links from its request queue. Node i will send a
LinkInfo message to each neighbor, including nodes whose identifiers were
removed from i's request queue. If i's request queue is non-empty it will call
ForwardRequest() and send a Request message to the node chosen as next_i.
Case 3.2: Node i receives an indication that the link to next_i has failed. In C_t, i
will take the same actions as it did in Case 3.1, when its link to next_i reversed.
Therefore, no action taken by node i can make properties (b) and (c) false,
and the lemma holds.
Lemma 5. Once link changes and calls to RaiseHeight() cease, for every configuration
in which a node l's request chain does not include the token holder,
there is a later configuration in which l's request chain does include the token
holder.
Proof: By Lemma 3, after link changes cease, eventually a token oriented
DAG will be formed. Consider a configuration after link changes and calls to
RaiseHeight() cease in which the DAG is token oriented, meaning that all LinkInfo
messages generated when nodes raise their heights have been delivered.
The proof is by contradiction. Assume node l's request chain never includes
the token holder. So the token can only be held by or be in transit to nodes that
are not in l's request chain. By our assumption on the execution, no LinkInfo
messages caused by a call to RaiseHeight() will be in transit to a node in l's request
chain, nor will any node in l's request chain detect a failed link to a neighboring
node. Therefore, by Lemma 4(c), a Request message must be in transit from
a node in l's request chain to a node that is not in l's request chain, and the
number of nodes in l's request chain will increase when the Request message is
received. At this point, l's request chain will either include the token holder,
another Request message will be in transit from a node in l's request chain to
a node that is not in l's request chain, or l's request chain will have joined the
request chain of some other node. While the number of nodes in l's request chain
increases, the number of nodes not in l's request chain decreases, since there are
a finite number of nodes in the system. So eventually l's request chain includes
all nodes. Therefore, if the token is not eventually contained in l's request chain,
it is not in the system, a contradiction.
Let l be a node that is WAITING after link changes and calls to Raise-
Height() cease. Given a configuration s in the execution, a function V_l for l is
defined to be the following vector of positive integers. Let p_1 (= l), p_2, ..., p_m be the nodes in
l's request chain. V_l(s) has either m or m+1 elements, depending
on whether a Request message is in transit from p_m or not. In either case, v_1 is
the position of p_1 (= l) in Q_l, and for 1 < j ≤ m, v_j is the position of p_{j-1} in
Q_{p_j}. (Positions are numbered in ascending order with 1 being the head of the
queue.) If a Request message is in transit, then V_l(s) has an additional element
v_{m+1}; otherwise V_l(s) has only m elements. These vectors are compared
lexicographically.
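In symbols, writing pos(p, Q) for the position of p in queue Q, the definition reads
(the value of v_{m+1} in the in-transit case is left abstract here):

    V_l(s) = ⟨v_1, ..., v_m⟩            if no Request is in transit from p_m,
    V_l(s) = ⟨v_1, ..., v_m, v_{m+1}⟩   if a Request is in transit from p_m,

where v_1 = pos(p_1, Q_l) and v_j = pos(p_{j-1}, Q_{p_j}) for 1 < j ≤ m, and the
vectors are compared lexicographically.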
Lemma 6. V_l is a variant function.
Proof: The key points to prove are:
(1) V_l never has more than n entries and every entry is between 1 and n, so
the range of V_l is well-founded.
(2) Most events can be easily seen not to increase V_l. Here we discuss the remaining
events.
When the Request message at the end of l's request chain is received by node
j from node p_m, l's request chain increases in length to m+1 and V_l decreases,
where the new element v_{m+1} is p_m's position in Q_j after the Request message is received.
When a Token message is received by the node p_m at the end of l's request
chain, it is either
- kept at p_m, so V_l decreases,
- or sent toward l, so V_l decreases,
- or sent away from l, followed by a Request message, so V_l decreases.
(3) To see that the events that cause V_l to decrease will continue to occur, consider
the following two cases:
Case 1: The token holder is not in l's request chain. By Lemma 5, eventually
the token holder will be in l's request chain.
Case 2: The token holder is in l's request chain. Since no node stays in the
CS forever, at some later time the token will be sent and received,
decreasing the value of V_l, by part (2) of this proof.
Once V_l equals ⟨1⟩, l enters the CS. We have:
Theorem 2. If link changes cease, then every request is eventually satisfied.
6. Simulation Results
In this section we discuss the static and dynamic performance of the Reverse
Link (RL) algorithm compared to a mutual exclusion algorithm designed
to operate on a static network. We simulated Raymond's token based mutual
exclusion algorithm [25] as if it were running on top of a \routing" layer that
always provided shortest path routes between nodes. In this section, we will refer
to this simulation as \Raymond's with routing" (RR). Raymond's algorithm was
used because it is the static algorithm from which the RL algorithm was adapted
and because it does not provide for link failures and recovery and must rely on
the routing layer to maintain logical paths if run in a dynamic network. In order
to make our results more generally applicable, we made best-case assumptions
about the underlying routing protocol used with Raymond's algorithm: that it
always provides shortest paths and its time and message complexity are zero. If
our simulation shows that the RL algorithm is better than the RR combination
in some scenario, then the RL algorithm will also be better than Raymond's
algorithm in that scenario when any real ad hoc routing algorithm is used. If
our simulation shows that the RL algorithm is worse than the RR combination
in some scenario, then it might or might not be worse in an actual situation,
depending on how much worse it is in the simulation and what are the costs of
the routing algorithm.
We simulated a 30 node system under various scenarios. We chose to simulate
on a 30 node system because for networks larger than 30 nodes the time
needed for simulation was very high. Also, we envision ad hoc networks to be
much smaller scale than wired networks like the Internet. Typical numbers of
nodes used for simulations of ad hoc networks range from 10 to 50 [4-6,15,18,26].
In all our experiments, each CS execution took one time unit and each message
delay was one time unit. Requests for the CS were modeled as a Poisson process
with arrival rate λ_req. Thus the time delay between when a node left the CS
and made its next request to enter the CS is an exponential random variable
with mean 1/λ_req time units. Link changes were modeled as a Poisson process with
arrival rate λ_mob. Thus the time delay between each change to the graph is an
exponential random variable with mean 1/λ_mob time units. Each change to the
graph consisted of the deletion of a link chosen at random (whose loss did not
disconnect the graph) and the formation of a link chosen at random.
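As an illustration of this workload model, the following sketch (ours, not the
authors' simulator; the rate constants are placeholders) draws the exponential
inter-event times and performs one random graph change:

    import random

    LAMBDA_REQ = 0.01  # CS request arrival rate (placeholder; the paper varies it from 1e-4 to 1)
    LAMBDA_MOB = 0.1   # link-change arrival rate (placeholder)

    def next_request_delay():
        # Exponential inter-request time with mean 1/LAMBDA_REQ time units.
        return random.expovariate(LAMBDA_REQ)

    def next_link_change_delay():
        # Exponential time between graph changes with mean 1/LAMBDA_MOB time units.
        return random.expovariate(LAMBDA_MOB)

    def change_graph(links, absent_links, is_connected):
        # One graph change: delete a random link whose loss keeps the graph
        # connected, then form a random absent link.
        for link in random.sample(links, len(links)):
            if is_connected([l for l in links if l != link]):
                links.remove(link)
                absent_links.append(link)
                break
        new_link = random.choice(absent_links)
        absent_links.remove(new_link)
        links.append(new_link)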
In each execution, we measured the average waiting time for CS entry, that
is, the average number of time units that nodes spent in their WAITING sections.
We also measured the average number of messages sent per CS entry.
We varied the load on the system (λ_req), the degree of mobility (λ_mob), and
the "connectivity" of the graph. Connectivity was measured as the percentage
of possible links that were present in the graph. Note that a clique on 30 nodes
has 435 (undirected) links.
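The link counts quoted below follow directly from this definition; a small
illustrative computation (ours) for a 30 node graph:

    def links_for_connectivity(n_nodes, percent):
        # A clique on n nodes has n(n-1)/2 undirected links; truncating the
        # product reproduces the counts used in this section: 10% -> 43,
        # 20% -> 87, 80% -> 348, 100% -> 435 for n = 30.
        clique = n_nodes * (n_nodes - 1) // 2
        return int(clique * percent / 100)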
In the graphs of our results, each plotted point represents the average of six
repetitions of the simulation. Thus in plots of average time per CS entry, each
point is the average of the averages from six executions, and similarly for plots
of average number of messages per CS entry.
For the RR simulations, we initially formed a random connected graph with
the desired number of links and then used breadth-first search to form a spanning
tree of the graph to play the part of the static virtual spanning tree over which
nodes communicate in Raymond's algorithm. After the spanning tree was formed,
we randomly permuted the graph while maintaining the desired connectivity and
then calculated the shortest paths from all nodes to their neighbors in the virtual
spanning tree. After this, we started the mutual exclusion algorithm and began
counting messages and waiting time per CS entry. When link changes occurred,
we did not measure the time or messages needed to recalculate shortest path
routes in the modified graph. We did measure any added time and distance that
the application messages traveled due to route changes, charging one message per
link traversed.
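The spanning tree construction in this setup can be sketched as follows (our
illustration, assuming the graph is given as an adjacency-list dictionary):

    from collections import deque

    def bfs_spanning_tree(adj, root):
        # Breadth-first search from `root`; returns the tree edges as a
        # parent map plus shortest-path hop counts from the root.
        parent = {root: None}
        dist = {root: 0}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return parent, dist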
For simulations of RL, we formed a random connected graph with the desired
number of links, initialized the node heights and link directions, and then started
the algorithm and performance measurements. When link changes occurred, the
time and messages needed to find new routes between nodes were included in the
overall cost of performance.
In this section, part (a) of each figure displays results when the graph is
static, part (b) when mobility is low, and part (c) when mobility is high.
Our choice for the value of the low mobility parameter λ_mob
corresponds to the situation where nodes remain stationary for a few tens of
seconds after moving and prior to making another move. Our choice for the
value of the high mobility parameter represents a much more volatile network,
where nodes remain static for only a few seconds between moves.
6.1. Average waiting time per CS entry
Figure 5. Load vs. Time/CS entry for (a) zero, (b) low, and (c) high mobility.
(Each panel plots Time Units/CS Entry against Load (Request Arrival Rate) for
RR and RL at 20% and 80% connectivity.)
Figure 5 plots the average number of time units elapsed between host request
and subsequent entry to the CS against values of λ_req increasing from 10^-4 (the
mean time between requests is 10^4 time units) to 1 (the mean time between
requests is 1) from left to right along the x axis. We chose the high load value
of λ_req because at this rate each node would have a request pending almost all
the time. The low load value of λ_req represents a much less busy network, with
requests rarely pending at all nodes at the same time. Plots are shown for runs
with 20% (87 links) and 80% (348 links) connectivity for both the RL and RR
simulations.
Figure 5 indicates that RL has better performance than RR in terms of
average waiting time per CS entry, up to a factor of six. The reason is that
Raymond's algorithm sends application messages over a static virtual spanning
tree; when a message is sent from a node to one of its neighbors in the virtual
spanning tree, it may actually be routed over a long distance, thus increasing the
time delay. In contrast, the RL algorithm uses accurate information about the
actual topology, resulting in less delay between each request and subsequent CS
entry.
Both algorithms show an increase in average waiting time per CS entry from
low to high load in Figure 5. The higher the load, the larger is the number of
other nodes that precede a given node into the CS.
The average waiting time for each CS entry reaches its peak for the RL
simulation at around 75 time units per CS entry under the highest load. This
is caused by an essentially round robin pattern of token traversal. However, the
average waiting time for the RL simulation in Figure 5(c) at the highest load
actually decreases under high mobility. This phenomenon may be due to the fact
that, at high loads, frequent link failures break the fair pattern in which the token
is received, causing some nodes to get the token more frequently.
Figure 5 also shows that the waiting time advantage of RL over RR increases
with increasing load and increasing mobility. The increased waiting time of RR
with increased load when the network connectivity is low is due to longer average
route lengths. In the simulation trials, the average route length roughly doubled
when the connectivity decreased from 80% to 20%. The performance gap between
waiting time for RL and RR is seen to a lesser degree at high connectivity,
when average route length in RR is lower. However, it is apparent that the RR
simulation suffers from the combined effects of higher contention and imposed
static spanning tree communication paths at high loads, while RL is mainly
affected by contention for the CS at high loads.
Finally, Figure 5 suggests that connectivity in the range tested is immaterial
to the behavior of the RL algorithm at high load, whereas a larger connectivity is
better for RR than a smaller connectivity at all loads. In order to further study
the effect of connectivity, we ran the experiments shown in Figure 6: the average
number of time units elapsed between host request and subsequent entry to the
CS is plotted against network connectivity increasing from 10% (43 links) to 100%
(435 links) along the x axis. Curves are plotted for low load and for high load
(where the mean time between requests is 1) for both the RL and RR simulations.
Figure 6. Connectivity vs. Time/CS entry for (a) zero, (b) low, and (c) high mobility.
(Each panel plots Time Units/CS Entry against Connectivity for RR and RL at
low and high load.)
Figure 6 confirms that connectivity does not affect the waiting time per CS
entry in the RL simulation at high load. At high load, the RL algorithm does not
exploit connectivity. When load is high, the RL simulation always sends request
messages over the path last traveled by the token, even if there is a shorter path to
the token when the request is made. At low load in RL, connectivity does affect
the waiting time per CS entry because request messages are not always sent over
the path last traveled by the token. This is because with lower load there is
sufficient time between requests for token movement to change link direction in
the vicinity of the token holder, an effect that increases with higher connectivity,
shortening request paths.
The waiting time for the RR algorithm decreases with increasing connec-
tivity, since the path lengths between neighbors in the virtual spanning tree
approach one. However, even with a clique, when shortest path lengths are all
one, the time for RR does not match that for RL. The reason is that the spanning
tree used by RR for all communication might have a relatively large diameter,
whereas in RL neighboring nodes are always in direct communication.
The results of the simulations in this section are summarized in Table 1.

                  Zero Mobility    Low Mobility    High Mobility
Connectivity      20%     80%      20%     80%      20%     80%
RR high load      185     107      185     140      294     290
RL high load
RR low load        17       8
RL low load         7       4

Table 1: Summary of time per CS entry.
6.2. Average number of messages per CS entry
The RR algorithm sends request and token messages along the virtual spanning
tree. Each message from a node to its virtual neighbor is converted into a
sequence of actual messages that traverse the (current) shortest path from the
sender to the recipient.
The RL algorithm sends Request and Token messages along the actual token
oriented DAG. In addition, as the token traverses a path, each node on that
path sends LinkInfo messages to all its outgoing neighbors. Additional LinkInfo
messages are sent, and propagated, when a link failure causes a node to lose its
last outgoing link.
Our experimental results reflect the relative number of routing messages for
RR vs. LinkInfo messages for RL. When interpreting these results, it is important
to remember that the simulation of the RR algorithm is not charged for messages
needed to recalculate the routes due to topology changes. Thus, if RL is better
than RR in some situation, it will certainly be better when routing messages are
charged to it, even if they are prorated. Also, if RR is better than RL in another
situation, depending on how much better it is, RL might be comparable or even
better than RR when routing messages are charged to RR.

Figure 7. Load vs. Messages/CS Entry for (a) zero, (b) low, and (c) high mobility.
(Each panel plots Messages/CS Entry against Load (Request Arrival Rate) for
RR and RL at 20% and 80% connectivity.)
Figure 7 plots the average number of messages received per CS execution
against values of λ_req ranging from 10^-4 (the mean time between requests is
10^4 time units) to 1 (the mean time between requests is 1) from left to right along
the x axis. Plots are shown for runs with 20% (87 links) and 80% (348 links)
connectivity for both the RL and RR simulations.
Figure 7(b) and (c) show that the RR algorithm sends fewer messages per
CS entry than the RL algorithm in all simulation trials with mobility, although
as load increases the message advantage of RR decreases markedly.
In all situations studied, except the RL simulation in the static case with
high connectivity, the number of messages per CS entry tends to decrease as load
increases. The reason is that, although the overall number of messages increases
with load in both algorithms, due to the additional token and request messages,
it increases less than linearly with the number of requests, and hence less than
linearly with the number of CS entries. In the extreme, at very high load, every
time the token moves, it is likely to cause a CS entry.
In the static case with high connectivity, the RL algorithm experiences a
threshold effect around load of .01: when load is less than .01, the number of
messages per CS entry is roughly constant at a lower value, and when the load is
above .01, the number of messages per CS entry is roughly constant at a higher
value. The threshold effect becomes less pronounced as connectivity decreases.
We conjecture that some qualitative behavior of the algorithm on a 30 node graph
changes when load increases from .001 to .01. This change may be attributed
to the observation that token movement more effectively shortens request path
length at high connectivity with low load. This is because at low load there is
sufficient time between requests for nodes to receive LinkInfo messages sent as
the token moves, causing nodes to send requests over direct links to the token
holder rather than over the last link on which they sent the token. This effect
is amplified at high connectivity because each node is more likely to be directly
connected to the token holder.
The RL algorithm sends more messages per CS entry than the RR algorithm
when mobility causes link changes, and the number of messages sent in the RL
algorithm grows very large under low loads, as can be observed in Figure 7(b) and
(c). When links fail and form, the RL algorithm sends many LinkInfo messages
to maintain the token oriented DAG, resulting in a higher message to CS entry
ratio at low loads when the degree of mobility remains constant. However, when
interpreting these results, it is important to note that the RL algorithm is being
charged for the cost of routing in the simulations with mobility, while the RR
simulation is not charged for routing.
Figure 8 shows the results of experiments designed to understand the effect
of connectivity on the number of messages per CS entry. In the figure, the
average number of messages per CS entry is plotted against network connectivity
increasing from 10% (43 links) to 100% (435 links) from left to right on the x
axis. Curves are plotted for low load and for high load (where the mean time
between requests is 1) for both the RL and RR simulations.
In the static case, the number of RL messages per CS entry increases linearly
with connectivity, for a fixed load. As connectivity increases, the number of
neighbors per node increases, resulting in more LinkInfo messages being sent as
the token travels.

Figure 8. Connectivity vs. Messages/CS Entry for (a) zero, (b) low, and (c) high mobility.
(Each panel plots Messages/CS Entry against Connectivity for RR and RL at
low and high load.)

However, the number of RR messages per CS entry decreases
(less than linearly) with connectivity, since the shortest path lengths between
neighbors in the virtual spanning tree decrease. In fact, our results for RR at
100% connectivity (when the virtual spanning tree is an actual spanning tree)
and high load match the performance of approximately 4 messages per CS entry
cited by Raymond [25] at high load.
Part (a) of Figure 8 shows that in the static case the RL algorithm uses
fewer messages per CS entry below 25% connectivity for high load and below
60% connectivity for low load.
Figure 8(b) and (c) show that, in the dynamic cases, the number of messages
per CS entry is little affected by connectivity for a fixed load. In the RL algorithm,
there are two opposing trends with increasing connectivity that appear to cancel
each other out: higher connectivity means more neighbors per node, which means
more LinkInfo messages will be sent with each failure. On the other hand, more
neighbors per node means that it is less likely for a link failure to be that of the
last outgoing link, and thus LinkInfo messages due to failure will propagate less.
For the RR case, the logarithmic scale on the y axis in Figure 8(c) hides the slight
decrease in messages per CS entry, making both curves appear flat.
The results of the simulations in this section are summarized in Table 2.

                  Zero Mobility    Low Mobility    High Mobility
Connectivity      20%     80%      20%     80%      20%      80%
RR high load       13       6       11       7
RR low load
RL low load        13      17      189     180     1900     1825

Table 2: Summary of messages per CS entry.
7. Conclusion and Discussion
We presented a distributed mutual exclusion algorithm designed to be aware
of and adapt to node mobility, along with a proof of correctness, and simulation
results comparing the performance of this algorithm to that of a static token based
mutual exclusion algorithm running on top of an ideal ad hoc routing protocol.
We assumed there were no partitions in the network throughout this paper for
simplicity; partitions can be handled in our algorithm by using a method similar
to that used in the TORA ad hoc routing protocol [22]. In [22], additional labels
are used to represent the heights of nodes, allowing nodes to detect, by recognition
of the originator of a chain of height increases, when a series of height changes
has occurred at all reachable nodes without encountering the "destination". A
similar partition detection mechanism could be incorporated into our mutual
exclusion algorithm at the expense of slightly larger messages.
Our algorithm compares favorably to the layered approach using an ad hoc
routing protocol, providing better average waiting time per CS entry in all tested
scenarios. Our simulation results indicate that in many situations the message
complexity per CS entry of our algorithm would not be greater than the message
cost incurred by a static mutual exclusion algorithm running on top of an ad hoc
routing algorithm, when messages of both the mutual exclusion algorithm and
the routing algorithm are counted.
Acknowledgements
We thank Savita Kini for many discussions on previous versions of the algo-
rithm, Soma Chaudhuri for careful reading and helpful comments on the liveness
proof, and Debra Elkins for helpful discussions.
--R
The slide mechanism with applications in dynamic networks.
Polynomial end to end communication.
Structuring distributed algorithms for mobile hosts.
A distance routing effect algorithm for mobility (DREAM)
A performance comparison of multi-hop wireless ad hoc network routing protocols
Query localization techniques for on-demand routing protocols in ad hoc networks
A fault tolerant algorithm for distributed mutual exclusion.
Routing and multicast in multihop
A distributed routing algorithm for mobile wireless networks.
A token based k-resilient mutual exclusion algorithm for distributed systems
Signal stability based adaptive routing (SSA) for ad-hoc mobile networks
Scheduling broadcasts in multihop radio networks.
Distributed algorithms for generating loop-free routes in networks with frequently changing topology
Dynamic source routing in ad hoc wireless networks.
A cluster-based approach for routing in dynamic networks
Reliable broadcast in mobile multihop packet networks.
A highly adaptive distributed routing algorithm for mobile wireless networks.
Highly dynamic destination-sequenced distance-vector routing for mobile computers
Multicast operation of the ad-hoc on-demand distance vector routing protocol
--TR
A tree-based algorithm for distributed mutual exclusion
The slide mechanism with applications in dynamic networks
A token based k-resilient mutual exclusion algorithm for distributed systems
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers
A distributed routing algorithm for mobile wireless networks
Efficient message ordering in dynamic networks
Reliable broadcast in mobile multihop packet networks
A cluster-based approach for routing in dynamic networks
Multicluster, mobile, multimedia radio network
Location-aided routing (LAR) in mobile ad hoc networks
A distance routing effect algorithm for mobility (DREAM)
A performance comparison of multi-hop wireless ad hoc network routing protocols
Query localization techniques for on-demand routing protocols in ad hoc networks
Scenario-based performance analysis of routing protocols for mobile ad-hoc networks
Multicast operation of the ad-hoc on-demand distance vector routing protocol
Ad-hoc On-Demand Distance Vector Routing
A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks
--CTR
Chen , Jennifer L. Welch, Self-stabilizing mutual exclusion using tokens in mobile ad hoc networks, Proceedings of the 6th international workshop on Discrete algorithms and methods for mobile computing and communications, September 28-28, 2002, Atlanta, Georgia, USA
Chen , Jennifer L. Welch, Self-stabilizing dynamic mutual exclusion for mobile ad hoc networks, Journal of Parallel and Distributed Computing, v.65 n.9, p.1072-1089, September 2005
Djibo Karimou, Jean Frédéric Myoupo, An Application of an Initialization Protocol to Permutation Routing in a Single-Hop Mobile Ad Hoc Networks, The Journal of Supercomputing, v.31 n.3, p.215-226, March 2005
Emmanuelle Anceaume , Ajoy K. Datta , Maria Gradinariu , Gwendal Simon, Publish/subscribe scheme for mobile networks, Proceedings of the second ACM international workshop on Principles of mobile computing, October 30-31, 2002, Toulouse, France
Gruia-Catalin Roman , Jamie Payton, A Termination Detection Protocol for Use in Mobile Ad Hoc Networks, Automated Software Engineering, v.12 n.1, p.81-99, January 2005
M. Benchaba , A. Bouabdallah , N. Badache , M. Ahmed-Nacer, Distributed mutual exclusion algorithms in mobile ad hoc networks: an overview, ACM SIGOPS Operating Systems Review, v.38 n.1, p.74-89, January 2004 | mobile computing;mutual exclusion;distributed algorithm;ad hoc network |
506896 | Replication requirements in mobile environments. | Replication is extremely important in mobile environments because nomadic users require local copies of important data. However, today's replication systems are not "mobile-ready". Instead of improving the mobile user's environment, the replication system actually hinders mobility and complicates mobile operation. Designed for stationary environments, the replication services do not and cannot provide mobile users with the capabilities they require. Replication in mobile environments requires fundamentally different solutions than those previously proposed, because nomadicity presents a fundamentally new and different computing paradigm. Here we outline the requirements that mobility places on the replication service, and briefly describe ROAM, a system designed to meet those requirements. | Introduction
Mobile computing is rapidly becoming standard
in all types of environments: academic, com-
mercial, and private. Widespread mobility impacts
multiple arenas, but one of particular importance
is data replication. Replication is especially
important in mobile environments, since
disconnected or poorly connected machines must
rely primarily on local resources. The monetary
costs of communication when mobile, combined
with the lower bandwidth, higher latency, and reduced
availability, effectively require that important
data be stored locally on the mobile machine.
In the case of shared data, between multiple mobile
users or between mobile and stationary ma-
chines, replication is often the best and sometimes
the only viable approach.
Many replication solutions [4,16] assume a
static infrastructure; that is, the connections
This work was sponsored by the Advanced Research
Projects Agency under contract DABT63-94-C-0080.
Gerald Popek is also affiliated with CarsDirect.com.
themselves may be transient but the connection
location and the set of possible synchronization
partners always remain the same. However, mobile
users are by definition not static, and a replication
service that forces them to adjust to a
static infrastructure hinders mobility rather than
enables it. Extraordinary actions, such as long
distance telephone calls over low-bandwidth links,
are necessary for users to conform to the underlying
static model, costing additional time
and money while providing a degraded service.
Additionally, mobile users have di-culty inter-operating
with other mobile users, because communication
patterns and topologies are typically
predefined according to the underlying infrastruc-
ture. Often, direct synchronization between mobile
users is simply not permitted.
Other systems [2,14,18] have simply traded the
above communication problem for another one:
scaling. They provide the ability for any-to-
any synchronization, but their model suffers from
inherent scaling problems, limiting its usability
in real environments. Good scaling behavior is
very important in the mobile scenario. Mobile
users clearly require local replicas on their mobile
machines. Yet, replicas must also be stored
in the o-ce environment for reliability, intra-
o-ce use by non-mobile personnel, and system-
administration activities like backups. Addition-
ally, typical methods for reducing replication fac-
tors, such as local area network sharing tech-
niques, are simply not feasible in the mobile con-
text. Mobile users require local replicas of critical
information, and in most cases desire local access
to non-critical objects as well, for cost and performance
reasons. The inability to scale well is as
large an obstacle to the mobile user as the restriction
of a static infrastructure discussed above.
The main problem is that mobile users are
replicating data using systems that were not designed
for mobility. As such, instead of the replication
system improving the state of mobile com-
puting, it actually hinders mobility, as users find
themselves forced to adjust their physical motion
and computing needs to better match what the
system expects. This paper outlines the requirements
of a replication service designed for the mobile
context. We conclude with a description of
Roam, a replication solution redesigned especially
for mobile computing. Built using the Ward architecture
[11], it enables rather than hinders mobil-
ity, and provides a replication environment truly
suited to mobile environments.
2. Replication Requirements
Mobile users have special requirements above
and beyond those of simple replication required
by anyone wishing to share data. Here we discuss
some of the requirements that are particular
to mobile use: any-to-any communication, larger
replication factors, detailed controls over replication
behavior, and the lack of pre-motion actions.
We omit discussion of well-understood ideas, such
as the case for optimistic replication, discussed
in [2,3,5,17].
2.1. Any-to-any communication
By definition, mobile users change their geographic
location. As such, it cannot be predicted a
priori what machines will be geographically collocated
at any given time. Given that it is typically
cheaper, faster, and more efficient to communicate
with a local partner rather than a remote one, mobile
users want the ability to directly communicate
and synchronize with whomever is "nearby." Consistency
can be correctly maintained even if two
machines cannot directly synchronize with each
other, as demonstrated by systems based on the
client-server model [4,16], but local synchronization
increases usability and the level of functionality
while decreasing the inherent synchronization
cost. Users who are geographically collocated
don't want updates to eventually propagate
through a long-distance, sub-optimal path: the
two machines are next to each other, and the synchronization
should be instantaneous.
Since users expect that nearby machines should
synchronize with each other quickly and efficiently,
and it cannot be predicted which machines
will be geographically collocated at any point in
the future, a replication model capable of supporting
any-to-any communication is required. That
is, the model must allow any machine to communicate
with any other machine; there can be no
second-class clients in the system.
Any-to-any communication is also required in
other mobile arenas, such as in appliance mobility
[6], the motion from device to device or system
to system. For instance, given a desktop, a lap-
top, and a palmtop, it is unlikely that one would
want to impose a strict client-server relationship
between the three; rather, one would want each to
be able to communicate with any of the others.
Providing any-to-any communication is equivalent
to using a peer-to-peer replication model [10,
14,18]; if anyone can directly synchronize with
anyone else, then everyone must by definition be
equals, at least with respect to update-generation
abilities. Some, however, have argued against
peer models in mobile environments because of
the relative insecurity regarding the physical devices
themselves|for example, laptops are often
stolen. The argument is that since mobile computers
are physically less secure, they should be
\second-class" citizens with respect to the highly
secure servers located behind locked doors [15].
The class-based distinction is intended to provide
improved security by limiting the potential security
breach to only a second-class object.
The argument is based on the assumption that
security features must be encapsulated within the
peer model, and therefore unauthorized access to
any peer thwarts all security barriers and mecha-
nisms. However, systems such as Truffles [13]
have demonstrated that security policies can be
modularized and logically situated around a peer
replication framework while still remaining independent
of the replication system. Truffles,
an extension to the peer-based systems Ficus [2]
and Rumor [14], incorporates encryption-based
authentication and over-the-wire privacy and integrity
services to increase a replica's confidence
in its peers. Truffles further supports protected
definition and modification of security policies.
For example, part of the security policy could
be to only accept new file versions from specific
(authenticated) replicas, which is effectively the
degree of security provided by the "second-class"
replicas mentioned above.
With such an architecture, the problems caused
by unauthorized access to a peer replica are no different
from the unauthorized access of a client in a
client-server model. Thus, the question of update-
exchange topologies (any-to-any as compared to a
more stylized, rigid structure as in client-server
models) can be dealt with independently of the
security issue and the question of how to enforce
proper security controls.
2.2. Larger replication factors
Most replication systems only provide for a
handful of replicas of any given object. Ad-
ditionally, peer algorithms have never traditionally
scaled well. Finally, some have argued that
peer solutions simply by their nature cannot scale
well [15].
However, while mobile environments seem to
require a peer-based solution (described above),
they also seem to negate the assumption that a
handful of replicas is enough. While we do not
claim a need for thousands of writable copies, it
does seem likely that the environments common
today and envisioned for the near future will require
larger replication factors than current systems
allow.
First and foremost, each mobile user requires
a local replica on their laptop, doubling replication
factors when data is stored both on the user's
desktop and laptop. Additionally, although replication
factors can often be minimized in office environments
due to LAN-style sharing and remote-access
capabilities, such network-based file sharing
cannot be utilized in mobile environments due to
the frequency of network partitions and the wide
range of available bandwidth and transfer latency.
Second, consider the case of appliance mobility.
The above discussion assumes that each user has
one static machine and one mobile machine. The
future will see the use of many more "smart" devices
capable of storing replicated data. Palmtop
computers are becoming more common, and there
is even a wristwatch that can download calendar
data from another machine. Researchers [19] have
built systems that allow laptop and palmtop machines
to share data dynamically and opportunis-
tically. It is not difficult to imagine other devices
in the near future having the capability to store
and conceivably update replicated data; such devices
potentially increase replication factors dramatically.
Finally, some have argued the need for larger
replication factors independent of the mobile sce-
nario, such as in the case of air traffic control [9].
Other scenarios possibly requiring larger replication
factors include stock exchanges, network
routing, airline reservation systems, and military
command and control.
Read-only strategies and other class-based techniques
cannot adequately solve the scaling prob-
lem, at least in the mobile scenario. Class-based
solutions are not applicable to mobility, for the
reasons described above (Section 2.1). Read-only
strategies are not viable solutions because they
force users to pre-select the writable replicas beforehand
and limit the number of writable copies.
In general one cannot predict which replicas require
write-access and which ones do not. We
must provide the ability for all replicas to generate
updates, even though some may never do
so.
2.3. Detailed replication controls
By definition, a replication service provides
users with some degree of replication control:
a method of indicating what objects they want
replicated. Many systems provide replication on a
large-granularity basis, meaning that users requiring
one portion of the container must locally replicate
the entire container. Such systems are perhaps
adequate in stationary environments, when
users have access to large disk pools and network
resources, but replication control becomes vastly
more important to mobile users. Nomadic users
do not in general have access to off-machine re-
sources, and therefore objects that are not locally
stored are effectively inaccessible. Everything the
user requires must be replicated locally, which becomes
problematic when the container is large.
Replicating a large-granularity container means
that some of the replicated objects will be deemed
unimportant to the particular user. Unimportant
data occupies otherwise usable disk space,
which cannot be used for more critical objects. In
the mobile context, where network disconnections
are commonplace, important data that cannot be
stored locally causes problems ranging from minor
inconveniences to complete stoppages of work
and productivity, as described by Kuenning [7].
Kuenning's studies of user behavior indicate that
the set of required data can in fact be completely
stored locally, but only if the underlying replication
service provides the appropriate flexibility to
individually select objects for replication. Users
and automated tools therefore require fairly detailed
controls over what objects are replicated,
because without them mobile users cannot adequately
function.
2.4. Pre-motion actions
One possible design point would have users
\register" themselves as nomads for a specic time
duration before becoming mobile. In doing so,
the control structures and algorithms of the replication
system could be greatly simplified; users
would act as if they were stationary, and register
their motion as the unusual case. For instance,
suppose a user was taking a three-day trip from
Los Angeles to New York. Before traveling, machines
in Los Angeles and New York could exchange
state to "re-configure" the user's portable
to correctly interact with the machines in New
York. Since replication requires underlying distributed
algorithms, part of the reconfiguration
process would require changing and saving the distributed
state stored on the portable, to ensure
correct algorithm execution.
However, such a design policy drastically restricts
the way in which mobility can occur, and
does not match with the reality of mobile use.
Mobility cannot always be predicted or scheduled.
Often the chaos of real life causes unpredicted mo-
bility: the car fails en route to work, freeway traffic
causes unforeseeable delays, a child has to be
picked up early from school, a family emergency
occurs, or weather delays travel plans. Users are
often forced to become mobile earlier or remain
mobile longer than they had initially intended. In
general, we cannot require that users know a priori
either when they will become mobile or for
how long.
Additionally, this design policy makes underlying
assumptions about the connectivity and accessibility
of machines in the two affected geographic
areas: Los Angeles and New York, in the above
example. It assumes that before mobility occurs,
the necessary machines are all accessible so the
state-transformation operation can occur. Inaccessibility
of any participant in this process blocks
the user's mobility. Such a policy seems overly re-
strictive, and does not match the reality of mobile
use. Perhaps a user wants to change geographic
locations precisely because a local machine is un-
available, or perhaps a user needs to become mobile
at an instant when connectivity is down between
the multiple required sites. Since neither
mobility nor connectivity can be predicted, one
cannot make assumptions on the combination of
the two.
For these reasons, we believe that solutions that
require \pre-motion" actions are not viable in the
mobile scenario. Pre-motion actions force users
to adapt to the system rather than having the
system support the desired user behavior. Any
real solution must provide the type of "get-up and
go" functionality required by people for everyday
use.
3. Roam
Roam is a system designed to meet the above
set of requirements. It is based on the Ward
model [11] and is currently being implemented and
tested at the University of California at Los Angeles.
3.1. Ward model
The Ward model combines classical elements of
both the traditional peer-to-peer and client-server
models, yielding a solution that scales well and
provides replication flexibility, allowing dynamic
reconfiguration of the synchronization topology.
The model's main grouping mechanism is the
ward, or Wide Area Replication Domain. A ward
is a collection of "nearby" machines, possibly only
loosely connected. The definition of "nearby" depends
on factors such as geographic location, expected
network connectivity, bandwidth, latency,
and cost; see [12] for a full discussion of these issues.
Wards are created as replicas are added to the
system: each new replica chooses whether to join
an existing ward or form a new one. We believe
that it is possible to automate the assignment of
ward membership, but as the issues involved are
complex, we have avoided attempting to do so in
the current system. Instead, this decision is controlled
by a human, such as a system administrator
or knowledgeable user. If necessary, the decision
can be altered later by using ward-changing
utilities.
Although all members of the ward are equal
peers, the ward has a designated ward master,
similar to a server in a client-server model but
with several important differences:
Since all ward members are peers, any two
ward members can directly synchronize with
one another. Typical client-server solutions
do not allow client-to-client synchronization.
Whether by design or by accident, mobile users
will often encounter other mobile users; in such
cases, direct access to the other ward member
may be easier, cheaper and more efficient than
access to the ward master.
Since all ward members are peers, any ward
member can serve as the ward master. Automatic
re-election and ward-master reconfigura-
tion can occur should the ward master fail or
become unavailable, and algorithms exist to resolve
multiple-master scenarios. Correctness is
not affected by a transient ward master fail-
ure, but the system maintains better consistency
if the ward master is typically available
and accessible. Since neither an inaccessible
ward master nor multiple ward masters affects
overall system correctness (see Section 3.2), the
re-election problem is considerably easier than
related distributed re-election problems.
The ward master is not required to store actual
data for all intra-ward objects, though it must
be able to identify (i.e. name) the complete set.
Most client-server strategies force the server to
store a superset of each client's data.
The ward master is the ward's only link with
other wards; that is, only the ward master is
aware of other replicas outside the ward. This
is one manner in which the ward model achieves
Figure 1. The basic ward architecture. Overlapped members
are a mobility feature (Section 3.4).
good scaling: by limiting the amount of knowledge
stored at individual replicas. Traditional
peer models force every replica to learn about
other replicas' existence; in the ward model, replicas
are only knowledgeable about the other replicas
within their own ward. In fact, most replicas
are completely unaware of the very existence of
other wards.
All ward masters belong to a higher-level ward,
forming a two-level hierarchical model. 1 Ward
masters act on their ward's behalf by bringing
new updates into the ward, exporting others out of
the ward, and gossiping about all known updates.
Consistency is maintained across all replicas by
having ward masters communicate directly with
each other and allowing information to propagate
independently within each ward. Figure 1 illustrates
the basic architecture, as well as advanced
features discussed in later sections.
Wards are dynamically formed when replicas
are created, and are dynamically maintained as
suitable ward-member candidates change. Ward
destruction occurs automatically when the last
replica in a given ward is destroyed.
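To make the grouping concrete, the following toy sketch (the class and
method names are ours for illustration, not Roam's actual interfaces)
captures the ward bookkeeping described above:

    class Replica:
        def __init__(self, replica_id, local_set=()):
            self.replica_id = replica_id
            self.local_set = set(local_set)  # objects stored at this replica
            self.ward = None

    class Ward:
        # A Wide Area Replication Domain: a set of peer replicas, one of
        # which serves as master and alone knows about other wards.
        def __init__(self):
            self.members = []
            self.master = None

        def join(self, replica):
            self.members.append(replica)
            replica.ward = self
            if self.master is None:
                self.master = replica  # first member acts as master

        def leave(self, replica):
            self.members.remove(replica)
            if self.master is replica:
                # simplistic stand-in for the re-election described above
                self.master = self.members[0] if self.members else None

        def ward_set(self):
            # The union of the members' local sets; the master must be
            # able to name (though not store) every object in it.
            result = set()
            for m in self.members:
                result |= m.local_set
            return result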
3.2. System correctness
An important feature of the ward model is that
system correctness does not depend on having precisely
one master per ward. Even during recon-
guration, updated les will
ow between replicas
without loss of information or other incorrect
behavior. For most purposes, the ward master
is simply another replica. Whether communicating
within its own ward or with other wards, the
master maintains consistency using the same algorithms
as the non-master replicas. Thus, the propagation
of information within and among wards
follows from the correctness of these algorithms,
1 The rationale behind the two-level hierarchy and its impact
on scaling is discussed in Section 3.5.
first described in [1].
If a ward master becomes temporarily unavail-
able, information will continue to propagate between
other ward members, due to the peer model.
However, information will not usually propagate
to other wards until the master returns. An exception
to this rule will occur if a ward member temporarily
or permanently moves to another ward,
as described in Section 3.4, carrying data with it.
If the master suffers a permanent failure, a new
master can be elected. We must then demonstrate
that correctness will not suffer during the transition
to the new master. Correctness will be violated
either if the failed master had some information
that cannot be reconstructed, or if the failed
master's participation is required for the completion
of some distributed algorithm. The first case
can occur only if the lost information had been
created at the master and had not yet propagated
to another replica. In this case, the lost information
cannot affect correctness because the situation
is the same as if it had never existed. The second
case is handled by distributed failure-recovery
algorithms that are invoked when an administrator
declares the old master as unrecoverable.
If a new ward master is elected, there is a possibility
of creating multiple masters. Correctness is
not affected in this case because the master does
not play any special role in the algorithms. The
purpose of a ward master is not to coordinate the
behavior of other ward members, but rather to
serve as a conduit for information flow between
wards. Multiple masters, like overlapped members
(Section 3.4.2), will merely provide another
communication path between wards. Since the
peer-to-peer algorithms assume arbitrary communication
patterns, correctness will not be affected
by multiple ward masters.
3.3. Flexibility in the model
Replication flexibility is an important feature
of the ward model. The set of data stored within
each ward, called the ward set, is dynamically ad-
justable, as is the set of ward members themselves.
As ward members change their data demands and
alter what replicated data they store locally, the
ward set changes. Similarly, as mobile machines
join or leave the ward, the set of ward participants
changes. Both the ward set and ward membership
are locally recorded and are replicated in an optimistic
fashion.
Additionally, each ward member, including the
ward master, can locally store a different subset
of the ward set. Such replication flexibility,
called selective replication [10], provides improved
efficiency and resource utilization: ward members
locally store only those objects that they actively
require. Replication decisions can be made manually
or with automated tools [5,8].
Since the ward set varies dynamically, different
wards might store different sets: not all ward
sets will be equivalent. In essence, the model provides
selective replication between wards them-
selves. The reconciliation topologies and algorithms
[10] apply equally well within a single ward
and between ward masters. Briefly, the algorithms
provide that machines communicate with multiple
partners to ensure that each data object is synchronized
directly with another replica. Addition-
ally, the data synchronization algorithms support
the reconciliation of non-local data via a third-party
data-storage site, allowing the ward master
to reconcile data that is not stored locally but is
stored somewhere within the ward.
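In outline, the master's part in this process might look like the following
sketch, which builds on the toy classes above; sync_direct and
sync_via_storage_site are hypothetical stand-ins for the actual
reconciliation calls:

    def reconcile_ward_set(ward):
        # The master reconciles every object in the ward set, delegating
        # objects it does not store locally to a member that stores them.
        master = ward.master
        for obj in sorted(ward.ward_set()):
            if obj in master.local_set:
                sync_direct(master, obj)
            else:
                holder = next(m for m in ward.members if obj in m.local_set)
                sync_via_storage_site(master, holder, obj)

    def sync_direct(replica, obj):
        print(f"{replica.replica_id}: reconcile {obj} against a peer replica")

    def sync_via_storage_site(master, holder, obj):
        print(f"{master.replica_id}: reconcile {obj} via third party {holder.replica_id}")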
3.4. Support for mobility
The model supports two types of mobility.
Intra-ward mobility occurs when machines within
the same ward become mobile within a limited
geographic area; the machines encountered are all
ward members. Since ward members are peers, direct
communication is possible with any encountered
machine. Intra-ward mobility might occur
within a building, when traveling to a co-worker's
house, or at a local coffee shop.
Perhaps more interesting, inter-ward mobility
occurs when users travel (with their data) to another
geographic region, encountering machines
from another ward. Examples include businessmen
traveling to remote offices and distant collaborators
meeting at a common conference.
Inter-ward mobility raises two main issues.
First, recall that due to the model's replication
flexibility, two wards might not have identical
ward sets. Thus, the mobile machine may store
data objects not kept in the new ward, and vice-
versa. Second, consider the typical patterns of
mobility. Often users travel away from their
\home location" for only a short time. The system
would perform poorly if such transient mobile
actions required global changes in data structures
across multiple wards. On the other hand, mobile
users occasionally spend long periods of time
at other locations, either permanently or semi-
permanently changing their definition of "home."
In these scenarios, users should be provided with
the same quality of service (in terms of local performance
and time to synchronize data) as they
experienced in their previous "home".
Our solution resolves both issues by defining
two types of inter-ward mobility, short-term
(transient) and long-term (semi-permanent), and
providing the ability to transparently and automatically
upgrade from the former to the latter.
The two operations are called ward overlapping
and ward changing respectively. Collectively, the
two are called ward motion and enable peer-to-
peer communication between any two replicas in
the ward model, regardless of their ward membership.
3.4.1. Ward changing
Ward changing involves a long-term, perhaps
permanent, change in ward membership. The
moving replica physically changes its notion of
its \home" ward, forgetting all information from
the previous ward; similarly, the other participants
in the old and new wards alter their notion
of current membership. Ward membership information
is maintained using the same optimistic
algorithms that are used for replicating data, so
that the problem of tracking membership in often-
disconnected environments is straightforward.
s
The addition of a new ward member may
change the ward set. Since the ward master is
responsible for the inter-ward synchronization of
all data in the ward set, the ward set must expand
to properly encompass the replicated data
stored at the moving replica. Similarly, the ward
set at the old ward may shrink in size, as the ward
set is dynamically and optimistically recalculated
when ward membership changes. The ward-set
changes propagate to other ward masters in an op-
timistic, \need-to-know" fashion so that only the
ward masters that care about the changes learn
of them. Since both ward sets can potentially
change, and these changes are eventually propagated
to other ward masters, ward changing can
be a heavyweight operation. However, users benet
because all local data can be synchronized completely
within the local ward, giving users the best
possible quality of service and reconciliation performance
3.4.2. Ward overlapping
In contrast, ward overlapping is intended as a
very lightweight mechanism, and causes no global
changes within the system. Only the new ward
is affected by the operation. The localization of
changes makes it a lightweight operation both to
perform and to undo.
Ward overlapping allows simultaneous multi-
ward membership, enabling direct communication
with the members of each ward. To make the
mechanism lightweight, we avoid changing the
ward sets by making the new replica an "over-
lapped" member instead of a full-fledged partici-
pant. Ward members (except for the ward master)
cannot distinguish between real and overlapped
members; the only difference is in the management
of the ward set. Instead of merging the existing
ward set with the data stored on the mobile
machine, the ward set remains unaltered. Data
shared between the mobile machine and ward set
can be reconciled locally with members of the new
ward. However, data outside the new ward cannot
be reconciled locally, and must either temporarily
remain unsynchronized or else be reconciled with
the original home ward.
3.4.3. Ward motion summary
When a replica enters another ward, there are
only two possibilities: the ward set can change
or remain the same. The former creates a
performance-improving but heavyweight solution;
the latter causes a moderate performance degradation
when synchronizing data not stored in the
new ward but provides a very lightweight solution
for transient mobile situations. Since both are operationally
equivalent, the system can transparently
upgrade from overlapping to changing if the
motion seems more permanent than first expected.
Additionally, since ward formation is itself dy-
namic, users can easily form mobile workgroups
by identifying a set of mobile replicas as a new
(possibly temporary) ward. By using ward over-
lapping, mobile workgroups can be formed without
leaving the old wards. Ward motion and dynamic
ward formation and destruction allow easy
and straightforward communication between any
set of replicas in the entire system.
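The operational difference between the two forms of ward motion can be
sketched as follows, again using the toy classes from above (Roam itself
propagates the resulting ward-set changes optimistically):

    def overlap_ward(replica, new_ward):
        # Lightweight: only the new ward records the visitor; ward sets
        # are untouched, so only data shared with the new ward can be
        # reconciled locally.
        new_ward.members.append(replica)   # overlapped, not full, member

    def change_ward(replica, new_ward):
        # Heavyweight: full membership change; both wards' sets are
        # recalculated and the changes reach other ward masters.
        if replica.ward is not None:
            replica.ward.leave(replica)
        new_ward.join(replica)

    def upgrade_overlap(replica, new_ward):
        # Transparent upgrade when the motion proves more permanent.
        new_ward.members.remove(replica)
        change_ward(replica, new_ward)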
3.5. Scalability
The scalability of the ward model is directly related
to the degree of replication flexibility. Ward
sets can dynamically change in unpredictable
ways; therefore the only method for a ward master
to identify its ward set is to list each entry
individually. The fully hierarchical generalization
of the ward model to more than two levels faces
scaling problems due to the physical problems of
maintaining and indexing these lists of entries.
Nevertheless, the proposed model scales well within its intended environment, and allows several hundred read-write replicas of any given object, meeting the demands of everyone from a single developer or a medium-sized committee to a large, international company. The model could be adapted to scale better by restricting the degree of replication freedom. For instance, if ward sets changed only in very regular fashions, they could be named as a unit instead of naming all members, dramatically improving scalability. However, we believe that replication flexibility is an important design consideration in the targeted mobile environment, and one that users absolutely require, so we have chosen not to impose such regularity.
4. Performance
4.1. Disk space overhead
Roam, like Rumor before it, stores its non-volatile
data structures in lookaside databases
within the volume but hidden from the user. From
the user's perspective, anything other than his or her actual data is overhead and effectively shrinks the size of the disk. Minimal disk overhead is
therefore an important and visible criterion for
user satisfaction.
Additionally, Roam is designed to be a scalable
system. The Ward Model should support
hundreds of replicas with minimal impact between
wards. Specifically, the creation of a new replica in ward X should not affect the disk overhead of
the replicas in other wards.
We therefore measured the disk overhead of Roam using two different volumes. The first of these volumes was chosen as a typical representative of a user's personal subtree, while the second was chosen to stress Roam by storing small files that would exaggerate the system's space overhead. After empirically measuring the overhead under different conditions, we fitted equations to describe the overhead in terms of the number of files, types of files, number of replicas, and number of wards. These equations can be summarized as follows (full results are given in [12]):
- Each new directory costs 4.2KB, plus a per-object cost for each object in the directory.
- Each new file costs 0.24KB.
- The first replica within the ward, even without any user data, costs 57.36KB.
- Each additional replica within the ward costs a further amount per object stored at the replica.
- Each new ward costs 6.44KB plus 12 bytes per file.
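To make the cost model concrete, the following minimal sketch (the Python code and the helper name are illustrative, not part of Roam; the per-object terms reported in full in [12] are deliberately omitted) estimates the fixed portion of the overhead for a hypothetical single-ward volume:

    def disk_overhead_kb(num_dirs, num_files, num_wards):
        # Fixed per-item costs from the fitted equations above; the
        # additional per-object costs reported in [12] are omitted.
        return (num_dirs * 4.2       # each new directory
                + num_files * 0.24   # each new file
                + 57.36              # first replica within the ward
                + num_wards * 6.44)  # each new ward

    # A hypothetical volume: 100 directories, 1000 files, one ward.
    print(disk_overhead_kb(100, 1000, 1))   # -> 723.8 (KB)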
4.2. Synchronization performance
Since Roam's main task is the synchronization
of data, we also measured the synchronization per-
formance. We performed our experiments with
two portable machines, in all cases minimizing
extraneous processes to avoid non-repeatable ef-
fects. One machine was a Dell Latitude XP with
a 486DX4 running at 100MHz with 36MB of main
memory, while the second was a TI TravelMate
with a 133MHz Pentium and 64MB of main
memory. Reconciliation was always performed by
transferring data from the Dell machine to the TI
machine. In other words, the reconciliation process
always executed on the TI machine.
Of course, reconciliation performance depends heavily on the sizes of the files that have been updated. Since Roam performs whole-file transfers, and any updated file must be transferred across the network in its entirety, we would expect reconciliation to take more time when more data has been
updated. We therefore varied the amount of data updated from 0 to 100%, and within each trial we randomly selected the set of updated files. Since the files are selected at random, a given figure of X% is only an approximation of the amount of data updated, rather than an exact figure. In all measurements, we used the personal-subtree volume mentioned in Section 4.1, and performed at least seven trials at each data point.
We performed five different experiments under the above conditions. The first two compared Roam and Rumor synchronization performance over a 10Mbps quiet Ethernet and WaveLAN wireless cards, respectively. The third studied the effect of increasing numbers of replicas; the fourth studied the effect of increasing numbers of wards. The fifth looked at the effects of selective replication [10] and different replication patterns on synchronization performance.
These experiments showed that Roam is 10% to 25% slower than Rumor when running with similar numbers of replicas. Most of the slowdown is due to Roam's more flexible structure, which uses more processes and IPC to simplify the code and enhance scalability. Reconciliation of the 13.6MB volume under Roam takes from 46 to 206 seconds, depending on the transport mechanism, the number of files modified, and the number of replicas in the ward.
We also studied the impact of multiple wards on the synchronization performance. We varied the number of wards from one, as in the previous experiments, to six. We placed three replicas within one of these wards, and measured the synchronization between two of them on the previously described portable machines. These experiments showed that, at a 95% level of confidence, adding wards has no impact on synchronization performance between two replicas.
4.3. Scalability
We have already discussed some aspects of
Roam's scalability, such as in disk space overhead
(Section 4.1). However, another major aspect
of scalability is the ability to create many
replicas and still have the system perform well
during synchronization. Synchronization performance
includes two related issues. First, the reconciliation
time for a given replica in a given ward
should be largely unaffected by the total number
of replicas and wards. Second, the time to distribute
an update from any replica to any other
replica should presumably be faster in the Ward
Model than in a standard approach (like Rumor),
or else we have failed in our task.
4.3.1. Reconciliation time
To measure the behavior of reconciliation time
as the total number of replicas increases, we used
a hybrid simulation. We created 64 replicas of our
test volume, reducing the hardware requirements
by using servers to store wards and replicas that
were not actively participating in the experiments.
Again, we found that, at a 95% level of confidence, the synchronization time does not change as the system configuration is varied from one ward with a total of three replicas to 7 wards with a total of 64 replicas.
4.3.2. Update distribution
Another aspect of scalability concerns the distribution
of updates to all replicas. A scalable
system would presumably deliver updates to all
replicas more quickly than a non-scalable system,
at least at large numbers of replicas. Additionally,
while it may not perform better at small numbers
of replicas, a scalable system should at least not
perform worse.
Rather than measuring elapsed time, which depends
on many complicated factors such as con-
nectivity, network partitions, available machines,
and reconciliation intervals, we considered the
number of individual, pair-wise reconciliation ac-
tions, and analytically developed equations that
characterize the distribution of updates. We assume
that there are M replicas; one of them, replica
R, generates an update that must propagate to all
other replicas. The following equations identify
the number of separate reconciliation actions that
must occur, both on the average and in the worst
case, to propagate the update from R to some
other replica S.
In a non-ward system such as Rumor, since there are M replicas, M - 1 of which do not yet have the update, and reconciliation uses a ring between all M replicas, we need (M - 1)/2 reconciliation actions on average. The worst case requires M - 1 reconciliation actions.
The analysis for Roam is a little more complicated. Assume that the M replicas are divided into N wards such that each ward has M/N members. Propagating an update from R to S requires first sending it from R to R's ward master, then sending it from R's ward master to S's ward master, and then finally to S. Of course, if R and S are members of the same ward, then much of the expense is saved; however, we will solve the general problem first before discussing the special case.
Under the above conditions, we need (1/2)(M/N - 1) reconciliation actions on average to distribute the update between a replica and its ward master, and (N - 1)/2 actions on average between ward masters. From these building blocks, we calculate that, on average, Roam requires the following number of reconciliation actions:

    2 * (1/2)(M/N - 1) + (N - 1)/2 = (M/N - 1) + (N - 1)/2
Note that when N = 1, this expression becomes M - 1, twice Rumor's average performance; setting N = 1 thus eliminates any benefit from grouping. However, it is also interesting to note that when N = 2, the expression becomes (M - 1)/2. Having only two wards does not improve the required time to distribute updates (although it does improve other aspects such as data structure size and network utilization).
In general, Roam distributes updates faster than Rumor when 2 < N < M and M > 3; otherwise, Roam performs the same as Rumor (with respect to update distribution). From the two equations we calculate that the optimal number of wards for a given value of M is sqrt(2M). The above conditions yield a factor of three improvement at 50 replicas, and a factor of five at 200 replicas. With a multi-level implementation, larger degrees of improvement are possible.
The analysis for Roam also indicates that, in the worst case, Roam requires 2(M/N - 1) + (N - 1) reconciliation actions.

As a special case, if R and S are in the same ward, only (1/2)(M/N - 1) reconciliation actions are required on average, and M/N - 1 in the worst case.
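As a worked check of these equations (an illustrative sketch, not code from Roam; the function names are ours), the following compares Rumor's average update-distribution cost with Roam's and reproduces the improvement factors quoted above:

    import math

    def rumor_avg(m):
        # Ring of m replicas: (m - 1)/2 reconciliation actions on average.
        return (m - 1) / 2

    def roam_avg(m, n):
        # Two within-ward legs of (m/n - 1)/2 each, plus (n - 1)/2 hops
        # around the ring of n ward masters.
        return (m / n - 1) + (n - 1) / 2

    for m in (50, 200):
        n_opt = math.sqrt(2 * m)            # optimal number of wards
        print(m, rumor_avg(m) / roam_avg(m, n_opt))
    # Prints roughly 2.9 at 50 replicas and 5.4 at 200 replicas,
    # matching the factor-of-three and factor-of-five figures above.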
4.4. Ward motion
Recall from Section 3.4 that Roam supports two different flavors of ward motion: overlapping and changing. Overlapping is a lightweight, temporary form of motion that is easy to perform and undo. However, synchronization performance can become worse during overlapping. When the moving replica stores objects that are not part of the new ward, they must be synchronized with the original ward (or else remain unsynchronized during the presumably short time period). Changing is a more heavyweight, permanent form of motion that costs more but provides optimal synchronization performance in the new ward.
We experimentally investigated the costs of
both forms of ward motion, using the same
13.6MB volume used in the other tests. There are
four types of costs involved in these operations:
1. Initial setup costs at the moving replica,
2. Disk overhead at the moving replica,
3. Costs imposed on other wards and ward mem-
bers, and
4. Ongoing costs of synchronization.
We summarize our results here; complete data is
given in [12].
We found that setting up either type of motion took from 60 to 80 seconds, depending on the number of files stored on the local machine. Somewhat surprisingly, ward changing required only about 7% more elapsed time than overlapping.

The disk overhead at the moving replica depends on the size of the destination ward. In essence, the other members of the destination ward must be tracked as if they were members of the replica's original ward, occupying 6.44KB plus 12 bytes per file (see Section 4.1). For ward overlapping, this cost must be paid for both the original ward's members and the destination ward's members, while for ward changing, only the destination's members must be tracked. In either case, these costs are insignificant compared to the space required by the volume itself.
The costs imposed on other replicas and wards are minimal for ward overlapping, but for ward changing the old and new ward masters must change their ward sets, and these differences will need to be propagated to all other ward masters by gossiping. However, the amount of information that must propagate is minimal (about 255 bytes per file that changes ward membership), so the additional network load is still quite low.
Finally, when wards are overlapped, synchronization costs increase because the moving replica must communicate with both the new ward and the old one. Synchronization with the (temporary) new ward will take about the same amount of time as it would have in the original ward. However, to synchronize any files not available in the new ward, the original must be contacted. We measured these additional costs for various replication patterns in our test volume (described in [12]). To simulate the fact that communication with the original ward is probably long-distance and thus slower, we used a WaveLAN network for these experiments. We found that the synchronization time depends on both the number of files found only in the original ward, and on the number of modified files. When no files had been modified, the time was essentially constant at about 45 seconds. By contrast, when 100% of the (locally-stored) files had been modified, synchronization took from 78 to 169 seconds, depending on the exact set of files shared between the two ward masters.
5. Conclusion
Replication is required for mobile computing,
but today's replication services do not provide the
features required by mobile users. Nomadicity
requires a replication solution that provides any-to-any communication in a scalable fashion with sufficiently detailed control over the replication decisions. Roam was designed and implemented
to meet these goals, paving the way not just to improved
mobile computing but to new and better
avenues of mobile research. Performance experiments
have shown that Roam is indeed scalable
and can handle the mobility patterns expected to
be displayed by future users.
--R
Ficus: A Very Large Scale Reliable Distributed File System.
The Little Work project.
Disconnected operation in the Coda File System.
Presentation at the GloMo PI Meeting (February 4) at the University of California.
Predictive File Hoarding for Disconnected Mobile Operation.
Automated hoarding for mobile computers.
Peer replication with selective control.
The ward model: A scalable replication architecture for mobility.
A Scalable Replication System for Mobile and Distributed Computing.
The influence of scale on distributed file system design.
A highly available file system for a distributed workstation environment.
Experience with disconnected operation in a mobile computing environment.
--TR
Coda
The Influence of Scale on Distributed File System Design
FICUS: a very large scale reliable distributed file system
Disconnected operation in the Coda File System
Managing update conflicts in Bayou, a weakly connected replicated storage system
Automated hoarding for mobile computers
Seer
Roam | mobile computing;file systems;replication |
506906 | Negotiation-based protocols for disseminating information in wireless sensor networks. | In this paper, we present a family of adaptive protocols, called SPIN (Sensor Protocols for Information via Negotiation), that efficiently disseminate information among sensors in an energy-constrained wireless sensor network. Nodes running a SPIN communication protocol name their data using high-level data descriptors, called meta-data. They use meta-data negotiations to eliminate the transmission of redundant data throughout the network. In addition, SPIN nodes can base their communication decisions both upon application-specific knowledge of the data and upon knowledge of the resources that are available to them. This allows the sensors to efficiently distribute data given a limited energy supply. We simulate and analyze the performance of four specific SPIN protocols: SPIN-PP and SPIN-EC, which are optimized for a point-to-point network, and SPIN-BC and SPIN-RL, which are optimized for a broadcast network. Comparing the SPIN protocols to other possible approaches, we find that the SPIN protocols can deliver 60% more data for a given amount of energy than conventional approaches in a point-to-point network and 80% more data for a given amount of energy in a broadcast network. We also find that, in terms of dissemination rate and energy usage, the SPIN protocols perform close to the theoretical optimum in both point-to-point and broadcast networks. | Introduction
Wireless networks of sensors are likely to be widely deployed
in the future because they greatly extend our ability to monitor
and control the physical environment from remote lo-
cations. Such networks can greatly improve the accuracy of
information obtained via collaboration among sensor nodes
and online information processing at those nodes.
Sensor networks improve sensing accuracy by providing distributed processing of vast quantities of sensing information (e.g., seismic data, acoustic data, high-resolution images, etc.). When networked, sensors can aggregate such
data to provide a rich, multi-dimensional view of the en-
vironment. In addition, networked sensors can focus their
Submitted to ACM Wireless Networks; an earlier version
of this paper appeared in ACM MOBICOM '99.
attention on critical events pointed out by other sensors in
the network (e.g., an intruder entering a building). Finally,
networked sensors can continue to function accurately in the
face of failure of individual sensors; for example, if some sensors
in a network lose a piece of crucial information, other
sensors may come to the rescue by providing the missing
data.
Sensor networks can also improve remote access to sensor data by providing sink nodes that connect them to
other networks, such as the Internet, using wide-area wireless
links. If the sensors share their observations and process
these observations so that meaningful and useful information
is available at the sink nodes, users can retrieve information
from the sink nodes to monitor and control the environment
from afar.
We therefore envision a future in which collections of
sensor nodes form ad hoc distributed processing networks
that produce easily accessible and high-quality information
about the physical environment. Each sensor node operates
autonomously with no central point of control in the net-
work, and each node bases its decisions on its mission, the
information it currently has, and its knowledge of its com-
puting, communication and energy resources. Compared to
today's isolated sensors, tomorrow's networked sensors have
the potential to perform with more accuracy, robustness and
sophistication.
Several obstacles need to be overcome before this vision
can become a reality. These obstacles arise from the limited
energy, computational power, and communication resources
available to the sensors in the network.
Energy: Because wireless sensors have a limited supply
of energy, energy-conserving forms of communication
and computation are essential to wireless sensor
networks.
Computation: Sensors have limited computing power
and therefore may not be able to run sophisticated net-work
protocols.
Communication: The bandwidth of the wireless links
connecting sensor nodes is often limited, on the order
of a few hundred Kbps, further constraining inter-
sensor communication.
In this paper, we present SPIN (Sensor Protocols for Information
via Negotiation), a family of negotiation-based information
dissemination protocols suitable for wireless sensor
networks. We designed SPIN to disseminate individual
sensor observations to all sensors in a network, treating all
Figure 1: The implosion problem. In this graph, node A starts by flooding its data to all of its neighbors. Two copies of the data eventually arrive at node D. The system wastes energy and bandwidth in one unnecessary send and receive.
sensors as potential sink nodes. SPIN thus provides a way
of replicating complete views of the environment throughout
an entire network.
The design of SPIN grew out of our analysis of the different strengths and limitations of conventional protocols for disseminating data in a sensor network. Such protocols, which we characterize as classic flooding, start with a source node sending its data to all of its neighbors. Upon receiving a piece of data, each node then stores and sends a copy of the data to all of its neighbors. This is therefore a straightforward protocol requiring no protocol state at any node, and it disseminates data quickly in a network where bandwidth is not scarce and links are not loss-prone.
Three deficiencies of this simple approach render it inadequate as a protocol for sensor networks:

Implosion: In classic flooding, a node always sends data to its neighbors, regardless of whether or not the neighbor has already received the data from another source. This leads to the implosion problem, illustrated in Figure 1. Here, node A starts out by flooding data to its two neighbors, B and C. These nodes store the data from A and send a copy of it on to their neighbor D. The protocol thus wastes resources by sending two copies of the data to D. It is easy to see that implosion is linear in the degree of any node.

Overlap: Sensor nodes often cover overlapping geographic areas, and nodes often gather overlapping pieces of sensor data. Figure 2 illustrates what happens when two nodes gather such overlapping data and then flood the data to their common neighbor (C). Again, the algorithm wastes energy and bandwidth sending two copies of a piece of data to the same node. Overlap is a harder problem to solve than the implosion problem: implosion is a function only of network topology, whereas overlap is a function of both topology and the mapping of observed data to sensor nodes.

Resource blindness: In classic flooding, nodes do not modify their activities based on the amount of energy available to them at a given time. A network of embedded sensors can be "resource-aware" and adapt its communication and computation to the state of its energy resources.
The SPIN family of protocols incorporates two key innovations that overcome these deficiencies: negotiation and resource-adaptation.

Figure 2: The overlap problem. Two sensors cover an overlapping geographic region. When these sensors flood their data to node C, C receives two copies of the data marked r.
To overcome the problems of implosion and overlap, SPIN
nodes negotiate with each other before transmitting data.
Negotiation helps ensure that only useful information will
be transferred. To negotiate successfully, however, nodes
must be able to describe or name the data they observe.
We refer to the descriptors used in SPIN negotiations as
meta-data.
In SPIN, nodes poll their resources before data transmis-
sion. Each sensor node has its own resource manager that
keeps track of resource consumption; applications probe the
manager before transmitting or processing data. This allows
sensors to cut back on certain activities when energy is low,
e.g., by being more prudent in forwarding third-party data.
Together, these features overcome the three deficiencies of classic flooding. The negotiation process that precedes actual
data transmission eliminates implosion because it eliminates
transmission of redundant data messages. The use
of meta-data descriptors eliminates the possibility of overlap
because it allows nodes to name the portion of the data
that they are interested in obtaining. Being aware of local
energy resources allows sensors to cut back on activities
whenever their energy resources are low, thereby extending
longevity.
To assess the efficiency of information dissemination via SPIN, we performed two studies of the SPIN approach based on two different wireless network models. In the first study, we examined five different protocols and their performance in a simple, point-to-point, wireless network where packets are never dropped and queueing delays never occur. Two of the protocols in this study are SPIN protocols (SPIN-PP and SPIN-EC). The other three protocols function as comparison protocols: (i) flooding, which we outlined above; (ii) gossiping, a variant on flooding that sends messages to random sets of neighboring nodes; and (iii) ideal, an idealized routing protocol in which each node has global knowledge of the status of all other nodes in the network, yielding the best possible performance. In the second study, we were interested in studying SPIN protocols in a more realistic wireless network model, where radios send packets over a single, unreliable, broadcast channel. SPIN-BC and SPIN-RL are two SPIN protocols that we designed specifically for such networks, and we compare them to two other protocols, flooding and ideal.
We evaluated each protocol under varying conditions by measuring the amount of data it transmitted and the amount of energy it used. The SPIN protocols disseminate information with low latency and conserve energy at the same time. Our results highlight the advantages of using meta-data to name data and negotiate data transmissions. SPIN-PP uses negotiation to solve the implosion and overlap problems; it reduces energy consumption by a factor of 3.6 compared to flooding, while disseminating data almost as quickly as theoretically possible. SPIN-EC, which incorporates a threshold-based resource-awareness mechanism in addition to negotiation, disseminates 1.4 times more data per unit energy than flooding and in fact comes very close to the ideal amount of data that can be disseminated per unit energy. In a lossless, broadcast network with queueing delays, SPIN-BC reduces energy consumption by a factor of 1.6 and speeds up data dissemination by a factor of 1.8 compared to flooding. When the network loses packets, SPIN-RL is able to successfully recover from packet losses, while still using half as much energy per unit data as flooding.
The SPIN family of protocols rests upon two basic ideas.
First, to operate efficiently and to conserve energy, sensor
applications need to communicate with each other about
the data that they already have and the data they still need
to obtain. Exchanging sensor data may be an expensive
network operation, but exchanging data about sensor data
need not be. Second, nodes in a network must monitor and
adapt to changes in their own energy resources to extend
the operating lifetime of the system. This section presents
the individual features that make up the SPIN family of
protocols.
2.1 Application-level Control
Our design of the SPIN protocols is motivated in part by
the principle of Application Level Framing (ALF) [4]. With
ALF, network protocols must choose transmission units that
are meaningful to applications, i.e., packetization is best
done in terms of Application Data Units (ADUs). One of the
important components of ALF-based protocols is the common
data naming between the transmission protocol and
application (e.g., [21]), which we follow in the design of our
meta-data. We take ALF-like ideas one step further by arguing
that routing decisions are also best made in application-controlled
and application-specific ways, using knowledge of
not just network topology but application data layout and
the state of resources at each node. We believe that such
integrated approaches to naming and routing are attractive
to a large range of network situations, especially in mobile
and wireless networks of devices and sensors.
Because SPIN is an application-level approach to net-work
communication, we intend to implement SPIN as middleware
application libraries with a well-defined API. These
libraries will implement the basic SPIN message types, message
handling routines, and resource-management functions.
Sensor applications can then use these libraries to construct
their own SPIN protocols.
2.2 Meta-Data
Sensors use meta-data to succinctly and completely describe
the data that they collect. If x is the meta-data descriptor
for sensor data X, then the size of x in bytes must be shorter
than the size of X, for SPIN to be beneficial. If two pieces
of actual data are distinguishable, then their corresponding
meta-data should be distinguishable. Likewise, two pieces
of indistinguishable data should share the same meta-data
representation.
SPIN does not specify a format for meta-data; this format is application-specific. Sensors that cover disjoint geographic regions may simply use their own unique IDs as meta-data. The meta-data x would then stand for "all the data gathered by sensor x". A camera sensor, in contrast, might use a tuple (x, y, theta) as meta-data, where (x, y) is a geographic coordinate and theta is an orientation. Because each application's meta-data format may be different, SPIN relies on each application to interpret and synthesize its own meta-data. There are costs associated with the storage, retrieval, and general management of meta-data, but the benefit of having a succinct representation for large data messages in SPIN far outweighs these costs.
2.3 SPIN Messages
SPIN nodes use three types of messages to communicate:
ADV -- new data advertisement. When a SPIN node has data to share, it can advertise this fact by transmitting an ADV message containing meta-data.

REQ -- request for data. A SPIN node sends an REQ message when it wishes to receive some actual data.

DATA -- data message. DATA messages contain actual sensor data with a meta-data header.

Because ADV and REQ messages contain only meta-data, they are smaller, and cheaper to send and receive, than their corresponding DATA messages.
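To make the message vocabulary concrete, here is a minimal sketch (ours; SPIN does not prescribe a wire format or field names) of how the three message types might be represented:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SpinMessage:
        kind: str                       # "ADV", "REQ", or "DATA"
        meta: frozenset                 # meta-data names describing the data
        payload: Optional[dict] = None  # actual sensor data, DATA only

    # ADV and REQ carry only the small meta-data set; the payload
    # field is populated only for DATA messages.
    adv = SpinMessage("ADV", frozenset({"sensor-17"}))
    req = SpinMessage("REQ", frozenset({"sensor-17"}))
    data = SpinMessage("DATA", frozenset({"sensor-17"}), {"sensor-17": b"..."})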
2.4 SPIN Resource Management
SPIN applications are resource-aware and resource-adaptive. They can poll their system resources to find out how much energy is available to them. They can also calculate the cost, in terms of energy, of performing computations and sending and receiving data over the network. With this information, SPIN nodes can make informed decisions about using their resources effectively. SPIN does not specify a particular energy management policy for its protocols. Rather, it specifies an interface that applications can use to probe their available resources.
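One plausible shape for such an interface is sketched below (an illustration of ours; the class and method names are hypothetical, and the default radio figures are taken from the simulation parameters in Section 5):

    class ResourceManager:
        # Hypothetical probe interface between a SPIN application
        # and its node's resources.

        def __init__(self, energy_joules):
            self.energy = energy_joules

        def available_energy(self):
            # Poll the remaining energy supply.
            return self.energy

        def transmit_cost(self, size_bytes, power_watts=0.6, bps=1e6):
            # Estimated energy (Joules) to transmit size_bytes.
            return power_watts * (size_bytes * 8) / bps

        def receive_cost(self, size_bytes, power_watts=0.2, bps=1e6):
            # Estimated energy (Joules) to receive size_bytes.
            return power_watts * (size_bytes * 8) / bps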
In this section, we present four protocols that follow the
SPIN philosophy outlined in the previous section. Two of
the protocols, SPIN-PP and SPIN-BC, tackle the basic problem
of data transmission under ideal conditions, where energy
is plentiful and packets are never lost. SPIN-PP solves
this problem for networks using point-to-point transmission
media, and SPIN-BC solves this problem for networks using
broadcast media. The other two protocols, SPIN-EC and SPIN-RL, are modified versions of the first two protocols, and they are meant to operate in networks that are not
ideal. SPIN-EC, an energy-conserving version of SPIN-PP,
reduces the number of messages it exchanges when energy in
the system is low. SPIN-RL, a reliable version of SPIN-BC,
recovers from losses in the network by selectively retransmitting
messages.
3.1 SPIN-PP: A 3-Stage Handshake Protocol for Point-to-Point Media

The first SPIN protocol, SPIN-PP, is optimized for networks using point-to-point transmission media, where it is
possible for nodes A and B to communicate exclusively with
each other without interfering with other nodes. In such a
point-to-point wireless network, the cost of communicating
with n neighbors in terms of time and energy is n times the
cost of communicating with 1 neighbor. We start our study
of SPIN protocols with a point-to-point network because of
its relatively simple, linear cost model.
The SPIN-PP protocol works in three stages (ADV-REQ-
DATA), with each stage corresponding to one of the messages
described above. The protocol starts when a node
advertises new data that it is willing to disseminate. It does
this by sending an ADV message to its neighbors, naming
the new data (ADV stage). Upon receiving an ADV, the
neighboring node checks to see whether it has already received
or requested the advertised data. If not, it responds
by sending an REQ message for the missing data back to
the sender (REQ stage). The protocol completes when the
initiator of the protocol responds to the REQ with a DATA message, containing the missing data (DATA stage).
Figure
3 shows an example of the protocol. Upon receiving
an ADV packet from node A, node B checks to see
whether it possesses all of the advertised data (1). If not,
node B sends an REQ message back to A, listing all of the
data that it would like to acquire (2). When node A receives
the REQ packet, it retrieves the requested data and sends
it back to node B as a DATA message (3). Node B, in turn,
sends ADV messages advertising the new data it received
from node A to all of its neighbors (4). It does not send an
advertisement back to node A, because it knows that node A
already has the data. These nodes then send advertisements
of the new data to all of their neighbors, and the protocol
continues.
There are several important things to note about this
example. First, if node B had its own data, it could aggregate
this with the data of node A and send advertisements
of the aggregated data to all of its neighbors (4). Second,
nodes are not required to respond to every message in the
protocol. In this example, one neighbor does not send an REQ packet back to node B (5). This would occur if that
node already possessed the data being advertised.
Although this protocol has been designed for lossless net-
works, it can easily be adapted to work in lossy or mobile
networks. Here, nodes could compensate for lost ADV messages
by re-advertising these messages periodically. Nodes
can compensate for lost REQ and DATA messages by re-requesting
data items that do not arrive within a fixed time
period. For mobile networks, changes in the local topology
can trigger updates to a node's neighbor list. If a node notices
that its neighbor list has changed, it can spontaneously
re-advertise all of its data.
This protocol's strength is its simplicity. Nodes using
the protocol make very simple decisions when they receive
new data, and they therefore waste little energy in compu-
tation. Furthermore, each node only needs to know about
its single-hop network neighbors. The fact that no other
topology information is required to run the algorithm has
some important consequences. First, SPIN-PP can be run
in a completely unconfigured network with a small startup
cost to determine nearest neighbors. Second, if the topology
of the network changes frequently, these changes only have
to travel one hop before the nodes can continue running the
algorithm.
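The per-message logic of the handshake is compact enough to sketch in full (an illustration of ours, reusing the SpinMessage type sketched in Section 2.3; the Node class and its fields are hypothetical):

    class Node:
        def __init__(self, ident):
            self.ident = ident
            self.neighbors = []     # neighboring Node objects
            self.store = {}         # meta-data name -> sensor data
            self.requested = set()  # names already requested

        def advertise(self, names, exclude=None):
            for nbr in self.neighbors:
                if nbr is not exclude:
                    nbr.receive(SpinMessage("ADV", frozenset(names)), self)

        def receive(self, msg, sender):
            if msg.kind == "ADV":
                # REQ stage: ask only for data we neither hold nor
                # have already requested.
                missing = set(msg.meta) - self.store.keys() - self.requested
                if missing:
                    self.requested |= missing
                    sender.receive(SpinMessage("REQ", frozenset(missing)), self)
            elif msg.kind == "REQ":
                # DATA stage: return the requested items that we hold.
                payload = {n: self.store[n] for n in msg.meta if n in self.store}
                sender.receive(SpinMessage("DATA", frozenset(payload), payload), self)
            elif msg.kind == "DATA":
                self.store.update(msg.payload)
                self.requested -= set(msg.meta)
                # Advertise the newly acquired data, skipping its source.
                self.advertise(msg.payload.keys(), exclude=sender)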
Figure 3: The SPIN-PP Protocol. Node A starts by advertising its data to node B (1). Node B responds by sending a request to node A (2). After receiving the requested data (3), node B then sends out advertisements to its neighbors (4), who in turn send requests back to B (5,6).
3.2 SPIN-EC: SPIN-PP with a Low-Energy Threshold
The SPIN-EC protocol adds a simple energy-conservation
heuristic to the SPIN-PP protocol. When energy is plen-
tiful, SPIN-EC nodes communicate using the same 3-stage
protocol as SPIN-PP nodes. When a SPIN-EC node observes
that its energy is approaching a low-energy threshold,
it adapts by reducing its participation in the protocol. In
general, a node will only participate in a stage of the protocol
if it believes that it can complete all the other stages of
the protocol without going below the low-energy threshold.
This conservative approach implies that if a node receives
some new data, it only initiates the three-stage protocol if it
believes it has enough energy to participate in the full protocol
with all of its neighbors. Similarly, if a node receives an
advertisement, it does not send out a request if it does not
have enough energy to transmit the request and receive the
corresponding data. This approach does not prevent a node from receiving, and therefore expending energy on, ADV or REQ messages below its low-energy threshold. It does, however, prevent the node from ever handling a DATA message below this threshold.
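The gating rule can be sketched as follows (an illustration of ours; the threshold value and cost estimates are assumptions, not values from the protocol definition):

    LOW_ENERGY_THRESHOLD = 0.1   # Joules; an assumed example value

    def can_start_handshake(energy, num_neighbors, cost_adv, cost_req, cost_data):
        # Initiate the 3-stage protocol only if completing it with
        # every neighbor would keep us above the threshold.
        worst_case = num_neighbors * (cost_adv + cost_req + cost_data)
        return energy - worst_case > LOW_ENERGY_THRESHOLD

    def can_request(energy, cost_req, cost_data_rx):
        # Send an REQ only if we can also afford to receive the DATA.
        return energy - (cost_req + cost_data_rx) > LOW_ENERGY_THRESHOLD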
3.3 SPIN-BC: A 3-Stage Handshake Protocol for Broadcast
Media
In broadcast transmission media, nodes in the network communicate using a single, shared channel. As a result, when a node sends out a message in a broadcast network, it is received by every node within a certain range of the sender, regardless of the message's destination. (This transmission range is determined by the power with which the sender transmitted the message and the sensitivity of the receiver.) If a node wishes to send a message and senses that the channel is currently in use, it must wait for the channel to become idle before attempting to send the message. The disadvantage of such
networks is that whenever a node sends out a message, all
nodes within transmission range of that node must pay a
price for that transmission, in terms of both time and en-
ergy. However, the advantage of such networks is that, when
a single node sends a message out to a broadcast address,
this node can reach all of its neighbors using only one trans-
mission. One-to-many communication is therefore 1/n times
cheaper in a broadcast network than in a point-to-point net-
work, where n is the number of neighbors for each node.
SPIN-BC improves upon SPIN-PP for broadcast networks by exclusively using cheap, one-to-many communication. This means that all messages are sent to the broadcast address and thus processed by all nodes that are within transmission range of the sender. We justify this approach by noting that, since broadcast and unicast transmissions use the same amount of network resources in a broadcast network, SPIN-BC does not lose much efficiency by using the broadcast address. Moreover, SPIN-BC nodes can coordinate their resource-conserving efforts more effectively because each node overhears all transactions that occur within
its transmission range. For example, if two nodes A and B
send requests for a piece of data to node C, C only needs
to broadcast the requested data once in order to deliver the
data to both A and B. Thus, only one node, either A or
B, needs to send a request to C, and all other requests are
redundant. If A and B address their requests directly to C, only C will hear the message, though all of the nodes within the transmission range of A and B will pay for two requests. However, if A and B address their requests to the broadcast address, all nodes within range will overhear these requests. Assuming that A and B are not perfectly synchronized, then either A will send its request first or B will. The node that does not send first will overhear the other node's request, realize that its own request is redundant, and suppress its own request. In this example, nodes that
use the broadcast address can roughly halve their network
resource consumption over nodes that do not. As we will illustrate
shortly, this kind of approach, often called broadcast
message-suppression, can be used to curtail the proliferation
of redundant messages in the network.
Like the SPIN-PP protocol, the SPIN-BC protocol has an ADV, REQ, and DATA stage, which serve the same purposes as they do in SPIN-PP. There are three central differences between SPIN-PP and SPIN-BC. First, as mentioned above, all SPIN-BC nodes send their messages to the broadcast address, so that all nodes within transmission range will receive the messages. Second, SPIN-BC nodes do not immediately send out requests when they hear advertisements for data they need. Upon receiving an ADV, each node checks to see whether it has already received or requested the advertised data. If not, it sets a random timer to expire, uniformly chosen from a predetermined interval. When the timer expires, the node sends an REQ message out to the broadcast address, specifying the original advertiser in the header of the message. When nodes other than the original advertiser receive the REQ, they cancel their own request timers, preventing themselves from sending out redundant copies of the same request. The final difference is that a SPIN-BC node will send out the requested data to the broadcast address once and only once, as this is sufficient to get the data to all its neighbors (assuming a lossless network). It will not respond to multiple requests for the same piece of data.
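The request-suppression mechanism can be sketched as follows (an illustration of ours; the advertiser field on the message, the scheduler, and the broadcast_req helper are hypothetical placeholders):

    import random

    REQ_DELAY_INTERVAL = (0.0, 0.05)   # seconds; an assumed example interval

    def on_adv(node, adv, scheduler):
        missing = set(adv.meta) - node.store.keys() - node.requested
        if missing:
            # Wait a random time before requesting so that another
            # node's broadcast REQ can suppress ours.
            delay = random.uniform(*REQ_DELAY_INTERVAL)
            node.req_timers[adv.advertiser] = scheduler.call_later(
                delay, broadcast_req, node, adv.advertiser, missing)

    def on_req_overheard(node, req):
        # Someone else asked first: cancel our pending request for
        # the same advertiser's data.
        timer = node.req_timers.pop(req.advertiser, None)
        if timer is not None:
            timer.cancel()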
Figure 4: The SPIN-BC Protocol. Node A starts by advertising its data to all of its neighbors (1). Node C responds by broadcasting a request, specifying A as the originator of the advertisement (2), and suppressing the request from D. After receiving the requested data (3), E's request is also suppressed, and C, D, and E send advertisements out to their neighbors for the data that they received from A (4).

Figure 4 shows an example of the protocol. Upon receiving an ADV packet from node A, A's neighbors check to see whether they have received the advertised data (1). Three of A's neighbors, C, D, and E, do not have A's data, and enter request-suppression mode for different, random amounts of time. C's timer expires first, and C broadcasts a request for A's data (2), which in turn suppresses the duplicate request from D. Though several nodes receive the request, only A responds, because it is the originator of the original advertisement (3). After A sends out its data, E's request is suppressed, and C, D, and E all send out advertisements for their new data (4).
3.4 SPIN-RL: SPIN-BC for Lossy Networks
SPIN-RL, a reliable version of SPIN-BC, can disseminate data efficiently through a broadcast network, even if the network loses packets. The SPIN-RL protocol incorporates two adjustments to SPIN-BC to achieve reliability. First, each SPIN-RL node keeps track of which advertisements it hears from which nodes, and if it doesn't receive the data within a reasonable period of time following a request, the node re-requests the data. It fills out the originating-advertiser field in the header of the REQ message with a destination, randomly picked from the list of neighbors that had advertised that specific piece of data. Second, SPIN-RL nodes limit the frequency with which they will resend data. If a SPIN-RL node sends out a DATA message corresponding to a specific piece of data, it will wait a predetermined amount of time before responding to any more requests for that piece of data.
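Both adjustments amount to a pair of timers, sketched below (an illustration of ours; the timeout values, state fields, and broadcast helpers are assumptions):

    import random

    REREQUEST_TIMEOUT = 1.0   # seconds; assumed example values
    RESEND_HOLDOFF = 0.5

    def maybe_rerequest(node, name, now):
        # Re-request data that has not arrived in time, addressing a
        # randomly chosen node that previously advertised it.
        sent_at = node.req_sent_at.get(name)
        if sent_at is not None and now - sent_at > REREQUEST_TIMEOUT:
            advertiser = random.choice(node.advertisers[name])
            broadcast_req(node, advertiser, {name})
            node.req_sent_at[name] = now

    def maybe_resend(node, name, now):
        # Rate-limit DATA resends: ignore repeat requests that arrive
        # within the hold-off period after our last send of this item.
        if now - node.data_sent_at.get(name, float("-inf")) > RESEND_HOLDOFF:
            broadcast_data(node, name)
            node.data_sent_at[name] = now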
Figure 5: Gossiping. At every step, each node only forwards data on to one neighbor, which it selects randomly. After node D receives the data, it must forward the data back to the sender (B); otherwise the data would never reach node C.
4 Other Data Dissemination Algorithms
In this section, we describe the three dissemination algorithms
against which we will compare the performance of
SPIN.
4.1 Classic Flooding
In classic flooding, a node wishing to disseminate a piece of data across the network starts by sending a copy of this data to all of its neighbors. Whenever a node receives new data, it makes copies of the data and sends the data to all of its neighbors, except the node from which it just received the data. The amount of time it takes a group of nodes to receive some data and then forward that data on to their neighbors is called a round. The algorithm finishes, or converges, when all the nodes in the network have received a copy of the data. Flooding converges in O(d) rounds, where d is the diameter of the network, because it takes at most d rounds for a piece of data to travel from one end of the network to the other. Although flooding exhibits the same appealing simplicity as SPIN-PP, it does not solve either the implosion or the overlap problem.
4.2 Gossiping
Gossiping [9] is an alternative to the classic flooding approach that uses randomization to conserve energy. Instead of indiscriminately forwarding data to all its neighbors, a gossiping node only forwards data on to one randomly selected neighbor. If a gossiping node receives data from a given neighbor, it can forward data back to that neighbor if it randomly selects that neighbor. Figure 5 illustrates the reason that gossiping nodes forward data back to the sender. If node D never forwarded the data back to node B, node C would never receive the data.
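A single gossiping step is essentially one line (a sketch of ours, reusing the hypothetical Node fields from the earlier sketches; send() is a placeholder helper):

    import random

    def gossip_step(node, send):
        # Forward everything the node holds to one uniformly chosen
        # neighbor; the choice may be the neighbor the data came from.
        target = random.choice(node.neighbors)
        send(target, dict(node.store))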
Whenever data travels to a node with high degree in a classic flooding network, more copies of the data start floating around the network. At some point, however, these copies may end up imploding. Gossiping avoids such implosion because it only makes one copy of each message at any node. The fewer copies made, the lower the likelihood that any of these copies will ever implode.
While gossiping distributes information slowly, it dissipates energy at a slow rate as well. Consider the case where a single data source disseminates data using gossiping. Since the source sends to only one of its neighbors, and that neighbor sends to only one of its neighbors, the fastest rate at which gossiping distributes data is 1 node/round. Thus, if there are c data sources in the network, gossiping's fastest possible distribution rate is c nodes/round.

Finally, we note that, although gossiping largely avoids implosion, it does not solve the overlap problem.

Figure 6: Ideal dissemination of observed data a and c. Each node in the figure is marked with its initial data, and boxed numbers represent the order in which data is disseminated in the network. In ideal dissemination, both implosion, caused by B and C's common neighbor, and overlap, caused by A and C's overlapping initial data item, c, do not occur.
4.3 Ideal Dissemination
Figure
6 depicts an example network where every node sends
observed data along a shortest-path route and every node
receives each piece of distinct data only once. We call this
ideal dissemination because observed data a and c arrive at
each node in the shortest possible amount of time. No energy
is ever wasted transmitting and receiving useless data.
Current networking solutions offer several possible approaches
for dissemination using shortest-paths. One such
approach is network-level multicast, such as IP multicast
[5]. In this approach, the nodes in the network build and
maintain distributed source-specific shortest-path trees and
themselves act as multicast routers. To disseminate a new
piece of data to all the other nodes in the network, a source
would send the data to the network multicast group, thus ensuring
that the data would reach all of the participants along
shortest-path routes. In order to handle losses, the dissemination
protocol would be modified to use reliable multicast.
Unfortunately, multicast and particularly reliable multicast
both rely upon complicated protocol machinery, much of
which may be unnecessary for solving the specific problem
of data dissemination in a sensor network. In many respects,
SPIN may in fact be viewed as a form of application-level
multicasting, where information about both the topology
and data layout are incorporated into the distributed multicast
trees.
Since most existing approaches to shortest-path distribution trees would have to be modified to achieve ideal dissemination, we will concentrate on comparing SPIN to the results of an ideal dissemination protocol, rather than its implementation. For point-to-point networks, it turns out that we can simulate the results of an ideal dissemination protocol using a modified version of SPIN-PP. We arrive at this simulation approach by noticing that if we trace the message history of the SPIN-PP protocol in a network, the DATA messages in the network would match the history of an ideal dissemination protocol. Therefore, to simulate an ideal dissemination protocol for point-to-point networks, we run the SPIN-PP protocol and eliminate any time and energy costs that ADV and REQ messages incur. Defining an ideal protocol for broadcast networks is more tricky. We approximate an ideal dissemination protocol for broadcast networks by running the SPIN-BC protocol on a lossless network and eliminating any time and energy costs that ADV and REQ messages would incur.

Figure 7: Block diagram of a Resource-Adaptive Node. The major blocks are the RCApplication, the Resource Manager (with Network, Neighbor, and Energy resources), the RCAgent, and the Network Interface, connected by meta-data and data links.
5 Point-to-Point Media Simulations
In order to study the SPIN-PP and SPIN-EC approaches
discussed in the previous sections, we developed a sensor net-work
simulator by extending the functionality of the ns software
package. Using this simulation framework, we compared
SPIN-PP and SPIN-EC with classic flooding and gossiping and the ideal data distribution protocol. We found that SPIN-PP provides higher throughput than gossiping and the same order of throughput as flooding, while at the
same time it uses substantially less energy than both these
protocols. SPIN-EC is able to deliver even more data per
unit energy than SPIN-PP and close to the ideal amount of
data per unit energy by adapting to the limited energy of
the network. We found that in all of our simulations, nodes
with a higher degree tended to dissipate more energy than
nodes with a lower degree, creating potential weak points in
a battery-operated network.
5.1 ns Implementation
ns [16] is an event-driven network simulator with extensive
support for simulation of TCP, routing, and multicast pro-
tocols. To implement the SPIN-PP and SPIN-EC protocols,
we added several features to the ns simulator. The ns Node
class was extended to create a Resource-Adaptive Node, as
shown in Figure 7. The major components of a Resource-Adaptive
Node are the Resources, the Resource Manager,
the Resource-Constrained Application (RCApplication), the
Resource-Constrained Agent (RCAgent) and the Network
Interface.
The Resource Manager provides a common interface between
the application and the individual resources. The
RCApplication, a subclass of ns's Application class, is responsible
for updating the status of the node's resources
through the Resource Manager. In addition, the RCApplica-
tion implements the SPIN communication protocol and the
resource-adaptive decision-making algorithms. The RCA-
gent packetizes the data generated by the RCApplication
and sends the packets to the Node's Network Interface for
transmission to one of the node's neighbors.

Figure 8: Topology of the 25-node, wireless test network. The edges shown here signify communicating neighbors in a point-to-point wireless medium.

For each point-
to-point link that would exist between neighboring nodes in
a wireless network, we created a wired link using ns's built-in
link support. We made these wired links appear to be
wireless by forcing them to consume the same amount of
time and energy that would accompany real, wireless link
communications.
5.2 Simulation Testbed
For our simulations, we used the 25-node network shown in Figure 8. This network, which was randomly generated
with the constraint that the graph be fully connected, has
59 edges, a degree of 4.7, a hop diameter of 8, and an average
shortest path of 3.2 hops. The power of the sensor
radio transmitter is set so that any node within a 10 meter
radius is within communication range and is called a neighbor
of the sensor. The radio speed (1 Mbps) and the power
dissipation (600 mW in transmit mode, 200 mW in receive
mode) were chosen based on data from currently available
radios. The processing delay for transmitting a message is
randomly chosen between 5 ms and 10 ms 2 .
We initialized each node with 3 data items, chosen randomly
from a set of 25 possible data items. This means
there is overlap in the initial data of different sensors, as often occurs in sensor networks. The size of each data item was set to 500 bytes, and we gave each item a distinct, 16-byte, meta-data name. Our test network assumes no network losses and no queuing delays. Table 1 summarizes these network characteristics.
Using this network configuration, we ran each protocol
and tracked its progress in terms of the rate of data distribution
and energy usage. For each set of results, we ran
the simulation 10 times and averaged the data distribution
times and energy usage to account for the random processing
delay. The results of these simulations are presented in
the following sections.
5.3 Unlimited Energy Simulations
For the first set of simulations, we gave all the nodes a virtually infinite supply of energy and simulated each data distribution protocol until it converged. Since energy is not limited, SPIN-PP and SPIN-EC are identical protocols.
2 Note that these simulations do not account for any delay caused
by accessing, comparing, and managing meta-data.
Nodes                      25
Edges                      59
Average degree             4.7 neighbors
Diameter                   8 hops
Average shortest path      3.2 hops
Radio reach                10 m
Radio propagation delay    3x10^8 m/s
Processing delay           5-10 ms
Radio speed                1 Mbps
Transmit cost              600 mW
Receive cost               200 mW
Data size                  500 bytes
Meta-data size             16 bytes
Network losses             None
Queuing delays             None

Table 1: Characteristics of the 25-node wireless test network.
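Under these parameters the per-message energy costs work out directly (a small worked example of ours, ignoring processing delay and protocol headers):

    TX_POWER = 0.600   # W, transmit mode
    RX_POWER = 0.200   # W, receive mode
    RADIO_BPS = 1e6    # 1 Mbps

    def message_energy(size_bytes, power_watts):
        # Energy = power x time on the air.
        return power_watts * (size_bytes * 8) / RADIO_BPS

    data_tx = message_energy(500, TX_POWER)  # 2.4e-3 J to send a data item
    data_rx = message_energy(500, RX_POWER)  # 8.0e-4 J to receive one
    meta_tx = message_energy(16, TX_POWER)   # 7.68e-5 J to send meta-data
    print(data_tx / meta_tx)                 # a data send costs ~31x more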
Therefore, the results in this section only compare SPIN-PP with flooding, gossiping, and the ideal data distribution protocol.
5.3.1 Data Acquired Over Time
Figure
9 shows the amount of data acquired by the network
over time for each of the protocols. These graphs clearly
show that gossiping has the slowest rate of convergence.
However, it is interesting to note that using gossiping, the
system has acquired over 85% of the total data in a small
amount of time; the majority of the time is spent distributing
the last 15% of the data to the nodes. This is because
a gossiping node sends all of the data it has to a randomly
chosen neighbor. Because the nodes obtain a large amount
of data, this transmission will be costly, and since it is very
likely that the neighbor already has a large proportion of the
data which is being transmitted, it will also be very wasteful.
A gossiping protocol which kept some per-neighbor state,
such as having each node keep track of the data it has already
sent to each of its neighbors, would perform much
better by reducing the amount of wasteful transmissions.
Figure 9 shows that SPIN-PP takes 80 ms longer to converge than flooding, whereas flooding takes only 10 ms longer to converge than ideal. Although it appears that SPIN-PP performs much worse than flooding in convergence time, this increase is actually a constant amount, regardless of the length of the simulation. Thus for longer simulations, the increase in convergence time for the SPIN-PP protocol will be negligible.
Our experimental results showed that the data distribution curves were convex for all four protocols. We therefore speculated that these curves might generally be convex, regardless of the network topology. If we could predict the shape of these curves, we might be able to gain some intuition about the behavior of the protocols for different network topologies. To do this, we noted that the amount of data received by a node i at each round d depends only on the number of neighbors d hops away from this node, n_i(d). However, since n_i(d) is different for each node i and each distance d and is entirely dependent on the specific topology, we found that, in fact, no general conclusions can be drawn about the shape of these curves.
[Figure: two panels plotting Total Data acquired versus Time for the Ideal, SPIN-PP, Flooding, and Gossiping protocols.]

Figure 9: Percent of total data acquired in the system over time for each protocol. (a) shows the entire time scale until all the protocols converge. (b) shows a blow-up of the first 0.22 seconds.
5.3.2 Energy Dissipated Over Time
For the previous set of simulations, we also measured the
energy dissipated by the network over time, as shown in
Figure
10.
These graphs show that gossiping again is the most costly
protocol; it requires much more energy than the other two
protocols to accomplish the same task. As stated before,
adding a small amount of state to the gossiping protocol
will dramatically reduce the total system energy usage.
Figure 10 also shows that SPIN-PP uses approximately a factor of 3.5 less energy than flooding. Thus, by sacrificing a small, constant offset in convergence time, SPIN-PP achieves a dramatic reduction in system energy. SPIN-PP is able to achieve this large reduction in energy since there is no wasted transmission of the large 500-byte data items.

[Figure: two panels plotting Energy Dissipated versus Time for the Ideal, SPIN-PP, Flooding, and Gossiping protocols.]

Figure 10: Total amount of energy dissipated in the system for each protocol. (a) shows the entire time scale until all the protocols converge. (b) shows a blow-up of the first 0.22 seconds.

We can see this advantage of the SPIN-PP protocol by looking at the message profiles for the different protocols, shown in Figure 11. The first three bars for each protocol show the number of data items transmitted throughout the network, the number of these data items that are redundant and thus represent wasteful transmission, and the number
of data items that are useful. The number of useful data
transmissions is the same for each protocol since the data
distribution is complete once every node has all the data.
The last three bars for each protocol show the number of
meta-data items transmitted and the number of these items
that are redundant and useful. These bars have a height of zero for ideal, flooding, and gossiping, since these protocols do not use meta-data transmissions. Note that the number
of useful meta-data transmissions for the SPIN-PP protocol
is three times the number of useful data transmissions, since
each data transmission in the SPIN-PP protocol requires
three messages with meta-data.
Flooding and gossiping nodes send out many more data items than SPIN-PP nodes. Furthermore, 77% of these data items are redundant for flooding and 96% of the data items are redundant for gossiping, and these redundant messages come at the high cost of 500 bytes each. SPIN-PP nodes also send out a large number of redundant messages; however, these redundant messages are meta-data messages. Meta-data messages come at a relatively low cost
and come with an important benefit: meta-data negotiation keeps SPIN-PP nodes from sending out even a single redundant data item.

Figure 11: Message profiles for the unlimited energy simulations. For each protocol, the bars show data items sent/received (total, redundant, and useful) and meta-data items sent/received (total, redundant, and useful). Notice that SPIN-PP does not send any redundant data messages.

Figure 12: Energy dissipation versus node degree for the unlimited energy simulations.
We plotted the average energy dissipated for each node
of a certain degree, as shown in Figure 12. This figure shows
that for all the protocols, the energy dissipated at each node
depends upon its degree. The repercussion of this finding is that if a high-degree node happens to lie upon a critical
path in the network, it may die out before other nodes
and partition the network. We believe that handling such
situations is an important area for improvement in all four
protocols.
The key results from these unlimited energy simulations
are summarized in Table 2.
5.4 Limited Energy Simulations
For this set of simulations, we limited the total energy in
the system to 1.6 Joules to determine how effectively each protocol uses its available energy. Figure 13 shows the data acquisition rate for the SPIN-PP, SPIN-EC, flooding, gossiping, and ideal protocols. This figure shows that SPIN-EC puts its available energy to best use and comes close to distributing the same amount of data as the ideal protocol. SPIN-EC is able to distribute 73% of the total data as compared with the ideal protocol, which distributes 85%. We note that SPIN-PP distributes 68%, flooding distributes 53%, and gossiping distributes only 38%.

Table 2: Key results of the unlimited energy simulations for the SPIN-PP, flooding, and gossiping protocols compared with the ideal data distribution protocol.

  Performance relative to ideal            SPIN-PP   Flooding   Gossiping
  Increase in energy dissipation           1.25x     4.5x       25.5x
  Increase in convergence time             90 ms     10 ms      3025 ms
  Slope of energy dissipation vs.
    node degree correlation line           1.25x     5x         25x
  % of total data messages redundant       0         77%        96%
Figure
14 shows the rate of energy dissipation for this
set of simulations. This plot shows that flooding uses all
its energy very quickly, whereas gossiping, SPIN-PP, and
SPIN-EC use the energy at a slower rate and thus are able
to remain operational for a longer period of time.
Figure
15 shows the number of data items acquired per
unit energy for each of the protocols. If the system energy is
limited to below 0.2 Joules, none of the protocols has enough
energy to distribute any data. With 0.2 Joules, the gossiping protocol is able to distribute a small amount of data; with somewhat more energy, the SPIN protocols begin to distribute data; and with 1.1 Joules, the flooding protocol begins to distribute the data. This shows that if the energy is very limited, the gossiping protocol can accomplish the most data distribution. However, if there is enough energy to get the flooding or one of the SPIN protocols started, these protocols deliver much more data per unit energy than gossiping. This graph also shows the advantage of SPIN-EC over SPIN-PP, which doesn't base any decisions on the current level of its resources. By making communication decisions based on the current level of the energy available to each node, SPIN-EC is able to distribute 10% more data per unit energy than SPIN-PP and 60% more data per unit energy than flooding.
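The resource-adaptive check that separates SPIN-EC from SPIN-PP can be sketched as a simple gate on each stage of the handshake. The threshold value, cost model, and function names below are our illustrative assumptions, not constants or code from the paper:

    /* Sketch: SPIN-EC participates in a protocol stage only if it believes
     * it can complete the remaining stages of the exchange without dropping
     * below a low-energy threshold. Names and values are assumptions. */
    #include <stdbool.h>

    typedef struct { double energy_j; } resource_mgr_t;

    #define LOW_ENERGY_THRESHOLD_J 0.05   /* illustrative, not from the paper */

    static bool can_complete(const resource_mgr_t *r, double exchange_cost_j) {
        return r->energy_j - exchange_cost_j > LOW_ENERGY_THRESHOLD_J;
    }

    /* Example gate: answer an ADV only if the node can also afford the REQ
     * it would send and the DATA message it would then receive. */
    bool spin_ec_should_request(const resource_mgr_t *r,
                                double req_tx_cost_j, double data_rx_cost_j) {
        return can_complete(r, req_tx_cost_j + data_rx_cost_j);
    }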
6 Broadcast Media Simulations
For our second study, we examined the use of SPIN protocols
in a single, shared-media channel. The nodes in this
model use the 802.11 MAC layer protocol to gain access to
the channel. Packets may be queued at the nodes themselves
or may be lost due to transmission errors or channel
collisions. We used this framework to compare the performance
of SPIN-BC, SPIN-RL,
ooding, and an ideal data
distribution protocol. We found that SPIN-RL is able to use
meta-data to successfully recover from packet losses, while
acquiring twice as much data per unit energy as
ooding.
Because
ooding does not have any built-in mechanisms for
providing reliability, it can not recover from packet losses
and never converges.
Figure 13: Percent of total data acquired in the system for each protocol when the total system energy is limited to 1.6 Joules. (The plot shows total data acquired in the sensor network over time; the curves include flooding and gossiping.)
Figure 14: Energy dissipated in the system for each protocol when the total system energy is limited to 1.6 Joules.
6.1 Simulation Implementation and Setup
We used monarch, a variant of the ns simulator, for all the simulations in this study. monarch [14] extends the functionality
of ns to enable the simulation of realistic wireless com-
munication. These extensions include a radio propagation
model and a detailed simulation of the IEEE 802.11 DCF
MAC protocol. We extended monarch's MobileNode class
to create wireless Resource-Adaptive Nodes. The only difference
between these Resource-Adaptive Nodes and those
described in Section 5 is that we replaced the wired Network
Interface shown in Figure 7 with a wireless 802.11 MAC in-
terface. We also made several modications to monarch's
built-in 802.11 MAC implementation in order to perform
our simulations. First, we modied the MAC implementation
to appropriately subtract energy from a node's Energy
Resource whenever it sends and receives a packet. Second,
we added a switch to the MAC layer that turns collisions and losses on or off.
The simulation testbed that we used in our second study
is the same as the testbed used in our first study. We used the same topology and radio characteristics as those given in Figure 8 and in Table 1.

Figure 15: Data acquired for a given amount of energy. SPIN-EC distributes 10% more data per unit energy than SPIN-PP and 60% more data per unit energy than flooding.

The only differences between these two studies are that packets in this study may experience queueing delays and, depending upon the test configuration, may also be lost due to multi-path fading or packet
collisions.
6.2 Simulations without Packet Losses
For the first set of simulations, we gave all the nodes a virtually infinite supply of energy, turned off collisions and losses, and ran each data distribution protocol until it converged.
6.2.1 Data Acquired Over Time
Figure 16 shows the amount of data acquired by the network over time for each of the protocols. These graphs show that SPIN-BC converges faster than flooding, and almost as quickly as the ideal protocol. The difference in convergence times between SPIN-BC and flooding can be explained by queueing delays in the network. Recall that in a broadcast network, each node must wait for the channel to become free in order to send out a packet. When many nodes in a small area have packets to send, these nodes queue up their packets while waiting for access to the channel. If some of these packets are redundant, then they cause other, useful packets in the network to wait needlessly in queues. Flooding does not provide any mechanisms to circumvent implosion and overlap and therefore sends out many useless packets, as shown in Figure 17. These packets cause unnecessary delays in the running time of the flooding algorithm.
6.2.2 Energy Dissipated Over Time
For the previous set of simulations, we also measured the
energy dissipated by the network over time, as shown in
Figure
18. These figures show that SPIN-BC reduces energy consumption by a factor of 1.6 over flooding. We can
see the advantage of the SPIN-BC protocol by examining
the message profiles for each protocol given in Figure 17.
Because these protocols all use broadcast, some redundant data transmissions are unavoidable, as illustrated by the ideal protocol's message profile.

Figure 16: Percent of total data acquired in the system over time in a lossless broadcast network.

Table 3: Key results of the broadcast network simulations compared with the ideal data distribution protocol.

  Performance relative to ideal          SPIN-BC       Flooding      SPIN-RL
                                         (no losses)   (no losses)   (losses)
  Increase in energy dissipation         1.6x          2.4x          1.6x
  Increase in convergence time           1.1x          2x            5x
  Slope of energy dissipation vs.
    node degree correlation line         .11x          1.67x         1.6x
  Total data messages received           1x            2.2x          .89x
  % of total data messages redundant     1.1x          1.8x          .96x

What this figure illustrates
is that, by sacrificing small amounts of energy sending meta-data
messages, SPIN-BC achieves a dramatic reduction in
wasted data messages and a corresponding reduction in system
energy and convergence time. Figure 20 further reinforces
these results, showing that SPIN-BC nodes acquire more data per unit energy expended than flooding.
The key results from these simulations are summarized in
Table
3.
6.3 Simulations with Packet Losses
For the second set of simulations, we gave all the nodes a virtually infinite supply of energy and allowed the MAC layer
to lose packets due to collisions and transmission errors. We
compare SPIN-RL, our reliable protocol, to SPIN-BC and
ooding. As a point of reference, we also compare SPIN-
RL to the ideal protocol, run in a lossless network. We ran
each protocol until it either converged or ceased to make any
progress towards converging.
Figure 17: Message profiles for each protocol in a lossless broadcast network. For each protocol, the bars show the number of data items received (total, redundant, and useful) and meta-data items received (total, redundant, and useful).
6.3.1 Data Acquired Over Time
Figure
21 shows the amount of data acquired by the network
over time for each of the protocols. Only three of the
protocols, namely SPIN-BC, SPIN-RL, and flooding, were
run on a lossy network. The ideal protocol was run on a
lossless network, and is provided as a best-case reference
point. Of the three protocols run on the lossy network,
SPIN-RL is the only protocol that will retransmit lost pack-
ets, and therefore is the only protocol that converges. It is
interesting to note that, although SPIN-BC outperformed
flooding in the lossless network, it does not perform as well as flooding in a lossy network. We can account for SPIN-BC's poor performance by the fact that SPIN-BC nodes must successfully send and receive three messages in order to move a piece of data over a hop in the network, whereas flooding nodes only have to send one. SPIN-BC's protocol is therefore three times more vulnerable to network losses than flooding, which explains the difference in behavior we see between Figures 16 and 21.
6.3.2 Energy Dissipated Over Time
For the previous set of simulations, we also measured the
energy dissipated by the network over time, as shown in
Figure
22. These figures show that, of all the protocols, SPIN-RL expends the most energy, only slightly more than flooding. We can account for the relative energy expenditure of each protocol by examining the message profiles, given in Figure 25. Of all the protocols, SPIN-RL nodes receive the most data messages, as well as the most meta-data messages. This extra expenditure is well justified, however, if we look at how it is put to use. Figure 24 shows the amount of data acquired per unit energy for each protocol. Using almost the same amount of energy, SPIN-RL is able to acquire twice the amount of data as flooding. The key results from these
simulations are summarized in Table 3.
Figure 18: Total amount of energy dissipated in the system for each protocol in a lossless broadcast network.

7 Related Work

Perhaps the most fundamental use of dissemination protocols in networking is in the context of routing table dissemination. For example, nodes in link-state protocols (such as
OSPF [15]) periodically disseminate their view of the network topology to their neighbors, as discussed in [10, 25]. Such protocols closely mimic the classic flooding protocol we described earlier.
There are generally two types of topologies used in wireless
networks: centralized control and peer-to-peer communications
[17]. The latter style is better suited for wireless
sensor networks than the former, given the ad hoc, decentralized
nature of such networks. Recently, mobile ad hoc
routing protocols have become an active area of research
[3, 11, 18, 20, 24]. While these protocols solve important problems, they address a different class of problems from the ones that arise in wireless sensor networks. In particular, we believe that sensor networks will benefit from application-controlled
negotiation-based dissemination protocols, such
as SPIN.
Routing protocols based on minimum-energy routing [12,
23] and other power-friendly algorithms have been proposed
in the literature [13]. We believe that such protocols will
be useful in wireless sensor networks, complementing SPIN
and enabling better resource adaptation. Recent advances
in operating system design [7] have made application-level
approaches to resource adaptation such as SPIN a viable
alternative to more traditional approaches.
Using gossiping and broadcasting algorithms to disseminate
information in distributed systems has been extensively
explored in the literature, often as epidemic algorithms [6].
In [1, 6], gossiping is used to maintain database consistency,
while in [19], gossiping is used as a mechanism to achieve
fault tolerance. A theoretical analysis of gossiping is presented
in [9]. Recently, such techniques have also been used
for resource discovery in networks [8].
Close in philosophy to the negotiation-based approach
of SPIN is the popular Network News Transfer Protocol
(NNTP) for Usenet news distribution on the Internet [2].
Here, news servers form neighborhoods and disseminate new
information between each other, using names and timestamps
as meta-data to negotiate data dissemination.
Figure 19: Energy dissipation versus node degree in a lossless broadcast network.

Figure 20: Energy dissipated versus data acquired in a lossless broadcast network.

Figure 21: Percent of total data acquired in the system over time for each protocol in a lossy broadcast network.

There has been a lot of recent interest in using IP multicast [5] as the underlying infrastructure to efficiently and reliably disseminate data from a source to many receivers [22] on the Internet. However, for the reasons described in
Section 4, we believe that enabling applications to control
routing decisions is a less complex and better approach for
wireless sensor networks.
8 Conclusions
In this paper, we introduced SPIN (Sensor Protocols for Information
via Negotiation), a family of data dissemination
protocols for wireless sensor networks. SPIN uses meta-data
negotiation and resource-adaptation to overcome several deficiencies in traditional dissemination approaches. Using
meta-data names, nodes negotiate with each other about
the data they possess. These negotiations ensure that nodes
only transmit data when necessary and never waste energy
on useless transmissions. Because they are resource-aware,
nodes are able to cut back on their activities whenever their
resources are low to increase their longevity.
We have discussed the details of four specific SPIN protocols: SPIN-PP and SPIN-EC for point-to-point networks, and SPIN-BC and SPIN-RL for broadcast networks. SPIN-PP is a 3-stage handshake protocol for disseminating data, and SPIN-EC is a version of SPIN-PP that backs off from communication at a low-energy threshold. SPIN-BC is a variant of SPIN-PP that takes advantage of cheap, MAC-layer broadcast, and SPIN-RL is a reliable version of SPIN-BC. Finally, we compared the SPIN-PP, SPIN-EC, SPIN-BC, and SPIN-RL protocols to flooding, gossiping, and ideal dissemination protocols using the ns simulation tool.
After examining SPIN in this paper, both qualitatively
and quantitatively, we arrive at the following conclusions:
- Naming data using meta-data descriptors and negotiating data transmissions using meta-data successfully solve the implosion and overlap problems described in Section 1.
- The SPIN protocols are simple and efficiently disseminate data, while maintaining only local information about their nearest neighbors. These protocols are well-suited for an environment where the sensors are mobile because they base their forwarding decisions on local neighborhood information.

Figure 22: Total amount of energy dissipated in the system for each protocol in a lossy broadcast network.
- In terms of time, SPIN-PP achieves comparable results to classic flooding protocols, and in some cases outperforms classic flooding. In terms of energy, SPIN-PP uses only about 25% as much energy as a classic flooding protocol. SPIN-EC is able to distribute 60% more data per unit energy than flooding. In all of our experiments, SPIN-PP and SPIN-EC outperformed gossiping. They also come close to an ideal dissemination protocol in terms of both time and energy under some conditions.
- Perhaps surprisingly, SPIN-BC and SPIN-RL are able to use one-to-many communications exclusively, while still acquiring data faster than flooding using less energy. Not only can SPIN-RL converge in the presence of network packet losses, it is able to acquire twice the amount of data per unit energy as flooding.
In summary, SPIN protocols hold the promise of achieving
high performance at a low cost in terms of complexity, en-
ergy, computation, and communication.
Although our initial work and results are promising, there
is still work to be done in this area. Though we have discussed
energy-conservation in terms of point-to-point media
and reliability in terms of broadcast media, we would
like to explore methods for combining these techniques for
both kinds of networks, and we do not believe this would be difficult to accomplish. We would also like to study SPIN
protocols in a mobile wireless network model. We expect
that these networks would challenge the speed and adaptiveness
of SPIN protocols in a way that stationary networks do
not. Finally, we would like to develop more sophisticated
resource-adaptation protocols to use available energy well.
In particular, we are interested in designing protocols that
make adaptive decisions based not only on the cost of communicating
data, but also the cost of synthesizing it. Such
resource-adaptive approaches may hold the key to making
compute-intensive sensor applications a reality in the future.
Figure 23: Energy dissipation versus node degree for each protocol in a lossy broadcast network.

Figure 24: Energy dissipated versus data acquired for each protocol in a lossy broadcast network. The symbol in the second graph highlights the last data point of the SPIN-BC line.

Figure 25: Message profiles for each protocol in a lossy broadcast network. For each protocol, the bars show data items received (total, redundant, and useful) and meta-data items received (total, redundant, and useful; meta-data counts scaled by 0.1).
Acknowledgments
We thank Wei Shi, who participated in the initial design
and evaluation of some of the work in this paper. We thank
Anantha Chandrakasan for his helpful comments and suggestions
throughout this work. We also thank Suchitra Raman
and John Wroclawski for several useful comments and
suggestions on earlier versions of this paper. This research
was supported in part by a research grant from the NTT
Corporation and in part by DARPA contract DAAN02-98-
K-0003. Wendi Heinzelman is supported by a Kodak Fellowship.
--R
Epidemic Algorithms in Replicated Databases.
Network News Transfer Protocol.
A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols.
Architectural Considerations for a New Generation of Protocols.
Multicast Routing in Datagram Internetworks and Extended LANs.
Epidemic Algorithms for Replicated Database Maintenance.
An operating system architecture for application-level resource management
Resource Discovery in Distributed Networks.
A Survey of Gossiping and Broadcasting in Communication Networks.
Routing in the Internet.
Routing in Ad Hoc Networks of Mobile Hosts.
Spectral Efficiency Considerations for Packet Radio.
Distributed Network Protocols for Wireless Communication.
Monarch Extensions to the ns-2 Network Simulator
Information Networks.
A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks.
Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers
Scalable Data Naming for Application Level Framing in Reliable Multicast.
Reliable Multicast Research Group.
A Channel Access Scheme for Large Dense Packet Radio Networks.
Routing in Communication Networks.
| negotiation-based protocols;energy-efficient protocols;meta-data;wireless sensor networks;information dissemination
507059 | The Impulse Memory Controller | Abstract: Impulse is a memory system architecture that adds an optional level of address indirection at the memory controller. Applications can use this level of indirection to remap their data structures in memory. As a result, they can control how their data is accessed and cached, which can improve cache and bus utilization. The Impulse design does not require any modification to processor, cache, or bus designs since all the functionality resides at the memory controller. As a result, Impulse can be adopted in conventional systems without major system changes. We describe the design of the Impulse architecture and how an Impulse memory system can be used in a variety of ways to improve the performance of memory-bound applications. Impulse can be used to dynamically create superpages cheaply, to dynamically recolor physical pages, to perform strided fetches, and to perform gathers and scatters through indirection vectors. Our performance results demonstrate the effectiveness of these optimizations in a variety of scenarios. Using Impulse can speed up a range of applications from 20 percent to over a factor of 5. Alternatively, Impulse can be used by the OS for dynamic superpage creation; the best policy for creating superpages using Impulse outperforms previously known superpage creation policies. | Introduction
Since 1987, microprocessor performance has improved at a rate of 55% per year; in contrast,
DRAM latencies have improved by only 7% per year, and DRAM bandwidths by only 15-20% per year [17]. The result is that the relative performance impact of memory accesses continues to
grow. In addition, as instruction issue rates increase, the demand for memory bandwidth grows at
least proportionately, possibly even superlinearly [8, 19]. Many important applications (e.g., sparse matrix, database, signal processing, multimedia, and CAD applications) do not exhibit sufficient locality of reference to make effective use of the on-chip cache hierarchy. For such applications, the growing processor/memory performance gap makes it more and more difficult to effectively exploit the tremendous processing power of modern microprocessors. In the Impulse project, we are attacking this problem by designing and building a memory controller that is more powerful than conventional ones.

Contact information: Prof. John Carter, School of Computing, 50 S Central Campus Drive, Room 3190, University of Utah, SLC, UT 84112-9205. retrac@cs.utah.edu. Voice: 801-585-5474. Fax: 801-581-5843.
Impulse introduces an optional level of address translation at the memory controller. The key insight that this feature exploits is that "unused" physical addresses can be translated to "real"
physical addresses at the memory controller. An unused physical address is a legitimate address
that is not backed by DRAM. For example, in a conventional system with 4GB of physical address
space and only 1GB of installed DRAM, 3GB of the physical address space remains unused. We
call these unused addresses shadow addresses, and they constitute a shadow address space that
the Impulse controller maps to physical memory. By giving applications control (mediated by the
OS) over the use of shadow addresses, Impulse supports application-specific optimizations that
restructure data. Using Impulse requires software modifications to applications (or compilers) and
operating systems, but requires no hardware modifications to processors, caches, or buses.
As a simple example of how Impulse's memory remapping can be used, consider a program
that accesses the diagonal elements of a large, dense matrix A. The physical layout of part of the
data structure A is shown on the right-hand side of Figure 1. On a conventional memory system,
each time the processor accesses a new diagonal element (A[i][i]), it requests a full cache line
of contiguous physical memory (typically 32-128 bytes of data on modern systems). The program
accesses only a single word of each of these cache lines. Such an access is shown in the top half
of
Figure
1.
Using Impulse, an application can configure the memory controller to export a dense, shadow-
space alias that contains just the diagonal elements, and can have the OS map a new set of virtual
addresses to this shadow memory. The application can then access the diagonal elements via the
new virtual alias. Such an access is shown in the bottom half of Figure 1.
Remapping the array diagonal to a dense alias yields several performance benefits. First, the
program enjoys a higher cache hit rate because several diagonal elements are loaded into the caches
at once. Second, the program consumes less bus bandwidth because non-diagonal elements are
not sent over the bus. Third, the program makes more effective use of cache space because the
diagonal elements now have contiguous shadow addresses. In general, Impulse's flexibility allows
applications to customize addressing to fit their needs.
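To make the diagonal example concrete, the following C sketch contrasts the two access patterns. The impulse_remap_strided call and its arguments are hypothetical stand-ins for the OS/controller interface described in Section 2, not the actual Impulse API:

    #include <stddef.h>

    #define N 1024
    static double A[N][N];

    /* Conventional traversal: each A[i][i] access pulls in a whole cache
     * line, of which only one word is used. */
    double diag_sum_conventional(void) {
        double sum = 0.0;
        for (size_t i = 0; i < N; i++)
            sum += A[i][i];
        return sum;
    }

    /* Hypothetical Impulse call: create a dense alias d such that d[k]
     * aliases the element k*stride_bytes past base. */
    void *impulse_remap_strided(void *base, size_t stride_bytes,
                                size_t elem_bytes, size_t n_elems);

    double diag_sum_impulse(void) {
        /* Successive diagonal elements are N+1 elements apart in memory. */
        double *diag = impulse_remap_strided(A, (N + 1) * sizeof(double),
                                             sizeof(double), N);
        double sum = 0.0;
        for (size_t i = 0; i < N; i++)
            sum += diag[i];   /* cache lines now contain only diagonal data */
        return sum;
    }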
Section 2 describes the Impulse architecture. It describes the organization of the memory
controller itself, as well as the system call interface that applications use to control it. The operating
system must mediate use of the memory controller to prevent applications from accessing each
other's physical memory.
Section 3 describes the types of optimizations that Impulse supports. Many of the optimizations
that we describe are not new, but Impulse is the first system that provides hardware support for
them all in general-purpose computer systems. The optimizations include transposing matrices in
memory without copying, creating superpages without copying, and doing scatter/gather through
an indirection vector. Section 4 presents the results of a simulation study of Impulse, and shows
that these optimizations can benefit a wide range of applications. Various applications see speedups
ranging from 20% to a factor of 5. OS policies for dynamic superpage creation using Impulse have
around 20% better speedup than those from prior work.
Section 5 describes related work. A great deal of work has been done in the compiler and
operating systems communities on related optimizations. The contribution of Impulse is that it
provides hardware support for many optimizations that previously had to be performed purely in
software. As a result, the tradeoffs for performing these optimizations are different. Section 6
summarizes our conclusions, and describes future work.
2 Impulse Architecture
Impulse expands the traditional virtual memory hierarchy by adding address translation hardware
to the memory controller. This optional extra level of remapping is enabled by the fact that not all
physical addresses in a traditional virtual memory system typically map to valid memory locations.
The unused physical addresses constitute a shadow address space. The technology trend is putting
more and more bits into physical addresses. For example, more and more 64-bit systems are
coming out. One result of this trend is that the shadow space is getting larger and larger. Impulse
allows software to configure the memory controller to interpret shadow addresses. Virtualizing
unused physical addresses in this way can improve the efficiency of on-chip caches and TLBs,
since hot data can be dynamically segregated from cold data.
Data items whose physical DRAM addresses are not contiguous can be mapped to contiguous
shadow addresses. In response to a cache line fetch of a shadow address, the memory controller
fetches and compacts sparse data into dense cache lines before returning the data to the proces-
sor. To determine where the data associated with these compacted shadow cache lines reside
in physical memory, Impulse first recovers their offsets within the original data structure, which
we call pseudo-virtual addresses. It then translates these pseudo-virtual addresses to physical
DRAM addresses. The pseudo-virtual address space page layout mirrors the virtual address space, allowing Impulse to remap data structures that lie across non-contiguous physical pages. The shadow -> pseudo-virtual -> physical mappings all take place within the memory controller. The
operating system manages all the resources in the expanded memory hierarchy and provides an
interface for the application to specify optimizations for particular data structures.
2.1 Software Interface and OS Support
To exploit Impulse, appropriate system calls must be inserted into the application code to configure
the memory controller. The Architecture and Language Implementation group at the University of
Massachusetts is developing compiler technology for Impulse. In response to an Impulse system
call, the OS allocates a range of contiguous virtual addresses large enough to map the elements
of the new (synthetic) data structure. The OS then maps the new data structure through shadow
memory to the corresponding physical data elements. It does so by allocating a contiguous range
of shadow addresses and downloading two pieces of information to the MMC: (i) a function that
the MMC should use to perform the mapping from shadow to pseudo-virtual space and (ii) a set of
page table entries that can be used to translate pseudo-virtual to physical DRAM addresses.
As an example, consider remapping the diagonal of an n x n matrix A[]. Figure 2 depicts
the memory translations for both the matrix A[] and the remapped image of its diagonal. Upon
seeing an access to a shadow address in the synthetic diagonal data structure, the memory controller
gathers the corresponding diagonal elements from the original array, packs them into a dense cache
line, and returns this cache line to the processor. The OS interface allows alignment and offset
characteristics of the remapped data structure to be specified, which gives the application some
control over L1 cache behavior. In the current Impulse design, coherence is maintained in software:
the OS or the application programmer must keep aliased data consistent by explicitly flushing the
cache.
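The two pieces of downloaded state might be represented as follows. This is a plausible sketch of a shadow descriptor's contents under our own naming, not the actual Impulse data structures:

    #include <stddef.h>
    #include <stdint.h>

    /* (i) The shadow -> pseudo-virtual mapping function, encoded here as a
     * kind plus parameters that the AddrCalc unit would interpret. */
    typedef struct {
        enum { MAP_DIRECT, MAP_STRIDED, MAP_SCATTER_GATHER } kind;
        uintptr_t pvaddr;       /* start of the pseudo-virtual image */
        size_t    stride;       /* element stride, for MAP_STRIDED */
        uintptr_t vec;          /* indirection vector, for scatter/gather */
        size_t    elem_size;
    } shadow_map_fn_t;

    /* (ii) Page-table entries for pseudo-virtual -> physical translation,
     * cached in the MTLB at the memory controller. */
    typedef struct {
        uintptr_t pv_page;
        uintptr_t phys_page;
    } mc_pte_t;

    /* One of the eight shadow descriptors in the shadow engine. */
    typedef struct {
        uintptr_t       shadow_base;
        size_t          shadow_len;
        shadow_map_fn_t fn;
        mc_pte_t       *ptes;
        size_t          n_ptes;
    } shadow_descriptor_t;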
Figure 1: Using Impulse to remap the diagonal of a dense matrix into a dense cache line. The black boxes represent data on the diagonal, whereas the gray boxes represent non-diagonal data.

Figure 2: Accessing the (sparse) diagonal elements of an array via a dense diagonal variable in Impulse.

Figure 3: Impulse memory controller organization.

2.2 Hardware Organization

The organization of the Impulse controller architecture is depicted in Figure 3. The critical component of the Impulse MMC is the shadow engine, which processes all shadow accesses. The shadow engine contains a small SRAM Assembly Buffer, which is used to scatter/gather cache lines in the shadow address space; some shadow descriptors to store remapping configuration information; an ALU unit (AddrCalc) to translate shadow addresses to pseudo-virtual addresses; and a Memory
Controller Translation Lookaside Buffer (MTLB) to cache recently used translations from pseudo-
virtual addresses to physical addresses. The shadow engine contains eight shadow descriptors, each of which is capable of saving all configuration settings for one remapping. All shadow
descriptors share the same ALU unit and the same MTLB.
Since the extra level of address translation is optional, addresses appearing on the memory bus
may be in the physical (backed by DRAM) or shadow memory spaces. Valid physical addresses
pass untranslated to the DRAM interface.
Shadow addresses must be converted to physical addresses before being presented to the DRAM.
To do so, the shadow engine first determines which shadow descriptor to use and passes its contents
to the AddrCalc unit. The output of the AddrCalc will be a series of offsets for the individual
sparse elements that need to be fetched. These offsets are passed through the MTLB to compute
the physical addresses that need to be fetched. To hide some of the latency of fetching remapped
data, each shadow descriptor can be configured to prefetch the remapped cache line following the
currently accessed one.
Depending on how Impulse is used to access a particular data structure, the shadow address
translations can take three forms: direct, strided, or scatter/gather. Direct mapping translates a
shadow address directly to a physical DRAM address. This mapping can be used to recolor physical
pages without copying or to construct superpages dynamically. Strided mapping creates dense
cache lines from array elements that are not contiguous. The mapping function maps an address soffset in shadow space to pseudo-virtual address pvaddr + stride * soffset, where pvaddr is the
starting address of the data structure's pseudo-virtual image. pvaddr is assigned by the OS upon
configuration. Scatter/gather mapping uses an indirection vector vec to translate an address soffset
in shadow space to pseudo-virtual address pvaddr + stride * vec[soffset].
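In software form, the three translation modes reduce to simple address arithmetic. The sketch below mirrors the formulas above; vec is assumed to hold element indices, and the final pseudo-virtual-to-physical step is left to the MTLB:

    #include <stddef.h>
    #include <stdint.h>

    /* Direct: the shadow offset is also the pseudo-virtual offset. */
    uintptr_t map_direct(uintptr_t pvaddr, size_t soffset) {
        return pvaddr + soffset;
    }

    /* Strided: gather every stride-th element into consecutive offsets. */
    uintptr_t map_strided(uintptr_t pvaddr, size_t stride, size_t soffset) {
        return pvaddr + stride * soffset;
    }

    /* Scatter/gather: indirect through vec, as in x'[k] -> x[COLUMN[k]]. */
    uintptr_t map_gather(uintptr_t pvaddr, size_t stride,
                         const size_t *vec, size_t soffset) {
        return pvaddr + stride * vec[soffset];
    }

    /* The MTLB then translates the resulting pseudo-virtual address to a
     * physical DRAM address, page-table style (not shown here). */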
3 Impulse Optimizations
Impulse remappings can be used to enable a wide variety of optimizations. We first describe how
Impulse's ability to pack data into cache lines (either using stride or scatter/gather remapping) can
be used. We examine two scientific application kernels-sparse matrix-vector multiply (SMVP)
and dense matrix-matrix product (DMMP)-and three image processing algorithms-image filter-
ing, image rotation, and ray tracing. We then show how Impulse's ability to remap pages can be
used to automatically improve TLB behavior through dynamic superpage creation. Some of these
results have been published in prior conference papers [9, 13, 44, 49].
3.1 Sparse Matrix-Vector Product
Sparse matrix-vector product (SMVP) is an irregular computational kernel that is critical to many
large scientific algorithms. For example, most of the time in conjugate gradient [3] or in the
simulations [33] is spent performing SMVP.
To avoid wasting memory, sparse matrices are generally compacted so that only non-zero elements
and corresponding index arrays are stored. For example, the Class A input matrix for
the NAS conjugate gradient kernel (CG-A) is 14,000 by 14,000, and contains only 1.85 million
non-zeroes. Although sparse encodings save tremendous amounts of memory, sparse matrix codes
tend to suffer from poor memory performance because data must be accessed through indirection
vectors. CG-A on an SGI Origin 2000 processor (which has a 2-way, 32K L1 cache and a 2-way L2 cache) exhibits L1 and L2 cache hit rates of only 63% and 92%, respectively.
The inner loop of the sparse matrix-vector product in CG is roughly:

    for i := 1 to n do
        sum := 0
        for j := ROWS[i] to ROWS[i+1]-1 do
            sum += DATA[j] * x[COLUMN[j]]
        y[i] := sum
The code and data structures for SMVP are illustrated in Figure 4. Each iteration multiplies a row of the sparse matrix A with the dense vector x. The accesses to x are indirect (via the
COLUMN index vector) and sparse, making this code perform poorly on conventional memory
systems. Whenever x is accessed, a conventional memory system fetches a cache line of data, of
which only one element is used. The large sizes of x, COLUMN, and DATA and the sparse nature
of accesses to x inhibit data reuse in the L1 cache. Each element of COLUMN or DATA is used
only once, and almost every access to x results in an L1 cache miss. A large L2 cache can enable
reuse of x, if physical data layouts can be managed to prevent L2 cache conflicts between A and x.
Unfortunately, conventional systems do not typically provide mechanisms for managing physical
layout.
The Impulse memory controller supports scatter/gather of physical addresses through indirection
vectors. Vector machines such as the CDC STAR-100 [18] provided scatter/gather capabilities
in hardware within the processor. Impulse allows conventional CPUs to take advantage of scat-
ter/gather functionality by implementing the operations at the memory, which reduces memory
traffic over the bus.
To exploit Impulse, CG's SMVP code can be modified as follows:
    impulse_remap(x, x', N, COLUMN, INDIRECT, ...)
    for i := 1 to n do
        sum := 0
        for j := ROWS[i] to ROWS[i+1]-1 do
            sum += DATA[j] * x'[j]
        y[i] := sum
The impulse_remap operation asks the operating system to 1) allocate a new region of
shadow space, 2) map x' to that shadow region, and 3) instruct the memory controller to map the
elements of the shadow region x'[k] to the physical memory for x[COLUMN[k]]. After the
remapped array has been set up, the code accesses the remapped version of the gathered structure
rather than the original structure (x).
This optimization improves the performance of SMVP in two ways. First, spatial locality is
improved in the L1 cache. Since the memory controller packs the gathered elements into cache
lines, each cache line contains 100% useful data, rather than only one useful element. Second, the processor issues fewer memory instructions, since the read of the indirection vector COLUMN occurs at the memory controller. Note that the use of scatter/gather at the memory controller reduces temporal locality in the L2 cache. The remapped elements of x' cannot be reused, since all of the elements have different addresses.

    for i := 1 to n do
        sum := 0
        for j := ROWS[i] to ROWS[i+1]-1 do
            sum += DATA[j] * x[COLUMN[j]]
        y[i] := sum

Figure 4: Conjugate gradient's sparse matrix-vector product. The matrix A is encoded using three dense arrays: DATA, ROWS, and COLUMN. The contents of A are in DATA. ROWS[i] indicates where the i-th row begins in DATA. COLUMN[i] indicates which column of A the element stored in DATA[i] comes from.
An alternative to scatter/gather is dynamic physical page recoloring through direct remapping
of physical pages. Physical page recoloring changes the physical addresses of pages so that
reusable data is mapped to a different part of a physically-addressed cache than non-reused data.
By performing page recoloring, conflict misses can be eliminated. On a conventional machine,
physical page recoloring is expensive: the only way to change the physical address of data is to
copy the data between physical pages. Impulse allows physical pages to be recolored without
copying. Virtual page recoloring has been explored by other authors [6].
For SMVP, the x vector is reused within an iteration, while elements of the DATA, ROW, and
COLUMN vectors are used only once in each iteration. As an alternative to scatter/gather of x at
the memory controller, Impulse can be used to physically recolor pages so that x does not conflict
with the other data structures in the L2 cache. For example, in the CG-A benchmark, x is over
100K bytes: it would not fit in most L1 caches, but would fit in many L2 caches.
Impulse can remap x to pages that occupy most of the physically-indexed L2 cache, and can
remap DATA, ROWS, and COLUMNS to a small number of pages that do not conflict with x. In our
experiments, we color the vectors x, DATA, and COLUMN so that they do not conflict in the L2
cache. The multiplicand vector x is heavily reused, so we color it to occupy the first half of the
L2 cache. To keep the large DATA and COLUMN structures from conflicting, we divide the second
half of the L2 cache into two quarters, and then color DATA and COLUMN so they each occupy one
quarter of the cache. In effect, we use pieces of the L2 cache as a set of virtual stream buffers [29]
for DATA, ROWS, and COLUMNS.
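The partitioning just described amounts to assigning page colors within the physically-indexed L2. The sketch below illustrates the arithmetic with assumed cache parameters; the actual sizes used in our experiments may differ:

    #include <stddef.h>

    /* Illustrative parameters: a 2-way, 512KB L2 with 4KB pages gives
     * 512KB / (2 * 4KB) = 64 page colors. */
    enum { L2_BYTES = 512 * 1024, L2_WAYS = 2, PAGE_BYTES = 4096 };
    enum { N_COLORS = L2_BYTES / (L2_WAYS * PAGE_BYTES) };

    /* x gets the first half of the colors; DATA and COLUMN each get one
     * quarter, so the reused vector never conflicts with the streamed arrays. */
    size_t color_for_x(size_t page)      { return page % (N_COLORS / 2); }
    size_t color_for_data(size_t page)   { return N_COLORS / 2 + page % (N_COLORS / 4); }
    size_t color_for_column(size_t page) { return 3 * N_COLORS / 4 + page % (N_COLORS / 4); }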
3.2 Tiled Matrix Algorithms
Dense matrix algorithms form an important class of scientific kernels. For example, LU decomposition
and dense Cholesky factorization are dense matrix computational kernels. Such algorithms
are tiled (or blocked) to increase their efficiency. That is, the iterations of tiled algorithms are
reordered to improve their memory performance. The difficulty with using tiled algorithms lies
in choosing an appropriate tile size [27]. Because tiles are non-contiguous in the virtual address
space, it is difficult to keep them from conflicting with each other or with themselves in cache. To
avoid conflicts, either tile sizes must be kept small, which makes inefficient use of the cache, or
tiles must be copied into non-conflicting regions of memory, which is expensive.
Impulse provides an alternative method of removing cache conflicts for tiles. We use the simplest
tiled algorithm, dense matrix-matrix product (DMMP), as an example of how Impulse can
improve the behavior of tiled matrix algorithms. Assume that we are computing C = A * B. We want to keep the current tile of the C matrix in the L1 cache as we compute it. In addition, since
the same row of the A matrix is used multiple times to compute a row of the C matrix, we would
like to keep the active row of A in the L2 cache.
Impulse allows base-stride remapping of the tiles from non-contiguous portions of memory
into contiguous tiles of shadow space. As a result, Impulse makes it easy for the OS to virtually
remap the tiles, since the physical footprint of a tile will match its size. If we use the OS to remap
the virtual address of a matrix tile to its new shadow alias, we can then eliminate interference in
a virtually-indexed L1 cache. First, we divide the L1 cache into three segments. In each segment
we keep a tile: the current output tile from C, and the input tiles from A and B. When we finish
with one tile, we use Impulse to remap the virtual tile to the next physical tile. To maintain cache
consistency, we must purge the A and B tiles and flush the C tiles from the caches whenever they
are remapped. As Section 4.1.2 shows, these costs are minor.
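A sketch of the resulting loop structure follows. Here impulse_remap_tile, cache_purge, and cache_flush are hypothetical placeholders for the OS remapping interface and the consistency actions described above:

    #include <stddef.h>

    enum { N = 1024, T = 64 };           /* matrix and tile dimensions */
    static double A[N][N], B[N][N], C[N][N];   /* C is zero-initialized */

    /* Hypothetical: map the T x T tile with top-left corner (r, c) to a
     * dense, contiguous alias in shadow space. */
    double *impulse_remap_tile(double (*m)[N], size_t r, size_t c);
    void cache_purge(void *alias);       /* discard clean aliased lines */
    void cache_flush(void *alias);       /* write back dirty aliased lines */

    void dmmp_tiled(void) {
        for (size_t i = 0; i < N; i += T)
            for (size_t j = 0; j < N; j += T) {
                double *ct = impulse_remap_tile(C, i, j);
                for (size_t k = 0; k < N; k += T) {
                    double *at = impulse_remap_tile(A, i, k);
                    double *bt = impulse_remap_tile(B, k, j);
                    for (size_t ii = 0; ii < T; ii++)
                        for (size_t jj = 0; jj < T; jj++)
                            for (size_t kk = 0; kk < T; kk++)
                                ct[ii * T + jj] += at[ii * T + kk] * bt[kk * T + jj];
                    cache_purge(at);     /* input tiles: purge when done */
                    cache_purge(bt);
                }
                cache_flush(ct);         /* output tile: flush when done */
            }
    }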
3.3 Image Filtering
Image filtering applies a numerical filter function to an image to modify its appearance. Image
filtering may be used to attenuate high-frequency components caused by noise in a sampled image,
to adjust an image to different geometry, to detect or enhance edges within an image, or to create
various special effects. Box, Bartlett, Gaussian, and binomial filters are common in practice. Each
modifies the input image in a different way, but all share similar computational characteristics.
We concentrate on a representative class of filters, binomial filters [15], in which each pixel
in the output image is computed by applying a two-dimensional "mask" to the input image. Binomial
filtering is computationally similar to a single step of a successive over-relaxation algorithm
for solving differential equations: the filtered pixel value is calculated as a linear function
of the neighboring pixel values of the original image and the corresponding mask values.
For example, for an order-5 binomial filter, the value of pixel (i, j) in the output image will be (1/256) * sum_{k=-2..2} sum_{l=-2..2} m[k] * m[l] * in[i+k][j+l], where m = [1, 4, 6, 4, 1] is indexed from -2 to 2. To avoid edge effects, the original image
boundaries must be extended before applying the masking function. Figure 5 illustrates a black-
and-white sample image before and after the application of a small binomial filter.
In practice, many filter functions, including binomial, are "separable," meaning that they are
symmetric and can be decomposed into a pair of orthogonal linear filters. For example, a two-dimensional
mask can be decomposed into two one-dimensional linear masks ([1 4 6 4 1], scaled by 1/16) - the two-dimensional mask is simply the outer product of this one-dimensional mask with its
transpose. the process of applying the mask to the input image can be performed by sweeping
first along the rows and then the columns, calculating a partial sum at each step. Each pixel in the
original image is used only for a short time, which makes filtering a pure streaming application.
Impulse can transpose both the input and output image arrays without copying, which gives the
column sweep much better cache behavior.
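The two sweeps might look as follows. impulse_transpose is a hypothetical call returning a no-copy transposed alias, and boundary extension is omitted for brevity:

    #include <stddef.h>

    enum { W = 1024, H = 1024 };
    static const float m[5] = { 1, 4, 6, 4, 1 };   /* order-5 binomial mask */

    /* One 1-D pass along rows: out[y][x] = (1/16) * sum_k m[k]*img[y][x+k]
     * (interior pixels only). */
    void row_sweep(const float *img, float *out, size_t h, size_t w) {
        for (size_t y = 0; y < h; y++)
            for (size_t x = 2; x + 2 < w; x++) {
                float s = 0;
                for (int k = -2; k <= 2; k++)
                    s += m[k + 2] * img[y * w + (x + k)];
                out[y * w + x] = s / 16.0f;
            }
    }

    /* Hypothetical: return a no-copy, transposed shadow-space alias. */
    float *impulse_transpose(float *img, size_t h, size_t w);

    void binomial_filter(float *img, float *tmp) {
        row_sweep(img, tmp, H, W);                  /* horizontal pass */
        /* The vertical pass becomes another row sweep over transposed
         * aliases, so the column walk behaves like a row walk in memory. */
        row_sweep(impulse_transpose(tmp, H, W),
                  impulse_transpose(img, H, W), W, H);
    }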
3.4 Image Rotation
Image warping refers to any algorithm that performs an image-to-image transformation. Separable
image warps are those that can be decomposed into multiple one-dimensional transformations [10].
For separable warps, Impulse can be used to improve the cache and TLB performance of one-dimensional
traversals orthogonal to the image layout in memory. The three-shear image rotation
algorithm is an example of a separable image warp. This algorithm rotates a 2-dimensional image
around its center in three stages, each of which performs a "shear" operation on the image, as illustrated in Figure 6.

Figure 5: Example of binomial image filtering. The original image is on the left, and the filtered image is on the right.

The algorithm is simpler to write, faster to run, and has fewer visual
artifacts than a direct rotation. The underlying math is straightforward. Rotation through an angle theta can be expressed as matrix multiplication:

    [ x' ]   [ cos(theta)  -sin(theta) ] [ x ]
    [ y' ] = [ sin(theta)   cos(theta) ] [ y ]

The rotation matrix can be broken into three shears as follows:

    [ cos(theta)  -sin(theta) ]   [ 1  -tan(theta/2) ] [ 1           0 ] [ 1  -tan(theta/2) ]
    [ sin(theta)   cos(theta) ] = [ 0       1        ] [ sin(theta)  1 ] [ 0       1        ]
None of the shears requires scaling (since the determinant of each matrix is 1), so each involves
just a shift of rows or columns. Not only is this algorithm simple to understand and implement, it
is robust in that it is defined over all rotation values from 0 to 90 degrees. Two-shear rotations fail for angles near 90 degrees.
We assume a simple image representation of an array of pixel values. The second shear operation
(along the y axis) walks along the column of the image matrix, which gives rise to poor
memory performance for large images. Impulse improves both cache and TLB performance by
transposing the matrix without copying, so that walking along columns in the image is replaced by
walking along rows in a transposed matrix.
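A sketch of the second shear over a transposed alias follows. Again, impulse_transpose is hypothetical, and edge filling is omitted for brevity:

    #include <math.h>
    #include <stddef.h>

    enum { NPIX = 1024 };

    float *impulse_transpose(float *img, size_t h, size_t w);  /* hypothetical */

    /* Vertical shear y' = y + x * sin(theta): shifting each column is a
     * column walk in the original image, but a row walk in the alias. */
    void vertical_shear(float *img, double theta) {
        float *t = impulse_transpose(img, NPIX, NPIX);  /* t[x][y] aliases img[y][x] */
        for (size_t x = 0; x < NPIX; x++) {
            ptrdiff_t shift = (ptrdiff_t)lround((double)x * sin(theta));
            float *row = &t[x * NPIX];        /* row x of alias = column x of img */
            if (shift > 0)
                for (ptrdiff_t y = NPIX - 1; y >= shift; y--)
                    row[y] = row[y - shift];
            else if (shift < 0)
                for (ptrdiff_t y = 0; y < NPIX + shift; y++)
                    row[y] = row[y - shift];
        }
    }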
Figure 6: Three-shear rotation of an image counter-clockwise through one radian. The original image (upper left) is first sheared horizontally (upper right). That image is sheared upwards (lower right). The final rotated image (lower left) is generated via one final horizontal shift.

3.5 Isosurface Rendering Using Ray Tracing

Our isosurface rendering benchmark is based on the technique demonstrated by Parker et al. [37]. This benchmark generates an image of an isosurface in a volume from a specific point of view. In contrast to other volume visualization methods, this method does not generate an explicit representation
of the isosurface and render it with a z-buffer, but instead uses brute-force ray tracing to
perform interactive isosurfacing. For each ray, the first isosurface intersected determines the value
of the corresponding pixel. the approach has a high intrinsic computational cost, but its simplicity
and scalability make it ideal for large data sets on current high-end systems.
Traditionally, ray tracing has not been used for volume visualization because it suffers from
poor memory behavior when rays do not travel along the direction that data is stored. Each ray
must be traced through a potentially large fraction of the volume, giving rise to two problems. First,
many memory pages may need to be touched, which results in high TLB pressure. Second, a ray
with a high angle of incidence may visit only one volume element (voxel) per cache line, in which
case bus bandwidth will be wasted loading unnecessary data that pollutes the cache. By carefully
hand-optimizing their ray tracer's memory access patterns, Parker et al. achieve acceptable
performance for interactive rendering (about 10 frames per second). They improve data locality
by organizing the data set into a multi-level spatial hierarchy of tiles, each composed of smaller
cells. The smaller cells provide good cache-line utilization. "Macro cells" are created to cache the
minimum and maximum data values from the cells of each tile. These macro cells enable a simple
min/max comparison to detect whether a ray intersects an isosurface within the tile. Empty macro
cells need not be traversed.
Careful hand-tiling of the volume data set can yield much better memory performance, but
choosing the optimal number of levels in the spatial hierarchy and sizes for the tiles at each level is
difficult, and the resulting code is hard to understand and maintain. Impulse can deliver better performance
than hand-tiling at a lower programming cost. There is no need to preprocess the volume
data set for good memory performance: the Impulse memory controller can remap it dynamically.
Figure 7: Isosurface rendering using ray tracing. The picture on the left shows rays perpendicular to the viewing screen being traced through a volume. The one on the right illustrates how each ray visits a sequence of voxels in the volume; Impulse optimizes voxel fetches from memory via indirection vectors representing the voxel sequences for each ray.
In addition, the source code retains its readability and modifiability.
Like many real-world visualization systems, our benchmark uses an orthographic tracer whose
rays all intersect the screen surface at right angles, producing images that lack perspective and
appear far away, but are relatively simple to compute.
We use Impulse to extract the voxels that a ray potentially intersects when traversing the vol-
ume. The right-hand side of Figure 7 illustrates how each ray visits a certain sequence of voxels in
the volume. Instead of fetching cache lines full of unnecessary voxels, Impulse can remap a ray to
the voxels it requires so that only useful voxels will be fetched.
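To make the indirection-vector idea concrete, here is a minimal sketch in C of the setup step as we understand it: the offsets of the voxels a ray visits are precomputed, and the controller then gathers volume[offsets[i]] into a dense shadow array so the processor touches only useful voxels. All names are illustrative, not part of a real Impulse API, and the layout arithmetic assumes the x-y-z storage order used in the experiments of Section 4.1.5.

#include <stddef.h>

/* Offsets for a ray stepping one voxel at a time from (x0,y0,z0)
   along integer direction (dx,dy,dz) in an nx-by-ny-by-nz volume
   stored in x-y-z order (x varies fastest). */
static void build_ray_offsets(size_t *offsets, int nsteps,
                              int x0, int y0, int z0,
                              int dx, int dy, int dz,
                              int nx, int ny)
{
    for (int i = 0; i < nsteps; i++) {
        int x = x0 + i * dx, y = y0 + i * dy, z = z0 + i * dz;
        offsets[i] = ((size_t)z * ny + y) * nx + x;
    }
}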
3.6 Online Superpage Promotion
Impulse can be used to improve TLB performance automatically, by having the operating system create superpages dynamically. Superpages are supported by the translation lookaside buffers (TLBs) on almost all modern processors; they are groups of contiguous virtual memory pages that can be mapped with a single TLB entry [12, 30, 43]. Using superpages makes more efficient use of a TLB, but the physical pages that back a superpage must be contiguous and properly aligned. Dynamically coalescing smaller pages into a superpage thus requires either that all the pages be coincidentally adjacent and aligned (which is unlikely) or that they be copied so that they become so. The overhead of promoting superpages by copying includes both direct and indirect costs. The direct costs come from copying the pages and changing the mappings.
Indirect costs include the increased number of instructions executed on each TLB miss (due to the new decision-making code in the miss handler) and the increased contention in the cache hierarchy (due to the code and data used in the promotion process). When deciding whether to create superpages, all costs must be balanced against the improvements in TLB performance.
Figure 8: An Example of Creating Superpages Using Shadow Space. (The figure shows the processor TLB mapping contiguous virtual pages to a shadow superpage, which the memory controller retranslates to scattered physical pages.)
Romer et al. [40] study several different policies for dynamically creating superpages. Their
trace-driven simulations and analysis show how a policy that balances potential performance benefits
and promotion overheads can improve performance in some TLB-bound applications by about
50%. Our work extends that of Romer et al. by showing how Impulse changes the design of a
dynamic superpage promotion policy.
The Impulse memory controller maintains its own page tables for shadow memory mappings. Building superpages from base pages that are not physically contiguous entails simply remapping the virtual pages to properly aligned shadow pages. The memory controller then maps the shadow pages to the original physical pages. The processor's TLB is not affected by the extra level of translation that takes place at the controller.
Figure 8 illustrates how superpage mapping works on Impulse. In this example, the OS has
mapped a contiguous 16KB virtual address range to a single shadow superpage at "physical" page
frame 0x80240. When an address in the shadow physical range is placed on the system memory
bus, the memory controller detects that this "physical" address needs to be retranslated using its
local shadow-to-physical translation tables. In the example in Figure 8, the processor translates
an access to virtual address 0x00004080 to shadow physical address 0x80240080, which the
controller, in turn, translates to real physical address 0x40138080.
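As a concrete rendering of this walkthrough, the following is a minimal sketch (ours, not Impulse's actual hardware tables) of the controller-side lookup: the shadow frame number indexes a small table whose entry supplies the real physical frame, and the page offset passes through unchanged.

#include <stdint.h>

#define PAGE_SHIFT 12                     /* 4KB base pages, per the text */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

typedef struct {
    uint32_t shadow_frame;                /* e.g., 0x80240 */
    uint32_t physical_frame;              /* e.g., 0x40138 */
} ShadowEntry;

/* A linear search stands in for the controller's real table walk. */
static uint32_t shadow_to_physical(const ShadowEntry *tbl, int n,
                                   uint32_t shadow_addr)
{
    uint32_t frame = shadow_addr >> PAGE_SHIFT;
    for (int i = 0; i < n; i++)
        if (tbl[i].shadow_frame == frame)
            return (tbl[i].physical_frame << PAGE_SHIFT)
                 | (shadow_addr & PAGE_MASK);
    return shadow_addr;                   /* not a shadow address */
}

With an entry mapping shadow frame 0x80240 to physical frame 0x40138, shadow_to_physical() turns 0x80240080 into 0x40138080, matching the example above.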
4 Performance
We performed a series of detailed simulations to evaluate the performance impact of the optimizations
described in Section 3. Our studies use the URSIM [48] execution-driven simulator, which is derived from RSIM [35]. URSIM models a microarchitecture close to the MIPS R10000 microprocessor [30] with a 64-entry instruction window. We configured it to issue four instructions per cycle. We model a 64-kilobyte L1 data cache that is non-blocking, write-back, virtually indexed, physically tagged, direct-mapped, and has 32-byte lines. The 512-kilobyte L2 data cache is non-blocking, write-back, physically indexed, physically tagged, two-way associative, and has 128-byte lines. L1 cache hits take one cycle, and L2 cache hits take eight cycles.
URSIM models a split-transaction MIPS R10000 cluster bus with a snoopy coherence protocol. The bus multiplexes addresses and data, is eight bytes wide, and has a three-cycle arbitration delay and a one-cycle turn-around time. We model two memory controllers: a conventional high-performance MMC based on the one in the SGI O200 server, and the Impulse MMC. The system bus, memory controller, and DRAMs have the same clock rate, which is one third of the CPU clock's. The memory system supports critical word first; i.e., a stalled memory instruction resumes execution after the first quad-word returns. The load latency of the first quad-word is cycles.
The TLB is unified (combined instruction and data), single-cycle, fully associative, and software-managed. It employs a least-recently-used replacement policy. The base page size is 4096 bytes.
Superpages are built in power-of-two multiples of the base page size, and the biggest superpage
that the TLB can map contains 2048 base pages. We model a 128-entry TLB.
In the remainder of this section we examine the simulated performance of Impulse on the
examples given in Section 3. Our calculation of "L2 cache hit ratio" and "mem (memory) hit
ratio" uses the total number of loads executed (not the total number of L2 cache accesses) as the
divisor for both ratios. This formulation makes it easier to compare the effects of the L1 and L2
caches on memory accesses: the sum of the L1 cache, L2 cache, and memory hit ratios equals
100%.
4.1 Fine-Grained Remapping
The first set of experiments exploits Impulse's fine-grained remapping capabilities to create synthetic data structures with better locality than in the original programs.
4.1.1 Sparse Matrix-Vector Product
Table 1 shows how Impulse can be used to improve the performance of the NAS Class A Conjugate Gradient (CG-A) benchmark. The first column gives results from running CG-A on a non-Impulse system. The second and third columns give results from running CG-A on an Impulse system: the second column numbers come from using the Impulse memory controller to perform scatter/gather, and the third column numbers come from using it to perform physical page coloring.
On the conventional memory system, CG-A suffers many cache misses: nearly 17% of accesses go to memory. The inner loop of CG-A is very small, so it can generate cache misses quickly, which leads to a large number of cache misses being outstanding at any given time. The large number of outstanding memory operations causes heavy contention for the system bus, memory controller, and DRAMs; for the baseline version of CG-A, bus utilization reaches 88.5%. As a result, the average latency of a memory operation reaches 163 cycles for the baseline version of CG-A. This behavior, combined with the high cache miss rates, causes the average load in CG-A to take 47.6 cycles, compared to only 1 cycle for L1 cache hits.
Scatter/gather remapping on CG-A improves performance by over a factor of 3, largely due
to the increase in the L1 cache hit ratio and the decrease in the number of loads/stores that go
to memory. Each main memory access for the remapped vector x' loads the cache with several
useful elements from the original vector x, which increases the L1 cache hit rate. In other words,
retrieving elements from the remapped array x' improves the spatial locality of CG-A.
Scatter/gather remapping reduces the total number of loads executed by the program from 493 million to 353 million. In the original program, two loads are issued to compute x[COLUMN[j]].
In the scatter/gather version of the program, only one load is issued by the processor, because the
load of the indirection vector occurs at the memory controller. This reduction more than compensates
for the scatter/gather's increase in the average cost of a load, and accounts for almost
one-third of the cycles saved.
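The loop structure being described can be sketched as follows; this is our reconstruction of the CG-A inner loop with illustrative names (ROWS, COLUMN, x_prime), not the benchmark's exact source. In the remapped version the controller dereferences the indirection vector, so x_prime[j] behaves like x[COLUMN[j]] while the processor issues only a single load.

/* Original sparse matrix-vector inner loop: two loads per element. */
void spmv_original(int n, const int *ROWS, const int *COLUMN,
                   const double *A, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = ROWS[i]; j < ROWS[i + 1]; j++)
            sum += A[j] * x[COLUMN[j]];  /* load COLUMN[j], then x[...] */
        y[i] = sum;
    }
}

/* Scatter/gather version: setup of the shadow array x_prime is
   elided and hypothetical; the gather happens in the controller. */
void spmv_remapped(int n, const int *ROWS, const double *A,
                   const double *x_prime, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = ROWS[i]; j < ROWS[i + 1]; j++)
            sum += A[j] * x_prime[j];    /* one load; gather in the MMC */
        y[i] = sum;
    }
}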
To provide another example of how useful Impulse can be, we use it to recolor the pages of the major data structures in CG-A. Page recoloring consistently reduces the cost of memory accesses by eliminating conflict misses in the L2 cache and increasing the L2 cache hit ratio from 19.7% to 22.0%. As a result, fewer loads go to memory, and performance is improved by 17%.
Although page recoloring improves performance on CG-A, it is not nearly as effective as scatter/gather. The difference is primarily because page recoloring does not achieve the two major improvements that scatter/gather provides: improving the locality of CG-A and reducing the number of loads executed. This comparison does not mean that page recoloring is not a useful optimization. Although the speedup for page recoloring on CG-A is substantially less than scatter/gather's, page recoloring is more broadly applicable.
4.1.2 Dense Matrix-Matrix Product
This section examines the performance benefits of tile remapping for DMMP, and compares the
results to software tile copying. Impulse's alignment restrictions require that remapped tiles be
aligned to L2 cache line boundaries, which adds the following constraints to our matrices:
Tile sizes must be a multiple of a cache line. In our experiments, this size is 128 bytes. This
constraint is not overly limiting, especially since it makes the most efficient use of cache
space.
Arrays must be padded so that tiles are aligned to 128 bytes. Compilers can easily support
this constraint: similar padding techniques have been explored in the context of vector
processors [7].
Table 2 illustrates the results of our tiling experiments. The baseline is the conventional no-copy tiling. Software tile copying outperforms the baseline code by almost 10%; Impulse tile remapping outperforms it by more than 20%. The improvement in performance for both is primarily due to the difference in cache behavior. Both copying and remapping more than double the L1 cache hit rate, and they reduce the average number of cycles for a load to less than two.
Impulse has a higher L1 cache hit ratio than software copying, since copying tiles can incur cache
misses: the number of loads that go to memory is reduced by two-thirds. In addition, the cost of
copying the tiles is greater than the overhead of using Impulse to remap tiles. As a result, using
Impulse provides twice as much speedup.
This comparison between conventional and Impulse copying schemes is conservative for several
reasons. Copying works particularly well on DMMP: the number of operations performed on
a tile of size O(n 2 ) is O(n 3 ), which means the overhead of copying is relatively low. For algorithms
where the reuse of the data is lower, the relative overhead of copying is greater. Likewise,
as caches (and therefore tiles) grow larger, the cost of copying grows, whereas the (low) cost of
Impulse's tile remapping remains fixed. Finally, some authors have found that the performance
of copying can vary greatly with matrix size, tile size, and cache size [45], but Impulse should be
insensitive to cross-interference between tiles.
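For reference, the software tile-copying baseline that Impulse is compared against looks roughly like the sketch below (illustrative sizes and a GCC-style alignment attribute; not the benchmark's exact code). Impulse's tile remapping removes the copy loop entirely by making a cache-aligned shadow region alias the tile's scattered cache lines.

#define N    512
#define TILE 32

/* Contiguous, 128-byte-aligned buffer that receives one tile. */
static double copybuf[TILE][TILE] __attribute__((aligned(128)));

/* Copy one TILE x TILE tile of B; the multiply then reads copybuf
   instead of B, trading copy instructions for fewer conflict misses. */
static void copy_tile(const double B[N][N], int bi, int bj)
{
    for (int i = 0; i < TILE; i++)
        for (int j = 0; j < TILE; j++)
            copybuf[i][j] = B[bi + i][bj + j];
}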
                Conventional   Scatter/Gather   Page Coloring
Time            5.48B          1.77B            4.67B
L2 hit ratio    19.7%          15.9%            22.0%
mem hit ratio   16.9%          6.3%             14.3%
avg load time   47.6           23.2             38.7
loads           493M           353M             493M
speedup         -              3.10             1.17

Table 1: Simulated results for the NAS Class A conjugate gradient benchmark, using two different optimizations. Times are in processor cycles.
                Conventional   Software copying   Impulse
Time            664M           610M               547M
L1 hit ratio    49.6%          98.6%              99.5%
L2 hit ratio    48.7%          1.1%               0.4%
mem hit ratio   1.7%           0.3%               0.1%
avg load time   6.68           1.71               1.46

Table 2: Simulated results for tiled matrix-matrix product. Times are in millions of cycles. The matrices are 512 by 512, with 32 by 32 tiles.
                                 Tiled     Impulse
Time                             459M      237M
L1 hit ratio                     98.95%    99.7%
mem hit ratio                    0.24%     0.05%
avg load time                    1.57      1.16
issued instructions (total)      725M      290M
graduated instructions (total)   435M      280M
issued instructions (TLB)        256M      7.8M
graduated instructions (TLB)     134M      3.3M

Table 3: Simulated results for image filtering with various memory system configurations. Times are in processor cycles. TLB misses are the number of user data misses.
4.1.3 Image Filtering
Table 3 presents the results of an order-129 binomial filter on a 32x1024 color image. The Impulse version of the code pads each pixel to four bytes. Performance differences between the hand-tiled and Impulse versions of the algorithm arise from the vertical pass over the data. The tiled version suffers more than 3.5 times as many L1 cache misses and 40 times as many TLB faults, and executes 134 million instructions in TLB miss handlers. The indirect impact of the high TLB miss rate is even more dramatic - in the baseline filtering program, almost 300 million instructions are issued but not graduated. In contrast, the Impulse version of the algorithm executes only 3.3 million instructions handling TLB misses, and only 10 million instructions are issued but not graduated. Compared to these dramatic performance improvements, the less than 1 million cycles spent setting up Impulse remapping are a negligible overhead.
Although both versions of the algorithm touch each data element the same number of times, Impulse improves the memory behavior of the image filtering code in two ways. When the original algorithm performs the vertical filtering pass, it touches more pages per iteration than the processor TLB can hold, yielding the high kernel overhead observed in these runs. Image cache lines conflicting within the L1 cache further degrade performance. Since the Impulse version of the code accesses (what appear to the processor to be) contiguous addresses, it suffers very few TLB faults and has near-perfect temporal and spatial locality in the L1 cache.
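The effect of the remapping on the vertical pass can be sketched as follows, assuming (as the text states) pixels padded to four bytes and, hypothetically, a controller-provided transposed view of the image. A simple 3-tap vertical smoothing stands in for the order-129 binomial filter.

#include <stddef.h>
#include <stdint.h>

/* 'shadow' is a transposed view of a w x h image, so each image
   column is contiguous here: the inner loop walks memory with unit
   stride, touching one page run per column instead of one page per
   pixel.  Pixels are treated as single 4-byte channels for brevity. */
void vertical_pass(const uint32_t *shadow, uint32_t *out, int w, int h)
{
    for (int x = 0; x < w; x++) {
        const uint32_t *col = shadow + (size_t)x * h;
        for (int y = 1; y + 1 < h; y++)
            out[(size_t)x * h + y] =
                (col[y - 1] + 2u * col[y] + col[y + 1]) / 4u;
    }
}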
4.1.4 Three-Shear Image Rotation
Table 4 illustrates performance results for rotating a color image clockwise through one radian. The image contains 24 bits of color information, as in a ".ppm" file. We measure three versions of this benchmark: the original version, adapted from Wolberg [47]; a hand-tiled version of the code, in which the vertical shear's traversal is blocked; and a version adapted to Impulse, in which the matrices are transposed at the memory controller. The Impulse version requires that each pixel be padded to four bytes, since Impulse operates on power-of-two object sizes. To quantify the performance effect of padding, we measure the results for the non-Impulse versions of the code using both three-byte and four-byte pixels.
The performance differences among the different versions are entirely due to cycles saved during the vertical shear. The horizontal shears exhibit good memory behavior (in row-major layout), and so are not a performance bottleneck. Impulse increases the cache hit rate from roughly 95% to 98.5% and reduces the number of TLB misses by two orders of magnitude.
                                 Original   Original   Tiled    Tiled      Impulse
                                            (padded)            (padded)
Time                             572M       576M       284M     278M       215M
L1 hit ratio                     95.0%      94.8%      98.1%    97.6%      98.5%
L2 hit ratio                     1.5%       1.6%       1.1%     1.5%       1.1%
mem hit ratio                    3.5%       3.6%       0.8%     0.9%       0.4%
avg load time                    3.85       3.85       1.81     2.19       1.50
issued instructions (total)      476M       477M       300M     294M       232M
graduated instructions (total)   346M       346M       262M     262M       229M
issued instructions (TLB)        212M       215M       52M      51M        0.81M
graduated instructions (TLB)     103M       104M       24M      24M        0.42M

Table 4: Simulation results for performing a 3-shear rotation of a 1k-by-1k 24-bit color image. Times are in processor cycles. TLB misses are user data misses.
This reduction in the TLB miss rate eliminates 99 million TLB-miss-handling instructions and reduces the number of issued but not graduated instructions by over 100 million. These two effects constitute most of Impulse's benefit.
The tiled version walks through all columns 32 pixels at a time, which yields a hit rate higher than the original program's, but lower than Impulse's. The tiles in the source matrix are sheared in the destination matrix, so even though cache performance for the source is nearly perfect, it suffers for the destination. For the same reason, the decrease in TLB misses for the tiled code is not as great as that for the Impulse code.
The Impulse code requires 33% more memory to store a 24-bit color image. We also measured the performance impact of using padded 32-bit pixels with each of the non-Impulse codes. In the original program, padding causes each cache line fetch to load useless pad bytes, which degrades the performance of a program that is already memory-bound. In contrast, for the tiled program, the increase in memory traffic is balanced by the reduction in load, shift, and mask operations: manipulating word-aligned pixels is faster than manipulating byte-aligned pixels. The padded, tiled version of the rotation code is still slower than Impulse. The tiled version of the shear uses more cycles recomputing (or saving and restoring) each column's displacement when traversing the tiles. For our input image, this displacement is computed 1024/32 = 32 times, since the column length is 1024 and the tile height is 32. In contrast, the Impulse code (which is not tiled) only computes each column's displacement once, since each column is completely traversed when it is visited.
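For context, the three-shear decomposition that all of these versions implement (after Wolberg [47]) factors a rotation by angle t into a horizontal shear, a vertical shear, and a second horizontal shear. The sketch below computes the per-row and per-column displacements, the quantity the tiled version keeps recomputing; resampling and boundary handling are omitted, and the names are ours.

#include <math.h>

/* rotate(t) = hshear(-tan(t/2)) o vshear(sin t) o hshear(-tan(t/2)).
   Each shear only shifts whole rows (or columns), which is what keeps
   the passes cheap; hshift[y] and vshift[x] hold the displacements. */
void shear_displacements(double t, int w, int h, int *hshift, int *vshift)
{
    double a = -tan(t / 2.0), b = sin(t);
    for (int y = 0; y < h; y++)          /* horizontal shears: per row */
        hshift[y] = (int)lround(a * (y - h / 2));
    for (int x = 0; x < w; x++)          /* vertical shear: per column */
        vshift[x] = (int)lround(b * (x - w / 2));
}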
4.1.5 Isosurface Rendering Using Ray Tracing
For simplicity, our benchmark assumes that the screen plane is parallel to the volume's z axis. As a result, we can compute an entire plane's worth of indirection vector at once, and we do not need to remap addresses for every ray. This assumption is not a large restriction: it assumes the use of a volume rendering algorithm like Lacroute's [26], which transforms arbitrary viewing angles into angles that have better memory performance. The isosurface in the volume is on one edge of the surface, parallel to the x-z plane.
The measurements we present are for two particular viewing angles. Table 5(A) shows results when the screen is parallel to the y-z plane, so that the rays exactly follow the layout of voxels in memory (we assume an x-y-z layout order). Table 5(B) shows results when the screen is parallel to the x-z plane, where the rays exhibit the worst possible cache and TLB behavior when traversing the x-y planes. These two sets of data points represent the extremes in memory performance for the ray tracer.
In our data, the measurements labeled "Original" are of a ray tracer that uses macro-cells to reduce the number of voxels traversed, but that does not tile the volume. The macro-cells are 4x4x4 voxels in size. The results labeled "Indirection" use macro-cells and address voxels through an indirection vector. The indirection vector stores precomputed voxel offsets of each x-y plane. Finally, the results labeled "Impulse" use Impulse to perform the indirection lookup at the memory controller.
In Table 5(A), where the rays are parallel to the array layout, Impulse delivers a substantial performance gain. Precomputing the voxel offsets reduces execution time by approximately 9 million cycles. The experiment reported in the Indirection column exchanges the computation of voxel offsets for accesses to the indirection vector. Although it increases the number of memory loads, it still achieves positive speedup because most of those accesses are cache hits. With Impulse, the accesses to the indirection vector are performed only within the memory controller, which hides the access latencies. Consequently, Impulse obtains a higher speedup. Compared to the original version, Impulse saves the computation of voxel offsets.
In Table 5(B), where the rays are perpendicular to the voxel array layout, Impulse yields a much larger performance gain - a speedup of 5.49. Reducing the number of TLB misses saves approximately 59 million graduated instructions while reducing the number of issued but not graduated instructions by approximately 120 million. Increasing the cache hit ratio by loading no useless voxels into the cache saves the remaining quarter-billion cycles. The Indirection version executes about 3% slower than the original one.
(A)                              Original   Indirection   Impulse
Time                             74.2M      65.0M         61.4M
L1 hit ratio                     95.1%      90.8%         91.8%
L2 hit ratio                     3.7%       7.3%          6.3%
mem hit ratio                    1.2%       1.9%          1.9%
avg load time                    1.8        2.8           2.5
loads                            21.6M      17.2M         13M
issued instructions (total)      131M       71.4M         57.7M
graduated instructions (total)   128M       69.3M         55.5M
issued instructions (TLB)        0.68M      1.14M         0.18M
graduated instructions (TLB)     0.35M      0.50M         0.15M
speedup                          -          1.14          1.21

(B)                              Original   Indirection   Impulse
Time                             383M       397M          69.7M
L2 hit ratio                     0.6%       2.2%          5.1%
mem hit ratio                    12.3%      15.2%         1.6%
avg load time                    8.2        10.3          2.4
loads                            32M        27M           16M
issued instructions (total)      348M       318M          76M
graduated instructions (total)   218M       148M          68M
issued instructions (TLB)        126M       156M          0.18M
graduated instructions (TLB)     59M        60M           0.15M

Table 5: Results for isosurface rendering. Times are in processor cycles. TLB misses are user data misses. In (A), the rays follow the memory layout of the image; in (B), they are perpendicular to the memory layout.
With rays perpendicular to the voxel array, accessing voxels generates many cache misses and frequently loads new data into the cache. These loads can evict the indirection vector from the cache and bring down the cache hit ratio of the indirection-vector accesses. As a result, the overhead of accessing the indirection vector outweighs the benefit of saving the computation of voxel offsets and slows down execution.
4.2 Online Superpage Promotion
To evaluate the performance of Impulse's support for inexpensive superpage promotion, we reevaluated Romer et al.'s work on dynamic superpage promotion algorithms [40] in the context of Impulse. Our system model differs from theirs in several significant ways. They employ a form of trace-driven simulation with ATOM [42], a binary rewriting tool. That is, they rewrite their applications using ATOM to monitor memory references, and the modified applications are used to do on-the-fly "simulation" of TLB behavior. Their simulated system has two 32-entry, fully-associative TLBs (one for instructions and one for data), uses LRU replacement on TLB entries, and has a base page size of 4096 bytes. To better understand how TLB size may affect performance, we model two TLB sizes: 64 and 128 entries.
Romer et al. combine the results of their trace-driven simulation with measured baseline performance
results to calculate effective speedup on their benchmarks. They execute their benchmarks
on a DEC Alpha 3000/700 running DEC OSF/1 2.1. The processor in that system is a dual-issue, in-order, 225 MHz Alpha 21064. The system has two megabytes of off-chip cache and 160 megabytes of main memory.
For their simulations, they assume the following fixed costs, which do not take cache effects
into account:
each 1 Kbyte copied is assigned a 3000-cycle cost;
the asap policy is charged a fixed cost for each TLB miss;
and the approx-online policy is charged 130 cycles for each TLB miss.
The performance results presented here are obtained through complete simulation of the benchmarks. We measure both kernel and application time, the direct overhead of implementing the superpage promotion algorithms, and the resulting effects on the system, including the expanded TLB miss handlers, cache effects due to accessing the page tables and maintaining prefetch counters, and the overhead associated with promoting and using superpages with Impulse. We present comparative performance results for our application benchmark suite.
4.2.1 Application Results
To evaluate the different superpage promotion approaches on larger problems, we use eight programs from a mix of sources. Our benchmark suite includes three SPEC95 benchmarks (compress, gcc, and vortex), the three image processing benchmarks described earlier (isosurf, rotate, and filter), one scientific benchmark (adi), and one benchmark from the DIS benchmark suite (dm) [28]. All benchmarks were compiled with the Sun cc Workshop Compiler 4.2 and optimization level "-xO4".
Compress is the SPEC95 data compression program run on an input of ten million characters. To avoid overestimating the efficacy of superpages, the compression algorithm was run only once, instead of the default 25 times. gcc is the cc1 pass of the version 2.5.3 gcc compiler (for SPARC architectures) used to compile the 306-kilobyte file "1cp-dec1.c". vortex is an object-oriented database program measured with the SPEC95 "test" input. isosurf is the interactive isosurfacing volume renderer described in Section 4.1.5. filter performs an order-129 binomial filter on a 32x1024 color image. rotate turns a 1024x1024 color image clockwise through one radian. adi implements an alternating-direction integration algorithm. dm is a data management program using input file "dm07.in".
Two of these benchmarks, gcc and compress, are also included in Romer et al.'s benchmark
suite, although we use SPEC95 versions, whereas they used SPEC92 versions. We do not use the
other SPEC92 applications from that study, due to the benchmarks' obsolescence. Several of
Romer et al.'s remaining benchmarks were based on tools used in the research environment at the
University of Washington, and were not readily available to us.
Table 6 lists the characteristics of the baseline run of each benchmark with a four-way issue superscalar processor, where no superpage promotion occurs. TLB miss time is the total time spent in the data TLB miss handler. These benchmarks demonstrate varying sensitivity to TLB performance: on the system with the smaller TLB, between 9.2% and 35.1% of their execution time is lost to TLB miss costs. The percentage of time spent handling TLB misses falls to between less than 1% and 33.4% on the system with a 128-entry TLB.
Figures 9 and 10 show the normalized speedups of the different combinations of promotion policies (asap and approx-online) and mechanisms (remapping and copying) compared to the baseline instance of each benchmark. In our experiments we found that the best approx-online threshold for a two-page superpage is 16 on a conventional system and 4 on an Impulse system. These are also the thresholds used in our full-application tests. Figure 9 gives results with a 64-entry TLB; Figure 10 gives results with a 128-entry TLB.
64-entry TLB
Benchmark   Total cycles   Cache misses   TLB misses   TLB miss time
compress    632            3455           4845         27.9%
gcc         628            1555           2103         10.3%
vortex
filter      425            241            4798         35.1%
rotate      547            3570           3807         17.9%
dm          233            129            771          9.2%

128-entry TLB
compress    426            3619           36           0.6%
gcc         533            1526           332          2.0%
vortex      423            763            1047         8.1%
isosurf     93             989            548          17.4%
filter      417            240            4544         33.4%
rotate      545            3569           3702         16.9%
dm          211            143            250          3.3%

Table 6: Characteristics of each baseline run.
Figure 9: Normalized speedups for each of two promotion policies and two mechanisms (Impulse+asap, Impulse+approx_online, copying+asap, copying+approx_online) on a 4-issue system with a 64-entry TLB.
Online superpage promotion can improve performance by up to a factor of two (on adi with remapping asap), but it can also decrease performance by a similar factor (when using the copying version of asap on isosurf). We can make two orthogonal comparisons from these figures: remapping versus copying, and asap versus approx-online.
4.2.2 Asap vs. Approx-online
We first compare the two promotion algorithms, asap and approx-online, using the results from Figures 9 and 10. The relative performance of the two algorithms is strongly influenced by the choice of promotion mechanism, remapping or copying. Using remapping, asap slightly outperforms approx-online in the average case. It exceeds the performance of approx-online in 14 of the 16 experiments, and trails the performance of approx-online in only one case (on vortex with a 64-entry TLB). The differences in performance range from asap+remap outperforming aol+remap by 32% for adi with a 64-entry TLB, to aol+remap outperforming asap+remap by 6% for vortex with a 64-entry TLB. In general, however, performance differences between the two policies are small: asap is on average 7% better with a 64-entry TLB, and 6% better with a 128-entry TLB.
The results change noticeably when we employ a copying promotion mechanism: approx-online outperforms asap in nine of the 16 experiments, while the policies perform almost identically in three of the other seven cases. The magnitude of the disparity between approx-online and asap results is also dramatically larger. The differences in performance range from asap outperforming approx-online by 20% for vortex with a 64-entry TLB, to approx-online outperforming asap by 45% for isosurf with a 64-entry TLB. Overall, our results confirm those of Romer et al.: the best promotion policy to use when creating superpages via copying is approx-online. Taking the arithmetic mean of the performance differences reveals that asap is, on average, 6% better with a 64-entry TLB, and 4% better with a 128-entry TLB.
Figure 10: Normalized speedups for each of two promotion policies and two mechanisms (Impulse+asap, Impulse+approx_online, copying+asap, copying+approx_online) on a 4-issue system with a 128-entry TLB.
The relative performance of the asap and approx-online promotion policies changes when we employ different promotion mechanisms because asap tends to create superpages more aggressively than approx-online. The design assumption underlying the approx-online algorithm (and the reason that it performs better than asap when copying is used) is that superpages should not be created until the cost of TLB misses equals the cost of creating the superpages. Given that remapping has a much lower cost for creating superpages than copying, it is not surprising that the more aggressive asap policy performs relatively better with it than approx-online does.
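As we read the two policies, their decision points inside a software TLB miss handler differ only in when promotion fires. The sketch below is deliberately simplified (the real policies grow superpages incrementally and, for approx-online, track amortized costs), and every name in it is ours.

struct candidate { int misses; int promoted; };

extern void promote(struct candidate *c);   /* remap (Impulse) or copy */

/* asap: promote a candidate superpage region on its first miss. */
void on_tlb_miss_asap(struct candidate *c)
{
    if (!c->promoted) { promote(c); c->promoted = 1; }
}

/* approx-online: promote only after enough misses to pay for the
   promotion; the text reports best thresholds of 16 for copying
   and 4 with Impulse-style remapping. */
void on_tlb_miss_approx_online(struct candidate *c, int threshold)
{
    if (!c->promoted && ++c->misses >= threshold) {
        promote(c);
        c->promoted = 1;
    }
}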
4.2.3 Remapping vs. Copying
When we compare the two superpage creation mechanisms, remapping is the clear winner, but by highly varying margins. The difference in performance between the best overall remapping-based algorithm (asap+remap) and the best copying-based algorithm (aonline+copying) is as large as 97% in the case of adi on both a 64-entry and a 128-entry TLB. Overall, asap+remap outperforms aonline+copying by more than 10% in eleven of the sixteen experiments, averaging 33% better with a 64-entry TLB and 22% better with a 128-entry TLB.
4.2.4 Discussion
Romer et al. show that approx-online is generally superior to asap when copying is used. When remapping is used to build superpages, though, we find that the reverse is true. Using Impulse-style remapping results in larger speedups and consumes much less physical memory. Since superpage promotion is cheaper with Impulse, we can also afford to promote pages more aggressively.
Romer et al.'s trace-based simulation does not model any cache interference between the application and the TLB miss handler; instead, that study assumes that each superpage promotion costs a total of 3000 cycles per kilobyte copied [40]. Table 7 shows our measured per-kilobyte cost (in CPU cycles) to promote pages by copying for four representative benchmarks.
            Cycles per 1K bytes promoted   Average cache hit ratio   Baseline cache hit ratio
filter      5,966                          99.80%                    99.80%
raytrace    10,352                         96.50%                    87.20%
dm          6,534                          99.80%                    99.86%

Table 7: Average copy costs (in cycles) for the approx-online policy.
(Note that we also assume a relatively faster processor.) We measure this bound by subtracting the execution time of aol+remap from that of aol+copy and dividing by the number of kilobytes copied. For our simulation platform and benchmark suite, copying is at least twice as expensive as Romer et al. assumed. For gcc and raytrace, superpage promotion costs more than three times the cost charged in the trace-driven study. Part of these differences is due to the cache effects that copying incurs.
We find that when copying is used to promote pages, approx-online performs better with a lower (more aggressive) threshold than that used by Romer et al. Specifically, the best thresholds that our experiments revealed varied from four to 16, while their study used a fixed threshold of 100. This difference in thresholds has a significant impact on performance. For example, when we run the adi benchmark using a threshold of 32, approx-online with copying slows performance by 10% with a 128-entry TLB. In contrast, when we run approx-online with copying using the best threshold of 16, performance improves by 9%. In general, we find that even the copying-based promotion algorithms need to be more aggressive about creating superpages than was suggested by Romer et al. Given that our cost of promoting pages is much higher than the 3000 cycles estimated in their study, one might expect that the best thresholds would be higher than Romer et al.'s. However, the cost of a TLB miss far outweighs the greater copying costs; our TLB miss costs are about an order of magnitude greater than those assumed in their study.
5 Related Work
A number of projects have proposed modifications to conventional CPU or DRAM designs to improve memory system performance, including supporting massive multithreading [2], moving processing power onto DRAM chips [25], and developing configurable architectures [50]. While these projects show promise, it is now almost impossible to prototype non-traditional CPU or cache designs that can perform as well as commodity processors. In addition, the performance of processor-in-memory approaches is handicapped by the optimization of DRAM processes for capacity (to increase bit density) rather than speed.
The Morph architecture [50] is almost entirely configurable: programmable logic is embedded in virtually every datapath in the system, enabling optimizations similar to those described here. The primary difference between Impulse and Morph is that Impulse is a simpler design that can be used in current systems.
The RADram project at UC Davis is building a memory system that lets the memory perform computation [34]. RADram is a PIM, or processor-in-memory, project similar to IRAM [25]. The RAW project at MIT [46] is an even more radical idea, where each IRAM element is almost entirely reconfigurable. In contrast to these projects, Impulse does not seek to put an entire processor in memory, since DRAM processes are substantially slower than logic processes.
Many others have investigated memory hierarchies that incorporate stream buffers. Most of
these focus on non-programmable buffers to perform hardware prefetching of consecutive cache
lines, such as the prefetch buffers introduced by Jouppi [23]. Even though such stream buffers are
intended to be transparent to the programmer, careful coding is required to ensure good memory
performance. Palacharla and Kessler [36] investigate the use of similar stream buffers to replace
the L2 cache, and Farkas et al. [14] identify performance trends and relationships among the various
components of the memory hierarchy (including stream buffers) in a dynamically scheduled
processor. Both studies find that dynamically reactive stream buffers can yield significant performance
increases.
The Imagine media processor is a stream-based architecture with a bandwidth-efficient stream register file [38]. The streaming model of computation exposes parallelism and locality in applications, which makes such systems an attractive domain for intelligent DRAM scheduling.
Competitive algorithms perform online cost/benefit analyses to make decisions that guarantee
performance within a constant factor of an optimal offline algorithm. Romer et al. [40] adapt
this approach to TLB management, and employ a competitive strategy to decide when to perform
dynamic superpage promotion. They also investigate online software policies for dynamically
remapping pages to improve cache performance [6, 39]. Competitive algorithms have been used to
help increase the efficiency of other operating system functions and resources, including paging,
synchronization, and file cache management.
Chen et al. [11] report on the performance effects of various TLB organizations and sizes.
Their results indicate that the most important factor for minimizing the overhead induced by TLB
misses is reach, the amount of address space that the TLB can map at any instant in time. Even
though the SPEC benchmarks they study have relatively small memory requirements, they find
that TLB misses increase the effective CPI (cycles per instruction) by up to a factor of five. Jacob
and Mudge [22] compare five virtual memory designs, including combinations of hierarchical and
inverted page tables for both hardware-managed and software-managed TLBs. They find that large
TLBs are necessary for good performance, and that TLB miss handling accounts for much of the
memory-management overhead. They also project that individual costs of TLB miss traps will
increase in future microprocessors.
Proposed solutions to this growing TLB performance bottleneck range from changing the TLB
structure to retain more of the working set (e.g., multi-level TLB hierarchies [1, 16]), to implementing
better management policies (in software [21] or hardware [20]), to masking TLB miss
latency by prefetching entries (again, in software [4] or hardware [41]).
All of these approaches can be improved by exploiting superpages. Most commercial TLBs
support superpages, and have for several years [30, 43], but more research is needed into how best
to make general use of them. Khalidi [24] and Mogul [31] discuss the benefits of systems that support
superpages, and advocate static allocation via compiler or programmer hints. Talluri et al. [32]
report on many of the difficulties attendant upon general utilization of superpages, most of which
result from the requirement that superpages map physical memory regions that are contiguous and
aligned.
6 Conclusions
The Impulse project attacks the memory bottleneck by designing and building a smarter memory controller. Impulse requires no modifications to the CPU, caches, or DRAMs. It has one special form of "smarts": the controller supports application-specific physical address remapping. This paper demonstrates how several simple remapping functions can be used in different ways to improve the performance of a variety of important application kernels.
Flexible remapping support in the Impulse controller can be used to implement a variety of
optimizations. Our experimental results show that Impulse's fine-grained remappings can result in
substantial program speedups. Using scatter/gather through an indirection vector improves the NAS conjugate gradient benchmark's performance by 210% and the volume rendering benchmark's performance by 449%; using strided remapping improves the performance of the image filtering, image rotation, and dense matrix-matrix product applications by 94%, 166%, and 21%, respectively.
Impulse's direct remappings are also effective for a range of programs. They can be used to
dynamically build superpages without copying, and thereby reduce the frequency of TLB faults.
Our simulations show that this optimization speeds up eight programs from a variety of sources by
up to a factor of 2.03, which is 25% better than prior work. Page-level remapping to perform cache
coloring improves performance of conjugate gradient by 17%.
The optimizations that we describe should be applicable across a variety of memory-bound applications. In particular, Impulse should be useful in improving system-wide performance. For example, Impulse can speed up messaging and interprocess communication (IPC). Impulse's support
for scatter/gather can remove the software overhead of gathering IPC message data from multiple
user buffers and protocol headers. the ability to use Impulse to construct contiguous shadow pages
from non-contiguous pages means that network interfaces need not perform complex and expensive
address translations. Finally, fast local IPC mechanisms like LRPC [5] use shared memory
to map buffers into sender and receiver address spaces, and Impulse could be used to support fast,
no-copy scatter/gather into shared shadow address spaces.
Acknowledgments
We thank Al Davis, Bharat Chandramouli, Krishna Mohan, Bob Devine, Mark Swanson, Arjun
Dutt, Ali Ibrahim, Shuhuan Yu, Michael Abbott, Sean Cardwell, and Yeshwant Kolla for their
contributions to the Impulse project.
--R
AMD Athlon processor technical brief.
The Tera computer system.
The NAS parallel benchmarks.
Software prefetching and caching for translation buffers.
Lightweight remote procedure call.
Avoiding conflict misses dynamically in large direct-mapped caches.
The organization and use of parallel memories.
Memory bandwidth limitations of future microprocessors.
A simulation based study of TLB performance.
Compaq Computer Corporation.
Revisiting superpage promotion with hardware support.
Image Processing for Computer Graphics.
HAL Computer Systems Inc.
Computer Architecture: A Quantitative Approach.
Control Data STAR-100 processor design.
The intrinsic bandwidth requirements of ordinary programs.
Pentium Pro Family Developer's Manual.
A look at several memory management units.
Improving direct-mapped cache performance by the addition of a small fully associative cache and prefetch buffers.
Virtual memory support for multiple page sizes.
Scalable processors in the billion-transistor era: IRAM.
Fast volume rendering using a shear-warp factorization of the viewing transformation.
The cache performance and optimizations of blocked algorithms.
Access ordering and memory-conscious cache utilization.
Big memories on the desktop.
Surpassing the TLB performance of superpages with less operating system support.
Sparse matrix kernels for shared memory and message passing systems.
Active pages: A model of computation for intelligent memory.
RSIM reference manual.
Evaluating stream buffers as a secondary cache replacement.
Interactive ray tracing for isosurface rendering.
A bandwidth-efficient architecture for media processing.
Using Virtual Memory to Improve Cache and TLB Performance.
Reducing TLB and memory overhead using online superpage promotion.
ATOM: A system for building customized program analysis tools.
Increasing TLB reach using superpages backed by shadow memory.
To copy or not to copy: A compile-time technique for assessing when data copying should be used to eliminate cache conflicts.
Digital Image Warping.
URSIM reference manual.
Memory system support for image processing.
Architectural adaptation for application-specific locality optimizations.
| computer architecture;memory systems |
507198 | Worst and Best Irredundant Sum-of-Products Expressions. | Abstract: In an irredundant sum-of-products expression (ISOP), each product is a prime implicant (PI) and no product can be deleted without changing the function. Among the ISOPs for some function $f$, a worst ISOP (WSOP) is an ISOP with the largest number of PIs and a minimum ISOP (MSOP) is one with the smallest number. We show a class of functions for which the Minato-Morreale ISOP algorithm produces WSOPs. Since the ratio of the size of the WSOP to the size of the MSOP is arbitrarily large when $n$, the number of variables, is unbounded, the Minato-Morreale algorithm can produce results that are very far from minimum. We present a class of multiple-output functions whose WSOP size is also much larger than its MSOP size. For a set of benchmark functions, we show the distribution of ISOPs to the number of PIs. Among this set are functions where the MSOPs have almost as many PIs as do the WSOPs. These functions are known to be easy to minimize. Also, there are benchmark functions where the fraction of ISOPs that are MSOPs is small and MSOPs have many fewer PIs than the WSOPs. Such functions are known to be hard to minimize. For one class of functions, we show that the fraction of ISOPs that are MSOPs approaches 0 as $n$ approaches infinity, suggesting that such functions are hard to minimize.
3.
Proof. See the Appendix. tu
In the proof of Lemma 2.1, we showed that
which is six more than the lower
bound given in 3 of Theorem 3.1. Therefore, for n 7 and
k 3, the lower bound is not tight. However, for k 1, the
lower bound is exact. That is,
Theorem 3.2.
2:
Proof. See the proof of Theorem 7.3 in the Appendix. tu
A special case of these theorems occurs when $n = 3$ and $k = 1$.
Example 3.3. $ST(3, 1) = (x_1 \lor x_2 \lor x_3)(\bar{x}_1 \lor \bar{x}_2 \lor \bar{x}_3)$ has the following properties:
1. It has six PIs.
2. MSOP : $ST(3, 1) = 3$.
3. WSOP : $ST(3, 1) = 4$.
Fig. 1a and Fig. 1b show the MSOP and WSOP of $ST(3, 1)$, respectively. Interestingly, the ISOP generator of Minato [19], which is based on Morreale's [20] algorithm, produces a WSOP for $ST(3, 1)$ instead of an MSOP. This will be discussed in more detail later.
Definition 3.2. The redundancy ratio of a function $f$ is $\rho(f) = \mathrm{WSOP}(f)/\mathrm{MSOP}(f)$, and the normalized redundancy ratio of an $n$-variable function $f$ is $\hat{\rho}(f) = (\rho(f))^{1/n}$, where $\mathrm{WSOP}(f)$ and $\mathrm{MSOP}(f)$ are the sizes of WSOPs and MSOPs. If this ratio is small, any logic minimization algorithm will do well since, even if a WSOP is generated, it is not much worse than an MSOP. On the other hand, a large ratio suggests that care should be exercised. The normalized redundancy ratio is normalized with respect to the number of variables. It is a convenience; it allows one to compare the redundancy ratios of two functions with a different number of variables.
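As a worked instance of these definitions, take the sizes from Example 3.3 (and the $n$th-root form of the normalized ratio as reconstructed above):
\[
\rho(ST(3,1)) = \frac{\mathrm{WSOP}}{\mathrm{MSOP}} = \frac{4}{3} \approx 1.33, \qquad
\hat{\rho}(ST(3,1)) = \left(\frac{4}{3}\right)^{1/3} \approx 1.10 .
\]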
From the expressions for MSOP : $ST(n, k)$ and WSOP : $ST(n, k)$ given in Theorem 3.1, we can state:
Theorem 3.3. Lower bounds on $\rho(ST(n, k))$ and $\hat{\rho}(ST(n, k))$ follow by combining properties 2 and 3 of Theorem 3.1.
From the expressions for MSOP : $ST(n, 1)$ and WSOP : $ST(n, 1)$ given in Theorems 3.1 and 3.2, respectively, we can state:
Theorem 3.4. $\rho(ST(n, 1)) = \frac{2(n-1)}{n}$ and $\hat{\rho}(ST(n, 1)) = \left(\frac{2(n-1)}{n}\right)^{1/n}$.
Table 1 shows the values of $\rho$ and $\hat{\rho}$ for $ST(n, 1)$, where $2 \le n \le 8$.
TABLE 1: $\rho$ and $\hat{\rho}$ for $ST(n, 1)$ versus $n$.
It can be seen that $\hat{\rho}$ takes its maximum value when $n = 4$. That is, as $n$ increases above 2, $\hat{\rho}$ first increases, peaking at $n = 4$, and then it continually decreases.
From Theorem 3.4, $\rho$ is monotone increasing with an upper limit of 2. Thus, for $ST(n, 1)$ functions, the number of PIs in a WSOP is never more than two times the number of PIs in an MSOP. An important question is whether there exist functions where $\rho$ is larger than 2. Indeed, we show a class of functions in which $\rho$ increases without bound as $n$ increases. This has important consequences for heuristics that produce ISOPs: for such heuristics there is the prospect of generating an ISOP whose size is much larger than the minimum. We consider this topic now.
Definition 3.3. Let $ST(m, k)^r$ be the $n = mr$-variable function
$ST(m, k)^r = \bigwedge_{i=1}^{r} ST_i(m, k)$,
where the $ST_i(m, k)$ are $ST(m, k)$ functions on $r$ disjoint sets of $m$ variables; that is, $ST(m, k)^r$ is the AND (product) of $r$ such functions.
Theorem 3.5. $ST(m, k)^r$ has the following properties:
1. The number of PIs of $ST(m, k)^r$ is $t^r$, where $t$ is the number of PIs of $ST(m, k)$.
2. MSOP : $ST(m, k)^r = (\mathrm{MSOP} : ST(m, k))^r$.
3. WSOP : $ST(m, k)^r \ge (\mathrm{WSOP} : ST(m, k))^r$.
Proof. See the Appendix.
For $k = 1$, we have:
Theorem 3.6. MSOP : $ST(m, 1)^r = m^r$ and WSOP : $ST(m, 1)^r \ge (2(m - 1))^r$.
Example 3.4. For $m = 3$ and $k = 1$, $ST(3, 1)^r$ has $6^r$ PIs. We have:
Theorem 3.7. $\rho(ST(m, 1)^r) \ge \left[\frac{2(m - 1)}{m}\right]^r$.
Example 3.5. For $m = 4$ and $k = 1$, we have $\rho(ST(4, 1)^r) \ge \left(\frac{3}{2}\right)^r$.
From this, it can be seen that $\rho$ becomes arbitrarily large as $r$ approaches infinity. In this example, there are $n = 4r$
variables. This represents a class of functions for which $\rho$ grows without bound as the number of variables grows.
4 EXTENSION TO MULTIPLE-OUTPUT FUNCTIONS
In the case of multiple-output functions, minimization of two-level networks or programmable logic arrays (PLAs) can be done using characteristic functions [26], [27], [29].
Definition 4.1. For an $n$-variable function $f$ with $m$ output values, form an $(n + 1)$-variable two-valued single-output function $F(x_1, x_2, \ldots, x_n, X_{n+1})$, where each $x_i$ is a binary-valued variable for $1 \le i \le n$ and $X_{n+1}$ takes $m$ values, such that $F(x_1, x_2, \ldots, x_n, j) = 1$ iff $f_j(x_1, x_2, \ldots, x_n) = 1$. $F$ represents all and only the permitted combinations of inputs and nonzero output values of $f$. $F$ is called the characteristic function (for nonzero outputs).
The significance of the characteristic function is seen in Theorem 4.1 below.
Definition 4.2. $X^S$ is a literal, where $X$ takes a value in $P = \{0, 1, \ldots, m - 1\}$ and $S \subseteq P$, such that $X^S = 1$ if $X \in S$ and $X^S = 0$ otherwise. A logical product of literals that contains at most one literal for each variable is a product term. Products combined with OR operators form a sum-of-products expression (SOP). A prime implicant (PI), irredundant sum-of-products expression (ISOP), worst ISOP (WSOP), and minimum SOP (MSOP) are defined in a manner similar to the two-valued case.
Theorem 4.1 [15], [27], [29]. The number of AND gates in the minimum AND-OR two-level network for the function $(f_0, f_1, \ldots, f_{m-1})$ is equal to the number of PIs in the MSOP for the characteristic function $F$.
Definition 4.3. An $n$-bit decoder has $n$ inputs $x_1, x_2, \ldots, x_n$ and $2^n$ outputs $f_0, f_1, \ldots, f_{2^n - 1}$, where $f_i = 0$ iff the binary number representation of $(x_1, x_2, \ldots, x_n)$ is $i$.
Example 4.1. The 4-bit decoder has 16 outputs, as follows:
$f_0 = x_1 \lor x_2 \lor x_3 \lor x_4$,
$f_1 = x_1 \lor x_2 \lor x_3 \lor \bar{x}_4$,
$\ldots$
$f_{15} = \bar{x}_1 \lor \bar{x}_2 \lor \bar{x}_3 \lor \bar{x}_4$.
Definition 4.4. DEC_n is the characteristic function of an $n$-bit decoder.
Example 4.2. DEC_4 is shown in positional cube notation in the upper table of Fig. 2. That is, each entry in this table is a prime implicant of DEC_4, where $x_i$ appears as $x_i$, $\bar{x}_i$, or don't care (absent) if the corresponding entry is 10, 01, or 11, respectively. For $X_5$, the entry 0111111111111111 is the literal $X_5^{\{1, 2, \ldots, 15\}}$, etc. Therefore, the first entry corresponds to the prime implicant $\bar{x}_1\bar{x}_2\bar{x}_3\bar{x}_4 X_5^{\{1, 2, \ldots, 15\}}$.
Fig. 2. Positional cubes for two ISOPs of DEC_4.
Collectively, the 16 entries in the upper table of Fig. 2 represent an ISOP of DEC_4 with 16 PIs. An ISOP for DEC_4 with only eight PIs exists, as shown in the lower table of Fig. 2.
The observations of Example 4.2 can be generalized as follows:
Theorem 4.2. The function DEC_n has a WSOP that requires at least $2^n$ PIs and an MSOP that requires at most $2n$ PIs.
The above theorem proves the existence of an $n$-input, $2^n$-output function where the sizes of the MSOP and the WSOP are at most $2n$ and at least $2^n$, respectively. The upper ISOP for DEC_4 shown in Fig. 2 is not a WSOP, since an ISOP with 20 PIs has been found for DEC_4.
5 DERIVATION OF ALL ISOPs
Very little is known about the distribution of the sizes of ISOPs. For example, even for single-output functions, we know of no study that shows how many ISOPs exist with various numbers of product terms.
Although various methods to generate all the ISOPs for a logic function are known [22], [12], [21], [6], [35], [25], no experimental results have been reported. Experiments are computationally intensive even for functions with a small number of variables. However, we can obtain the statistical properties of ISOPs for some interesting functions.
Before showing the complete algorithm, consider the following example.
Fig. 3. $f(x_1, x_2, x_3) = ST(3, 1)$.
Example 5.1. $f = ST(3, 1) = (x_1 \lor x_2 \lor x_3)(\bar{x}_1 \lor \bar{x}_2 \lor \bar{x}_3)$ has six minterms (Fig. 3). Fig. 4 is the covering table for $ST(3, 1)$. It shows the following relations:
To cover $m_1$, $p_1 \lor p_6$ is necessary.
To cover $m_3$, $p_1 \lor p_2$ is necessary.
To cover $m_2$, $p_2 \lor p_3$ is necessary.
To cover $m_6$, $p_3 \lor p_4$ is necessary.
To cover $m_4$, $p_4 \lor p_5$ is necessary.
To cover $m_5$, $p_5 \lor p_6$ is necessary.
To satisfy all the conditions at the same time, we have $P(f) = 1$, where
$P(f) = (p_1 \lor p_6)(p_1 \lor p_2)(p_2 \lor p_3)(p_3 \lor p_4)(p_4 \lor p_5)(p_5 \lor p_6)$.
$P(f)$ is called the Petrick function [22]. By expanding $P(f)$ into an SOP, we have
$P(f) = p_1p_3p_5 \lor p_2p_3p_5p_6 \lor p_1p_2p_4p_5 \lor p_2p_4p_5p_6 \lor p_1p_3p_4p_6 \lor p_2p_3p_4p_6 \lor p_1p_2p_4p_6 \lor p_2p_4p_6$.
Note that each product with an underline is covered by another product having fewer literals. Such products are redundant. Deleting these products, we have
$P(f) = p_1p_3p_5 \lor \cdots \lor p_2p_3p_4p_6 \lor p_2p_4p_6$.
This expression consists of all the PIs of the Petrick function [23], and each PI of $P(f)$ corresponds to an ISOP for $f$. Furthermore, each literal $p_i$ in a PI of $P(f)$ corresponds to a PI for $f$. For example, $p_1p_3p_5$ corresponds to the ISOP $x_1\bar{x}_3 \lor \bar{x}_2x_3 \lor \bar{x}_1x_2$. Note that there are six ISOPs; two have three PIs, while four have four PIs. Thus, $ST(3, 1)$ has two MSOPs with three PIs and four WSOPs with four PIs.
Fig. 4. Covering table of $ST(3, 1)$.
In this way, all the ISOPs are obtained. For general
functions, the number of minterms and PIs are very large.
Thus, we use an ROBDD (reduced ordered binary decision
diagram) to represent the function and a Prime_TDD
(Ternary decision diagram) [31] to represent the set of all
the PIs. In the Prime_TDD for f, each path from the root
node to the constant 1 node corresponds to a PI for f.We
also use an ROBDD to represent the Petrick function. While
there are many ways to generate all the ISOPs of a given
function f, we use the following algorithm:
Algorithm 5.1 (Generation of all ISOPs for a function f).
1. Generate all the PIs for f by using the Prime_TDD (the ternary decision diagram representing the PIs) of f.
2. From the set of PIs and the set of minterms for f, generate the Petrick function P(f) (which represents the covering table [22]).
3. Generate the Prime_TDD (which represents all the PIs) of P(f).
4. Generate the 1-paths of the Prime_TDD and, for each 1-path, generate the corresponding ISOP.
In the Prime_TDD in Step 4, each path from the root node to the constant 1 node corresponds to a PI for P(f) and to an ISOP for f. Each 1-edge has weight 1 and each 0-edge has weight 0. The total sum of the weights along a path from the root node to the constant 1 node is the number of PIs in the corresponding ISOP. Note that the shortest path corresponds to an MSOP and the longest path corresponds to a WSOP.
6.1 ST(n, k) Functions
Using Algorithm 5.1, we compare the number of PIs in ST(n, k)^r for different n, k, and r. Table 2 shows the numbers of PIs in the MSOP and the WSOP of ST(n, k)^r, as well as the total number of PIs. Shown also are the results of the Minato-Morreale algorithm.
The 9SYM (or SYM9) [11], [15] function shown in [3, p. 165] is identical to ST(9, 3). It has 1,680 PIs. POP [9], a PRESTO-type [4], [33] logic minimization algorithm, produced an ISOP with 148 products. CAMP [1] produced an ISOP with 130 PIs, while MINI [15] did well, producing 85 PIs.
Table 3 shows the distribution of ISOPs by the number of PIs in an ISOP of ST(n, 1) for 3 ≤ n ≤ 7. This data was obtained by Algorithm 5.1. It can be seen that the set of MSOPs is small compared to the set of all ISOPs.
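For small n, the kind of distribution reported in Table 3 can be reproduced without decision diagrams. The sketch below is ours (Algorithm 5.1 itself uses ROBDDs and TDDs); it enumerates irredundant covers of ST(n, 1) by brute force over subsets of the n(n - 1) PIs xi·x̄j, and is practical only for n <= 4:

from itertools import product, combinations
from collections import Counter

def isop_distribution(n):
    pis = [(i, j) for i in range(n) for j in range(n) if i != j]  # PI: xi = 1, xj = 0
    minterms = [m for m in product((0, 1), repeat=n) if 0 < sum(m) < n]
    covers_of = {m: {k for k, (i, j) in enumerate(pis)
                     if m[i] == 1 and m[j] == 0} for m in minterms}

    def is_cover(s):
        return all(covers_of[m] & s for m in minterms)

    dist = Counter()
    for r in range(1, len(pis) + 1):
        for subset in combinations(range(len(pis)), r):
            s = set(subset)
            if is_cover(s) and not any(is_cover(s - {k}) for k in s):
                dist[r] += 1          # one irredundant cover = one ISOP
    return dict(sorted(dist.items()))

for n in (3, 4):
    print(n, isop_distribution(n))

Theorems 7.2 and 7.3 below predict (n - 1)! MSOPs at the low end of each tally and n^(n-2) WSOPs at the high end.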
6.2 Other Functions
We also applied Algorithm 5.1 to compare the number of
PIs for multiple-output functions. Table 4 shows the
distribution of the number of PIs in ISOPs for various
functions [32].
Number of PIs and Redundancy Ratio for Various Functions
INC_n is an n-input, (n + 1)-output function such that the value of the output is x + 1, where x is the value of the input; WGT5 is the same as RD53, a 5-input, 3-output function, where the output is a binary number whose value is the number of 1s on the inputs; ROT6 computes the square root of a 6-bit integer; LOG5 computes the logarithm of the 5-bit integer; ADR3 is a 3-bit adder; and SQR5 computes the square of the 5-bit input.
Note that all the ISOPs for INC6 have the same number of PIs. This means any logic minimizer obtains an exact minimum solution. This is also true for WGT5. For ADR3, most of the ISOPs have 31 PIs or 33 PIs. This is consistent with the observation that the logic minimization of ADR3 is relatively easy. For SQR5, the distribution is very wide. The MSOPs have 27 PIs, while WSOPs have 37 PIs. This is consistent with the observation that the minimization of SQR5 is more difficult. Note that SQR5 is a 10-output binary function. The data shown is for all outputs.
Although we could not obtain the distribution for SQR6
due to memory overflow, we conjecture that the
distribution of number of PIs for SQR6 is also wide. We
also developed WIRR, a heuristic algorithm to obtain ISOPs
with many products. For SQR5, SQR6, and 9SYM, the
numbers of PIs in the solutions are shown in Table 5.
7 DISTRIBUTION OF ISOPs - AN ANALYTIC APPROACH
The distribution of ISOPs by the number of PIs is a way to represent the search space a heuristic algorithm must traverse in a minimization of an expression. For the case of ST(n, 1) functions, we can show a part of this distribution; a graph representation of the set of PIs allows this.
Definition 7.1. Let F be an ISOP of ST(n, 1). In the graph representation G(F) of F,
1. G(F) has nodes x1, x2, ..., xn, and
2. G(F) has an edge from xi to xj iff x̄i·xj is a PI in F.
Distribution of ISOPs in ST(n, 1) Functions
Example 7.1. Fig. 5 shows the graph representations of the MSOP and WSOP for ST(3, 1) (shown in Fig. 1).
We show that the graph representation of an ISOP F has a special property.
Definition 7.2. A directed graph G is strongly connected iff, for every pair of vertices (a, b) in G, there is a path from a to b and from b to a. A directed graph G is minimally strongly connected iff it is strongly connected and the removal of any edge causes G not to be strongly connected.
Theorem 7.1. Let G(F) be a graph representation of F. F is an ISOP of ST(n, 1) iff G(F) is minimally strongly connected.
Proof. See the Appendix. □
The graph representations of the MSOP and WSOP of ST(3, 1), shown in Fig. 5, are both minimally strongly connected, as they should be by Theorem 7.1. Since each edge represents a prime implicant, an MSOP has a graph representation with the fewest edges. This observation facilitates the enumeration of MSOPs.
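Theorem 7.1 turns the ISOP property into a mechanical test. A minimal sketch, assuming an SOP of ST(n, 1) is given as the list of edges (i, j) of G(F), one per PI x̄i·xj under our reading of Definition 7.1:

def reachable(nodes, edges, start):
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def minimally_strongly_connected(nodes, edges):
    sc = lambda es: all(reachable(nodes, es, v) == set(nodes) for v in nodes)
    return sc(edges) and all(not sc([e for e in edges if e != r]) for r in edges)

nodes = [1, 2, 3]
print(minimally_strongly_connected(nodes, [(1, 2), (2, 3), (3, 1)]))          # MSOP: True
print(minimally_strongly_connected(nodes, [(1, 2), (2, 1), (2, 3), (3, 2)]))  # WSOP: True
print(minimally_strongly_connected(nodes, [(1, 2), (2, 3), (3, 1), (1, 3)]))  # redundant edge: False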
Theorem 7.2. The number of MSOPs for ST(n, 1) is (n - 1)!.
Proof. See the Appendix. □
The graph representation allows a characterization of ISOPs. Specifically, complementing all variables in an ISOP F of ST(n, 1) is equivalent to reversing the direction of all edges in the graph representation G(F) of F. If G(F) is minimally strongly connected, then the graph obtained from G(F) by reversing the direction of all edges is also minimally strongly connected. This proves:
Lemma 7.1. If F is an ISOP of ST(n, 1), then the SOP derived from F by complementing all variables is an ISOP of ST(n, 1).
Example 7.2. When all variables are complemented, the
graph representations of the ISOPs shown in Fig. 5
produce the graphs in Fig. 6, which also represent an
MSOP and a WSOP.
It is important to note the difference between changing an ISOP F and changing the function realized by F. That is, an ST(n, k) function is unchanged by a complementation of all variables, i.e., it is a self-anti-dual function [32]. However, an ISOP for an ST(n, k) function may or may not be changed when all variables are complemented. For example, F = x1x̄2 ∨ x2x̄3 ∨ x3x̄1 is an ISOP for ST(3, 1). Complementing all variables in F yields x̄1x2 ∨ x̄2x3 ∨ x̄3x1, a different ISOP.
It is interesting that the WSOP for ST(3, 1) is unchanged by a complementation of all variables, as can be seen by comparing Fig. 5b with Fig. 6b. The invariance of an ISOP with respect to complementation of all variables is a unique characteristic of WSOPs, as shown in the next result.
Lemma 7.2. Let F be an ISOP of ST(n, 1). F is a WSOP iff complementing all variables in F leaves F unchanged.
Proof. See the Appendix. □
Distribution of ISOPs in Arithmetic Functions
Number of PIs Produced by Various Algorithms on Three Benchmark Functions
It is interesting that Lemma 7.2 does not generalize to k > 1. Specifically, for ST(5, 2), the ISOP
F = x1x2x̄3x̄5 ∨ x3x5x̄1x̄2 ∨ x1x3x̄2x̄4 ∨ x2x4x̄1x̄3 ∨ x1x4x̄2x̄5 ∨ x2x5x̄1x̄4 ∨ x1x5x̄3x̄4 ∨ x3x4x̄1x̄5 ∨ x2x3x̄4x̄5 ∨ x4x5x̄2x̄3
is invariant with respect to complementation of all variables. However, it is an MSOP and not a WSOP.
We can also enumerate WSOPs as follows:
Theorem 7.3. The number of WSOPs for ST(n, 1) is n^(n-2).
Proof. See the Appendix. □
The graph representation allows the enumeration of other classes of ISOPs. For example, we can enumerate ISOPs that have one more PI than is in the MSOP. Specifically,
Theorem 7.4. The number of ISOPs for ST(n, 1) with n + 1 PIs is n!(n - 1)(n - 2)/4.
Proof. See the Appendix. □
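The counts in Theorems 7.2-7.4 can be spot-checked by enumerating minimally strongly connected digraphs directly. The sketch below is ours and is feasible only for very small n:

from itertools import combinations
from collections import Counter

def strongly_connected(n, edges):
    adj = {v: [b for a, b in edges if a == v] for v in range(n)}
    for s in range(n):
        seen, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) != n:
            return False
    return True

def tally(n):
    all_edges = [(a, b) for a in range(n) for b in range(n) if a != b]
    dist = Counter()
    for r in range(n, 2 * n - 1):                 # n to 2n - 2 edges
        for edges in combinations(all_edges, r):
            es = list(edges)
            if strongly_connected(n, es) and all(
                    not strongly_connected(n, es[:i] + es[i + 1:])
                    for i in range(len(es))):
                dist[r] += 1
    return dict(sorted(dist.items()))

for n in (3, 4):
    print(n, tally(n))
# Theorems 7.2-7.4 predict (n-1)! graphs with n edges, n!(n-1)(n-2)/4
# graphs with n+1 edges, and n^(n-2) graphs with 2n-2 edges.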
By comparing the number of MSOPs with either the
number of WSOPs or the number of ISOPs with one more PI
than in the MSOP, we find that the former is much less than
either of the latter for large n. That is, as n approaches infinity, the ratio of MSOPs to WSOPs approaches 0 (use Stirling's formula to replace (n - 1)! in the expression for the number of MSOPs). This proves the following:
Theorem 7.5. The fraction of ISOPs for ST(n, 1) that are MSOPs approaches 0 as n approaches infinity.
It is interesting that the ratio of the number of ISOPs with n + 1 PIs (one more PI than is in an MSOP) to the number of WSOPs also approaches 0 as n approaches infinity. This suggests that WSOPs are much more common than minimal or near-minimal ISOPs.
8 CONCLUSIONS
The existence of an algorithm that finds the worst sum-of-products expression for a class of functions is surprising. It counters our expectation that a heuristic algorithm should perform reasonably well. Also, the large difference between the sizes of the worst and the best expressions is especially compelling, since a heuristic minimizer can, in the worst case, perform very poorly. It is, therefore, an interesting question whether there are other algorithms and other functions that exhibit the same characteristics.
Fig. 5. Graph representations of the MSOP and WSOP for ST(3, 1). (a) MSOP. (b) WSOP.
Fig. 6. Graph representations of Fig. 5 with all variables complemented. (a) MSOP. (b) WSOP.
We show a multiple-output function where the worst and the best ISOPs differ greatly in size. Specifically, a decoder with 2^n outputs and n inputs realizes a function where a WSOP has at least 2^n PIs and an MSOP has at most 2n PIs. Since this is a commonly used logic function, the disparity in the size of WSOPs and MSOPs cannot be viewed as a characteristic of contrived functions only.
Although computationally intensive, enumeration of the
ISOPs for representative functions gives needed insight into
the problem. We show an algorithm to compute all ISOPs of
a given function. We apply it to benchmark functions and
show there are significant differences in the distributions of
ISOPs. That is, some functions have a narrow distribution,
where the WSOP is nearly or exactly the same size as the
MSOP. These tend to be easy to minimize. For example, for
unate functions [17] and parity functions, there is exactly
one ISOP. Such functions are classified as trivial in the
Berkeley PLA Benchmark Set (e.g., ALU1, BCD, DIV3,
CLP1, CO14, MAX46, NEWPLA2, NEWBYTE, NEWTAG,
and RYY6) [26]. Other functions display a wide range and tend to be hard to minimize. For example, 9SYM or SYM9 (ST(9, 3)) has a wide range, i.e., the numbers of PIs in a WSOP and an MSOP are 148 and 84, respectively. This function is known to be hard to minimize.
For a class of functions, we provide an analysis showing that the number of MSOPs is significantly smaller than the number of WSOPs. That is, by showing a correlation with directed graphs, we enumerate all MSOPs and all WSOPs of the class and show that the numbers of MSOPs and WSOPs are (n - 1)! and n^(n-2), respectively. As n increases, the ratio of the number of WSOPs to the number of MSOPs grows without bound. This suggests such functions are hard to minimize.
A complete understanding of the minimization process
will require knowledge of the search space and how various
algorithms progress through it. However, such an understanding
is not likely to be achieved in the near future. Our
research suggests that there is merit to understanding the
correlation between the degree of difficulty in minimizing a
function and the distribution of its ISOPs.
APPENDIX
Lemma 2.1.
Proof. There are two steps. In the first step, we prove that an ISOP with 70 PIs exists for this function. In the second step, we show that it is a WSOP. For the first step, it is convenient to view the symmetric function as having three parts. Specifically,
S7{0,1,3,4,6,7} = S7{0,1} ∨ S7{3,4} ∨ S7{6,7}.
A WSOP is obtained by finding a WSOP of each of the three parts separately. Consider the 7-bit Hamming code shown in Table 6.
A 7-Bit Hamming Code
For each code word, create a PI that covers two minterms by replacing one of the most abundant bits in the code word by a don't care. In the case of code word 0000000, this creates seven PIs, each of which covers the minterm with all variables 0 and one minterm with exactly one 1. This covers all minterms of S7{0,1}. Similarly, seven PIs generated from code word 1111111 cover all minterms of S7{6,7}.
All of the remaining 14 code words have either four 0s and three 1s or four 1s and three 0s. For each, create four PIs by changing one of the four logic values in the majority to a don't care. Collectively, the four PIs cover the original code word and four words that are a distance one away from the code word. Because the distance between any pair of code words is at least three, a change in a single bit of a code word in the Hamming code creates a word that is not a code word and that is distinct from both every other code word and every word that is one bit different from another code word. This implies that those minterms a distance one away from a code word are covered by at most one PI. It follows that each PI is irredundant. Since each of the 14 code words corresponds to a set of PIs that cover five distinct minterms, there are 14 × 5 = 70 minterms total. On the other hand, the number of minterms for S7{3,4} is C(7, 3) + C(7, 4) = 35 + 35 = 70. It follows that these PIs cover all the minterms of S7{3,4}. In all, 7 + 56 + 7 = 70 PIs cover all of the 86 minterms of the function.
It follows that this set of PIs is a cover for S7{0,1,3,4,6,7}. Further, it is an irredundant cover and we have an ISOP.
We have proven that an ISOP with 70 PIs exists for S7{0,1,3,4,6,7}. We show that this is a WSOP by showing that no more than seven, 56, and seven PIs can cover the minterms in S7{0,1}, S7{3,4}, and S7{6,7}, respectively. Since S7{0,1} and S7{6,7} are monotone functions, their ISOPs are unique. Each consists of seven PIs. For S7{3,4}, the ISOP above covers the 70 minterms associated with this function. On the contrary, assume that the proposed ISOP is not a WSOP. Thus, there is a set of p > 56 PIs that forms an ISOP for these minterms. Each PI covers exactly two minterms, for a total of 2p > 112 instances of a PI covering a minterm. Let m1 and m>1 be the numbers of minterms covered by exactly one and by more than one PI, respectively; m>1 = 70 - m1. Since the set of PIs is irredundant, each PI covers at least one minterm that is not covered by any other PI. Thus, m1 ≥ p > 56. It follows that 2p + 3m1 > 280. Further, 2p - m1 > 280 - 4m1 = 4(70 - m1), and we can write
(2p - m1)/(70 - m1) > 4.   (A.1)
Here, the numerator is the number of instances in which a PI covers a minterm that is covered by more than one PI, while the denominator represents the number of minterms covered by more than one PI. Since this ratio exceeds four, by the Pigeonhole Principle, there is at least one minterm covered by at least five PIs. But this is impossible; each minterm is covered by no more than four PIs (each minterm is covered only by PIs derived from a code word by converting one of the four most abundant values, 0 or 1, to a don't care). Thus, it must be that the proposed ISOP is a WSOP. □
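The construction in the first half of the proof is easy to verify mechanically. The sketch below is ours; the parity-check layout of the [7,4] Hamming code is one standard choice, and any code with the same weight distribution works:

from itertools import product

def hamming_codewords():
    words = []
    for d1, d2, d3, d4 in product((0, 1), repeat=4):
        p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
        words.append((p1, p2, d1, p3, d2, d3, d4))
    return words

pis = []  # a PI is a 7-tuple over {0, 1, None}; None marks the don't care
for w in hamming_codewords():
    if sum(w) in (3, 4):
        majority = 0 if sum(w) == 3 else 1
        for i, bit in enumerate(w):
            if bit == majority:
                pis.append(tuple(None if k == i else b for k, b in enumerate(w)))

minterms = [m for m in product((0, 1), repeat=7) if sum(m) in (3, 4)]
cover = {m: [p for p in pis
             if all(pb is None or pb == mb for pb, mb in zip(p, m))]
         for m in minterms}

print(len(pis))                             # 14 * 4 = 56 PIs for S7{3,4}
print(all(cover[m] for m in minterms))      # all 70 minterms covered: True
print(max(len(c) for c in cover.values()))  # no minterm in more than 4 PIs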
Theorem 3.1. The function ST(n, k) has the following properties:
1. The total number of PIs is n!/(k!·k!·(n - 2k)!).
2. An MSOP has C(n, k) PIs.
3. A WSOP has at least 2·C(n, k) - C(2k, k) PIs.
Proof.
1. An implicant of ST(n, k) has the form
xi1 xi2 ··· xik · x̄i(n-k+1) x̄i(n-k+2) ··· x̄in.
That is, for this implicant to be 1, at least k variables (xi1, xi2, ..., xik) must be 1 and at least k variables (xi(n-k+1), xi(n-k+2), ..., xin) must be 0, where 2k ≤ n. This implicant is prime; deleting a literal creates an implicant that is 1 when ST(n, k) should be 0. Specifically, deleting an uncomplemented literal creates an implicant that is 1 when fewer than k variables are 1, while deleting a complemented literal creates an implicant that is 1 when more than n - k variables are 1.
The number of such PIs is the number of ways to separate n variables into three parts, where order within a part is not important. This is the multinomial coefficient n!/(k!·k!·(n - 2k)!).
2. First, we show that C(n, k) is a lower bound on the number of PIs in an MSOP of ST(n, k). Then, we show a set Π of C(n, k) PIs that covers all and only minterms in the function ST(n, k). It follows that the OR of all PIs in Π is an MSOP for ST(n, k).
Consider the set MI of C(n, k) minterms of the form x̄i1 x̄i2 ··· x̄ik · xi(k+1) xi(k+2) ··· xin, i.e., the set of minterms that are 1 when exactly k of the n variables are 0. No PI for ST(n, k) covers two or more minterms in MI. As such, MI is a set of independent minterms, and at least C(n, k) PIs are needed.
Π is formed as follows: For each minterm mt in MI, apply Algorithm 1.1 below, producing P(mt), a PI that covers mt. Add P(mt) to Π. Since no PI covers two or more minterms in MI, Π has C(n, k) distinct PIs. Since P(mt) has exactly k 0s and k 1s, it covers only minterms in ST(n, k). Next, we show that Π covers all minterms in ST(n, k) by applying Algorithm 1.1 to an arbitrary minterm mt′ of ST(n, k), producing P(mt′), a PI that covers mt′. Form mt″ from P(mt′) by setting all don't cares to 1s; mt″ is in MI. Applying Algorithm 1.1 to mt″ yields P(mt″) that is identical to P(mt′), from which we can conclude P(mt′) ∈ Π.
Algorithm 1.1 (Produce a PI that covers a given minterm).
Input: Minterm mt = mt0 mt1 ··· mt(n-1).
Output: Prime implicant P(mt) = Pmt0 Pmt1 ··· Pmt(n-1) (initially, Pmti = - for all i, where 0 ≤ i ≤ n - 1).
1. ZeroOnePairs ← 0.
2. Repeat until ZeroOnePairs = k do {choose i and s such that Pmti = Pmt(i+s) = - and mti mt(i+s) = 01; set Pmti ← 0, Pmt(i+s) ← 1, and ZeroOnePairs ← ZeroOnePairs + 1}, where index addition is mod n (in this algorithm, we assume that subscript indices range from 0 to n - 1, i.e., the variables are x0, x1, ..., x(n-1)).
It is straightforward to show that Algorithm 1.1 produces a PI of ST(n, k) if the minterm input has at least k 1s and k 0s. This PI may be different depending on the values of i chosen in each repetitive step.
3. Assume the result holds for ST(n - 1, k). We can form an ISOP of ST(n, k) as follows:
F = x̄n·F1 ∨ xn·F2 ∨ F3,
where
a. F1 is an SOP such that each product term is formed as the AND of i) a set X1 of k - 1 complemented variables from X - {xn}, and of ii) k uncomplemented variables from X - {xn} - X1, where the indices of the uncomplemented variables are all as small as possible (given the choice of X1). Because X1 can be chosen in C(n - 1, k - 1) ways, F1 has C(n - 1, k - 1) product terms.
b. F2 is an SOP such that each product term is formed as the AND of i) a set X2 of k - 1 uncomplemented variables, where X2 ⊆ X - {xn}, and of ii) k complemented variables from X - {xn} - X2, where the indices of the complemented variables are all as small as possible. Because X2 can be chosen in C(n - 1, k - 1) ways, F2 has C(n - 1, k - 1) product terms.
c. F3 is one of the WSOPs for ST(n - 1, k). From the inductive hypothesis, F3 has 2·C(n - 1, k) - C(2k, k) PIs.
F is an expression for ST(n, k), as follows: Consider a minterm m in ST(n, k). If m has at least k 1s and at least k 0s among x1, ..., x(n-1), then, regardless of the value of xn, m is covered by F3. If m has exactly k 0s, including a 0 value for xn, and at least k 1s, then it is covered by x̄n·F1. If m has exactly k 1s, including a 1 value for xn, and at least k 0s, it is covered by xn·F2. Thus, F covers all minterms in ST(n, k). Because each PI in x̄n·F1 and xn·F2 (and also in F3) has exactly k uncomplemented and exactly k complemented variables, F covers only minterms with at least k 0s and at least k 1s, i.e., only minterms in ST(n, k). It follows that F is an SOP for ST(n, k).
If F has no redundant PIs, it is an ISOP for ST(n, k) with 2·C(n - 1, k - 1) + 2·C(n - 1, k) - C(2k, k) = 2·C(n, k) - C(2k, k) PIs. Thus, the claimed bound follows.
Next, we show that F has no redundant PIs. First, each PI in x̄n·F1 covers a minterm m0 having k 0s and n - k 1s that is not covered by any combination of PIs from xn·F2 and F3. Thus, no PI in x̄n·F1 is redundant. By a similar argument, no PI in xn·F2 is redundant.
Second, no PI in F3 is redundant, as follows: Since F3 is a WSOP for ST(n - 1, k), no PI in F3 is covered by the OR of one or more PIs in F3. If the OR of PIs from x̄n·F1 and xn·F2 covers a PI P from F3, then it follows that at least one product term P1 in F1, when ANDed with at least one product term P2 in F2, yields a non-0 result. Let
P1 = x̄s1 x̄s2 ··· x̄s(k-1) · xt1 xt2 ··· xtk and P2 = xu1 xu2 ··· xu(k-1) · x̄v1 x̄v2 ··· x̄vk.
If P1·P2 ≠ 0, no si is the same as a uj and no tp is the same as a vq. But t1, t2, ..., tk were chosen to be as small as possible without overlapping s1, s2, ..., s(k-1), while v1, v2, ..., vk were chosen to be as small as possible without overlapping u1, u2, ..., u(k-1). Consider the indices I = {1, 2, ..., 2k - 1}. The smallest index in I that appears neither in S = {s1, s2, ..., s(k-1)} nor in U = {u1, u2, ..., u(k-1)} appears in both T = {t1, t2, ..., tk} and V = {v1, v2, ..., vk}, causing P1·P2 = 0, a contradiction. □
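Item 1 of Theorem 3.1 can be confirmed by brute force for small parameters. The following sketch (ours) enumerates the prime implicants of ST(n, k) over all 3^n ternary cubes and compares the count with the multinomial coefficient:

from itertools import product
from math import factorial

def st(m, k):               # ST(n,k): at least k ones and at least k zeros
    return sum(m) >= k and len(m) - sum(m) >= k

def cube_minterms(cube):
    free = [i for i, c in enumerate(cube) if c is None]
    for bits in product((0, 1), repeat=len(free)):
        m = list(cube)
        for i, b in zip(free, bits):
            m[i] = b
        yield tuple(m)

def is_implicant(cube, k):
    return all(st(m, k) for m in cube_minterms(cube))

def prime_implicants(n, k):
    pis = []
    for cube in product((0, 1, None), repeat=n):
        if not is_implicant(cube, k):
            continue
        expansions = [cube[:i] + (None,) + cube[i + 1:]
                      for i, c in enumerate(cube) if c is not None]
        if not any(is_implicant(e, k) for e in expansions):
            pis.append(cube)
    return pis

for n, k in ((4, 1), (5, 2), (6, 2)):
    expected = factorial(n) // (factorial(k) ** 2 * factorial(n - 2 * k))
    print(n, k, len(prime_implicants(n, k)), expected)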
Theorem 3.5. ST(m, k)^r has the following properties:
1. The total number of PIs is (m!/(k!·k!·(m - 2k)!))^r.
2. An MSOP has C(m, k)^r PIs.
3. A WSOP has at least (2·C(m, k) - C(2k, k))^r PIs.
Proof. Items 1, 2, and 3 follow from the observation that a minterm in the product function ST(m, k)^r can be viewed as the AND of a minterm from each of the factor functions ST(m, k). Thus, a PI of ST(m, k)^r can be viewed as the product of a PI from each ST(m, k), and 1 follows directly.
Also, C(m, k)^r is an upper bound on the number of PIs in an MSOP for ST(m, k)^r, as an ISOP for ST(m, k)^r can be formed as the AND of PIs from the MSOPs of ST(m, k). As is shown by Voigt and Wegener [36], certain product functions can have fewer PIs in their MSOPs than the product of the numbers of PIs in the MSOPs of the factor functions. However, when the factor functions are ST(m, k), we can observe the following: Let M be the set of minterms covered by ST(m, k) in which exactly k variables are uncomplemented. Since the PIs of ST(m, k) have exactly k uncomplemented and k complemented variables, none cover two or more minterms in M. It follows that at least C(m, k) PIs are needed to cover ST(m, k). It follows that at least C(m, k)^r PIs are needed to cover the minterms in ST(m, k)^r that are the product of minterms in the set M for each factor function. Thus, 2 follows.
A WSOP for ST(m, k)^r can be formed as the product of WSOPs for each factor function. Since 2·C(m, k) - C(2k, k) is a lower bound on the number of PIs in the WSOP of each factor function, 3 follows directly. □
Theorem 7.1. Let G(F) be the graph representation of F. F is an ISOP of ST(n, 1) iff G(F) is minimally strongly connected.
Proof. (if) Consider a minimally strongly connected digraph G(F), where F is its corresponding SOP. Thus, for every edge (xj, xi) in G(F), there is an implicant x̄j·xi in F. On the contrary, assume that F is not an ISOP for ST(n, 1). That is, either 1) F does not cover ST(n, 1) or 2) F covers ST(n, 1) but has a redundant PI.
Consider 1). If F does not cover ST(n, 1), then there is a minterm mt such that mt either a) has no complemented variables or no uncomplemented variables, but F is 1 for this assignment, or b) has at least one complemented variable and at least one uncomplemented variable, but F is 0 for this assignment. The first part, a), is not possible; all PIs cover only minterms with at least one complemented variable and at least one uncomplemented variable. The second part, b), is also not possible, as follows: Suppose mt assigns 0 to xj and 1 to xi. Because G(F) is strongly connected, there is a path xj = xk1, xk2, ..., xkm = xi from xj to xi. Since xj = 0 and xi = 1, the assignment of values to xk1, xk2, ..., xkm corresponding to mt has the property that there is an s such that xks = 0 and xk(s+1) = 1. The corresponding PI x̄ks·xk(s+1) is in F and is 1 for the assignment associated with mt. Thus, F covers mt.
Consider 2). If F covers ST(n, 1) but has a redundant PI x̄j·xi, then F′, which is F with x̄j·xi removed, also covers ST(n, 1). But G(F′) is G(F) with one edge, (xj, xi), removed. It follows that G(F) is not minimally strongly connected.
(only if) Let G(F) be a graph representation of an ISOP F of ST(n, 1). Assume, on the contrary, that G(F) is not minimally strongly connected. That is, either 1) G(F) is not strongly connected or 2) G(F) is strongly connected, but is not minimal.
If G(F) is not strongly connected, there are two nodes xj and xi such that no path exists from xj to xi. Let Suc(xj) be the set of all nodes to which there is a path from xj, i.e., all successors of xj. Let Pre(xi) be the set of all nodes from which there exists a path to xi, i.e., all predecessors of xi. Consider a minterm mt that is 0 for xj and all variables associated with nodes in Suc(xj) and is 1 for xi and all variables associated with nodes in Pre(xi). Since there is no path from xj to xi, Suc(xj) ∩ Pre(xi) = ∅, and such an assignment assigns exactly one value to each node in Suc(xj) ∪ Pre(xi) ∪ {xj, xi}. Choose the values of all other variables to be 1 (or 0). No edge in G(F) has a 0 at its tail and a 1 at its head. Thus, all PIs are 0 and F is not an SOP for ST(n, 1). It is, thus, not an ISOP, a contradiction.
If G(F) is strongly connected, but is not minimal, there is at least one edge (xj, xi) that can be removed without affecting the connectedness of G(F). It follows that G(F′), where F′ is F with x̄j·xi removed, is a graph representation of F′, an SOP for the same function as F. Thus, F is an SOP, but not an ISOP, contradicting the assumption. □
Theorem 7.2. The number of MSOPs for ST(n, 1) is (n - 1)!.
Proof. From Theorem 7.1, an MSOP for ST(n, 1) corresponds to a directed graph with the fewest edges which is strongly connected, but is not strongly connected when any edge is removed. Such a graph is a directed cycle of arcs through all variables. As such, it represents a cyclic permutation on the variables. The number of such permutations is (n - 1)!. □
Lemma 7.2. Let F be an ISOP of ST(n, 1). F is a WSOP iff complementing all variables in F leaves F unchanged.
Proof. (if) Let F be an ISOP that is unchanged by a complementation of all variables. This implies that if x̄i·xj is a PI of F, then so also is x̄j·xi. It follows that if (xi, xj) is an edge in G(F), then (xj, xi) is an edge in G(F). That is, all edges between nodes occur in pairs, one going one way and the other going the other way. In such a graph, there are n - 1 pairs, or 2n - 2 edges in all. (Replace each pair by an undirected edge. If the directed graph is strongly connected, the undirected graph must be connected; by minimality it is a tree and, from Harary [14], there must be n - 1 edges.) Since there are 2n - 2 edges in G(F), there are 2n - 2 PIs in F and, thus, F is a WSOP.
(only if) Let F be a WSOP of ST(n, 1). We show that G(F) consists of cycles of length 2 only. Thus, if x̄i·xj is a PI of F, so also is x̄j·xi. It follows that complementing all variables of F leaves F unchanged. Suppose that G(F) contains a cycle of length m, where m > 2. Such a cycle represents a strongly connected subgraph of G(F) in which there are m edges. However, the cycle can be replaced by a minimally strongly connected graph with more edges (e.g., one where all edges occur in pairs). The result is a strongly connected graph, where the deletion of an edge leaves it unconnected, which has more edges than the original version. This contradicts the statement that F is a WSOP. □
Theorem 7.3. The number of WSOPs for ST(n, 1) is n^(n-2).
Proof. From Theorem 7.1, a WSOP for ST(n, 1) corresponds to a minimally strongly connected graph with the largest number of edges. We show that this graph consists of cycles of length 2 exclusively, as follows: Suppose, on the contrary, the graph has a cycle of length m > 2. There are m edges in this cycle. However, this subgraph can be replaced by a subgraph with more edges, 2(m - 1). It follows that the original graph does not represent a WSOP.
Each cycle of length 2 connects two nodes by edges in the two directions. Replace each pair of edges by an undirected edge, forming an undirected tree with n - 1 edges. Thus, there are 2(n - 1) PIs in a WSOP for ST(n, 1). This proves Theorem 3.2. It follows that the number of WSOPs is the number of undirected trees on n labeled nodes. Cayley [5] in 1889 showed that this number is n^(n-2). □
Theorem 7.4. The number of ISOPs for ST(n, 1) with n + 1 PIs is n!(n - 1)(n - 2)/4.
Proof. From Theorem 7.1, an ISOP of ST(n, 1) that has n + 1 PIs corresponds to a minimally strongly connected graph with n + 1 edges. All graphs with this property have two cycles of nodes, of which i are common, where 1 ≤ i ≤ n - 2. We can represent each instance as a permutation of nodes that has been divided into three nonempty sets: the i common nodes, nodes N1 in one cycle only, and nodes N2 in the other cycle only. N1 and N2 must be nonempty, since an empty set corresponds to a redundant edge. There are n! ways to permute the nodes and C(n - 1, 2) ways to divide them into three nonempty sets. However, this double counts graphs, since interchanging N1 and N2 does not change the graph. Thus, the total number of graphs with n + 1 edges is (1/2)·C(n - 1, 2)·n! = n!(n - 1)(n - 2)/4. □
ACKNOWLEDGMENTS
This work was supported in part by a Grant in Aid for
Scientific Research of the Ministry of Education, Culture,
Sports, Science, and Technology of Japan and in part by
NPS Direct Funded Grant UHCD1. Discussions with
Dr. Shin-ichi Minato were quite useful. Mr. Munehiro
Matsuura and Mr. Shigeyuki Fukuzawa did part of the
experiments. The authors acknowledge the contributions of
two referees. This paper is an extended version of T. Sasao and J.T. Butler, "Comparison of the Worst and Best Sum-of-Products Expressions for Multiple-Valued Functions," Proceedings of the IEEE International Symposium on Multiple-Valued Logic, pp. 55-60, May 1997.
REFERENCES
Logic Minimization Algorithms for VLSI Synthesis.
Logic Design of Digital Systems.
Introduction to Switching and Automata Theory.
Graph Theory.
Logic Design and Switching Theory.
Representation of Discrete Functions.
Switching Theory for Logic Synthesis.
Advanced Logical Circuit Design Techniques.
The TTL Data Book for Design Engineers.
PALMINI - Fast Boolean Minimizer for Personal Computers.
Two-Level Logic Minimization: An Overview.
A Remark on Minimal Polynomials of Boolean Functions.
A State-Machine Synthesizer - SMS.
An Application of Multiple-Valued Logic to a Design of Programmable Logic Arrays.
| symmetric functions;worst sum-of-products expressions;prime implicants;logic minimization;multiple-output functions;minimum sum-of-products expressions;graph enumeration;irredundant sum-of-products;heuristic minimization;complete sum-of-products expressions;minimally strongly connected digraphs |
507237 | A logical foundation for deductive object-oriented databases. | Over the past decade, a large number of deductive object-oriented database languages have been proposed. The earliest of these languages had few object-oriented features, and more and more features have systematically been incorporated in successive languages. However, a language with a clean logical semantics that naturally accounts for all the key object-oriented features, is still missing from the literature. This article takes us another step towards solving this problem. Two features that are currently missing are the encapsulation of rule-based methods in classes, and nonmonotonic structural and behavioral inheritance with overriding, conflict resolution and blocking. This article introduces the syntax of a language with these features. The language is restricted in the sense that we have omitted other object-oriented and deductive features that are now well understood, in order to make our contribution clearer. It then defines a class of databases, called well-defined databases, that have an intuitive meaning and develops a direct logical semantics for this class of databases. The semantics is based on the well-founded semantics from logic programming. The work presented in this article establishes a firm logical foundation for deductive object-oriented databases. | Introduction
The objective of deductive object-oriented databases is
to combine the best of the deductive and object-oriented ap-
proaches, namely to combine the logical foundation of the
deductive approach with the modeling capabilities of the
object-oriented approach. Based on the deductive object-oriented
database language proposals as well as the work in
object-oriented programming languages and data models, it
is becoming clear that the key object-oriented features in
deductive object-oriented databases include object identity,
complex objects, typing, rule-based methods, encapsulation
of methods, overloading, late binding, polymorphism, class
hierarchy, multiple behavioral inheritance with overriding,
blocking, and conflict handling. However, a clean logical
semantics that naturally accounts for all the features is still
missing from the literature. In particular the encapsulation
of rule-based methods in classes, and non-monotonic multiple
behavioral inheritance have not been addressed properly
so far.
In object-oriented programming languages and data
models, methods are defined using functions or procedures
and are encapsulated in class definitions. They are invoked
through instances of the classes. In deductive object-oriented
databases, we use rules instead of functions and
procedures. By analogy, methods in deductive object-oriented
databases should be defined using rules and encapsulated
in class definitions. Such methods should be
invoked through instances of the classes as well. However,
most existing deductive object-oriented database languages,
including F-logic [9], IQL [1], Datalog^meth [2], and ROL [12],
do not allow rule-based methods to be encapsulated
in the class definitions. The main difficulty is
that the logical semantics is based on programs that are sets
of rules. If rules are encapsulated into classes, then it is not
clear how to define their semantics. Several proposals such
as Datalog^meth and Datalog++ provide encapsulation but
use rewriting-based semantics which do not address the issue
directly. The authors of [3] address encapsulation but
do not include other important object-oriented features, like
inheritance.
Non-monotonic multiple behavioral inheritance is a fundamental
feature of object-oriented data models such as
O2 [5] and Orion [10]. The user can explicitly redefine (or
override) the inherited attributes or methods and stop (or
block) the inheritance of attributes or methods from super-
classes. Ambiguities may arise when an attribute or method
is defined in two or more superclasses, and the conflicts
need to be handled (or resolved). Unfortunately, a logical
semantics for multiple inheritance with overriding, blocking
and conflict-handling has not been defined. The main
difficulty is that the inherited instances of a superclass may
not be well-typed with respect to its type definition because
of overriding and blocking. Most deductive object-oriented
database languages, including F-logic¹, LOGRES [4], LIVING IN LATTICE [7], COMPLEX [6], only allow monotonic
multiple structural inheritance, which is not powerful
enough. Some deductive object-oriented languages such
as Datalog^meth only support non-monotonic single inheritance
by allowing method overriding. One extreme case
is IQL, which does not support multiple inheritance at the
class level at all. Instead, it indirectly supports it at the instance
level via the union type so that inherited instances
of a superclass can still be well-typed with respect to its
type definition which is the union of the type for its direct
instances and the type for its non-direct instances. ROL
has a semantics that accounts for non-monotonic multiple
structural inheritance with overriding and conflict-handling
in a limited context, but without blocking. Datalog++ takes
a quite different approach towards non-monotonic inheri-
tance. It disallows the inheritance of conflicting attributes
and methods, like in C++. It provides mechanisms for
the user to block the inheritance of attributes and methods.
However, it only provides an indirect, rewriting-based semantics
for such non-monotonic inheritance.
This paper provides a direct well-defined declarative semantics
for a deductive object-oriented database language
with encapsulated rule-based methods and non-monotonic
behavioral inheritance with overriding, conflict resolution
and blocking. In order to keep the setting simple, we omit
some well understood features that don't affect the semantics
described, e.g. set-valued attribute values, and we focus
on a static database rather than a dynamic database (see [13] for the semantics of updates to the database).
In the language, methods are declared in the class defini-
tions, and the methods are invoked through instances of the
classes. We introduce a special class, none, to indicate that
the inheritance of an attribute or method in a subclass is
blocked i.e. it won't be inherited from its superclasses. We
provide a very flexible approach to conflict resolution. Our
mechanism consists of two parts. One part, the default part, is similar to the method used in Orion, namely, a subclass
inherits from the classes in the order they are declared in
the class definition. The other part allows the explicit naming
of the class the attribute or method is to be inherited
from. Therefore, a subclass can inherit attribute or method
definitions from any superclasses. We then define a class
of databases, called well-defined databases, that have an intuitive
meaning and develop a direct logical semantics for
this class of databases. The semantics naturally accounts
for method encapsulation, multiple behavioral inheritance,
overriding, conflict handling and blocking, and is based on
the well-founded semantics [16] from logic programming.
We define a transformation that has a limit I* for well-defined databases, and prove that I*, if it is defined, is a minimal model of the database.
¹ F-logic, however, supports indeterminate non-monotonic default value inheritance. The value inherited depends on which inheritance step is done first at run time.
This paper is organized as follows. We introduce the syntax
and semantics of the language using an example in Section
2. In Section 3 the class of well-defined databases and
the semantics of well-defined databases are defined, and the
main results are presented. Section 4 concludes the paper,
reiterating our results and comparing this work with related
work. Due to space limitation, the paper is quite terse and
we have omitted proofs. They are included in [14].
2 Example
Our language in fact supports many of the important
object-oriented features in a rule-based framework with a
well-defined declarative semantics. In this section, we introduce
and demonstrate concepts that are important in the
paper. A more extensive description of the syntax can be
found in [14].
The schema in Figure 1(a) defines four classes, person,
employee, student, and wstudent (working student).
The class person has three attributes, name, birthyear,
and spouse, and two methods: married(person) and
single(). The attribute birthyear has a default value of
1945. Method married(X) is true if the person the method
is applied to has a spouse, X, and method single() is true
if the person is not married. Notice, that the semantics of
negation are defined using extended negation as failure [11].
The class employee inherits all attribute declarations, default
values and method declarations from class person unless
they are blocked or overridden in class employee. We
say that class employee is a direct subclass of person and
person is a direct superclass of employee. New attributes
can also be declared in subclasses. The attribute declarations
for name, birthyear, and spouse, and the method
declarations for married(person), and single() are inherited
but the default value of birthyear is overridden in
employee, i.e., the default value for attribute birthyear is
redefined to 1960. The class student also inherits from
person. Two methods are declared in student, namely
extrasupport() and support().
The class wstudent inherits from two classes, employee
and student. With multiple inheritance, there can be conflicting
declarations i.e. default values, attributes and methods
may be declared in more than one superclass. There
is one possible conflict to be resolved in wstudent, default
value birthyear is defined on both employee and student.
There are two ways that conflicts can be resolved. A conflict
resolution declaration indicates explicitly which class a
property is to be inherited from, e.g., birthyear Δ student
indicates that the definition of birthyear and the default
value 1970 are inherited from student. If there is a conflict
and there is no conflict resolution declaration then the
property is inherited from the superclasses in the order they are listed in the class declaration. Notice that
Key: ⇒ attribute declaration; → value declaration; •→ default value declaration.
class person [
name ⇒ string;
birthyear ⇒ integer;
birthyear •→ 1945;
spouse ⇒ person;
married(person) {married(X) :- spouse → X};
single() {single() :- ¬married(X)}
]
class employee isa person [
birthyear •→ 1960;
]
class student isa person [
birthyear •→ 1970;
extrasupport() {... :- married(X), student X;
... :- married(X), ¬student X;
extrasupport() → 100 :- single()};
support() {...}
]
class wstudent isa employee, student [
birthyear Δ student;
support() ⇒ none;
extrasupport() {...}
]
(a) Schema
employee tom [name → "Tom"; birthyear → 1963; spouse → pam]
student sam [name → "Sam"]
wstudent pam [name → "Pam"; spouse → tom]
(b) Instance
Figure 1. Sample Database
the method support() is blocked in wstudent (i.e. its return
type is none), and the method extrasupport() in wstudent
overrides the method extrasupport() in student. A method
declaration in a subclass overrides a method declaration in a
superclass if the methods have the same signature, independent
of their return values. A method has the same signature
as another method if the method has the same method
label and the same arguments, e.g. extrasupport() in student
has the same signature as extrasupport() in wstu-
dent. While classes employee and student are direct superclasses
of wstudent, person is an indirect superclass of
wstudent.
The instance in Figure 1(b) contains three objects with
oids tom, sam, and pam. In the database instance, each object
is associated with a class and attributes are assigned
values. For example, object tom is a direct instance of em-
ployee, and the value of its attribute name is "Tom". The
value of attribute birthyear is 1963, i.e. the default 1960
in employee is not used. The value of its attribute spouse
is object identifier pam. We say that employee is the primary
class of object tom, and object tom is a non-direct
instance of person. The birthyear of sam is 1970, i.e. the
default in class student is used because a value for attribute
birthyear is not provided in object sam. The value of attribute
birthyear is not given in object pam, nor in class
wstudent. The default value 1970 is inherited from student
because there is a conflict resolution declaration in wstu-
dent.
We can ask the following queries on the sample database in Figure 1. The queries demonstrate how methods are encapsulated
in classes, i.e. a method is declared in a class
and invoked through instances of the class.
1. Find the birthyear of Sam.
?- student O[name → "Sam"; birthyear → X]
The default value of birthyear for instances in class student is returned, i.e., X = 1970.
2. Find what support Sam gets.
?- student O[name → "Sam"; support() → X]
The support() method in class student invokes the
extrasupport() method. The extrasupport() rules in
turn invoke the married(person) and single() methods
defined in class person. As Sam has no spouse,
Sam is not married, so Sam is single, and the third rule
for extrasupport() is used. The extrasupport() that Sam receives is 100.
3. Find what support Pam gets.
?- wstudent O[name → "Pam"; support() → X]
The method support() is blocked on wstudent; an error message indicating that the method is undefined is returned.
4. Find all students whose extra support is not 500.
?- student O[extrasupport() → X], X ≠ 500
This query returns the oids of all the objects that belong to class student or subclasses of student whose value for method extrasupport() is not 500. The answer is {sam}.
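For readers who think in conventional object-oriented terms, the behavior these queries rely on can be mimicked with ordinary classes and methods. The sketch below is only an analogy: the amount 100 for single students is taken from the text, while the 500 branch, the 0, and the body of support() are our placeholders for rule bodies not spelled out here, and blocking is emulated by raising an error:

class Person:
    def __init__(self, name, spouse=None):
        self.name, self.spouse = name, spouse
    def married(self):
        return self.spouse is not None
    def single(self):
        return not self.married()

class Student(Person):
    def extrasupport(self):
        if self.married() and isinstance(self.spouse, Student):
            return 500        # placeholder amount
        if self.married():
            return 0          # placeholder amount
        return 100            # the single-student case from query 2
    def support(self):
        return self.extrasupport()    # placeholder body

class WStudent(Student):
    def support(self):                # blocked in class wstudent
        raise AttributeError("support() is undefined on wstudent")

sam = Student("Sam")
print(sam.extrasupport())             # 100: Sam is single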
We make the following observations. Two kinds of
classes are distinguished: value classes and oid classes.
There are two special value classes, none and void. Class
none is used to indicate that the inheritance of an attribute
or method from a superclass is blocked in a subclass. Class
void has only one value, namely nil, which is returned
by a method if no other value is returned. Like in C++ and
Java, we have a special variable, This, that is used to refer
to the current object. Variables are represented throughout
the paper using uppercase alphabetic characters.
A schema K is a set of class declarations, which can be represented abstractly as a tuple K = (C, isa, α, δ, μ, π), where C is a finite set of oid classes, isa is a finite set of superclass declarations, α is a finite set of attribute declarations, δ is a finite set of default value declarations, μ is a finite set of method declarations, and π is a finite set of conflict resolution declarations. For simplicity, we assume that there is no abbreviation in α, δ, and μ. We write α(c), δ(c), μ(c), π_α(c), and π_μ(c) for the sets of attribute, default value, method, attribute conflict resolution, and method conflict resolution declarations in α, δ, μ, and π for the class c, respectively. We impose constraints on the schema to capture the intended semantics of multiple inheritance with overriding, blocking and conflict handling. An instance I is a set of object declarations, which can be represented as a tuple I = (σ, ν), where σ is a set of ground oid membership expressions called oid assignments and ν is a set of ground positive attribute expressions called attribute value assignments. A database DB consists of two parts: the schema K and the instance I, which can be represented abstractly as a tuple DB = (K, I); see Figure 1. A query is a sequence of expressions prefixed with ?-.
3 Semantics
In this section, we define the semantics of a database
and queries. First we give the meaning of the schema
and instance of the database, then we identify a class of
databases, called well-defined databases, and finally, we define
the meaning of the rule based methods of well-defined
databases, based on the meaning of the schema and in-
stance. The semantics of a database is based on the well-founded
semantics except in this case the semantics of the
rule-based methods must take into account the meaning of
the schema and the instance of the database.
3.1 Semantics of Schema and Instance
Encapsulation is dealt with in this subsection; each at-
tribute, default value and method that are applicable to a
class are identified. In order to determine which attributes,
default values, and methods are applicable to a class, it is
necessary to consider inheritance with overriding, blocking
and conflict handling. Recall that α(c), δ(c) and μ(c) are the sets of attribute declarations, default value declarations and method declarations, respectively, that are defined on c. In this section, we define α*(c), δ*(c), and μ*(c), the attribute, default value and method declarations that are applicable to class c, taking inheritance, overriding, conflict resolution and blocking into account.
In [14], we define difference operators that
find the attribute declarations (default value declarations,
method declarations respectively) that are defined on one
class and not redefined on another class. Consider the
database in Figure 1. The difference between the sets of
attribute declarations for person and student is:
{person[name ⇒ string], person[birthyear ⇒ integer], person[spouse ⇒ person]}.
The result is the attribute declarations in person that are not redefined in student. In Figure 1, the difference between the default value declarations for person and student is the empty set {}. This is not surprising because the default value for birthyear is redefined in student.
The following definition outlines an algorithm to find
the applicable declarations ff (c), ffi (c), - (c),
which are the sets of declarations that are implicitly or
explicitly declared on c with the blocked declarations re-
moved, and the name of the class to which they apply
changed. For example, consider the class wstudent in Figure
1. The algorithm produces:
α*(wstudent) = {wstudent[name ⇒ string], wstudent[birthyear ⇒ integer], wstudent[spouse ⇒ person]},
δ*(wstudent) = {wstudent[birthyear •→ 1970]}.
Overriding with Conflict Handling and Blocking. The semantics of multiple inheritance with overriding, conflict handling and blocking are defined using the difference operators as follows:
1. If a class does not have any superclasses, then there is no inheritance, overriding, conflict resolution or blocking. The declarations in the class are the only ones that apply to the class. That is, if there does not exist a class c′ such that c isa c′, then α*(c) = α(c), δ*(c) = δ(c), and μ*(c) = μ(c).
2. If c isa c1, ..., cn, then
(a) we extend the sets of declarations to include the new declarations selected by the explicit conflict resolution declarations of c (e.g., l Δ c″ selects the declarations of l from c″), yielding the sets α_bc(c), δ_bc(c), and μ_bc(c);
(b) we extend these sets to include the declarations that are inherited from the direct and indirect superclasses c1, ..., cn, in the order they are declared, using the difference operators; this yields α_bci(c), and δ_bci(c) and μ_bci(c) are defined analogously;
(c) we remove the blocked declarations and change the class names in the sets of declarations, e.g., α*(c) = {c[l ⇒ c′] | c″[l ⇒ c′] ∈ α_bci(c) and c′ ≠ none}, and μ*(c) = {c[M] | c′[M′] ∈ μ_bci(c), the type of M′ is not none, and M is obtained from M′ by substituting c for c′}.
The symbols α_bc(c), α_bci(c), etc. are used only in the definition of applicable declarations and are not referred to anywhere else in this paper. Let K = (C, isa, α, δ, μ, π) be a schema. Then α* = {α*(c) | c ∈ C}, and similarly for δ* and μ*. If α*, δ*, and μ* are defined, then the semantics of the schema K is given by α*, δ*, and μ*.
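To make the definition concrete, here is a minimal sketch (our own encoding, not the paper's formalism) of computing applicable declarations for the schema of Figure 1: superclasses are consulted in declaration order, explicit Δ declarations take precedence, local declarations override inherited ones, and entries typed none are blocked:

schema = {
    "person":   {"isa": [], "attrs": {"name": "string", "birthyear": "integer",
                                      "spouse": "person"},
                 "defaults": {"birthyear": 1945},
                 "methods": {"married": "rule", "single": "rule"},
                 "resolve": {}},
    "employee": {"isa": ["person"], "attrs": {}, "defaults": {"birthyear": 1960},
                 "methods": {}, "resolve": {}},
    "student":  {"isa": ["person"], "attrs": {}, "defaults": {"birthyear": 1970},
                 "methods": {"extrasupport": "rule", "support": "rule"},
                 "resolve": {}},
    "wstudent": {"isa": ["employee", "student"], "attrs": {}, "defaults": {},
                 "methods": {"extrasupport": "rule", "support": "none"},
                 "resolve": {"birthyear": "student"}},
}

def applicable(c, kind):
    decls = {}
    for sup in schema[c]["isa"]:              # declaration order resolves conflicts
        for label, value in applicable(sup, kind).items():
            decls.setdefault(label, value)
    for label, target in schema[c]["resolve"].items():
        if label in applicable(target, kind): # explicit conflict resolution
            decls[label] = applicable(target, kind)[label]
    decls.update(schema[c][kind])             # local declarations override
    return {l: v for l, v in decls.items() if v != "none"}   # blocking

print(applicable("wstudent", "defaults"))     # {'birthyear': 1970}, via the Δ
print(applicable("wstudent", "methods"))      # support() is blocked and dropped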
We have dealt with non-monotonic inheritance within a database schema. We now describe the semantics of inheritance within an instance of a database, by introducing the notions of isa*, σ*, and ν*. We overload the isa notation so that if c isa c1, ..., cn, then c isa ci for each 1 ≤ i ≤ n. We define isa* as the reflexive transitive closure of isa, which captures the general inheritance hierarchy. Note that c isa* c.
We say that o is a non-direct instance of class c (denoted by c o ∈ σ*) iff either c is a value class, o is a value, and o is an element in the collection which c denotes, or c is an oid class, o is an oid, and there exists a c′ such that c′ isa* c and c′ o ∈ σ. The notion of σ* captures the semantics of instance inheritance; that is, if an oid is a direct instance of a subclass c, then it is a non-direct instance of c and of the superclasses of c.
In the case where there is a default value declaration for an attribute in a class, the instances of the class inherit the default value for the attribute. We extend the notion ν to ν* to capture this intended semantics: ν* extends ν with o[l → v] whenever c o ∈ σ*, the default declaration for l with value v is applicable to c, and no value for l is explicitly assigned to o in ν.
Let DB = (K, I) be a database. If σ* and ν* are defined, then the semantics of the instance I = (σ, ν) is given by σ* and ν*.
It is possible to define a database that has no intuitive
meaning. For example it is possible to define a database
schema with a cycle in its class hierarchy or an attribute in
a class that has two distinct default values, or a database
instance where an object is an instance of more than one
class, or an attribute has more than one value for an ob-
ject. In [14], we discuss a number of constraints that can
be used to guarantee an intended semantics of the database
and queries on the database, we give properties that demonstrate
that the set of expressions defined have the intended
semantics, and define a well-defined database. In the following
subsection, we are concerned only with well-defined
databases, that is databases with an intuitive meaning.
A database instance does not have an intuitive meaning
if an object is a direct instance of more than one class; or if
an attribute has more than one value for an object.
3.2 Semantics of Databases and Queries
In this paper, we focus on static databases rather than
dynamic databases i.e. databases where classes of oids
and their attribute values remain the same. The semantics
for dynamic databases can be found in [13]. The classes
of oids and their attributes form our extensional database
(EDB) in the traditional deductive database sense. The
methods, however, are represented intensionally by method
rules. They define our intensional database (IDB). In this
section, we define the semantics of methods based on the
well-founded semantics proposed in [16]. Our definition
differs from [16] in the following ways. We are concerned
with a typed language with methods rather than an untyped
language with predicates. We introduce a well-typed concept
and take typing into account when deducing new facts
from methods. The definition of satisfaction of expressions
is simple in [16] and more complex in this paper because of
the many kinds of expressions. Our definition reflects the
fact that our model effectively has two parts, an extensional
database (EDB) that models oid membership and attribute
expressions, and an intensional database (IDB) that models
method expressions. The EDB is a 2-valued model while
the IDB is a 3-valued model. In the EDB, expressions are
true if they're in the model otherwise they are false. In the
IDB, method expressions are true if they are in the model,
false if their complement belongs to the model, otherwise
they are undefined. When a method expression is undefined,
either the method isn't defined on the invoking object, or it
isn't possible to assign a truth value to that expression. Every
well-defined program has a total model, unlike in the
well-founded semantics, where a program may have a partial
model. In fact we prove that every well-defined program
has a minimal model. We first define terminology that
is needed later in this section.
Herbrand Base. Let DB = (K, I) be a well-defined database. The Herbrand base B_DB based on DB is the set of all ground simple positive method expressions formed using the method names in DB (without abbreviations).
The definition of compatible sets of expressions can be found in [16]; intuitively, a set of method expressions that contains both an expression and its negation is incompatible.
Ground method expressions are required to be well-typed with respect to the appropriate class declarations. Let DB = (K, I) be a well-defined database and ψ a ground method expression on an object o. Then ψ is well-typed in DB iff the following hold:
1. there exists a class c such that c o ∈ σ*, and
2. there exists a method in μ*(c) whose method type matches the arguments and the result value of ψ.
A set of ground method expressions is well-typed in DB iff each ground method expression in it is well-typed in DB.
Methods can return values. However, for the same arguments, a method should return only one value. We formalize this using the notion of consistency. A set of ground method expressions is consistent iff it does not contain two expressions that invoke the same method on the same object with the same arguments but return distinct values.
Interpretation. Let DB = (K, I) be a well-defined database. A partial interpretation of DB is a tuple I = (σ*, ν*, S), where S is a compatible, consistent, and well-typed set of method expressions in DB, and each atom in S is an element of the Herbrand base. A total interpretation is a partial interpretation that contains every well-typed method atom of the Herbrand base or its negation. For an interpretation I = (σ*, ν*, S), σ* and ν* form an extensional database whereas S forms an intensional database.
Note that S contains both positive and negative expressions, and different interpretations of DB have the same extensional database but different intensional databases. A ground substitution θ is a mapping from variables to oids and values. It is extended to terms and expressions in the usual way.
Satisfaction. Let DB = (K, I) be a well-defined database and I = (σ*, ν*, S) an interpretation of DB. The notion of satisfaction of expressions, denoted by ⊨, and its negation, denoted by ⊭, are defined as follows.
1. The satisfaction of ground positive and negative oid membership expressions, ground positive and negative attribute expressions, and ground arithmetic comparison expressions is defined in the usual way.
2. For a ground positive method expression ψ, I ⊨ ψ iff ψ ∈ S.
3. For a ground negative method expression ¬ψ, I ⊨ ¬ψ iff ¬ψ ∈ S.
4. For a ground composite expression c o[V1, ..., Vn],
I ⊨ c o[V1, ..., Vn] iff I ⊨ c o and I ⊨ o.Vi for each i with 1 ≤ i ≤ n;
I ⊭ c o[V1, ..., Vn] iff I ⊭ c o or I ⊭ o.Vi for some i with 1 ≤ i ≤ n.
For a ground composite expression o[V1, ..., Vn],
I ⊨ o[V1, ..., Vn] iff I ⊨ o.Vi for each i with 1 ≤ i ≤ n;
I ⊭ o[V1, ..., Vn] iff I ⊭ o.Vi for some i with 1 ≤ i ≤ n.
5. For a method rule A :- L1, ..., Ln, I satisfies the rule iff, for each ground substitution θ, one of the following holds:
I ⊨ θA; or
I ⊭ θA and, for each ground method rule with head θA, there exists a body literal L such that I ⊭ L; or
there exists an Li with 1 ≤ i ≤ n such that neither I ⊨ θLi nor I ⊭ θLi.
In other words, I ⊨ ψ means that ψ is true in I; I ⊭ ψ means that ψ is false in I; if neither I ⊨ ψ nor I ⊭ ψ, then ψ is unknown in I.
Model. Let DB = (K, I) be a well-defined database and I = (σ*, ν*, S) an interpretation of DB. Then I is a model of DB if I satisfies every ground method rule in μ*.
Consider the following database:
class person [
spouse ⇒ person;
married() {married() :- ¬single()};
single() {single() :- ¬married()}
]
person sam[spouse → pam]
person pam
The following set is a model of this database:
{sam.married(), ¬sam.single(), pam.married(), ¬pam.single()}.
Due to the typing and compatibility constraints as in ROL [12], it is possible that a database has no models. Also, a well-defined database may have several models. Our intention is to select a proper minimal model as the intended semantics of the database.
An unfounded set for a database with respect to an interpretation
provides a basis for false method expressions in
our semantics. The greatest unfounded set (GUS) is the set
of all the expressions that are false in a database with respect
to an interpretation and is used to provide the negative
expressions when finding the model of a database. The definition
for unfounded sets and greatest unfounded sets can
be found in [16]. The greatest unfounded set is used in the
definition of a model, i.e., the limit of the following transformation.
Transformation. Let DB = (K, I) be a well-defined database. The transformation T_DB of DB is a mapping from interpretations to interpretations defined as follows:
T_DB(I) = (σ*, ν*, W(I)) if W(I) is well-typed and compatible, and is undefined otherwise,
where W(I) = T(I) ∪ ¬·G, with
T(I) = {θA | A :- L1, ..., Ln is a method rule in DB and there exists a ground substitution θ such that I ⊨ θLi for each 1 ≤ i ≤ n},
and G is the GUS of DB with respect to I (¬·G denotes the set of negations of the expressions in G).
Model. For all countable ordinals h, the tuples I_h for a database DB = (K, I), whose limit defines the limit of the transformation T_DB, are defined recursively by:
1. For a limit ordinal h, I_h = (σ*, ν*, ∪_{k<h} S_k), where I_k = (σ*, ν*, S_k) for k < h.
2. For a successor ordinal k + 1, I_{k+1} = T_DB(I_k).
Note that 0 is a limit ordinal and I_0 = (σ*, ν*, ∅). The sequence reaches a limit I*.
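For a ground rule set, a limit of this kind can be computed with the alternating-fixpoint construction familiar from the well-founded semantics. The sketch below is ours and deliberately simplified to propositional atoms, ignoring typing and the extensional database; Γ(S) is the least model of the rules with negative literals evaluated against S:

def gamma(rules, assumed_true):
    # least model of the reduct: a rule fires once its positive body
    # is derived, provided none of its negated atoms is assumed true
    facts, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in facts and pos <= facts and not (neg & assumed_true):
                facts.add(head)
                changed = True
    return facts

def well_founded(rules, atoms):
    true, over = set(), set(atoms)
    while True:
        new_true = gamma(rules, gamma(rules, true))   # under-approximation grows
        new_over = gamma(rules, new_true)             # over-approximation shrinks
        if new_true == true and new_over == over:
            break
        true, over = new_true, new_over
    return true, atoms - over, over - true            # true, false, unknown

# the mutually negative pair from the example below: both remain unknown
rules = [("married", set(), {"single"}), ("single", set(), {"married"})]
print(well_founded(rules, {"married", "single"}))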
We now prove that I* is a model.
Theorem 3.1. Let DB be a well-defined database. If I* = (σ*, ν*, S) is defined, then it is a model of DB. □
Minimal model. Let M = (σ*, ν*, S) be a model of a database DB. We say that the model M is minimal if there does not exist an expression ψ in S such that (σ*, ν*, S - {ψ}) is still a model.
We now prove that, for a well-defined database DB, I* is a minimal model of DB if it is defined.
Theorem 3.2. Let DB be a well-defined database. If I* = (σ*, ν*, S) is defined, then it is a minimal model of DB. □
Semantics of Databases. The semantics of a well-defined database DB is represented by the limit I if it is defined.
Semantics of Queries. Let DB be a well-defined database, Q a query of the form L1, ..., Ln, and θ a ground substitution for the variables of Q. Assume I is defined. Then the answer to Q based on DB is one of the following:
1. true if I ⊨ θLi for each i with 1 ≤ i ≤ n,
2. false if there exists an Li with 1 ≤ i ≤ n such that I ⊭ θLi, and
3. unknown otherwise.
In other words, for a method expression ψ, if I ⊨ ψ then the expression ψ is true, if I ⊨ ¬ψ then ψ is false, and if I ⊭ ψ and I ⊭ ¬ψ then the expression ψ is unknown. Let us consider an example with unknown answers. Consider the following database:

class person [
  spouse
  single() {single() :- ¬married()}
]
person sam[spouse → pam]
person pam

Then I = ({..., sam[spouse → pam]}; ∅) is a three-valued model, in which the answers to the following queries are unknown.
There are two reasons why I may be undefined. One is
that the inferred set of method expressions is not well-typed.
The other is that it is not consistent. For the first problem,
we could define another constraint on method rules using
type substitution as in [13] to constrain the database. For
the second problem, run-time checking is necessary.
Logical semantics have played an important role in
database research. However, the object-oriented approach
to databases was dominated by "grass-roots" activity where
several systems were built without the accompanying theoretical
progress. As a result, many researchers feel the area
of object-oriented databases is misguided [9]. The deductive
object-oriented database research, however, has taken
quite a different approach. It has logical semantics as its
main objective and started with a small set of simple features
taken from the object-oriented paradigm such as F-logic
[9], and gradually incorporates more and more difficult
features that can be given a logical semantics such as
ROL [12] and Datalog++ [8].
The main contribution of the paper is the addition of two
outstanding object-oriented features to deductive object-oriented
databases together with a direct logical semantics.
The two outstanding features are rule-based methods and the encapsulation of these methods in classes, and multiple behavioral inheritance, with overriding, blocking, and conflict handling. We have shown that these object-oriented
features which are believed to be difficult to address, can
indeed be captured logically. We believe that the semantics given in this paper will have a far-reaching influence on the design of deductive object-oriented languages and even
object-oriented languages in general. The language and semantics
defined on the language form the theoretical basis
for a practical query language. Indeed, the practical deductive
object-oriented database language ROL2 [15] supports
the theory discussed here.
Our work differs from the work of others in many ways.
Most existing deductive object-oriented database languages
do not allow rule-based methods to be encapsulated in the
class definitions. Those that do, do not address the issue
directly. Also, most existing deductive object-oriented
database languages do not allow non-monotonic multiple
behavioral inheritance. ROL does, but it deals with conflict handling only in a limited context and does not have blocking. Datalog++ provides blocking and disallows the inheritance
of conflicting properties. F-logic supports monotonic
structural inheritance and indeterminate non-monotonic default
value inheritance by allowing a database to have multiple
possible models. For a class, not only its subclasses
but also its elements can inherit its properties.
--R
Object as a Query Language Primitive.
Methods and Rules.
A Logic for Encapsulation in Object Oriented Languages.
COMPLEX: An Object-Oriented Logic Programming System
The LIVING IN A LATTICE Rule Language.
Implementing Abstract Objects with Inheritance in Datalog neg
Logical Foundations of Object-Oriented and Frame-Based Languages
Introduction to Object-Oriented Databases
ROL: A Deductive Object Base Language.
Incorporating Methods and Encapsulation into Deductive Object-Oriented Database Languages
A Logic for Deductive Object-Oriented Databases
A Real Deductive Object-Oriented Database Language
The Well-Founded Semantics for General Logic Programs
| deductive databases;nonmonotonic multiple inheritance;declarative semantics;object-oriented databases;rule-based languages |
507257 | The regular viewpoint on PA-processes. | PA is the process algebra allowing non-determinism, sequential and parallel compositions, and recursion. We suggest viewing PA-processes as trees, and using tree-automata techniques for verification problems on PA. Our main result is that the set of iterated predecessors of a regular set of PA-processes is a regular tree language, and similarly for iterated successors. Furthermore, the corresponding tree automata can be built effectively in polynomial time. This has many immediate applications to verification problems for PA-processes, among which a simple and general model-checking algorithm. | Introduction
Verification of Infinite State Processes is a very active field of research today in the concurrency-theory community. Of course, there has always been an active Petri-nets community, but researchers involved in process algebra and model-checking really became interested in infinite state processes after the proof that bisimulation was decidable for normed BPA-processes [BBK87]. This prompted several researchers to investigate decidability issues for BPP and BPA (with or without the normedness hypothesis) (see [CHM94, Mol96, BE97] for a partial survey).
From BPA and BPP to PA: BPA is the "non-determinism + sequential composition + recursion" fragment of process algebra. BPP is the "non-determinism + parallel composition + recursion" fragment. PA (from [BEH95]) combines both and is much less tractable. A few years ago, while more and more decidability results for BPP and BPA were presented, PA was still beyond the reach of the current techniques. Then Mayr showed the decidability of reachability for PA processes [May97c], and extended this into decidability of model-checking for PA w.r.t. the EF fragment of CTL [May97b]. This was an important breakthrough, allowing Mayr to successfully attack more powerful process algebras [May97a] while other decidability results for PA were presented by him and other researchers (e.g. [Kuč96, Kuč97, JKM98]).
A field asking for new insights: The decidability proofs from [May97b] (and the following papers) are certainly not trivial. The constructions are quite complex and hard to check. It is not easy to see in which directions the results and/or the proofs could be adapted or generalized without too much trouble. Probably, this complexity cannot be avoided with the techniques currently available in the field. We believe we are at a point where it is more important to look for new insights, concepts and techniques that will simplify the field, rather than trying to further extend already existing results.
Our contribution: In this paper, we show how tree-automata techniques greatly help dealing with PA. Our main results are two Regularity Theorems, stating that Post*(L) (resp. Pre*(L)), the set of configurations reachable from (resp. allowing to reach) a configuration in L, is a regular tree language when L is, and giving simple polynomial-time constructions for the associated automata. Many important consequences follow directly, including a simple algorithm for model-checking PA-processes.
Why does it work? The regularity of Post*(L) and Pre*(L) could only be obtained after we had the combination of two main insights:
1. the tree-automata techniques that have been proved very powerful in several fields (see [CKSV97]) are useful for the process-algebraic community as well. After all, PA is just a simple term-rewrite system with a special context-sensitive rewriting strategy, not unlike head-rewriting, in the presence of the sequential composition operator.
2. the syntactic congruences used to simplify notations in simple process algebras help one get closer to the intended semantics of processes, but they break the regularity of the behavior. The decidability results are much simpler when one only introduces syntactic congruences at a later stage. (Besides, this is a more general approach.)
Plan of the paper: We start by recalling the basics of tree-automata theory (§ 1) before we introduce our definition of the PA process algebra (§ 2). After we explain how sets of PA processes can be seen as tree languages (§ 3), we give a simple proof showing that Post*(t) and Pre*(t) are regular tree languages and start listing applications to verification problems (§ 4). We then move on to Post*(L) and Pre*(L) for L a regular language (§ 5). These are our main technical results and we devote § 6 to the important applications in model-checking. We end up with an extension to reachability and model-checking under constraints (§ 7) and some simple but important techniques allowing to deal with PA processes modulo structural equivalence (§ 8).
Related work: Several recent works in the field use tree-automata to describe the behaviour of systems. We use them to describe sets of configurations.
The set of all reachable configurations of a pushdown automaton forms a regular (word) language. This was proven in [Büc64] and extended in [Cau92]. Applications to the model-checking of pushdown automata have been proposed in [FWW97, BEM97].
The decidability of the first-order theory of the rewrite relation induced by a ground term rewrite system relies on ground tree transducers [DT90] (note that PA is defined by a conditional ground rewrite system).
Among the applications we develop for our regularity theorems, several have been suggested by Mayr's work on PA [May97c, May97b] and/or our earlier work on RPPS [KS97a, KS97b].
1 Regular tree languages and tree automata
We recall some basic facts on tree automata and regular tree languages. For more details, the reader is referred to any classical source (e.g. [CDG+97]).
A ranked alphabet is a finite set of symbols F together with an arity function j : F → N. This partitions F according to arities: F = F0 ∪ F1 ∪ F2 ∪ ... We write T(F) for the set of terms over F and call them finite trees or just trees. A tree language over F is any subset of T(F).
A (finite, bottom-up) tree automaton A is a tuple ⟨F, Q, F, R⟩ where F is a ranked alphabet, Q = {q, q', ...} is a finite set of states, F ⊆ Q is the subset of final states, and R is a finite set of transition rules of the form f(q1, ..., qn) → q where n is the arity j(f) of symbol f ∈ F. Tree automata with ε-rules also allow some transition rules of the form q → q'.
The transition rules define a rewrite relation on terms built on F ∪ Q (seeing states from Q as nullary symbols). This works bottom-up. At first the nullary symbols at the leaves are replaced by states from Q, and then the quasi-leaf symbols immediately on top of leaves from Q are replaced by states from Q. We write t →A q when t can be rewritten (in some number of steps) to q ∈ Q, and say t is accepted by A if it can be rewritten into a final state of A. We write L(A) for the set of all terms accepted by A. Any tree language which coincides with L(A) for some A is a regular tree language. Regular tree languages are closed under complementation, union, etc.
An example: Let F be given by F0 = {a, b}, F1 = {g}, F2 = {f}. There is an automaton accepting the set of all t in which g occurs an even number of times. A is given by Q = {q0, q1}, F = {q0} and R = {a → q0, b → q0, g(q0) → q1, g(q1) → q0, f(q0, q0) → q0, f(q0, q1) → q1, f(q1, q0) → q1, f(q1, q1) → q0}.
Let t be g(f(g(a), b)). A rewrites t as follows: g(f(g(a), b)) → g(f(g(q0), q0)) → g(f(q1, q0)) → g(q1) → q0. Hence t →* q0 ∈ F so that t ∈ L(A).
If we replace R by {a → q0, b → q0, g(q0) → q1, g(q1) → q0, f(q0, q0) → q0, f(q1, q1) → q1}, we have an automaton accepting the set of all t where there is an even number of g's along every path from the root to a leaf.
The size of a tree automaton, denoted by |A|, is the number of states of A augmented by the size of the rules of A, where a rule f(q1, ..., qn) → q has size n + 2. In this paper, we shall never be more precise than counting |Q|, the number of states of our automata. Notice that, for a fixed F where the largest arity is m ≥ 2, |A| is in O(|Q|^m).
A tree automaton is deterministic if all transition rules have distinct left-hand sides (and there are no ε-rules). Otherwise it is non-deterministic. Given a non-deterministic tree automaton, one can use the classical "subset construction" and build a deterministic tree automaton accepting the same language, but this construction involves a potential exponential blow-up in size. Telling whether L(A) is empty for A a (non-necessarily deterministic) tree automaton can be done in time O(|A|). Telling whether a given tree t is accepted by a given (non-necessarily deterministic) A can be done in time polynomial in |A| + |t|.
A tree automaton is completely specified (also complete) if for each f ∈ Fn and q1, ..., qn ∈ Q, there is a rule f(q1, ..., qn) → q. By adding a sink state and the obvious rules, any A can be extended into a complete one accepting the same language.
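The linear-time emptiness test mentioned above is a standard saturation; the sketch below (our own code, with illustrative names) marks a state as inhabited once some tree can reach it.

```python
# Tree-automaton emptiness by saturation: a state is "inhabited"
# iff some tree rewrites to it.
def nonempty(rules, final):
    """rules: list of (symbol, tuple_of_child_states, state)."""
    inhabited = set()
    changed = True
    while changed:
        changed = False
        for (_, children, q) in rules:
            if q not in inhabited and all(c in inhabited for c in children):
                inhabited.add(q)
                changed = True
    return bool(inhabited & set(final))

# For the even-g automaton above every state is inhabited, so L(A) is non-empty.
```

With a work-list instead of repeated passes, this runs in time O(|A|), matching the bound quoted in the text.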
2 The PA process algebra
For our presentation of PA, we explicitly refrain from writing terms modulo some simplification laws (e.g. the neutral laws for 0). Hence our use of the IsNil predicate (see below), inspired by [Chr93].
This viewpoint is in agreement with the earliest works on (general) process algebras like CCS, ACP, etc. It is a key condition for the results of the next section, and it clearly does not prevent considering terms modulo some structural congruence at a later stage, as we demonstrate in Section 8.
2.1 Syntax
Act = {a, b, ...} is a set of action names. Var = {X, Y, ...} is a set of process variables. EPA is the set of PA-terms, given by the following abstract syntax:
t, u ::= 0 | X | t.u | t ∥ u
Given t ∈ EPA, we write Var(t) for the set of process variables occurring in t and Subterms(t) for the set of all subterms of t (this includes t).
A guarded PA declaration is a finite set Δ = {Xi →ai ti | i = 1, ..., n} of process rewrite rules. Note that the Xi's need not be distinct.
We write Var(Δ) for the set of process variables occurring in Δ, and Subterms(Δ) for the union of all Subterms(t) for t a right- or a left-hand side of a rule in Δ.
Δa(X) =def {t | there is a rule "X →a t" in Δ} and Δ(X) is the union of all Δa(X) for a ∈ Act. Var⊥ =def {X | Δ(X) = ∅} is the set of variables for which Δ provides no rewrite.
In the following, we assume a fixed Var and Δ.
2.2 Semantics
A PA declaration Δ defines a labeled transition relation →Δ ⊆ EPA × Act × EPA. We always omit the Δ subscript when no confusion is possible, and use the standard notations and abbreviations: t →a t', t →w t' (for w ∈ Act*), t →* t', t →, etc. →Δ is inductively defined via the following SOS rules:

X →a t  if (X →a t) ∈ Δ,
t1.t2 →a t1'.t2  if t1 →a t1',
t1.t2 →a t1.t2'  if t2 →a t2' and IsNil(t1),
t1 ∥ t2 →a t1' ∥ t2  if t1 →a t1',
t1 ∥ t2 →a t1 ∥ t2'  if t2 →a t2',

where the IsNil(.) predicate is inductively defined by
IsNil(0) = true,  IsNil(X) = true if X ∈ Var⊥ and false otherwise,  IsNil(t1.t2) = IsNil(t1 ∥ t2) = IsNil(t1) and IsNil(t2).
The IsNil predicate is a syntactic test for termination, and indeed
Lemma 2.1. The following three properties are equivalent:
1. IsNil(t) = true,
2. t ̸→ (i.e. t is terminated),
3. Var(t) ⊆ Var⊥.
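The SOS rules and the IsNil predicate are easy to execute. The sketch below uses our own encoding (nested tuples for terms, a dict for Δ) and a hypothetical declaration; it is an illustration, not the paper's formalism.

```python
# PA terms and one-step transitions following the SOS rules above.
from typing import List, Tuple, Union

Term = Union[str, Tuple[str, "Term", "Term"]]  # "0", "X", (".", t, u), ("|", t, u)

Delta = {"X": [("a", ("|", "X", "Y"))], "Y": [("b", "0")]}  # hypothetical rules

def is_nil(t: Term) -> bool:
    """IsNil: true iff t is terminated (Lemma 2.1)."""
    if t == "0":
        return True
    if isinstance(t, str):              # a process variable
        return not Delta.get(t)         # X is in Var_bot: no rule for X
    _, t1, t2 = t
    return is_nil(t1) and is_nil(t2)

def steps(t: Term) -> List[Tuple[str, Term]]:
    """All one-step transitions (action, successor) of t."""
    if t == "0":
        return []
    if isinstance(t, str):
        return list(Delta.get(t, []))
    op, t1, t2 = t
    res = [(a, (op, u1, t2)) for (a, u1) in steps(t1)]
    if op == "|" or is_nil(t1):         # prefix rewriting for "."
        res += [(a, (op, t1, u2)) for (a, u2) in steps(t2)]
    return res

print(steps((".", "X", "Y")))   # only X may move: [('a', ('.', ('|','X','Y'), 'Y'))]
```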
Proof. (2 ⇒ 3) If some X ∈ Var(t) has a rule Xi →ai ti in Δ, this rule can be used to derive a transition from t, so a terminated t satisfies Var(t) ⊆ Var⊥. (3 ⇒ 2) and (3 ⇔ 1) follow by an easy induction over t; the base cases are obvious from the definition.
3 EPA as a tree language
We shall use tree automata to recognize sets of terms from EPA. This is possible because EPA is just T(F) for F given by F0 = Var ∪ {0} and F2 = {., ∥}. Of course, we shall keep using the usual infix notation for terms built with "." or "∥".
We begin with one of the simplest languages in EPA:
Proposition 3.1. For any t, the singleton tree language {t} is regular, and an automaton for {t} needs only have |t| states.
Similarly, an immediate consequence of Lemma 2.1 is
Corollary 3.2. L⊥, the set of terminated processes, is a regular tree language, and an automaton for L⊥ needs only have one state.
4 Regularity of the reachability set
For t ∈ EPA, we let Pre*(t) =def {u | u →* t} and Post*(t) =def {u | t →* u} denote the set of iterated predecessors (resp. the set of iterated successors, also called the reachability set) of t.
These notions do not take into account the sequences w ∈ Act* of action names allowing to move from some t to some t' in Post*(t). Indeed, we will forget about action names until Section 7, which is devoted to Pre*[C](t) and Post*[C](t) for C ⊆ Act*.
Given two tree languages L, L' ⊆ EPA, we let L.L' =def {t.t' | t ∈ L, t' ∈ L'} and L ∥ L' =def {t ∥ t' | t ∈ L, t' ∈ L'}.
4.1 Regularity of Pre*
We define (Lt)t∈EPA, an infinite family of tree languages, as the least solution of the following set of recursive equations. The intuition is that these are quasi-regular equations satisfied by Pre*(t).
L0 = {0} ∪ ⋃{LY | Y →a 0 in Δ},
LX = {X} ∪ ⋃{LY | Y →a X in Δ},
Lt1∥t2 = (Lt1 ∥ Lt2) ∪ ⋃{LY | Y →a t1∥t2 in Δ},
Lt1.t2 = (Lt1.{t2}) ∪ ⋃{LY | Y →a t1.t2 in Δ}  if IsNil(t1) is false,
Lt1.t2 = (Lt1.Lt2) ∪ ⋃{LY | Y →a t1.t2 in Δ}  if IsNil(t1) is true.    (1)

Observe that all equations define Lt as containing all the LY's for Y a process variable allowing a one-step transition Y →a t.
Lemma 4.1. For any t ∈ EPA, Lt = Pre*(t).
Proof. (Sketch) The proof that u →* t implies u ∈ Lt is an induction over the length of the transition sequence from u to t, then a case analysis of which SOS rule gave the last transition, and then an induction over the structure of t.
The proof that u ∈ Lt implies u →* t is a fixpoint induction, followed by a case analysis over which summand of which equation is used, and relies on simple lemmas about reachability, such as "t1 →* t1' entails t1.t2 →* t1'.t2".
The equations from (1) can easily be transformed into regular equations, just by introducing new variables for the sets {t} in the definitions of the Lt.t''s. Now, because any given Lt only depends on a finite number of Lu's and {u}'s, namely only for u's in Subterms(t) ∪ Subterms(Δ), we have 1
Corollary 4.2. For any t ∈ EPA, the set Lt is a regular tree language, and the corresponding tree automaton has O(|Δ| + |t|) states. This entails
Theorem 4.3. For any t ∈ EPA, Pre*(t), Pre(t) and Pre+(t) are regular tree languages.
4.2 Regularity of Post*
We define (L't)t∈EPA and (L''t)t∈EPA, two infinite families of tree languages, as the least solution of the following set of recursive equations. Our aim is that L't should coincide with Post*(t) (resp. L''t with Post*(t) ∩ L⊥):

L'0 = {0},  L''0 = {0},
L'X = {X} ∪ ⋃{L't | X →a t in Δ},  L''X = ({X} ∩ L⊥) ∪ ⋃{L''t | X →a t in Δ},
L't1.t2 = (L't1.{t2}) ∪ (L''t1.L't2),  L''t1.t2 = L''t1.L''t2,
L't1∥t2 = L't1 ∥ L't2,  L''t1∥t2 = L''t1 ∥ L''t2.    (2)

1 In Section 5.1, we shall see that Corollary 4.2 holds even when Δ is infinite (but Var(Δ) must be finite).
Again, these can easily be turned into regular equations. Again, any given L't or L''t only depends on a finite number of L'u's, L''u's and {u}'s.
Corollary 4.4. For any t ∈ EPA, the sets L't and L''t are regular tree languages, and the corresponding tree automata have O(|Δ| + |t|) states.
As with Pre*(t), we can easily show
Lemma 4.5. For any t ∈ EPA, L't = Post*(t) and L''t = Post*(t) ∩ L⊥, hence the corollary
Theorem 4.6. For any t ∈ EPA, Post*(t), Post(t) and Post+(t) are regular tree languages that can be constructed effectively.
Theorems 4.3 and 4.6 will be generalized in Sections 5 and 7. However, we found it enlightening to give simple proofs of the simplest variants of our regularity results.
Already, Theorems 4.3 and 4.6 and the effective constructibility of the associated automata have many applications.
4.3 Some applications
Theorem 4.7. The reachability problem "is t reachable from t0?" is in P.
Proof. Combine the cost of membership testing for non-deterministic tree automata with the regularity of Pre*(t0) or the regularity of Post*(t).
For a different presentation of PA and →Δ, [May97c] shows that the reachability problem is NP-complete. In Section 8, we describe how to get his result as a byproduct of our approach.
Many other problems are solved by simple application of Theorems 4.3 and 4.6:
boundedness. Is Post*(t) infinite?
covering (a.k.a. control-state reachability). Can we reach a t' in which Y1, ..., Yn occur (resp. do not occur)?
inclusion. Are all states reachable from t1 also reachable from t2? Same question modulo a regularity-preserving operation (e.g. projection).
liveness. Say a given Δ' ⊆ Δ is live if, in all reachable states, at least one transition from Δ' can be fired.
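The decision procedure behind Theorem 4.7 is short once the automata exist. The sketch below is ours: it assumes a hypothetical constructor post_star_automaton(t0, delta) built as in this section, and implements the standard membership test for non-deterministic tree automata.

```python
# Reachability "t0 ->* t" via membership of t in Post*(t0).
def root_states(rules, tree):
    """All states a non-deterministic tree automaton can reach at the root."""
    sym, children = tree[0], tree[1:]
    child_sets = [root_states(rules, c) for c in children]
    out = set()
    for (s, kids, q) in rules:
        if s == sym and len(kids) == len(children) and \
           all(k in cs for k, cs in zip(kids, child_sets)):
            out.add(q)
    return out

def reachable(t0, t, delta):
    # post_star_automaton is assumed: O(|delta| + |t0|) states per Corollary 4.4
    rules, final = post_star_automaton(t0, delta)
    return bool(root_states(rules, t) & set(final))
```

The membership test is polynomial in |A| + |t|, which gives the P bound of Theorem 4.7.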
5 Regularity of Post*(L) and Pre*(L) for a regular language
In this section we prove the regularity of Pre*(L) and Post*(L) for a regular language L.
For notational simplicity, given two states q, q' of an automaton A, we denote by q ∥ q' any state q'' such that q ∥ q' →A q'', possibly using ε-rules (and similarly for q.q').
5.1 Regularity of Pre*
Ingredients for APre: Assume AL is an automaton recognizing L ⊆ EPA. APre is a new automaton combining several ingredients:
• A⊥ is a completely specified automaton accepting terminated processes (see Corollary 3.2).
• AL is the automaton accepting L.
• We also use a boolean b to record whether some rewriting steps have been done.
States of APre: A state of APre is a 3-tuple (q⊥, qL, b) ∈ Q⊥ × QL × {true, false}, where Q... denotes the set of states of the relevant automaton.
Transition rules of APre: The transition rules of APre are defined as follows:
type 0: all rules of the form 0 ↦ (q⊥, qL, false) s.t. 0 →A⊥ q⊥ and 0 →AL qL.
type 1a: all rules of the form X ↦ (q⊥, qL, true) s.t. there exists some u ∈ Post+(X) with u →A⊥ q⊥ and u →AL qL.
type 1b: all rules of the form X ↦ (q⊥, qL, false) s.t. X →A⊥ q⊥ and X →AL qL.
type 2: all rules of the form (q⊥, qL, b) ∥ (q'⊥, q'L, b') ↦ (q⊥ ∥ q'⊥, qL ∥ q'L, b or b').
type 3a: all rules of the form (q⊥, qL, b).(q'⊥, q'L, b') ↦ (q⊥.q'⊥, qL.q'L, b or b') s.t. q⊥ is a final state of A⊥.
type 3b: all rules of the form (q⊥, qL, b).(q'⊥, q'L, false) ↦ (q⊥.q'⊥, qL.q'L, b).
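The rule table is mechanical to generate. The sketch below is a schematic rendering, following our reading of the table above; the helper names and the way component automata are passed in are ours.

```python
# Generating the binary rules of A_Pre from the component automata.
from itertools import product

def apre_binary_rules(op, comb_bot, comb_L, final_bot, states):
    """op is "|" or "."; comb_bot/comb_L give q op q' in A_bot resp. A_L.
    states enumerates the triples (q_bot, q_L, b)."""
    rules = []
    for (qb, ql, b), (qb2, ql2, b2) in product(states, repeat=2):
        tb, tl = comb_bot(op, qb, qb2), comb_L(op, ql, ql2)
        if op == "|":                                   # type 2
            rules.append((((qb, ql, b), (qb2, ql2, b2)), (tb, tl, b or b2)))
        else:
            if qb in final_bot:                         # type 3a: left terminated
                rules.append((((qb, ql, b), (qb2, ql2, b2)), (tb, tl, b or b2)))
            if not b2:                                  # type 3b: right untouched
                rules.append((((qb, ql, b), (qb2, ql2, b2)), (tb, tl, b)))
    return rules
```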
Lemma 5.1. For any t ∈ EPA, t →APre (q⊥, qL, b) iff there is some u ∈ EPA and some p ∈ N such that t →p u, u →A⊥ q⊥, u →AL qL, and b = true iff p > 0.
Proof. By structural induction over t. There are three cases:
1. t = 0 or t = X: Because APre has no ε-rules, we only have to observe that its rules of type 0, 1a and 1b exactly correspond to what the lemma requires.
2. t = t1.t2: the rewrite t →APre (q⊥, qL, b) required that, for i = 1, 2, ti →APre (qi⊥, qiL, bi), and that there is a type 3 rule (q1⊥, q1L, b1).(q2⊥, q2L, b2) ↦ (q⊥, qL, b).
The induction hypothesis entails there are t1 →p1 u1 and t2 →p2 u2 corresponding to the rewrites of t1 and t2 by APre. Now if APre used a type 3b rule, then b2 = false, so that p2 = 0, u2 = t2 and t1.t2 →p1 u1.t2. If we used a type 3a rule, then q1⊥ is a final state, therefore u1 ∈ L⊥ is a terminated process, hence t1.t2 →p1+p2 u1.u2.
Conversely, assume t1.t2 →p u with u →A⊥ q⊥ and u →AL qL. Then u is some u1.u2 with ti →pi ui for i = 1, 2. In the first case (p2 = 0 and u2 = t2) the ind. hyp. entails t1 →APre (q1⊥, q1L, b1) and t2 →APre (q2⊥, q2L, false) with u →AL qL. Now we can use a type 3b rule to show t →APre (q⊥, qL, b) with u →AL qL.
In the second case, u1 is terminated, so that u1 →A⊥ q1⊥ for q1⊥ a final state of A⊥. We can use a type 3a rule to show t →APre (q⊥, qL, b).
3. t = t1 ∥ t2: This case is similar to the previous one (actually it is simpler).
If we now let the final states of APre be all states (q⊥, qL, b) s.t. qL is a final state of AL, then t →* u for some u accepted by AL iff APre accepts t (this is where we use the assumption that A⊥ is completely specified).
Theorem 5.2. (Regularity)
(1) If L is a regular subset of EPA, then Pre*(L) is regular.
(2) Furthermore, from an automaton AL recognizing L, it is possible to construct (in polynomial time) an automaton APre recognizing Pre*(L). If AL has k states, then APre needs only have 4k states.
Proof. (1) is an immediate consequence of Lemma 5.1. Observe that the result does not need the finiteness of Δ (but Var(Δ) must be finite).
(2) Building APre effectively requires an effective way of listing the type 1a rules. This can be done by computing a product of AX, an automaton for Post+(X), with A⊥ and AL. Then there exists some u ∈ Post+(X) with u →A⊥ q⊥ and u →AL qL iff the language accepted by the final states {(q, q⊥, qL) | q a final state of AX} is non-empty. This gives us the pairs q⊥, qL we need for type 1a rules. Observe that we need the finiteness of Δ to build the AX's.
5.2 Regularity of Post*
Ingredients for APost: Assume AL is an automaton recognizing L ⊆ EPA. APost is a new automaton combining several ingredients:
• Automata A⊥ and AL as in the previous construction, but this time we need to assume each of them is a completely specified automaton.
• AΔ is a completely specified automaton recognizing the subterms of Δ. It has all states qs for s ∈ Subterms(Δ). We ensure that t →AΔ qt for all t ∈ Subterms(Δ) by taking as transition rules all qs1.qs2 → qs1.s2 and qs1 ∥ qs2 → qs1∥s2 such that the corresponding term belongs to Subterms(Δ). In addition, the automaton has a sink state and the obvious transitions so that it is a completely specified automaton.
• Again, we use a boolean b to record whether rewrite steps have occurred.
States of APost: The states of APost are 4-tuples (q⊥, qΔ, qL, b).
Transition rules of APost: The transition rules are:
type 0: all rules of the form 0 ↦ (q⊥, qΔ, qL, false) s.t. 0 →A⊥ q⊥, 0 →AΔ qΔ and 0 →AL qL.
type 1: all rules of the form X ↦ (q⊥, qΔ, qL, false) s.t. X →A⊥ q⊥, X →AΔ qΔ and X →AL qL.
type 2: all ε-rules of the form (q⊥, qs, qL, b) ↦ (q⊥, q'Δ, q'L, true) s.t. X →a s is a rule in Δ with X →AΔ q'Δ and X →AL q'L.
type 3: all rules of the form (q1⊥, q1Δ, q1L, b1) ∥ (q2⊥, q2Δ, q2L, b2) ↦ (q1⊥ ∥ q2⊥, q1Δ ∥ q2Δ, q1L ∥ q2L, b1 or b2).
type 4a: all rules of the form (q1⊥, q1Δ, q1L, b1).(q2⊥, q2Δ, q2L, false) ↦ (q1⊥.q2⊥, q1Δ.q2Δ, q1L.q2L, b1).
type 4b: all rules of the form (q1⊥, q1Δ, q1L, b1).(q2⊥, q2Δ, q2L, b2) ↦ (q1⊥.q2⊥, q1Δ.q2Δ, q1L.q2L, b1 or b2) s.t. q1⊥ is a final state of A⊥.
Lemma 5.3. For any t ∈ EPA, t →APost (q⊥, qΔ, qL, b) iff there is some u ∈ EPA and some p ∈ N such that u →p t, u →AL qL, u →AΔ qΔ, t →A⊥ q⊥, and b = true iff p > 0.
Proof. We first prove the (⇒) direction by induction over the length k of the rewrite t →APost (q⊥, qΔ, qL, b). We distinguish four cases:
1. k = 1: t is 0 or some X and we used a type 0 or type 1 rule. Taking u = t and p = 0 satisfies the requirements.
2. k > 1 and the last rewrite step used a type 2 ε-rule: Then the rewrite has the form t →APost (q⊥, qs, q'L, b') ↦ (q⊥, qΔ, qL, true). By ind. hyp., there is a u' and a p' s.t. u' →p' t. Now u' = s, and the existence of the type 2 rule entails a rule X →a s in Δ, so that X →a u' →p' t. Taking u = X and p = p' + 1 satisfies the requirements.
3. k > 1 and the last rewrite step used a type 4 rule: Then t is some t1.t2 and the type 4 rule applied on top of two rewrite sequences ti →APost (qi⊥, qiΔ, qiL, bi) for i = 1, 2. The ind. hyp. gives us, for i = 1, 2, some ui and pi with ui →pi ti.
If the last rule was a type 4a rule, then b2 = false, so that p2 = 0, u2 = t2 and u1.u2 →p1 t. Taking u = u1.u2 and p = p1 satisfies the requirements.
Otherwise the last rule was a type 4b rule. Then q1⊥ is a final state and t1 →A⊥ q1⊥ entails that t1 is a terminated process. Hence u1.u2 →p1+p2 t. Again, taking u = u1.u2 and p = p1 + p2 satisfies the requirements.
4. k > 1 and the last rewrite step used a type 3 rule: This case is similar (actually simpler) to the previous one.
For the (⇐) direction, we assume u →p t with the accompanying conditions (a.c.), and proceed by induction over the length of the transition sequence (i.e. over p), followed by structural induction over u. There are five cases:
1. u = 0: Then p = 0, t = 0, and the a.c.'s ensure we can use a type 0 rule to show t →APost (q⊥, qΔ, qL, false).
2. u = X and p = 0: Like the previous case but with a type 1 rule.
3. u = X and p > 0: Then the sequence has the form X →a s →p−1 t for a rule X →a s of Δ. Here the a.c.'s read X →AΔ qΔ and X →AL qL, with s ∈ Subterms(Δ). If we now take a q'L s.t. s →AL q'L (one such q'L must exist) and let b' be false iff p − 1 = 0, the ind. hyp. gives us t →APost (q⊥, qs, q'L, b'), and there must be a type 2 ε-rule (q⊥, qs, q'L, b') ↦ (q⊥, qΔ, qL, true). We use it to show t →APost (q⊥, qΔ, qL, true).
4. u = u1.u2: Then t is a combination t1.t2 of some u1 →p1 t1 and u2 →p2 t2 with p = p1 + p2. Additionally, if p2 > 0 then t1 is terminated. For i = 1, 2, the rewrites t →A⊥ q⊥, u →AL qL and u →AΔ qΔ used some ti →A⊥ qi⊥, ui →AL qiL and ui →AΔ qiΔ. If we now define bi according to pi, the ind. hyp. entails that, for i = 1, 2, ti →APost (qi⊥, qiΔ, qiL, bi).
There are two cases. If t1 ∈ L⊥ then q1⊥ is a final state of A⊥ and APost has a type 4b rule (q1⊥, q1Δ, q1L, b1).(q2⊥, q2Δ, q2L, b2) ↦ (q⊥, qΔ, qL, b) that we can use. If t1 ∉ L⊥, then p2 = 0, b2 = false, and there is a type 4a rule that we can use.
5. u = u1 ∥ u2: Similar to the previous case (actually it is simpler).
If we now let the final states of APost be all states (q⊥, qΔ, qL, b) s.t. qL is a final state of AL, then APost accepts a term t iff u →* t for a u accepted by AL, iff t belongs to Post*(L).
Theorem 5.4. (Regularity)
(1) If L is a regular subset of EPA, then Post*(L) is regular.
(2) Furthermore, from an automaton AL recognizing L, it is possible to construct (in polynomial time) an automaton APost recognizing Post*(L). If AL has k states, then APost needs only have O(k.|Δ|) states.
Proof. Obvious from the previous construction.
Our results relate t and Pre*(t) (resp. Post*(t)). A natural question is to ask whether the relation "→*" (i.e. {(t, u) | t →* u}) is recognizable in some sense. The most relevant notion of recognizability related to our problem is linked to ground tree transducers, GTT's for short (see [DT90] for details). Since it can be shown that the relation induced by a ground rewrite system is recognizable by a GTT, we tried to extend this result to our PA case, where the rules are ground rewrite rules with simple left-hand sides, but where there is a notion of prefix rewriting. Unfortunately, this prefix rewriting entails that our →* is not stable under contexts, and the natural extensions of GTT that could handle such conditional rules are immediately able to recognize any recursively enumerable relation.
6 Model-checking PA processes
In this section we show a simple approach to the model-checking problem solved in [May97b]. We see this as one more immediate application of our main regularity theorems.
We consider a set P of atomic propositions. For P ∈ P, Mod(P) denotes the set of PA processes for which P holds. We only consider propositions P such that Mod(P) is a regular tree language. Thus P could be "it can make an a-labeled step right now", "there are at least two occurrences of X inside t", "there is exactly one occurrence of X in a non-frozen position", etc.
The logic EF has the following syntax:
φ ::= P | ¬φ | φ ∧ φ' | EXφ | EFφ
and semantics
t ⊨ P iff t ∈ Mod(P),  t ⊨ ¬φ iff t ⊭ φ,  t ⊨ φ ∧ φ' iff t ⊨ φ and t ⊨ φ',
t ⊨ EXφ iff t' ⊨ φ for some t' with t → t',  t ⊨ EFφ iff t' ⊨ φ for some t' with t →* t'.
Thus EXφ reads "it is possible to reach in one step a state s.t. φ" and EFφ reads "it is possible to reach (via some sequence of steps) a state s.t. φ".
Definition 6.1. The model-checking problem for EF over PA has as inputs: a given Δ, a given t in EPA, a given φ in EF. The answer is yes iff t ⊨ φ.
If we now define Mod(φ) =def {t | t ⊨ φ}, we have
Mod(¬φ) = EPA − Mod(φ),  Mod(φ ∧ φ') = Mod(φ) ∩ Mod(φ'),
Mod(EXφ) = Pre(Mod(φ)),  Mod(EFφ) = Pre*(Mod(φ)).    (3)
Theorem 6.2. (1) For any EF formula φ, Mod(φ) is a regular tree language.
(2) If we are given tree-automata AP's recognizing the regular sets Mod(P), then a tree-automaton Aφ recognizing Mod(φ) can be built effectively.
Proof. A corollary of (3) and the regularity theorems.
This gives us a decision procedure for the model-checking problem: build an automaton for Mod(φ) and check whether it accepts t. We can estimate the complexity of this approach in terms of |φ| and nalt(φ).
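The procedure follows equations (3) directly. The sketch below assumes automata-library primitives with hypothetical names (complement, intersect, pre, pre_star, accepts) corresponding to the constructions of this paper.

```python
# Building an automaton for Mod(phi) by recursion over the formula.
def mod(phi, atoms, delta):
    """phi as nested tuples; atoms maps each P to an automaton for Mod(P)."""
    kind = phi[0]
    if kind == "atom":
        return atoms[phi[1]]
    if kind == "not":
        return complement(mod(phi[1], atoms, delta))       # may determinize
    if kind == "and":
        return intersect(mod(phi[1], atoms, delta),
                         mod(phi[2], atoms, delta))
    if kind == "EX":
        return pre(mod(phi[1], atoms, delta), delta)        # Pre(Mod(phi1))
    if kind == "EF":
        return pre_star(mod(phi[1], atoms, delta), delta)   # A_Pre construction
    raise ValueError(kind)

def model_check(t, phi, atoms, delta):
    return accepts(mod(phi, atoms, delta), t)
```

Note that only the complement step is super-polynomial, which is exactly where the alternation-depth blowup of Theorem 6.3 originates.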
We define nalt(φ), the number of alternations of negations and temporal connectives in φ, in the usual way.
Theorem 6.3. (Model-checking) An automaton for Mod(φ) can be computed in time bounded by a tower of exponentials of height nalt(φ) in |φ|.|Δ|.
Proof. We assume all automata for the Mod(P)'s have size bounded by M (a constant). We construct an automaton for Mod(φ) by applying the usual automata-theoretic constructions for intersection, union, complementation of regular tree languages, and by invoking our regularity theorems for Pre and Pre*. All constructions are polynomial except for complementation. With only polynomial constructions, we would have a 2^O(|φ|) size for the resulting automaton. The negations involving complementation are the cause of the non-elementary blowup.
Negations can be pushed inward except that they cannot cross the temporal connectives EF and EX. Here we have one exponential blowup for determinization at each level of alternation. This is repeated nalt(φ) times, yielding the given bound on the number of states, hence the overall complexity.
The procedure described in [May97b] is non-elementary and today the known lower bound is PSPACE-hard. Observe that computing a representation of Mod(φ) is more general than just telling whether a given t belongs to it. Observe also that our results allow model-checking approaches based on combinations of forward and backward methods (while Theorem 6.2 only relied on the standard backward approach).
7 Reachability under constraints
In this section, we consider reachability under constraints. Let C ⊆ Act* be a (word) language over action names. We write t →C t' when t →w t' for some w ∈ C, meaning that t' can be reached from t under the constraint C. We extend our notations and write Pre*[C](L), Post*[C](L), ... with the obvious meaning.
Observe that, even if we assume C is regular, the problem of telling whether t →C, i.e. whether Post*[C](t) is not empty, is undecidable for the PA algebra. This can be proved by a reduction from the intersection problem for context-free languages as follows: Let Σ be an alphabet and # some distinguished symbol. We use two copies a, a' of every letter a in Σ ∪ {#}. Context-free languages can be defined in BPA (PA without ∥), that is, for any context-free language L1 (resp. L2) on Σ, we can define PA rules such that X1 →w.# 0 iff w ∈ L1 (resp. X2 →w'.#' 0 iff w' is the primed copy of some w ∈ L2). These rules don't overlap. We now introduce the regular constraint C =def {a1 a'1 a2 a'2 ... an a'n # #' | ai ∈ Σ}. Then (X1 ∥ X2) →C holds iff L1 ∩ L2 ≠ ∅, which is undecidable.
In this section we give sufficient conditions over C so that the problem becomes decidable (and so that we can compute the C-constrained Pre* of a regular tree language).
Recall that the shuffle w ∥ w' of two finite words is the set of all words one can obtain by interleaving w and w' in an arbitrary way.
Definition 7.1.
• {(C1, C'1), ..., (Cm, C'm)} is a (finite) seq-decomposition of C iff for all w, w' ∈ Act* we have: w.w' ∈ C iff w ∈ Ci and w' ∈ C'i for some 1 ≤ i ≤ m.
• {(C1, C'1), ..., (Cm, C'm)} is a (finite) paral-decomposition of C iff for all w, w' ∈ Act* we have: (w ∥ w') ∩ C ≠ ∅ iff w ∈ Ci and w' ∈ C'i for some 1 ≤ i ≤ m.
The crucial point of the definition is that a seq-decomposition of C must apply to all possible ways of splitting any word in C. It even applies to a decomposition w.w' with w = ε, so that one of the Ci's (and one of the C'i's) contains ε. Observe that the formal difference between seq-decomposition and paral-decomposition comes from the fact that w ∥ w', the set of all shuffles of w and w', usually contains several elements.
Definition 7.2. A family C = {C1, ..., Cn} of languages over Act* is a finite decomposition system iff every C ∈ C admits a seq-decomposition and a paral-decomposition only using Ci's from C.
Not all C ⊆ Act* admit finite decompositions, even in the regular case. Consider C = (ab)* and assume {(C1, C'1), ..., (Cm, C'm)} is a finite paral-decomposition. Then for every k, there is a shuffle of a^k and b^k in C. Hence there must be an ik with a^k ∈ Cik and b^k ∈ C'ik. Now if ik = ik' for k ≠ k', then there must exist a shuffle w'' of a^k and b^k' with w'' ∈ C. This is only possible if k = k'. Hence all ik's are distinct, contradicting finiteness.
A simple example of a finite decomposition system is {{w} | |w| ≤ k}, i.e. the set of all singleton languages with words shorter than k. Here the paral-decomposition of {w} is {({w1}, {w'1}), ..., ({wm}, {w'm})} where the wi's are all subwords 2 of w (and w'i is the corresponding remainder). This example shows that decomposability is not composability: not all pairs from C appear in the decomposition of some member of C.
More generally, for any linear weight function φ of the form φ(w) =def Σa∈Act na.|w|a with na ∈ N, the sets C(φ=k) =def {w | φ(w) = k} belong to a finite decomposition system.
Assume C is a finite decomposition system. We shall show
Theorem 7.3. (Regularity) For any regular L ⊆ EPA and any C ∈ C, Pre*[C](L) and Post*[C](L) are regular tree languages.
2 A subword of w is any w' obtained by erasing letters from w at any position.
Ingredients for APost[C]: We build APost[C] in the same way as APost but states contain a new C ∈ C component.
States of APost[C]: The states of APost[C] are 5-tuples (q⊥, qΔ, qL, b, C).
Transition rules of APost[C]: The transition rules are:
type 0: all rules of the form 0 ↦ (q⊥, qΔ, qL, false, C) s.t. ε ∈ C and the corresponding type 0 conditions of APost hold.
type 1: all rules of the form X ↦ (q⊥, qΔ, qL, false, C) s.t. ε ∈ C and the corresponding type 1 conditions of APost hold.
type 2: all ε-rules of the form (q⊥, qs, qL, b, C) ↦ (q⊥, q'Δ, q'L, true, C'') s.t. X →a s is a rule in Δ with X →AΔ q'Δ, X →AL q'L, and (C', C) appears in the seq-decomposition of C'' for some C' with a ∈ C'.
type 3: all rules of the form (q1⊥, q1Δ, q1L, b1, C) ∥ (q2⊥, q2Δ, q2L, b2, C') ↦ (q1⊥ ∥ q2⊥, q1Δ ∥ q2Δ, q1L ∥ q2L, b1 or b2, C'') s.t. (C, C') appears in the paral-decomposition of C''.
type 4a: all rules of the form (q1⊥, q1Δ, q1L, b1, C).(q2⊥, q2Δ, q2L, false, C') ↦ (q1⊥.q2⊥, q1Δ.q2Δ, q1L.q2L, b1, C) s.t. ε ∈ C'.
type 4b: all rules of the form (q1⊥, q1Δ, q1L, b1, C).(q2⊥, q2Δ, q2L, b2, C') ↦ (q1⊥.q2⊥, q1Δ.q2Δ, q1L.q2L, b1 or b2, C'') s.t. q1⊥ is a final state of A⊥ and (C, C') appears in the seq-decomposition of C''.
Lemma 7.4. For any t ∈ EPA, t →APost[C] (q⊥, qΔ, qL, b, C) iff there is some u ∈ EPA and some w ∈ C such that u →w t, u →AL qL, u →AΔ qΔ, t →A⊥ q⊥, and b = true iff w ≠ ε.
Proof. APost[C] is APost equipped with a new component and the proof follows exactly the lines of the proof of Lemma 5.3. We refer to this earlier proof and only explain how we deal with the new C components.
The (⇒) direction is as in Lemma 5.3. The new observations in the 4 cases are:
1. k = 1: we can take w = ε.
2. k > 1 and the last rewrite step used a type 2 ε-rule: Use the fact that a.w' ∈ C'' when a ∈ C', w' ∈ C and (C', C) appears in the seq-decomposition of C''.
3. k > 1 and the last rewrite step used a type 4 rule: Use the fact that C.C' ⊆ C''.
4. k > 1 and the last rewrite step used a type 3 rule: Use the fact that w1 ∈ C and w2 ∈ C' entail that there exists at least one shuffling w of w1 and w2 s.t. w ∈ C''.
The (⇐) direction is as in Lemma 5.3. The new observations in the 5 cases are:
1. u = 0: The type 0 rules allow all C's containing ε.
2. u = X and w = ε: Similar, with the type 1 rules.
3. u = X and w ≠ ε: Then the sequence has the form X →a s →w' t with w = a.w'. There must be a (C', C) in the seq-decomposition of C'' s.t. a ∈ C' and w' ∈ C, so that there is a type 2 rule we can use.
4. u = u1.u2: If w2 = ε, we have the type 4a rule we need. Otherwise there is a pair (C, C') in the seq-decomposition of C'' s.t. w1 ∈ C and w2 ∈ C'. This pair gives us the type 4b rule we need.
5. u = u1 ∥ u2: Then w is some shuffle of w1 and w2. Therefore there is a (C, C') in the paral-decomposition of C'' s.t. w1 ∈ C and w2 ∈ C'. This pair gives us the type 3 rule we need.
If we now let the final states of APost[C] be all (q⊥, qΔ, qL, b, C) s.t. qL is a final state of AL, then APost[C] accepts a term t iff t belongs to Post*[C](L). (The set of final states can easily be adapted so that we recognize Post+[C](L).)
Ingredients for APre[C]: Same as in the construction of APre, with an additional C ∈ C component.
States of APre[C]: A state of APre[C] is a 4-tuple (q⊥, qL, b, C). The final states are all (q⊥, qL, b, C) s.t. qL is a final state of AL and C is the constraint to satisfy.
Transition rules of APre[C]: The transition rules of APre[C] are defined as follows:
type 0: all rules of the form 0 ↦ (q⊥, qL, false, C) s.t. ε ∈ C, 0 →A⊥ q⊥ and 0 →AL qL.
type 1a: all rules of the form X ↦ (q⊥, qL, true, C) s.t. there exists some u and some w ∈ C with w ≠ ε, X →w u, u →A⊥ q⊥ and u →AL qL.
type 1b: all rules of the form X ↦ (q⊥, qL, false, C) s.t. ε ∈ C, X →A⊥ q⊥ and X →AL qL.
type 2: all rules of the form (q⊥, qL, b, C) ∥ (q'⊥, q'L, b', C') ↦ (q⊥ ∥ q'⊥, qL ∥ q'L, b or b', C'') s.t. (C, C') appears in the paral-decomposition of C''.
type 3a: all rules of the form (q⊥, qL, b, C).(q'⊥, q'L, b', C') ↦ (q⊥.q'⊥, qL.q'L, b or b', C'') s.t. q⊥ is a final state of A⊥ and (C, C') appears in the seq-decomposition of C''.
type 3b: all rules of the form (q⊥, qL, b, C).(q'⊥, q'L, false, C') ↦ (q⊥.q'⊥, qL.q'L, b, C) s.t. ε ∈ C'.
Lemma 7.5. For any t ∈ EPA, t →APre[C] (q⊥, qL, b, C) iff there is some u ∈ EPA and some w ∈ C with t →w u, u →A⊥ q⊥, u →AL qL, and b = true iff w ≠ ε.
Proof. APre[C] is APre equipped with a new component and the proof follows exactly the lines of the proof of Lemma 5.1. We refer to this earlier proof and only explain how we deal with the new C components.
1. t = 0 or t = X: The conditions on the C component for the existence of rules of type 0, 1a and 1b agree with the statement of the lemma.
2. t = t1.t2: For the (⇒) direction, t →APre[C] (q⊥, qL, b, C'') because there is a type 3 rule and, by ind. hyp., ti →wi ui for i = 1, 2. In the type 3b case, w = w1 ∈ C''. In the type 3a case, we use C.C' ⊆ C''.
For the (⇐) direction, we have either (1) w = w1 and u2 = t2, or (2) w = w1.w2 with u1 terminated. In the first case we apply the induction hypothesis with C'' itself on t1 and some C' containing ε on t2; then we can use a type 3b rule. In the second case, there must be a (C, C') in the seq-decomposition of C'' with w1 ∈ C and w2 ∈ C', and we just have to use the ind. hyp. and a type 3a rule.
3. t = t1 ∥ t2: This case is similar to the previous one. The (⇐) direction uses the pair (C, C') accounting for w in the paral-decomposition of C''. The (⇒) direction uses the crucial fact that whenever ti →wi ui for i = 1, 2, then t1 ∥ t2 →w u1 ∥ u2 for every shuffling w of w1 and w2, in particular for the w that C'' must contain.
7.1 Applications to model-checking
The above results let us apply the model-checking method from Section 6 to an extended EF logic where we now allow all ⟨C⟩φ formulas for decomposable C. The semantics is given by Mod(⟨C⟩φ) =def Pre*[C](Mod(φ)).
Decomposability of C is a quite general condition. It excludes the undecidable situations that would exist in the general regular case and immediately includes the extensions proposed in [May97b].
Observe that it is possible to combine decomposable constraints already in the model-checking algorithm: when C ∈ C and C' ∈ C' are decomposable, we can deal with ⟨C ∩ C'⟩φ directly (i.e. without constructing a finite decomposition system containing C and C') because it is obvious how to extend the construction for APre[C] to some APre[C,C'] where several C components are dealt with simultaneously.
We can also deal with ⟨C ∪ C'⟩φ and ⟨C.C'⟩φ directly since Pre*[C ∪ C'](L) and Pre*[C.C'](L) are Pre*[C](L) ∪ Pre*[C'](L) and Pre*[C](Pre*[C'](L)) for any C, C' and L.
8 Structural equivalence of PA terms
In this section we investigate the congruence ≡ induced on PA terms by the following equations:

(t1.t2).t3 = t1.(t2.t3),  t1 ∥ t2 = t2 ∥ t1,  (t1 ∥ t2) ∥ t3 = t1 ∥ (t2 ∥ t3),
t.0 = 0.t = t,  t ∥ 0 = t.

This choice of equations is motivated by the fact that several recent works on PA (and extensions) only consider processes up to this same congruence. Our techniques could deal with variants.
It is useful to explain how our definition of PA compares with the definition used in [May97c, May97b]. We consider a transition system between terms from EPA. The terms Mayr considers for his transition system can be seen as equivalence classes, modulo ≡, of our EPA terms. Write [t]≡ for the set {t' | t' ≡ t}. The transition relation used by Mayr coincides with a transition relation defined by

[t]≡ →a [u]≡ iff t' →a u' for some t' ≡ t and u' ≡ u.    (4)

In the following, we speak of "PA≡" when we mean the transition system one obtains with ≡-classes of terms as states, and transitions given by (4).
Our approach is more general in the sense that we can define the other approach in our framework. By contrast, if one reasons modulo ≡ right from the start, one loses the information required to revert to the other approach.
For example, the reachability problem "do we have t →* u?" from Theorem 4.7 asks for a very precise form for u. The reachability problem solved in [May97c] asks for u modulo ≡. In our framework, this can be stated as "given t and u, do we have t' →* u' for some t' ≡ t and u' ≡ u?" (see below). In the other framework, it is impossible to state our problem. (But of course, the first motivation for our framework is that it allows the two regularity theorems.)
The rest of this section is devoted to some applications of our tree-automata approach to problems for PA≡. The aim is not exhaustiveness. Rather, we simply want to show that our framework allows solving (not just stating) problems from the other framework and its variants.
8.1 Structural equivalence and regularity
The associativity and commutativity axioms satisfied by "." and "∥" are called the permutative axioms; we write t =P u when t and u are permutatively equivalent. The axioms defining 0 as the neutral element of "." and "∥" are called the simplification axioms; we write t ↘ u when u is a simplification of t, i.e. u can be obtained by applying the simplification axioms from left to right at some positions in t. Note that ↘ is a (well-founded) partial ordering. We write ↗ for (↘)−1. The simplification normal form of t, written t↓, is the unique u one obtains by simplifying t as much as possible (no permutation allowed).
Such axioms are classical in rewriting and have been extensively studied [BN98]. ≡ coincides with (=P ∪ ↘ ∪ ↗)* and, because the permutative axioms commute with the simplification axioms, we have

t ≡ u iff t↓ =P u↓.    (5)

This lets us decompose questions about ≡ into questions about =P and questions about ↘. We start with =P.
Lemma 8.1. For any t, the set [t]=P =def {u | u =P t} is a regular tree language, and an automaton for [t]=P needs only have m.(m/2)! states if m = |t|.
Proof. (Sketch) This is because [t]=P is a finite set with at most (m/2)! elements. (The exponential blowup cannot be avoided.)
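The simplification normal form t↓ is easy to compute. The sketch below is our own code over the tuple encoding of terms used earlier; following the equations of this section, we treat 0 as neutral on both sides of both operators.

```python
# Simplification normal form: remove neutral 0's bottom-up, no permutations.
def nf(t):
    if isinstance(t, str):
        return t
    op, t1, t2 = t
    t1, t2 = nf(t1), nf(t2)
    if t1 == "0":
        return t2          # 0.t = t; 0 || t = t (via commutativity of ||)
    if t2 == "0":
        return t1          # t.0 = t; t || 0 = t
    return (op, t1, t2)

assert nf((".", ("|", "X", "0"), "0")) == "X"
```

By (5), deciding t ≡ u then reduces to comparing nf(t) and nf(u) up to the permutative axioms.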
The simplification axioms do not have the nice property that they only allow finitely many combinations, but they behave better w.r.t. regularity. Write [L]↘ for {u | t ↘ u for some t ∈ L}, and similarly for [L]↗ and [L]↓.
Lemma 8.2. For any regular L, the sets [L]↗, [L]↘, and [L]↓ are regular tree languages. From an automaton AL recognizing L, we can build automata of size O(|AL|) for these three languages in polynomial time.
Proof. 1. [L]↗: u is in [L]↗ iff u is some t ∈ L with additional 0's that can be simplified out. Hence an automaton accepting [L]↗ is obtained from AL by adding a new state q0 for the subterms that will be simplified. We also add rules 0 ↦ q0, q0 ∥ q0 ↦ q0 and q0.q0 ↦ q0 accepting these subterms, and, for any q in AL, rules q.q0 ↦ q, q0.q ↦ q, q ∥ q0 ↦ q and q0 ∥ q ↦ q simulating simplification.
2. [L]↘: here some 0's of the terms in L may have been simplified. A simple way to obtain an automaton for [L]↘ is to synchronize the automaton AL accepting L with the complete automaton A0 recognizing terms built with 0, "." and "∥" only. A0 has only two states: q0 and q≠0.
Once the two automata are synchronized, we have t ↦ (q, q') iff t →AL q and t →A0 q'. We simulate simplification of nullable terms with additional ε-rules. Namely, whenever there is a rule (q1, q'1) ∥ (q2, q'2) ↦ (q, q') s.t. q'2 = q0, we add an ε-rule (q1, q'1) ↦ (q, q'); we add a symmetric rule if q'1 = q0, and we do the same for "." instead of "∥".
Now a routine induction on the length of derivations shows that s ↦ (q, q') iff there exists t ∈ L with t ↘ s and t →AL q.
3. [L]↓: The simplest way to see regularity is to note that [L]↓ is the intersection of [L]↘ with the (regular) set of terms in simplification normal form.
Note that for a regular L, [L]=P and [L]≡ are not necessarily regular [GD89]. However we have
Proposition 8.3. For any t, the set [t]≡ is a regular tree language, and an automaton for [t]≡ needs only have m.(m/2)! states if m = |t|.
Proof. Combine (5) with Lemmas 8.1 and 8.2.
8.2 Structural equivalence and behaviour
Seeing terms modulo ≡ does not modify the observable behaviour because of the following standard result:
Proposition 8.4. ≡ is a bisimulation relation, i.e. for all t ≡ t' and t →a u there is a u' ≡ u with t' →a u'.
The proof is standard but tedious. We shall only give a proof sketch.
Proof. For any single equation l = r in the definition of ≡, we show that the set {(lσ, rσ)} of all instances of the equation is a bisimulation relation. A complete proof of this for the commutativity of ∥ takes the better part of p. 95 of the book [Mil89] and the other equations can be dealt with similarly, noting that IsNil() is compatible with ≡. Then there only remains to prove that the generated congruence is a bisimulation. This too is standard: the SOS rules for PA obey a format ensuring that the behaviour of a term depends on the behaviour of its subterms, not their syntax.
We may now define a new transition relation between terms: t ⇒a t' iff u →a u' for some u ≡ t and u' ≡ t'. This amounts to the "[t]≡ →a [t']≡" from (4) and is the simplest way to translate problems for PA≡ into problems for our set of terms.
We adopt the usual abbreviations t ⇒w t', t ⇒* t', etc.
Proposition 8.5. For any w ∈ Act*, t ⇒w u iff t →w u' for some u' ≡ u.
Proof. By induction on the length of w, and using Proposition 8.4.
8.3 Reachability modulo ≡
Now it is easy to prove decidability of the reachability problem modulo ≡: t ⇒* u iff Post*(t) ∩ [u]≡ ≠ ∅. Recall that [u]≡ and Post*(t) are regular tree-languages one can build effectively. Hence it is decidable whether they have a non-empty intersection.
This gives us a simple algorithm using exponential time (because of the size of [u]≡). Actually we can have a better result 3:
Theorem 8.6. The reachability problem in PA≡, "given t and u, do we have t ⇒* u?", is in NP.
Proof. NP-easiness is straightforward in the automata framework: We have t ⇒* u iff t →* u' for some u' s.t. u'↓ =P u↓, i.e. for some u' that simplifies to a permutation u'' of u↓; note that |u''| ≤ |u|. A simple algorithm is to compute u↓, then guess non-deterministically a permutation u'', then build automata A1 for [u'']↗ and A2 for Post*(t). These automata have polynomial size. There remains to check whether A1 and A2 have a non-empty intersection to know whether the required u' exists.
Corollary 8.7. The reachability problem in PA≡ is NP-complete.
Proof. NP-hardness of reachability for BPP's is proved in [Esp97] and the proof idea can be reused in our framework. We reduce 3SAT to reachability in PA≡. Consider an instance P of 3SAT. P has m variables and n clauses, so that it is some conjunction of clauses (li,1 ∨ li,2 ∨ li,3) for i = 1, ..., n, where every literal li,j is some Xr or ¬Xr. We define rules over processes ri,j for the literal positions (i, j). The (R1) rules pick a valuation v for the Xr's, the (R3) rules use v to list the satisfied clauses, the (R2) rules discard unnecessary elements. Finally, P is satisfiable iff a state exhibiting all n clauses is reachable.
Other applications are possible, e.g.:
3 First proved in [May97c].
Proposition 8.8. The boundedness problem in PA≡ is decidable in polynomial time.
Proof. [t]≡ can only reach a finite number of states in PA≡ iff t can only reach a finite number of non-≡ terms in PA. Now, because the permutative axioms only allow finitely many variants of any given term, Post*(t) contains a finite number of non-≡ processes iff [Post*(t)]↓ is finite.
8.4 Model-checking modulo ≡
The model-checking problem solved in [May97b] considers the EF logic over PA≡. Translated into our framework, this amounts to interpreting the temporal connectives in terms of ⇒ instead of →: if we write Mod≡(φ) for the interpretation modulo ≡, we have
Mod≡(EXφ) = {t | t ⇒ t' for some t' ∈ Mod≡(φ)},  Mod≡(EFφ) = {t | t ⇒* t' for some t' ∈ Mod≡(φ)}.
Additionally, we only consider atomic propositions P compatible with ≡, i.e. where t ∈ Mod(P) and t ≡ u imply u ∈ Mod(P).
Model-checking in PA≡ is as simple as model-checking in PA:
Lemma 8.9. For any EF-formula φ we have Mod≡(φ) = Mod(φ).
Proof. By structural induction over φ, using Prop. 8.5 and closure w.r.t. ≡ for the ⟨C⟩φ case.
The immediate corollary is that we can use exactly the same approach for model-checking in PA with or without ≡.
Conclusion
In this paper we showed how tree-automata techniques are a powerful tool for the analysis of the PA process algebra. Our main results are two general Regularity Theorems with numerous immediate applications, including model-checking of PA with an extended EF logic.
The tree-automata viewpoint has many advantages. It gives simpler and more general proofs. It helps understand why some problems can be solved in polynomial time and some others in NP time, etc. It is quite versatile, and many variants of PA can be attacked with the same approach.
Acknowledgments. We thank H. Comon and R. Mayr for their numerous suggestions, remarks and questions about this work.
--R
Decidability of bisimulation equivalence for processes generating context-free languages
More infinite results
Verifying infinite state processes with sequential and parallel composition
Reachability analysis of pushdown automata: Application to model-checking
Rewriting and All That
On the regular structure of prefix rewriting
Tree automata and their application
Decidable subsets of CCS
Decidability and decomposition in process algebras
Applications of Tree Automata in Rewriting
The theory of ground rewrite systems is decidable
Petri nets
A direct symbolic approach to model checking pushdown systems (extended abstract)
The reachability problem for ground TRS and some extensions
A model for recursive-parallel programs
A formal framework for the analysis of recursive-parallel programs
Combining Petri nets and PA-processes
Model checking PA-processes
Tableaux methods for PA-processes
Communication and Concurrency
| verification of infinite-state systems;process algebra;tree automata |
507259 | Axioms for real-time logics. | This paper presents a complete axiomatization of two decidable propositional real-time linear temporal logics: Event Clock Logic (EventClockTL) and Metric Interval Temporal Logic with past (MetricIntervalTL). The completeness proof consists of an effective proof building procedure for EventClockTL. From this result we obtain a complete axiomatization of MetricIntervalTL by providing axioms translating formulae, the two logics being equally expressive. Our proof is structured to yield axiomatizations also for interesting fragments of these logics, such as the linear temporal logic of the real numbers (TLR). | Introduction
Many real-time systems are safety-critical, and therefore deserve to be specified
with mathematical precision. To this end, real-time linear temporal logics
[5] have been proposed and served as the basis of specification languages.
A preliminary version of this paper appeared in the Proceedings of the Tenth
International Conference on Concurrency Theory (CONCUR), Lecture Notes in
Computer Science 1466, Springer-Verlag, 1998, pp. 219-236.
?? This work is supported in part by the ONR YIP award N00014-95-1-0520, the NSF
CAREER award CCR-9501708, the NSF grant CCR-9504469, the DARPA/NASA
grant NAG2-1214, the ARO MURI grant DAAH-04-96-1-0341, the Belgian National
Fund for Scientific Research (FNRS), the European Commission under WGs Aspire
and Fireworks, the Portuguese FCT under Praxis XXI, the Walloon region, and
Belgacom.
They use real numbers for time, which has advantages for specification and
compositionality. Several syntaxes are possible to deal with real time: freeze
quantification [4,12], explicit clocks in a first-order temporal logic [11,21], integration
over intervals [10], and time-bounded operators [17]. We study logics
with time-bounded operators, because those logics are the only ones which
have, under certain restrictions, a decidable satisfiability problem [5].
The logic MetricTL extends the operators of temporal logic to allow the specification of time bounds on the scope of temporal operators. For example, the MetricTL formula □(p → ◇=1 q) expresses that "every p event is followed by some q event after exactly 1 time unit." It has been shown that the logic MetricTL is undecidable and even not recursively axiomatizable [4]. One reason for this undecidability result is the ability of MetricTL to specify exact distances between events; these exact distance properties are called punctuality properties. The logic MetricIntervalTL is obtained from MetricTL by removing the ability to specify punctuality properties: all bounds appearing in temporal operators must be non-singular intervals. For example, the formula □(p → ◇[1,2] q), which expresses that "every p event is followed by some q event after at least 1 time unit and at most 2 time units," is a MetricIntervalTL formula, because the interval [1, 2] is non-singular. The logic MetricIntervalTL
is decidable [3]. This decidability result allows program verification using automatic
techniques. However, when the specification is large or when it contains
first-order parts, a mixture of automatic and manual proof generation is more
suitable. Unfortunately, the current automatic reasoning techniques (based on
timed automata) do not provide explicit proofs. Secondly, an axiomatization
provides deep insights into a logic. Third, a complete axiomatization serves
as a yardstick for a definition of relative completeness for more expressive logics
(such as first-order extensions) that are not completely axiomatizable, in
the style of [16,20]. This is why the axiomatization of time-bounded operator
logics is cited as an important open question in [5,17].
We provide a complete axiom system for decidable real-time logics, and a
proof-building procedure. We build the axiom system by considering increasingly
complex logics: LTR [6], EventClockTL with past clocks only, EventClockTL with past and future clocks, and finally MetricIntervalTL with past and future operators.
The method that we use to show the completeness of our axiomatization is
standard: we show that it is possible to construct a model for each consistent
formula. More specifically, our proof of completeness is an adaptation and an extension of the proof of completeness of the axiomatization of LTR.
The handling of the real-time operators requires care and represents the core
technical contribution of this paper. Some previous works presented axioms
for real-time logics, but no true (versus relative) completeness result for dense
real-time. In [12], completeness results are given for real-time logics with explicit
clocks and time-bounded operators, but for time modeled by a discrete
time domain, the natural numbers. In [9,7], a completeness result is presented
for the qualitative (non-real-time) part of the logics considered in this paper.
There, the time domain considered is dense, but the hypothesis of finite
variability that we consider¹ is dropped and, as a consequence, different techniques
have to be applied. In [17], axioms for real-time logics are proposed.
These axioms are given for first-order extensions of our logics, but no relative
completeness results are studied (note that no completeness result can be
given for first-order temporal logics). Finally, a relative completeness result
is given for the duration calculus in [10]. The completeness is relative to the
hypothesis that valid interval logic formulae are provable.
2 Models and logics for real time
2.1 Models
As time domain T, we choose the nonnegative real numbers R≥0 = {t ∈ R | t ≥ 0}.
This dense domain is natural and gives many advantages detailed
elsewhere: compositionality [6], full abstractness [6], stuttering independence
[1], easy refinement. These advantages, and the results of this paper, mainly
depend on density: they can easily be adapted to the rational numbers Q or to the
whole real numbers R. To avoid Zeno's paradox, we add to our models the condition
of finite variability [6] (condition (3) below): only finitely many state changes
can occur in a finite amount of time.
An interval I ⊆ T is a convex subset of time. Given t ∈ T, we freely use
notations such as t + I for the interval {t′ | ∃t″ ∈ I with t′ = t + t″},
t > I for the constraint "t > t′ for all t′ ∈ I", and ↓I for the interval {t > 0 | ∃t′ ∈ I with t ≤ t′}.
A bounded non-empty interval has an infimum (also called greatest lower
bound, or left endpoint, or begin) and a supremum (also called least upper
bound, or right endpoint, or end). Such an interval is thus usually written
as e.g. (l; r], where l is the left endpoint, the rounded parenthesis in "(l"
indicates that l is excluded from the interval, r is the right endpoint, and
the square parenthesis in "r]" indicates that r is included in the interval.
The interval is called left-open and right-closed. If we extend the notation, as
usual, by allowing r to be 1, then any interval can be written in this form.
Two intervals I and J are adjacent if the right endpoint of I, noted r(I), is
equal to the left endpoint of J , noted l(J ), and either I is right-open and J
is left-closed or I is right-closed and J is left-open. We say that a non-empty
1 In every finite interval of time, the interpretation of propositions can change only
finitely many times.
interval I is singular if l(I) = r(I). In this case, we often write [t]
rather than [t, t]. Similarly, <l abbreviates (0, l), etc. An interval sequence
Ī = I_0, I_1, I_2, … is an infinite sequence of non-empty bounded intervals so
that (1) the first interval I_0 is left-closed with left endpoint 0, (2) for all i ≥ 0,
the intervals I_i and I_{i+1} are adjacent, and (3) for all t ∈ T, there exists an
i ≥ 0 with t ∈ I_i. Consequently, an interval sequence partitions time so
that every bounded subset of T is covered by finitely many elements of the
partition. Let P be a set of propositional symbols. A state s ⊆ P is a set of
propositions. A timed state sequence τ = (s̄, Ī) is a pair that consists of an
infinite sequence s̄ of states and an interval sequence Ī. Intuitively, I_i is
the period during which the state was s_i. Thus, a timed state sequence τ
can be viewed as a function from T to 2^P, indicating for each time t ∈ T a
state τ(t).
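To fix intuitions, the following small Python sketch (ours, not part of the paper) represents intervals with open or closed endpoints, and timed state sequences as step functions over them; the names Interval, TimedStateSequence, and state_at are illustrative assumptions, and only a finite prefix of a sequence is modeled.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    l: float           # left endpoint
    r: float           # right endpoint (float('inf') allowed)
    lclosed: bool      # True if l belongs to the interval, as in "[l"
    rclosed: bool      # True if r belongs to the interval, as in "r]"

    def contains(self, t):
        ok_l = self.l < t or (self.lclosed and t == self.l)
        ok_r = t < self.r or (self.rclosed and t == self.r)
        return ok_l and ok_r

    def singular(self):
        # a singular interval [t, t]
        return self.lclosed and self.rclosed and self.l == self.r

    def adjacent_to(self, other):
        # r(I) = l(J), and exactly one of the touching endpoints is included
        return self.r == other.l and (self.rclosed != other.lclosed)

class TimedStateSequence:
    """A finite prefix of a timed state sequence (s_i, I_i)."""
    def __init__(self, states, intervals):
        assert intervals[0].l == 0 and intervals[0].lclosed   # condition (1)
        assert all(i.adjacent_to(j)                           # condition (2)
                   for i, j in zip(intervals, intervals[1:]))
        self.states, self.intervals = states, intervals

    def state_at(self, t):
        # the view of the sequence as a function from T to 2^P
        for s, i in zip(self.states, self.intervals):
            if i.contains(t):
                return s
        raise ValueError("t lies beyond this finite prefix")
```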
2.2 The Linear Temporal Logic of Real Numbers (LTR)
The formulae of LTR [6] are built from propositional symbols, boolean
connectives, and the temporal "until" and "since" operators; they are generated
by the following grammar:
φ ::= p | ¬φ | φ_1 ∨ φ_2 | φ_1 U φ_2 | φ_1 S φ_2
where p is a proposition.
The LTR formula φ holds at time t ∈ T of the timed state sequence τ, written
(τ, t) ⊨ φ, according to the following definition, where the clauses for
propositions and boolean connectives are the usual ones:
(τ, t) ⊨ φ_1 U φ_2 iff there exists t′ > t such that (τ, t′) ⊨ φ_2 and (τ, t″) ⊨ φ_1 ∨ φ_2 for all t″ ∈ (t, t′);
(τ, t) ⊨ φ_1 S φ_2 iff there exists t′ < t such that (τ, t′) ⊨ φ_2 and (τ, t″) ⊨ φ_1 ∨ φ_2 for all t″ ∈ (t′, t).
An LTR formula φ is satisfiable if there exist τ and a time t such that (τ, t) ⊨ φ;
an LTR formula φ is valid if for every τ and every time t we have (τ, t) ⊨ φ.
This logic was shown to be expressively equivalent to the monadic first-order
logic of the order over the reals [15].
Our operators U, S are slightly non-classical, but more intuitive: they do not
require φ_2 to start in a left-closed interval.
On the other hand, each of them is slightly weaker than its classical variant,
but together they have the same expressive power, as we show by providing
mutual translations below in Sections 2.2.1 and 2.4.1. It is thus a simple matter
of taste. We will note the classical until as Ū.
2.2.1 Abbreviations
In the sequel we use the following abbreviations:
• a variant of the "Until" that is reflexive in its first argument;
• a variant of the "Until" that is reflexive in both arguments;
• γφ, meaning "just after in the future" or "for a short time in the
future" (with our weak until, γφ can be defined as ⊥Uφ). The dual of γ is noted K⁺ in [9], and it means "arbitrarily
close in the future". We don't introduce it, since we will see that, due to
finite variability, γ is its own dual;
• ◇φ ≡ ⊤Uφ, meaning "eventually in the future";
• □φ ≡ ¬◇¬φ, meaning "always in the future";
• their reflexive counterparts, which also take the present instant into account;
• φ_1 W φ_2, meaning "unless in the future";
• its reflexive counterpart;
and the past counterparts of all those abbreviations:
• a variant of the "Since" that is reflexive in its first argument;
• a variant of the "Since" that is reflexive in both arguments;
• γ⁻φ, meaning "just before in the past" or "arbitrarily close in the past";
• ◇⁻φ ≡ ⊤Sφ, meaning "eventually in the past";
• □⁻φ ≡ ¬◇⁻¬φ, meaning "always in the past";
• their reflexive counterparts;
• φ_1 Z φ_2, meaning "unless in the past";
• its reflexive counterpart.
2.3 Event-Clock Temporal Logic
The formulae of EventClockTL [22] are built from propositional symbols, boolean
connectives, the temporal "until" and "since" operators, and two real-time
operators: at any time t, the history operator ◁_I φ asserts that φ was true last in
the interval t − I, and the prediction operator ▷_I φ asserts that φ will be true
next in the interval t + I. The formulae of EventClockTL are generated by the
following grammar:
φ ::= p | ¬φ | φ_1 ∨ φ_2 | φ_1 U φ_2 | φ_1 S φ_2 | ◁_I φ | ▷_I φ
where p is a proposition and I is an interval which can be empty or singular and
whose bounds are natural numbers (or infinite). The EventClockTL formula φ
holds at time t ∈ T of the timed state sequence τ, written (τ, t) ⊨ φ, according
Fig. 1. A History clock evolving over time (the clock value plotted against time for a timed state sequence, annotated with its events, ticks, and resets, and with the phases undefined, small, big, and blocked).
to the rules for LTR and the following additional clauses:
(τ, t) ⊨ ▷_I φ iff there exists t′ ∈ t + I with t′ > t such that (τ, t′) ⊨ φ and (τ, t″) ⊭ φ for all t″ ∈ (t, t′);
(τ, t) ⊨ ◁_I φ iff there exists t′ ∈ t − I with t′ < t such that (τ, t′) ⊨ φ and (τ, t″) ⊭ φ for all t″ ∈ (t′, t).
A ▷_I φ formula can intuitively be seen as expressing a constraint on the value
of a clock that measures the distance from now to the next time where the
formula φ will be true. In the sequel, we use this analogy and call this clock a
prediction clock for φ. Similarly, a ◁_I φ formula can be seen as a constraint on
the value of a clock that records the distance from now back to the last time at which
the formula φ was true. We call such a clock a history clock for φ. For a
history (resp. prediction) clock for φ,
• the next point satisfying ◁_{=1} φ (resp. the previous point satisfying ▷_{=1} φ) is called its tick;
• the point where φ held last (resp. will hold next) is called its event;
• the point (if any) at which φ will hold again (resp. held last) is called its
reset;
• if φ is true at time t and was true just before t (resp. and will still be true
just after t), then we say that the clock is blocked at time t;
The main part of our axiomatization consists in describing the behavior and
the relation of such clocks over time. For a more formal account of the relation
between EventClockTL formulae and clocks, we refer the interested reader
to [22]. We simply recall:
Theorem 1 [22] The satisfiability problem for EventClockTL is complete for
Pspace,
which is the best result that can be expected, since any temporal logic has
this complexity.
Example 1 □(p → ▷_{=5} p) asserts that after every p state, the first subsequent
p state is exactly 5 units later (so in between, p is false); the formula □(◁_{=5} p → q)
asserts that whenever the last p state is exactly 5 units ago, then q is true
now (a time-out).
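As an illustration of the clock analogy (code of our own, not from the paper), the following sketch evaluates a history or prediction constraint on a finite prefix of a timed state sequence built as above; the open/closed endpoint subtleties of "last" and "next" occurrences are deliberately glossed over, so this is only an approximation of the exact semantics.

```python
def history_holds(tss, phi, I, t):
    """Approximate check of the history operator: the last phi-time
    strictly before t must lie in t - I. We use the supremum of the
    phi-points before t, ignoring endpoint topology."""
    last = None
    for s, iv in zip(tss.states, tss.intervals):
        if iv.l >= t:
            break
        if phi(s):
            last = min(iv.r, t)
    if last is None:
        return False          # the history clock is undefined at t
    return I.contains(t - last)

def prediction_holds(tss, phi, I, t):
    """Mirror check of the prediction operator: the next phi-time
    strictly after t must lie in t + I (same approximations)."""
    for s, iv in zip(tss.states, tss.intervals):
        if iv.r <= t:
            continue
        if phi(s):
            return I.contains(max(iv.l, t) - t)
    return False              # clock undefined on this finite prefix
```

For instance, with phi = lambda s: 'p' in s, the time-out formula of Example 1 corresponds to testing history_holds(tss, phi, Interval(5, 5, True, True), t).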
2.4 Metric-Interval Temporal Logic
MetricIntervalTL restricts the power of MetricTL in an apparently different way
from EventClockTL: here the real-time constraints are attached directly to
the until, but cannot be punctual. The formulae of MetricIntervalTL [3] are
built from propositional symbols, boolean connectives, and the time-bounded
"until" and "since" operators:
φ ::= p | ¬φ | φ_1 ∨ φ_2 | φ_1 U_I φ_2 | φ_1 S_I φ_2
where p is a proposition and I is a nonsingular interval whose bounds are
natural numbers or infinite. The MetricIntervalTL formula φ holds at time
t ∈ T of the timed state sequence τ, written (τ, t) ⊨ φ, according to the
following definition (the propositional and boolean clauses are as for LTR):
(τ, t) ⊨ φ_1 U_I φ_2 iff there exists t′ ∈ t + I such that (τ, t′) ⊨ φ_2 and (τ, t″) ⊨ φ_1 for all t″ ∈ (t, t′).
Here, we have used the classical until to respect the original definition, but
this doesn't matter, as explained in Subsection 2.2.1.
Theorem 2 [3] The satisfiability problem for MetricIntervalTL is complete for
Expspace.
So although the logics are equally expressive, their translation must be difficult
enough to absorb the difference in complexity. Our translation, presented in
Section 5, indeed gives an exponential blowup of formulae.
2.4.1 Abbreviations
In the sequel we use the following abbreviations:
• φ_1 U φ_2 ≡ φ_1 U_{(0,∞)} φ_2, the untimed "Until" of MetricIntervalTL; the classical φ_1 Ū φ_2 expresses, in addition, that the next φ_2-interval is left-closed;
• ◇_I φ ≡ ⊤ U_I φ, meaning "within I";
• □_I φ ≡ ¬◇_I ¬φ, meaning "always within I";
and the past counterparts of all those abbreviations. The fact that we use the
same notations as in the other logics is intentional and harmless, since the
definitions are semantically equivalent.
Furthermore, now that we have re-defined the basic operators of EventClockTL,
we also use its abbreviations.
Example 2 The formula □(q → r S_{≤5} p) asserts that every q state is preceded by a p state
at a time difference of at most 5, the difference interval being right-closed, and all intermediate states
being r states; the formula □(p → ◇_{[5,6)} p) asserts that every p state is followed
by a p state at a time difference of at least 5 and less than 6 time units. This
is weaker than the EventClockTL example, since p might also hold in between,
and of course because 5 units are not exactly required.
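For concreteness, here is a hedged sketch (again ours) of the bounded "within I" abbreviation just defined, over the same finite trace representation; the overlap test ignores whether the touching endpoints are included.

```python
def eventually_within(tss, phi, I, t):
    """Does some phi-state overlap the window t + I?  This realizes the
    MetricIntervalTL abbreviation 'within I', up to endpoint topology."""
    lo, hi = t + I.l, t + I.r
    return any(phi(s) and iv.l < hi and lo < iv.r
               for s, iv in zip(tss.states, tss.intervals))

# e.g. the second formula of Example 2 would be checked by testing
# eventually_within(tss, p, Interval(5, 6, True, False), t) at every p-point t.
```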
3 Axiomatization of EventClockTL
In Section 4, we will present a proof-building procedure for EventClockTL. In
this section, we simply collect the axioms used in the procedure and present
their intuitive meaning. Our logics are symmetric for past and future (a duality
that we call the "mirror principle"), except that time begins but does not
end: therefore the axioms will only be written for the future, but with the
understanding that their mirror images, obtained by replacing U by S, ▷ by ◁,
etc., are also axioms. This does not mean that we have an axiomatization of the
future fragment of these logics: our axioms make past and future interact, and
in our proof technique this interaction is unavoidable, mainly in axiom
(11).
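The mirror principle is purely syntactic, so it can be illustrated by a few lines of Python (a sketch under our own formula encoding: nested tuples whose head names the operator; the operator names are hypothetical):

```python
# swap future and past operators; all other constructors are unchanged
MIRROR = {'U': 'S', 'S': 'U', 'pred': 'hist', 'hist': 'pred',
          'gamma': 'gamma-', 'gamma-': 'gamma'}

def mirror(f):
    if not isinstance(f, tuple):        # propositions, interval bounds
        return f
    op, *args = f
    return (MIRROR.get(op, op), *[mirror(a) for a in args])

# the mirror image of  phi U (psi S p)  is  phi S (psi U p):
assert mirror(('U', 'phi', ('S', 'psi', 'p'))) == \
       ('S', 'phi', ('U', 'psi', 'p'))
```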
3.1 Qualitative axioms (complete for LTR)
We use the rule of inference of replacement of equivalent formulae (1), and
all propositional tautologies (2).
For the non-metric part, we use the axioms (3)-(11) and their mirror images.
They mainly make use of the γ operator because, as we shall see, it corresponds
to the transition relation of our structure. Axiom (3) is the usual
necessitation or modal generalization rule, expressed as an axiom. Similarly,
(4) is the usual weakening principle, expressed in a slightly non-classical form.
(5) and (6) allow to distribute γ over the boolean operators. Note that the validity
of (6) requires finite variability. (7) and (8) describe how the U and S operators
are transmitted over interval boundaries. (9) gives local consistency conditions
for this transmission. (10) ensures eventuality when combined with (11); it
can also be seen as weakening the left side of the U to ⊤. The induction axiom (11)
is essential to express finite variability: if a property is transmitted over
interval boundaries, then it will be true at any point; said otherwise, any point
is reached by crossing finitely many interval boundaries.
The axioms (12) and (13) express that time begins (12) but has no end (13).
We have written the other axioms so that they are independent of the begin
or end axioms, in order to deal easily with other time domains (see Subsection
4.4). This is why some apparently spurious γ⊤ occur above, e.g. in (11): they
are useful when the future is bounded.
Remark 3 Theorem 21 shows that the axioms above form a complete axiomatization
of the logic of the real numbers with finite variability, defined as LTR
in [6]. The system proposed in [6] is unfortunately unsound, redundant, and
incomplete. Indeed, axiom F5 of [6] is unsound; axiom F7 can be deduced
from axiom F8; and the system cannot derive the induction axiom (11). To
see this last point, take the structure formed by R≥0 followed by R, with finite
variability: it satisfies the system of [6] (corrected according to [7]) but not the
induction axiom. Thus this valid formula cannot be derived in their system.
3.2 Quantitative axioms
For the real-time part, we first describe the static behavior; intersection and union
of intervals can be translated into conjunction and disjunction, due to the fact that
there is a single next event:
▷_{I∪J} φ ↔ ▷_I φ ∨ ▷_J φ (14)
▷_{I∩J} φ ↔ ▷_I φ ∧ ▷_J φ (15)
Since ▷ is a strict future operator, the value 0 is never used:
▷_I φ ↔ ▷_{I∖{0}} φ (16)
If we do not constrain the time of the next occurrence, we simply require a future
occurrence:
▷_{(0,∞)} φ ↔ ◇φ (17)
Finally, addition corresponds to nesting:
▷_{≤m+n} φ ↔ ▷_{≤m} ▷_{≤n} φ (18)
together with the analogous axiom (19) for strict upper bounds.
The next step of the proof is to describe how a single real-time formula ▷_I φ evolves
over time, using γ and γ⁻. We use (20) to reduce left-open events to the easier
case of left-closed ones.
These axioms are complete for formulae whose only real-time operators
are prediction operators ▷_I φ that all track the same (qualitative) formula
φ. For a single tracked history formula, we use the mirrors of the axioms plus
an axiom expressing that future time is infinite, so that any bound will eventually be
exceeded.
The description provided by these axioms is mostly summarized by the automaton
of Figure 2, which shows the possible evolutions of a history clock.
This figure will receive a formal status in Lemma 22.
Fig. 2. The possible evolutions of a history clock (an automaton over the predicates φ, ◁_{<1} φ, ◁_{=1} φ, ◁_{>1} φ, ◇⁻φ).
Most consequences of these axioms can simply be read from this automaton: for instance,
◁_{>1} φ → (◁_{>1} φ ∧ ¬φ) Ū γ◁_{<1} φ is checked by looking at the paths starting from ◁_{>1} φ.
As soon as several such formulae are present, we cannot just combine their
individual behaviors, because the ▷ and ◁ clocks have to evolve synchronously (with the
common implicit real time). We use a family of axioms (26)-(31) (and their mirrors) to
express this common speed. They express the properties of order and addition,
but stated with different clocks. Said otherwise, the ordering of the ticks
should correspond to the ordering of their events. We use U (or W) to express
the ordering: ¬pUq means that q will occur before (or at the same time as)
any p. E.g., in (26), the antecedent ◁_{=1} φ states that φ ticks now, thus after or
together with ψ; then their events shall be in the same order: ¬φSψ. Similarly,
(30) says that if the last φ was less than 1 ago, and ψ was even closer, then the last ψ
was less than 1 ago as well.
3.3 Theorems
We will use in the proof some derived rules of LTR (and thus of EventClockTL):
Lemma 4 The rules of modus ponens (32) and modal generalization (33) are derivable.
Proof.
• The rule of modus ponens (32) is derived from replacement (1) as follows:
from φ we deduce propositionally φ ↔ ⊤; by (1) we replace φ by ⊤ in
φ → ψ, which yields ψ propositionally.
• The rule of modal generalization (33) (also called necessitation) is derived
similarly from (1) and (3): from φ, we deduce ¬φ ↔ ⊥. Replacing in (3),
we obtain ¬(ψU¬φ). By taking ψ := ⊤, we get □φ. □
We will also need some theorems (34)-(46), among them:
¬▷_I φ ↔ ¬◇φ ∨ ▷_{T∖I} φ (43)
▷_I φ ↔ ¬▷_{<I} φ ∧ ▷_{↓I} φ (44)
▷_I φ → ▷_J φ with I ⊆ J (45)
Proof.
(34) By (13), we can remove the condition γ⊤ in the mirror of (6).
(35) We use (5) and duality through (34).
(36) Expanding the definition of γ⁻, we have to prove ⊥Sφ → ⊥S⊤. This
results from the mirror of (4) with φ := ⊥, ψ := ⊥, ψ′ := φ.
(37) From (36): so all γ⁻ formulae are false at the beginning of time.
(38) By (8).
(39) By (7).
(40) By (13), (10).
(41) Take (14) with I := ∅, J := [0, 0]. By (16) we obtain the claim.
(42) We prove its mirror. By (14), ◁_I ψ → ◁_{>0} ψ. By (17), ◇⁻ψ. By (10), γ⁻ψ.
(43) By (15), (14), (17).
(44) By (15), (14), (17).
(45) By (15) (or by (14)).
(46) By (4). □
4 Completeness of the axiomatic system for EventClockTL
As usual, the soundness of the system of axioms can be proved by a simple
inductive reasoning on the structure of the axioms. We concentrate here on
the more difficult part: the completeness of the proposed axiomatic system. As
usual with temporal logic, we only have weak completeness: for every valid formula
of EventClockTL, there exists a finite formal derivation in our axiomatic
system for that formula. As often, it is more convenient to
prove the contrapositive: every consistent EventClockTL formula is satisfiable.
Due to the mirror principle, most explanations will be given for the future
only.
Our proof is divided into steps that prove the completeness for increasing fragments
of EventClockTL.
(1) We first deal with the qualitative part, without real time. This part of
the proof follows roughly the completeness proof of [19] for discrete-time
logic.
(a) We work with worlds that are built syntactically, as maximal consistent
sets of formulae.
(b) We identify the transition relation and its syntactic counterpart: it
was the "next" operator for discrete-time logic [19]; here it is γ,
expressing the transition from a closed to an open interval, and γ⁻,
expressing the transition from an open to a closed interval.
(c) We impose axioms describing the possible transitions for each operator.
(d) We give an induction principle (11) that extends the properties of
local transitions to global properties.
(2) For the real-time part:
(a) We give the statics of a clock;
(b) We describe the transitions of a clock;
(c) By further axioms, we force the clocks to evolve simultaneously. The
completeness of these axioms is proved by showing that only realistic
clock evolutions are allowed by the axioms.
4.1 Qualitative part
Let us assume that the formula α is consistent and let us prove that it is
satisfiable. To simplify the presentation of the proof, we use the following
lemma:
Lemma 5 Every EventClockTL formula can be rewritten into an equivalent
formula of EventClockTL_1 (using only the constant 1).
Proof. First, by the theorem ▷_I φ ↔ ¬▷_{<I} φ ∧ ▷_{↓I} φ (44), every
formula ▷_I φ with l(I) ≠ 0 can be rewritten as a conjunction of formulae
with 0-bounded intervals. Using the axioms ▷_{≤m+n} φ ↔ ▷_{≤m} ▷_{≤n} φ (18) and
(19), every interval can then be decomposed into a nesting of
operators associated with intervals of length 1. □
In the sequel, we assume that the formula α for which we want to construct a
model is in EventClockTL_1, as allowed by Lemma 5.
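Lemma 5 is effectively a rewriting procedure. The following sketch (our own rendering, with a hypothetical tuple encoding of formulae) shows the nesting step that axiom (18) justifies for integer upper bounds:

```python
def nest_le(n, phi):
    """Decompose 'next phi within <= n' (n a positive integer) into n
    nested '<= 1' prediction operators, by repeated use of
    (<= m+n) <-> (<= m)(<= n), i.e. axiom (18)."""
    f = phi
    for _ in range(n):
        f = ('pred_le_1', f)
    return f

# 'within <= 3' becomes three nested 'within <= 1' operators:
assert nest_le(3, 'phi') == ('pred_le_1', ('pred_le_1', ('pred_le_1', 'phi')))
```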
We now define the set C(α) of formulae associated with α:
• Sub: the sub-formulae of α.
• The formulae of Sub subject to a future real-time constraint: R = {φ | ▷_I φ ∈ Sub}.
We will say that a prediction clock is associated to these formulae.
• For these formulae, we will also track γφ when the next occurrence of φ is
left-open: this will simplify the notation. The information about φ will be
reconstructed by axiom (20): {γφ | φ ∈ R}.
• To select whether to track φ or γφ, we need the formulae giving the openness
of the next interval.
• The formulae giving the current integer value of the clocks: I = {▷_{<1} φ, ▷_{=1} φ, … | φ ∈ J}.
Thanks to our initial transformation, we only have to
consider whether the integer value is below or above 1.
Among these, the "tick" formulae will be used to determine the fractional
parts of the clocks: F = {▷_{=1} φ ∈ I}.
• We also define the mirror sets (for instance, the history counterparts of R, I, and F).
• The formulae giving the ordering of the fractional parts of the clocks, coded
by the ordering of the ticks: the formulae ¬f U f′ for tick formulae f, f′.
• The eventualities.
• The constant true ⊤, because γ⁻⊤ will be used in Lemma 14.
We close the union of all sets above under ¬, γ, γ⁻ to obtain the closure of α,
noted C(α). This step preserves finiteness, since we stop after adding just one of
each of these operators. Theorems (39), (38) show that further additions would
be semantically useless. For the past, we only have (6), (37). They also give
the same result, since we only have two possible cases: if γ⁻⊤ is true, we can
move all negations outside and cancel them, except perhaps one. Otherwise,
we know that all γ⁻ψ are false by (4). In each case, at most one γ⁻ or γ and
one negation are needed. We use the notational convention to identify formulae with
their simplified form. For example, we write φ ∈ C(α) ↔ γφ ∈ C(α) to mean
s(φ) ∈ C(α) ↔ s(γφ) ∈ C(α), where s is the simplification operator.
Note that although we are in the qualitative part, we must already include the
real-time formulae that will be used later. In this subsection they behave as
simple propositions.
A propositionally consistent structure
A set of formulae F ⊆ C(α) is complete w.r.t. C(α) if for all formulae φ ∈
C(α), either φ ∈ F or ¬φ ∈ F; it is propositionally consistent if it contains
no propositional contradiction. We call such a set a propositional atom of C(α).
We define our first structure, which is a finite graph Π = (Θ, Δ), where Θ is
the set of all propositional atoms of C(α) and Δ ⊆ Θ × Θ is the transition
relation of the structure. Δ is defined by considering two sub-relations:
• Δ_] represents the transition from a right-closed to a left-open interval;
• Δ_[ represents the transition from a right-open to a left-closed interval.
Let A and B be propositional atoms. We define A Δ_] B iff γφ ∈ A implies
φ ∈ B and ¬γφ ∈ A implies ¬φ ∈ B; symmetrically, A Δ_[ B iff γ⁻φ ∈ B
implies φ ∈ A and ¬γ⁻φ ∈ B implies ¬φ ∈ A.
The transition relation Δ is the union of Δ_] and Δ_[.
Now we can define that the atom A is singular iff it contains a formula of the
form φ ∧ ¬γφ or ¬φ ∧ γφ, or symmetrically with γ⁻.
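Propositional atoms can be enumerated mechanically. The sketch below (ours, exponential, and only meant to mirror the definition) chooses a sign for every closure formula and keeps the propositionally consistent choices; the consistency test is a caller-supplied parameter.

```python
from itertools import product

def propositional_atoms(closure, consistent):
    """closure: list of formulae (one representative per negation pair);
    consistent: propositional consistency test supplied by the caller.
    Yields the complete, propositionally consistent sign assignments,
    i.e. the propositional atoms of C(alpha)."""
    for signs in product((True, False), repeat=len(closure)):
        atom = {f: s for f, s in zip(closure, signs)}
        if consistent(atom):
            yield atom
```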
Lemma 6 In the following, A and B are atoms:
(1) A is singular iff it is irreflexive (i.e., not A Δ A);
(2) if A Δ_[ B, then A is not singular and (B is singular or B = A);
(3) if A Δ_] B, then B is not singular and (A is singular or A = B);
(4) if B is singular, then there is at most one atom A such that A Δ_[ B, and
a unique C such that B Δ_] C.
A is initial iff it contains ¬γ⁻⊤. It is then singular, since it contains ⊤ ∧
¬γ⁻⊤. A is monitored iff it contains α, the formula of which we check floating
satisfiability.
Any atom A is exactly represented by the conjunction of the formulae that it
contains, written Â. By propositional completeness, we have:
Lemma 7 ⊢ ∨_{A ∈ Θ} Â.
For any relation Δ′, we define the formula Δ′(A) to be ∨_{A Δ′ B} B̂. The formula
Δ_](A) can be simplified to ∧_{γφ ∈ A} φ ∧ ∧_{¬γφ ∈ A} ¬φ, because in the propositional
structure, all other members of a B are allowed to vary freely and thus
cancel each other by the distribution rule.
Lemma 8 ⊢ Â → γ Δ_](A).
Proof. By (5), (6), and the simplification above. □
Dually, the formula ∨_{B Δ_[ A} B̂ can be simplified to ∧_{γ⁻φ ∈ A} φ ∧ ∧_{¬γ⁻φ ∈ A} ¬φ. Therefore:
Lemma 9 ⊢ Â → γ⁻ (∨_{B Δ_[ A} B̂).
Now let Δ⁺ be the transitive closure of Δ. Since Δ ⊆ Δ⁺, we obtain:
Lemma 10 ⊢ Â → γ Δ⁺(A).
Similarly:
Lemma 11 ⊢ Â → γ⁻ (∨_{B Δ⁺ A} B̂).
Using the disjunction rule for each reachable Â, we obtain ⊢ Δ⁺(A) → γ Δ⁺(A). Now we can use the induction axiom (11).
Using necessitation (33) and modus ponens (32), we obtain:
Lemma 12 ⊢ Â → □̄ Δ⁺(A).
An EventClockTL-consistent structure
We say that an atom A is EventClockTL-consistent if it is propositionally consistent
and consistent with the axioms and rules given in Section 3. Now, we
consider the structure Π̄ = (Θ̄, Δ̄), where Θ̄ is the subset of propositional atoms
that are EventClockTL-consistent and Δ̄ = Δ ∩ (Θ̄ × Θ̄).
Note that the lemmas above are still valid in the structure Π̄, as only inconsistent
atoms are suppressed. We now investigate more deeply the properties
of the structure Π̄ and show how we can prove from that structure that the
consistent formula α is satisfiable.
We first have to define some notions.
• A maximal strongly connected substructure (MSCS) Ω is a non-empty set of
atoms of the structure Π̄ such that:
(1) for all D_1, D_2 ∈ Ω, every atom can reach all atoms of Ω, i.e., D_1 Δ̄⁺ D_2:
Ω is strongly connected;
(2) for all D_1, D_2 such that D_1 Δ̄⁺ D_2 and D_2 Δ̄⁺ D_1, if D_1 ∈ Ω then D_2 ∈ Ω;
i.e., Ω is maximal.
• An MSCS Ω is called initial if for all D_1 Δ̄ D_2 with D_2 ∈ Ω, we have D_1 ∈ Ω; i.e.,
Ω has no incoming edges.
• An MSCS Ω is called final if for all D_1 Δ̄ D_2 with D_1 ∈ Ω, we have D_2 ∈ Ω;
i.e., Ω has no outgoing edges.
• An MSCS Ω is called self-fulfilling if for every formula of the form φ_1 U φ_2 ∈ A
with A ∈ Ω, there exists B ∈ Ω such that φ_2 ∈ B.
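These MSCS computations are standard graph work. A naive reachability-based sketch follows (Tarjan's algorithm would be the efficient alternative); the function and parameter names are our own, and, mirroring Lemma 6, a singleton counts as an MSCS only when it carries a self-loop.

```python
def msccs(atoms, delta):
    """atoms: iterable of hashable atoms; delta: dict atom -> set of
    successor atoms. Returns all MSCSs and the final ones (those with
    no outgoing edge). Mutual reachability is computed naively."""
    def reach(a):
        seen, todo = set(), [a]
        while todo:
            for y in delta[todo.pop()]:
                if y not in seen:
                    seen.add(y); todo.append(y)
        return seen
    r = {a: reach(a) for a in atoms}
    sccs = {frozenset(b for b in atoms if a in r[b] and b in r[a])
            for a in atoms if a in r[a]}       # singletons need a self-loop
    final = {c for c in sccs if all(delta[a] <= c for a in c)}
    return sccs, final
```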
We now establish two properties of the MSCSs of our structure Π̄.
Lemma 13 Every final MSCS Ω of the structure Π̄ is self-fulfilling.
Proof. Let us make the hypothesis that there exists φ_1 U φ_2 ∈ A with A ∈ Ω
and, for all B ∈ Ω, φ_2 ∉ B. By Lemma 12 and as, by hypothesis, φ_2 ∉ B for
every B reachable from A, using theorem (46) and a propositional reasoning, we conclude
⊢ Â → □̄¬φ_2. Using the axiom (10) and the hypothesis that φ_1 U φ_2 ∈ A,
we obtain ⊢ Â → ◇φ_2; by definition of ◇, we obtain a
contradiction with ⊢ Â → □̄¬φ_2,
which is impossible since A is, by hypothesis,
consistent. □
Lemma 14 Every non-empty initial MSCS Ω of the structure Π̄ contains an
initial atom, i.e., there exists A ∈ Ω such that γ⁻⊤ ∉ A.
Proof. By definition of an initial MSCS, we know that for all D_1 Δ̄ D_2 with D_2 ∈ Ω,
we have D_1 ∈ Ω. Let us make the hypothesis that for all A ∈ Ω, γ⁻⊤ ∈ A. By
the mirror of Lemma 12 we conclude, by a propositional reasoning and the
hypothesis that γ⁻⊤ ∈ D for all D such that D Δ̄⁺ A, that ⊢ Â → □̄⁻γ⁻⊤.
This contradicts axiom (12), so A ∉ Θ̄,
and thus Ω is empty. □ Actually, such an initial
MSCS is made of a single initial atom.
In the sequel, we concentrate on particular paths, called runs, of the structure
Π̄. A run of the structure Π̄ = (Θ̄, Δ̄) is a pair (Ā, Ī) where Ā = A_0, A_1, A_2, …
is an infinite sequence of atoms and Ī
is an infinite sequence of intervals such that:
(1) Initiality: A_0 is an initial atom;
(2) Consecution: for every i ≥ 0, A_i Δ̄ A_{i+1};
(3) Singularity: for every i ≥ 0, if A_i is a singular atom then I_i is singular;
(4) Alternation: Ī alternates between singular and open intervals,
i.e., for all i ≥ 0, I_{2i} is singular and I_{2i+1} is open;
(5) Eventuality: there exist n, m such that the set {A_n, …, A_{n+m}} is a final MSCS
which is repeated forever from position n on.
Note that, for the moment, the timing information provided in Ī is purely
qualitative (singular or open); therefore any alternating sequence is adequate
at this qualitative stage. Later, we will construct a specific sequence satisfying
also the real-time constraints. In the sequel, given ρ = (Ā, Ī), ρ(t) denotes the
atom A_i such that t ∈ I_i.
Lemma 15 The transition relation Δ̄ of the structure Π̄ is total, i.e., for all
atoms A ∈ Θ̄, there exists an atom B ∈ Θ̄ such that A Δ̄ B.
Proof. We prove that the set Φ = {φ | γφ ∈ A} ∪ {¬φ | ¬γφ ∈ A}
is consistent and can thus be completed to form an atom B. Assume it is not:
then ⊢ ¬Φ̂, where Φ̂ is the conjunction of Φ.
We can then replace ⊤ by ¬Φ̂ in (13), giving ⊢ γ¬Φ̂.
By (34) and (5), the set {γφ | γφ ∈ A} ∪ {γ¬φ | ¬γφ ∈ A} is
inconsistent. Using (34) again, the set {γφ | γφ ∈ A} ∪ {¬γφ | ¬γφ ∈ A} ⊆ A
is inconsistent, and thus A is inconsistent, contradicting A ∈ Θ̄. □
Lemma 16 For every atom A of the structure Π̄, there is a run ρ that passes
through A.
Proof.
(1) Initiality, i.e., every atom of Π̄ is either initial or can be reached from an
initial atom. Let us consider an atom A. If A is initial, then we are done;
otherwise, let us make the hypothesis that it cannot be reached from an
initial atom. This means that for all B with B Δ̄⁺ A, B is not initial, so by
propositional completeness γ⁻⊤ ∈ B. By the mirror of Lemma 12 and a
propositional reasoning, we obtain ⊢ Â → □̄⁻γ⁻⊤.
Using axiom (12) we obtain a contradiction in A.
We use this path for the first part of the run.
(2) Consecution: by construction.
(3) Singularity, i.e., every odd atom is not singular. For the first and second
parts of the run, we can obtain this by taking a simple path (thus without
self-loops). Since the first atom A_0 is initial, it is singular; from there on,
non-singular and singular states alternate by Lemma 6. For the final
repetition, this technique might not work when the MSCS is a single
atom. Then we know that this single atom is non-singular, and thus
Singularity is also verified.
(4) Alternation: we can choose any alternating interval sequence, since the
timing information is irrelevant at this point.
(5) Eventuality, i.e., every atom of Π̄ can reach one of the final MSCSs of Π̄.
It is a direct consequence of the fact that Δ̄ is total and the fact that Θ̄
is finite. We use this reaching path for the second part of the run, then
an infinite repetition of this final MSCS. □
A run (Ā, Ī) of the structure Π̄ has the qualitative Hintikka property if it
respects the semantics of the qualitative temporal operators, which is expressed
by the following conditions (the real-time operators will be treated in the following
subsection):
H1 if A_i is singular then I_i is singular;
H2 φ_1 U φ_2 ∈ A_i iff
either I_i is singular and there exists j > i s.t. φ_2 ∈ A_j and, for all k s.t.
i < k < j, φ_1 ∈ A_k;
or I_i is not singular and
(1) φ_2 ∈ A_i,
(2) or there exists j > i s.t. φ_2 ∈ A_j and, for all k s.t. i ≤ k < j, φ_1 ∈ A_k;
H3 the symmetric condition for φ_1 S φ_2 ∈ A_i.
We call such a run a qualitative Hintikka run. Next, we show
some additional properties of runs related to the Hintikka properties above:
Lemma 17 For every run (Ā, Ī) of the structure Π̄,
for every i ≥ 0 such that ◇φ ∈ A_i:
• either I_i is singular and there exists j > i such that φ ∈ A_j,
• or I_i is non-singular and there exists j ≥ i such that φ ∈ A_j.
Proof. First let us prove the following properties of the transition relation Δ̄.
Recall that ◇φ ≡ ⊤Uφ.
(a) If ◇φ ∈ A and A Δ̄_] B, then φ ∈ B or ◇φ ∈ B: by definition of Δ̄_],
axiom (7), and a propositional reasoning.
(b) If ◇φ ∈ A and A Δ̄_[ B, then φ ∈ B or ◇φ ∈ B: by definition
of Δ̄_[, the mirror of axiom (8), and a propositional reasoning.
By the two properties above, we have that if ◇φ ∈ A_i then either φ appears in
some A_j — with j > i if I_i is singular (and thus right-closed), j ≥ i if I_i is not singular
(and thus an open interval) — or φ is never true and ◇φ propagates for the rest
of the run. But this last possibility is excluded by our definition of run: by
clause (5), every run eventually loops into a final (thus self-fulfilling, by Lemma
13) MSCS Ω. Then either φ is realized before this looping, or ◇φ ∈ Ω and, by
self-fulfillment, φ is thus eventually realized. □
Lemma 18 For every run (Ā, Ī) of the structure Π̄, for every position i
in the run, if φ_1 U φ_2 ∈ A_i then the left-to-right implication of property H2 is verified,
i.e.:
• either A_i is singular and there exists j > i s.t. φ_2 ∈ A_j and, for all k s.t.
i < k < j, φ_1 ∈ A_k;
• or A_i is not singular and
(1) φ_2 ∈ A_i,
(2) or there exists j > i s.t. φ_2 ∈ A_j and, for all k s.t. i ≤ k < j, φ_1 ∈ A_k.
Proof. By hypothesis we know that φ_1 U φ_2 ∈ A_i, and we first treat the case
where A_i is singular.
• By the axiom (10) and Lemma 17, we know that there exists j > i such that
φ_2 ∈ A_j. Let us make the hypothesis that A_j is the first φ_2-atom after A_i.
• It remains to show that, for all k s.t. i < k < j, φ_1 ∈ A_k. We reason by
induction on the value of k.
Base case (k = i + 1): by hypothesis we have φ_1 U φ_2 ∈ A_i and also A_i Δ̄_] A_{i+1}
(as A_i is right-closed), and thus, for all γφ ∈ A_i, φ ∈ A_{i+1} by definition
of Δ̄_]. Axiom (9), theorem (35), axiom (5), the fact that by hypothesis φ_2 ∉ A_{i+1},
and a propositional reasoning allow us to conclude that φ_1 ∈ A_{i+1}.
Induction case (i + 1 < k < j): by the induction hypothesis, we
know that φ_1 ∈ A_{k−1} and φ_1 U φ_2 ∈ A_{k−1}
(as j is the first position after i where φ_2 is verified).
To establish the result, we reason by cases:
(1) I_k is open, and thus I_{k−1} is singular and right-closed. We have A_{k−1} Δ̄_] A_k,
and thus, for all γφ ∈ C(α), γφ ∈ A_{k−1} implies φ ∈ A_k, by definition of Δ̄_].
As φ_1 U φ_2 ∈ A_{k−1}, by the induction hypothesis and the axiom (7) we
conclude that φ_1 U φ_2 ∈ A_k. Using the axiom (9), theorem (35), axiom
(5), the fact that φ_2 ∉ A_k, and a propositional reasoning, we conclude that φ_1 ∈ A_k.
(2) I_k is closed, which implies that I_{k−1} is right-open and A_{k−1} Δ̄_[ A_k. By
definition of Δ̄_[, we have that for all γ⁻φ ∈ C(α), γ⁻φ ∈ A_k implies φ ∈ A_{k−1}.
As j is the first φ_2-position, we have ¬φ_2 ∈ A_k. Using those properties and the mirror of axiom
(8), we conclude that φ_1 ∧ φ_1 U φ_2 ∈ A_k.
We now have to treat the case where A_i is not singular. By the axiom (10)
and Lemma 17, we know that there exists a later atom A_j, i.e., j ≥ i, such that
φ_2 ∈ A_j. If j = i, we are done. Otherwise j > i, and we must
prove that, for all k s.t. i ≤ k < j, φ_1 ∈ A_k; this can be done by the reasoning
above. □
We now prove the reverse, i.e., every time that φ_1 U φ_2 is verified in an atom
along the run, then φ_1 U φ_2 appears in that atom. This lemma is not necessary
for qualitative completeness, but we use this property in the lemmas over real-time
operators.
Lemma 19 For every run (Ā, Ī) of the structure Π̄, for every position i, if
• either A_i is singular and there exists j > i s.t. φ_2 ∈ A_j and, for all k s.t.
i < k < j, φ_1 ∈ A_k,
• or A_i is not singular and
(1) φ_2 ∈ A_i,
(2) or there exists j > i s.t. φ_2 ∈ A_j and, for all k s.t. i ≤ k < j, φ_1 ∈ A_k,
then φ_1 U φ_2 ∈ A_i.
Proof. We reason by considering the three following mutually exclusive cases:
(1) A_i is singular and there exists j > i with φ_2 ∈ A_j and φ_1 ∈ A_k for all i < k < j.
We reason by induction to show that φ_1 U φ_2 ∈ A_{j−l} for all l s.t. 0 < l ≤ j − i.
• Base case: l = 1. By hypothesis, we know that φ_2 ∈ A_j. We now reason
by cases:
(a) if A_{j−1} is right-closed, then we have A_{j−1} Δ̄_] A_j by definition of
Δ̄. Using the axiom (9), we deduce propositionally that φ_1 U φ_2 ∈ A_{j−1};
(b) if A_{j−1} is right-open, then we know that j − 1 > i (since A_i is singular
by hypothesis) and thus φ_1 ∈ A_{j−1}. Also, A_{j−1} Δ̄_[ A_j.
Using the mirror of axiom (8) and a propositional reasoning, we
obtain φ_1 U φ_2 ∈ A_{j−1} by definition of Δ̄_[.
• Induction case: 1 < l ≤ j − i. We have established the result for l − 1;
let us show that we have the result for A_{j−l}. First note that, by hypothesis,
φ_1 ∈ A_{j−(l−1)}. We again reason by cases:
(a) I_{j−l} is right-closed. Then we have A_{j−l} Δ̄_] A_{j−l+1} by definition
of Δ̄, and by axiom (7) we have that φ_1 U φ_2 ∈ A_{j−l}.
(b) A_{j−l} is right-open. Then we have A_{j−l} Δ̄_[ A_{j−l+1} by definition
of Δ̄. We know that, by hypothesis, φ_1 ∈ A_{j−l} (A_{j−l} is not singular),
and φ_1 U φ_2 ∈ A_{j−l+1} by the induction hypothesis. Using the mirror of axiom (8) and a propositional
reasoning, we obtain γ⁻(φ_1 ∧ φ_1 U φ_2) ∈ A_{j−l+1}, and by definition of Δ̄_[,
φ_1 U φ_2 ∈ A_{j−l}.
(2) A_i is not singular and φ_2 ∈ A_i. As A_i is not singular, we have A_i Δ̄ A_i by
definition of Δ̄. By the axiom (9) and a propositional
reasoning, we obtain the desired result: φ_1 U φ_2 ∈ A_i.
(3) A_i is not singular, φ_2 ∉ A_i, and there exists j > i with φ_2 ∈ A_j and, for all
k s.t. i ≤ k < j, φ_1 ∈ A_k. This case is treated by an inductive reasoning
similar to the first one above. □
We also have the two corresponding mirror lemmas for the S operator.
From the previously proved lemmas, it can be shown that the qualitative axioms
of Section 3 are complete for the qualitative fragment of EventClockTL, i.e., the
logic LTR.
Lemma 20 A run ρ has the Hintikka property for LTR formulae: for every
φ ∈ C(α), φ ∈ ρ(t) iff (ρ, t) ⊨ φ.
Proof. The Hintikka property was proved in the lemmas above, but expressed
without reference to time t. It remains to prove that this implies the usual
definition, by induction on formulae.
(1) We must prove that H2 implies the semantic clause of U: there exists t′ > t
with (ρ, t′) ⊨ φ_2 and (ρ, t″) ⊨ φ_1 ∨ φ_2 for all t″ ∈ (t, t′). Of course, we take t′
somewhere in I_j, so that (t, t′) can be divided in 3 parts: the part in I_i, which is empty when I_i
is singular, the part in some I_k (i < k < j), and the part in I_j. Each of them
satisfies φ_1 or φ_2 by H2 and the induction hypothesis.
(2) Conversely, the usual definition implies H2: first note that, given t, if
ρ(t) is not singular but I_i is singular, it means that A_i = A_{i+1} by
Lemma 6. Thus we can merge I_i, I_{i+1} to ensure that I_i is singular iff A_i
is singular, without loss of generality. Let j be the first index where φ_2 holds,
with j > i if I_i is singular, or else j ≥ i. We can take t′ > t in I_j without loss
of generality. Since we need φ_1 ∨ φ_2 at every t″ ∈ (t, t′), the condition of H2
follows by the induction hypothesis.
(3) (H3) is symmetric. □
Finally, we have the following theorem, which expresses the completeness of the
qualitative axioms for the logic LTR:
Theorem 21 Every LTR formula that is consistent with the qualitative axioms
is satisfiable.
Proof. Let α be a consistent LTR formula. We construct Π̄ = (Θ̄, Δ̄). Let
B be an atom of the structure such that α ∈ B. Such an atom B exists as
α is consistent. By Lemma 16, there exists a run ρ = (Ā, Ī) such that
A_i = B for some i ≥ 0. By Lemma 20, we have (ρ, t) ⊨ α for t ∈ I_i, thus α is
satisfiable. □
We now turn to the completeness of the real-time axioms.
4.2 Quantitative part
A run (Ā, Ī) of the structure Π̄ has the timed Hintikka property if it
respects the Hintikka properties defined previously and the two following additional
properties:
H4 ▷_I φ ∈ ρ(t) iff there exists t′ ∈ t + I such that φ ∈ ρ(t′) and φ ∉ ρ(t″) for all t″ ∈ (t, t′);
H5 the mirror condition: ◁_I φ ∈ ρ(t) iff there exists t′ ∈ t − I such that φ ∈ ρ(t′) and φ ∉ ρ(t″) for all t″ ∈ (t′, t).
A run that respects those additional properties is called a well-timed run. In
the sequel, we will show that for each run of the structure Π̄, we can modify
its sequence of intervals, using a procedure, in such a way that the modified
run is well-timed.
Recall that, given a tracked formula φ ∈ R:
• ▷_{=1} φ is called its tick;
• the point where φ held last is called its event (a left-open occurrence would be
tracked through γφ, but this second case need
not be considered, thanks to the axioms (20));
• (φ ∧ γ⁻¬φ) ∨ (¬φ ∧ γφ) is called its reset.
The evolution of the real-time predicates is described by Figure 2. We can now
see the status of this drawing:
Lemma 22 For any tracked formula φ ∈ R, the projection of Π̄ (restricted to
atoms containing the formula Cφ) on φ, ◁_{<1} φ, ◁_{=1} φ, ◁_{>1} φ, ◇⁻φ is contained in
Figure 2.
Proof. It suffices to show that no further consistent atoms nor transitions can
be added to the figure.
• Atoms: from the axioms (15), (17), (14), (16).
• Transitions: we simply take all missing arrows of the figure, and show that
they cannot exist. As the proof is fairly long, we only show some excerpts.
(1) Assume that an atom A containing φ, ◁_{=1} φ is linked to an atom B containing
φ in this way: A Δ̄_] B. By axioms (14),
(15), (16), we have ¬◁_{<1} φ ∈ B. Now, by definition of Δ̄_]
and by (34), ¬γ◁_{<1} φ ∈ A. Now the main step: we use the mirror of
(23), negated on both sides. ¬γ⊤ is impossible by (13), and thus we can
conclude ¬φ ∈ A, contradicting φ ∈ A.
(2) Now we show the only two transitions which are eliminated by the restriction
to Cφ. The first one is A Δ̄ B, where A contains ◁_{<1} φ, ¬φ, Cφ and
B contains ◁_{<1} φ, φ. We prove γ¬φ ∈ A using (9). In more detail, Cφ
abbreviates ¬φ U (φ ∧ γ⁻¬φ). Applying (9) and unfolding Ū, we obtain
a disjunction whose first disjunct is impossible
by (5), (34), (38).
On the other hand, by definition of Δ̄, γφ ∈ A,
whence the contradiction.
(3) The second transition eliminated is A Δ̄ B, where A contains ◁_{>1} φ, ¬φ, Cφ
and B contains ◁_{<1} φ, ¬φ. By definition of Δ̄, γ◁_{<1} φ ∈ A. By axiom
(22), we obtain a contradiction with ◁_{>1} φ ∈ A. □
A constraint is a real-time formula of an atom A_i. The begin of a constraint
is the index at which its previous event, tick, or reset occurred. The end of
a constraint is the index at which its next event, tick, or reset occurs. This
vocabulary refers to the order of time only: the begin is always before the
corresponding end, whether for history or prediction operators. Begins, ends,
ticks, resets, and events are always singular. We say that (the history clock of) φ is
active between an event of φ and the next reset of φ. It is small between its event
and the next tick or reset. After this, it is big. When it is big, it imposes no
actual constraint, since it can stay big for any time, on the one hand, and on the
other hand because it has passed first through a tick, which is forced to be 1
time unit apart from the event. Thus the monotonicity of time will ensure that
big constraints are indeed semantically true. We define the scope of a constraint
as the interval between the event and the next tick or reset, or equivalently
between its begin and its end. The same vocabulary applies symmetrically to
prediction operators. Actual constraints are either equalities (the time spent
in their scope must be 1), linking an event to a tick, or inequalities (the time
spent in their scope must be less than 1). An inequality is always linked to
a small clock. Constraints can be partially ordered by scope: it is enough to
solve constraints of maximal scope, as we shall see. A constraint of maximal
scope always owns indexes: they are found at the end of its scope. The scope
of an inequality extends from an event to a reset. Whether an atom A_i is in
the scope of a constraint, and of which, can be deduced from its contents. The
table below shows the contents of an atom A_i that is the end of an equality.
We distinguish the prediction and history cases. The table is simplified by the
fact that we can assume that events are closed. The begin atom is the closest
one in the past to contain the indicated formulae.
Table 1
Equality constraints — ticking clocks
  begin                      end (contents of A_i)
  ▷_{=1} φ (tick)            φ, ¬φ S ▷_{=1} φ (event)
  event of φ                 ◁_{=1} φ (tick)
The table below shows the contents of an atom A_i indicating that the clock is
small. It is thus in the scope of a constraint, whose begin is before and whose
end is after. The begin (resp. end) is the closest atom with the indicated
contents.
Table 2
Small clocks
  begin                      in A_i           end
  event of φ                 ◁_{<1} φ         tick or reset
  tick or reset              ▷_{<1} φ         event of φ
Note that the existence of the begins and ends is guaranteed by Fig. 2: a clock
cannot stay small forever. In this section, we furthermore enforce that it will
not stay small more than 1 unit of time.
The proof shows that these constraints can be solved iff they are compatible,
in the sense that the scope of an equality cannot be included in the scope
of an inequality, nor strictly in the scope of another equality. The axioms for
several clocks ensure this compatibility.
The previous section has built a run (Ā, Ī), where Ī is irrelevant, that
is qualitatively correct. From any such run (Ā, Ī), we now build a well-timed
run (Ā, J̄) by attributing a well-chosen sequence of intervals J̄
to the atoms of the run, so as to satisfy the real-time
constraints.
First, we introduce two lemmas on which the algorithm relies, and which can also
be read from Fig. 2:
Lemma 23 For every run (Ā, Ī) of the structure Π̄, we have that if
◁_{=1} ψ ∈ A_i then there exists j < i with ψ ∈ A_j.
Proof. This lemma is a direct consequence of the mirrors of axioms (14) and
(17). □
Lemma 24 For every run (Ā, Ī) of the structure Π̄, we have that if
γ⁻¬ψ, ψ, ¬ψ S ▷_{=1} ψ ∈ A_i then there exists j < i with ▷_{=1} ψ ∈ A_j.
Proof. This lemma is a direct consequence of the mirror of axiom (10). □
The algorithm proceeds by induction along the run, attributing time points t_i
to the singular intervals I_i = [t_i, t_i]. As a consequence, an open interval (t_{i−1}, t_{i+1}) is attributed
when i is odd: we don't mention it further, and just define t_i for even i.
(1) Base: t_0 = 0, i.e., we attribute the interval [0, 0] to the initial atom A_0.
(2) Induction: we identify and solve the tightest constraint containing i. We
define b as the begin of this tightest constraint, by cases (a sketch of the whole
procedure follows this list):
(a) equality constraints:
(i) If there is a ◁_{=1} ψ ∈ A_i, there has been a last (singular) atom
A_b containing ψ before A_i, at time t_b.
(ii) Else, if γ⁻¬ψ, ψ, ¬ψ S ▷_{=1} ψ ∈ A_i, there has been a last atom A_b
containing ▷_{=1} ψ before A_i, at time t_b.
We set t_i := t_b + 1.
(b) If there are no equality constraints, we consider inequality constraints:
(i) We compute the earliest begin b of the small clocks using Table 2;
t_i has to be between t_{i−2} and t_b + 1. We choose t_i := (t_{i−2} + t_b + 1)/2.
(ii) Otherwise, we attribute (say) t_{i−2} + 1/2 to t_i.
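The procedure Attr can be summarized in a few lines. In this sketch (ours), the two callbacks — which would read Tables 1 and 2 off the atoms of the run — are left as parameters, so only the time-point arithmetic of steps (1)-(2) is shown.

```python
def attr(run_atoms, equality_begin, earliest_small_begin):
    """Assign time points t_i to the even (singular) positions of a run.
    equality_begin(i): index b of the begin of an equality constraint
    ending at i, or None; earliest_small_begin(i): earliest begin b of
    the small clocks at i, or None. Odd atoms implicitly receive the
    open intervals (t_{i-2}, t_i) in between."""
    t = {0: 0.0}                                    # base: interval [0, 0]
    for i in range(2, len(run_atoms), 2):
        b = equality_begin(i)
        if b is not None:
            t[i] = t[b] + 1.0                       # case (2a): t_i = t_b + 1
        else:
            b = earliest_small_begin(i)
            if b is not None:
                t[i] = (t[i - 2] + t[b] + 1.0) / 2  # case (2b i)
            else:
                t[i] = t[i - 2] + 0.5               # case (2b ii)
    return t
```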
The algorithm selects an equality constraint arbitrarily, but is still deterministic:
Lemma 25 If two equality constraints have the same end i, their begins
are identical.
Proof. Four combinations of equality constraints are possible:
(1) The first constraint is ◁_{=1} φ ∈ A_i.
(a) The second constraint is ◁_{=1} ψ: A_i thus contains ◁_{≥1} ψ by (14). We
apply (26) to obtain ¬φSψ.
We repeat this with ψ, φ inverted to obtain ¬ψSφ. These formulae
imply, by the mirror of Lemma 19, that ψ cannot occur before φ, and
conversely; thus they occur in the same atom.
(b) The second constraint is the event ψ with ¬ψ S ▷_{=1} ψ: then A_i
contains ◁_{≥1} φ by (14). We apply (29) to obtain ¬▷_{=1} ψ S φ.
A_i contains ¬ψ Ū ◁_{=1} φ, since its eventuality ◁_{=1} φ is true
now. We apply (28); by it,
we know that the tick occurs first (perhaps ex aequo) among the
possibilities that end the Z.
These formulae imply, by Lemma 19, that ▷_{=1} ψ cannot occur before
φ, and conversely; thus they occur in the same atom.
(2) The first constraint is the event φ with ¬φ S ▷_{=1} φ ∈ A_i.
(a) The second constraint is ◁_{=1} ψ: this case is simply the previous
one, with φ, ψ inverted.
(b) The second constraint is the event ψ with ¬ψ S ▷_{=1} ψ: A_i contains
¬φ Ū φ, since its eventuality φ is true now. We apply (27);
by ¬ψ S ▷_{=1} ψ, the tick ▷_{=1} ψ occurred first.
We repeat this with ψ, φ inverted. These formulae imply, by Lemma 19,
that ▷_{=1} ψ cannot occur before ▷_{=1} φ, and conversely; thus they occur
in the same atom. □
Solving an equation at its end also solves the current partial inequations:
Lemma 26 If A_i is in the scope of an inequation and is the end of an equation,
then the begin A_j of the inequation is after the begin A_b of the equation (b < j).
Proof. There are 3 possible forms of inequations in A_i (see Table 2):
(1) ◁_{<1} ψ ∈ A_i, with its begin, i.e., the event ψ, in A_j: ψ ∈ A_j. We must show that b < j. The
equation can be:
(a) ◁_{=1} φ ∈ A_i with its event φ ∈ A_b: the first case
is true, as by hypothesis ψ must occur before φ
in the past, and gives b ≤ j.
(b) φ, ¬φ S ▷_{=1} φ ∈ A_i and ▷_{=1} φ ∈ A_b:
using (27), the first case is true,
by hypothesis, and gives b ≤ j.
We cannot assume b = j, because the mirror of Lemma 25 would then give
an equality.
(2) The inequation concerns a small history clock, with its begin (its event) ψ ∈ A_j. We must show that b < j.
The equation can be:
(a) ◁_{=1} φ ∈ A_i:
we apply (26) to obtain ¬φSψ, meaning by the mirror of Lemma 19
that b ≤ j. ¬ψSφ ∉ A_i, for otherwise we apply (30), yielding ◁_{<1} φ ∈ A_i,
contradicting ◁_{=1} φ ∈ A_i by (15); so we conclude b < j.
(b) φ, ¬φ S ▷_{=1} φ ∈ A_i and ▷_{=1} φ ∈ A_b:
by (29), b ≤ j. We cannot have the reverse
¬ψ S ▷_{=1} φ, for otherwise we apply the mirror of (31) and deduce
a contradiction; so b < j.
(3) The inequation is ▷_{<1} ψ ∈ A_i, with its begin a reset. Either ▷_{<1} ψ ∈ A_j already, or if the
event is in A_j, we use axiom (23) to show it. Since there is no
intervening ψ between j and i, Fig. 2 implies ▷_{<1} ψ ∈ A_{j+1}, and thus,
because ¬(¬ψ S ▷_{=1} ψ) ∈ A_i, we deduce ▷_{<1} ψ ∈ A_j.
Now, we must show that b < j. The equation can be:
(a) ◁_{=1} φ ∈ A_i with its event φ ∈ A_b:
we apply (28) to obtain b ≤ j. Again, because there are no intervening ψ between j
and i, using Lemma 19 we have ¬ψ U ◁_{=1} φ ∈ A_j. Using the mirror
of (31), ◁_{<1} φ, ¬φ ∈ A_j; thus b = j is impossible, since ¬φ ∈ A_j.
We conclude b < j.
(b) φ, ¬φ S ▷_{=1} φ ∈ A_i and ▷_{=1} φ ∈ A_b:
so ¬ψ Ū φ ∈ A_i, and we use (27). The reset ψ occurs strictly before the tick, so the first case is
excluded; because there are no intervening ψ between positions j and i, we
have ¬ψ U ◁_{=1} φ ∈ A_j. Using the mirror of (30), ▷_{<1} φ ∈ A_j. The
second case is thus true, and means b ≤ j; b = j is impossible, so
we conclude b < j. □
We now show that the algorithm Attr assigns increasing time bounds to the
intervals.
Lemma 27 The sequence t_i built by Attr is increasing.
Proof. In the notation of the definition, this amounts to proving t_{i−2} < t_b + 1
when b is defined, since t_i is either t_b + 1 (in the case of an equality) or the
middle point of (t_{i−2}, t_b + 1) (in the case of an inequality). If b is not defined
(no constraints), then the claim is trivially verified, as we attribute t_{i−2} + 1/2 to t_i. We
prove the non-trivial cases by induction on i:
(1) Base case: i = 2. Either:
(a) no constraint is active, and b is undefined;
(b) otherwise b = 0, and we just have to prove 0 < 1.
(2) Induction: we divide in cases according to the constraint selected at
i − 2, whose begin is called b_{i−2}:
(a) an equality: by Lemmas 25 and 26, its begin was before, i.e., b_{i−2} < b.
By the inductive hypothesis, t is increasing: t_{b_{i−2}} < t_b. Thus t_{i−2} = t_{b_{i−2}} + 1 < t_b + 1.
(b) an inequality: the begin b_{i−2} ≤ b, since it was obtained by
sorting. By the inductive hypothesis, t is increasing, so t_{b_{i−2}} ≤ t_b. By the
inductive hypothesis, t_{i−2} < t_{b_{i−2}} + 1. Thus t_{i−2} < t_b + 1. □
Furthermore, the algorithm Attr ensures that time increases beyond any bound:
Lemma 28 The sequence of intervals J̄ of (Ā, J̄) built by our
algorithm has finite variability: for all t, there exists an i ≥ 0 such that t ∈ J_i.
Proof. Although there is no lower bound on the duration of an interval, we
show that the time spent in each passage through the final cycle of Ā
is at least 1/2. Thus any real number t will be
reached before index 2tc, where c is the number of atoms in the final cycle.
We divide in cases:
(1) If the cycle A_n, …, A_{n+m} contains an atom which is not in the scope
of any constraint, the time spent there will be 1/2.
(2) Else, the cycle contains constraints, and thus constraints of maximal
scope. This scope, however, cannot be greater than one cycle. Let e be the
end of such a constraint. Thus e is in the scope of no other constraint
with an earlier begin.
The time spent in the scope of the constraint until e is at least 1/2:
let again b be the begin of the scope of the constraint. t_{e−2} ≥ t_b (since
the begin and end are singular and distinct), thus our algorithm makes
the time spent between t_b and t_e at least 1/2. Since the scope cannot be greater than
one cycle, the time spent in a cycle is at least 1/2. □
This procedure correctly solves all constraints:
Lemma 29 The interval attribution Attr transforms any run ρ into a well-timed
run Attr(ρ).
Proof. We show the two supplementary properties of a well-timed run:
(1) Let ◁_I ψ ∈ A_i ∋ t. We must show that the last ψ occurs in t − I. ◁_I ψ
can be:
(a) ◁_{>1} ψ: these constraints are automatically satisfied because:
(i) the mirror of the eventuality rule (17) guarantees ψ has occurred;
let us take the last such j, which is the
corresponding event;
(ii) according to Fig. 2, ψ will stay false, and eventually we will reach
a tick;
(iii) the axiom (25) guarantees that satisfying the equality will entail
satisfying the greater-than constraint, since they refer to the
same tracked event, and since the equality is later.
(b) ◁_{=1} ψ: since this is an equality constraint, the algorithm Attr must
have chosen an equality constraint with begin b. Thus t_i = t_b + 1. By
Lemma 25, the begin event ψ is also in A_b.
(c) ◁_{<1} ψ: if i isn't even (singular), we know that the constraint will still
be active in the next atom, because the end of a constraint is
always singular. By (22):
• it might become an equality (the clock may tick), in which case
it is treated as in the previous case (with i + 1 instead of i); then
the monotonicity of time will ensure that the constraint holds at i as well;
• if it is still the same inequality, it is treated below (with i + 1
instead of i); then the monotonicity of time again concludes.
Thus at this point we can assume that i is even. Let j < i be the
begin of the constraint, ψ ∈ A_j. The constraint selected by Attr at i
can be:
(i) an equality: by Lemma 26, its begin b < j, so that t_i = t_b + 1 < t_j + 1;
(ii) or the constraint chosen in A_i is an inequality. The pair ◁_{<1} ψ ∈ A_i,
ψ ∈ A_j is also an inequality in A_i: let f be its begin. The
algorithm has selected the constraint with the earliest begin b, so
t_i < t_b + 1 ≤ t_f + 1 ≤ t_j + 1.
(2) Let ▷_I ψ ∈ A_i ∋ t. Very similarly, we must show that the next ψ occurs
in t + I. ▷_I ψ can be:
(a) ▷_{>1} ψ: these constraints are automatically satisfied because:
(i) the eventuality rule (17) guarantees ψ will occur: there exists j > i with ψ ∈
A_j. We take the first such j, which is the corresponding event.
We can assume it is singular.
(ii) Figure 2 guarantees that there is first a tick: there exists k with
i < k < j and ▷_{=1} ψ ∈ A_k.
(iii) the reset rule (25) guarantees that satisfying the equality will
entail satisfying the greater-than constraint, since they refer to
the same end event, and since the equality is later.
(b) ▷_{=1} ψ: let A_j contain the next event of ψ. Since this is an equality constraint,
the algorithm Attr must have chosen an equality constraint
at A_j. By Lemma 25, its begin is i. Thus t_j = t_i + 1.
(c) ▷_{<1} ψ: let A_j contain the next event of ψ. The constraint selected by
Attr at j can be:
(i) an equality: by Lemma 26, its begin b < i, so that t_j = t_b + 1 < t_i + 1;
(ii) or the constraint chosen in A_j is an inequality. The pair ▷_{<1} ψ ∈ A_i,
ψ ∈ A_j is also an inequality at A_j: let f be its begin. The
algorithm has selected the constraint with the earliest begin b, so
t_j < t_b + 1 ≤ t_i + 1.
The reader now expects a proof of the converse implication. This is not needed,
thanks to (43). □
As a consequence of the last lemmas, we have:
Lemma 30 A timed run built by Attr has the Hintikka property for EventClockTL.
Finally, we obtain the desired theorem:
Theorem 31 Every EventClockTL-consistent formula is satisfiable.
Proof. If α is an EventClockTL-consistent formula, then there exists an α-monitored
atom A_α in Π̄. By Lemma 16, there exists a set of runs Σ that pass through
A_α, and by the properties of the procedure Attr (Lemma 18, Lemma 28 and
Lemma 29), at least one run (Ā, J̄) has the Hintikka property for EventClockTL.
It is direct to see that (Ā, J̄) is a model for α at time t ∈ I_α (the
interval of time associated to A_α in (Ā, J̄)), and thus α is satisfiable. □
Corollary 32 The rule (1) and axioms (2)-(31) form a complete axiomatization
of EventClockTL.
4.3 Comparison with automata construction
In spirit, the procedure given above can be considered as building an automaton
corresponding to a formula. The known procedures [3] for deciding
MetricIntervalTL use a similar construction, first building a timed automaton
and then its region automaton. We could not use this construction directly
here, because it involves features of automata that have no counterpart in the
logic, and thus could not be expressed by axioms. However, the main ideas are
similar. The region automaton records the integer value of each clock: we
code this by formulae of the form ▷_{<1} ▷_{=1} ⋯ ▷_{=1} φ. It also records the ordering
of the fractional parts of the clocks: this is coded here by formulae of the
form ¬▷_{=1} ⋯ ▷_{=1} φ U ▷_{=1} ⋯ ▷_{=1} ψ. There are some small differences, however. For
simplicity we maintain more information than needed. For instance, we record
the ordering of any two ticks, even if these ticks are not linked to the current
value of the clock. This relationship is only inverted for a very special case:
when a clock has no previous and no following tick, we need not and cannot
maintain its fractional information. It is easy to build a more careful and more
efficient tableau procedure that only records the needed information.
The structure of atoms constructed here treats the eventualities in a different
spirit than automata: here, there may be invalid paths in the graph of atoms.
It is immediate to add acceptance conditions to eliminate them and obtain a
more classical automaton. But it is less obvious to design a class of automata
that is as expressive as the logic: this is done in [14].
4.4 Other time domains
As we have already indicated incidentally, our proofs are written so as to adapt to
other time domains T with minimal change. We only consider totally ordered
dense time, however. For instance, we could use as time domain:
(1) The real numbers, T = R: we replace (12) by the mirror of (13).
(2) The rational numbers, T = Q: if we force the bounds of an interval to
be rational as well, nothing has to be changed. Otherwise, a transition
from an open interval to an open interval is now possible, if the common
bound is irrational. This defeats the induction axiom (11). We postpone
the study of this case to a further paper, but the basic ideas of the proof
still apply.
(3) A bounded real interval:
(a) closed: for the qualitative part, we replace (13) by the
mirror of (12). For the quantitative part, we first remove the axiom
(25). If the duration d of the interval is an integer, we add an axiom
stating that the beginning is at distance d from the end. Otherwise,
we add the best approximation of this.
(b) open: for the qualitative part, we replace (13) by the
mirror of (12): from a qualitative point of view, an open interval is
indistinguishable from an infinite one.
5 Translating MetricIntervalTL into EventClockTL
The logics have been designed from different philosophical standpoints: MetricIntervalTL
restricts the undecidable logic MetricTL by "relaxing punctuality",
i.e., by forbidding to look at exact time values; EventClockTL, in contrast, forbids
looking past the next event in the future. However, we have shown in [14] that,
surprisingly, they have the same expressive power. The power given by nesting
connectives allows each logic to do some of its forbidden work. Here, we
need more than a mere proof of expressiveness: we need a finite number of
axioms expressing the translation between formulae of the two logics. We give
below both the axioms and a procedure that uses them to provide a proof of
the equivalence.
First, we suppress intervals containing 0.
Then we replace a bounded until Ū_I with 0 ∉ I by a combination of the simpler
◇_I and untimed untils, where l is the left endpoint of I.
We suppress the classical until.
For infinite intervals, we reduce the lower bound l > 0 to 0.
For finite intervals with left bound equal to 0, we exclude it if needed with
(49), and we use the ▷ operator. Note that the formulae ▷_{<u} φ and ▷_{≤u} φ can be reduced to formulae that only
use the constant 1, using the axioms (18) and (19).
When the left bound of the interval is different from 0 and the right bound
different from ∞, we reduce the length of the interval to 1.
Then we use further rules recursively until the lower bound is reduced
to 0.
In this way, any MetricIntervalTL formula can be translated into an EventClockTL
formula where bounds are always 0 or 1. Actually, we use only a very
small part of EventClockTL; we can further eliminate ▷_{<1} φ,
showing that the very basic operators ▷_{=1} and ◁_{=1} have the same expressive power
as full MetricIntervalTL.
The converse translation is much simpler:
. I OE $ :\Sigma !I OE - \Sigma Inf0g OE (62)
OEU/
5.1 Axiomatization of MetricIntervalTL
To obtain an axiom system for MetricIntervalTL, we simply translate the axioms
of EventClockTL and add axioms expressing the translation.
Indeed, we have translations in each direction:
Therefore, to prove a MetricIntervalTL formula -, we translate it into Event-
and prove it there using the procedure of section 4. The proof - can
be translated back to MetricIntervalTL in T (-) proving T (S(-)). Indeed, each
step is a replacement, and replacements are invariant under syntax-directed
translation preserving equivalence:
To finish the proof we only have to add T (S(-))
- . Actually the translation axioms
above are stronger, stating T (S(-. In our case, T (defined by (62), (63))
is so simple that it can be considered as a mere shorthand. Thus the axioms
(1)-(29) and (49)-(60) form a complete axiomatization of MetricIntervalTL,
with . I ; U now understood as shorthands.
Theorem 33 The rule (1), axioms (2)-(29), and axioms (49)-(60) form a
complete axiomatization of MetricIntervalTL.
6 Conclusion
The specification of real-time systems using dense time is natural, and has
many semantical advantages, but discrete-time techniques (here proof techniques
[8,18]) have to be generalized. The model-checking and decision techniques
have been generalized in [2,3]. Unfortunately, the technique of [3] uses a
translation to automata which are more powerful and complex than temporal
logic, and thus is not suitable for building a completeness proof.
This paper provides complete axiom systems and proof-building procedures
for linear real time, extending the technique of [19]. This procedure can be
used to automate the proof construction of propositional fragments of a larger
first-order proof.
Some possible extensions of this work are:
ffl The proof rules are admittedly cumbersome, since they exactly reflect the
layered structure of the proof: for instance, real-time axioms are clearly separated
from the qualitative axioms. More intuitive rules can be devised if we
this constraint. This paper provides an easy way to show their com-
pleteness: it is enough to prove the axioms of this paper. This also explains
why we have not generalized the axioms, even when obvious generalizations
are possible: we prefer to stick to the axioms needed in the proof, to facilitate
a later completeness proof using this technique.
ffl The logics used in this paper assume that concrete values are given for real-time
constraints. As demonstrated in the HyTech checker [13], it is often
useful to mention parameters instead (symbolic constants), and derive the
needed constraints on the parameters, instead of a simple yes/no answer.
ffl The extension of the results of this paper to first-order variants of MetricIn-
should be explored. However, completeness is often lost in first-order
variants [23].
ffl The development of programs from specifications should be supported: the
automaton produced by the proposed technique might be helpful as a program
skeleton in the style of [24].
--R
the existence of refinement mappings.
Model checking in dense real time.
the benefits of relaxing punctuality.
A really temporal logic.
Logics and models of real time: a survey.
A really abstract concurrent model and its temporal logic.
Basic tense logic.
Automatic verification of finite-state concurrent systems using temporal-logic specifications
An axiomatization of the temporal logic with Until and Since over the real numbers.
Semantics and completeness of duration calculus.
the next generation.
the regular real-time languages
Tense Logic and the Theory of Order.
A complete proof systems for QPTL.
Specifying message passing and time-critical systems with temporal logic
Checking that finite-state concurrent programs satisfy their linear specification
the glory of the past.
the anchored version of the temporal framework.
Temporal Logic of Real-time Systems
State clock logic: a decidable real-time logic
Incompleteness of first-order temporal logic with until
Synthesis of Communicating Processes from Temporal-Logic Specifications
--TR
Automatic verification of finite-state concurrent systems using temporal logic specifications
Incompleteness of first-order temporal logic with until
Temporal logic for real time systems
Half-order modal logic: how to prove real-time properties
The existence of refinement mappings
Model-checking in dense real-time
The benefits of relaxing punctuality
Checking that finite state concurrent programs satisfy their linear specification
A really abstract concurrent model and its temporal logic
Specifying Message Passing and Time-Critical Systems with Temporal Logic
The Regular Real-Time Languages
State Clock Logic
The Glory of the Past
The anchored version of the temporal framework
Logics and Models of Real Time
Semantics and Completeness of Duration Calculus
A Complete Proof Systems for QPTL
HYTECH
Synthesis of communicating processes from temporal logic specifications
--CTR
Carsten Lutz , Dirk Walther , Frank Wolter, Quantitative temporal logics over the reals: PSpace and below, Information and Computation, v.205 n.1, p.99-123, January, 2007 | axiomatization;completeness;real time;temporal logic |
507260 | From rewrite rules to bisimulation congruences. | The dynamics of many calculi can be most clearly defined by a reduction semantics. To work with a calculus, however, an understanding of operational congruences is fundamental; these can often be given tractable definitions or characterisations using a labelled transition semantics. This paper considers calculi with arbitrary reduction semantics of three simple classes, firstly ground term rewriting, then left-linear term rewriting, and then a class which is essentially the action calculi lacking substantive name binding. General definitions of labelled transitions are given in each case, uniformly in the set of rewrite rules, and without requiring the prescription of additional notions of observation. They give rise to bisimulation congruences. As a test of the theory it is shown that bisimulation for a fragment of CCS is recovered. The transitions generated for a fragment of the Ambient Calculus of Cardelli and Gordon, and for SKI combinators, are also discussed briefly. | Introduction
The dynamic behaviour of many calculi can be defined most clearly by a reduction semantics,
comprising a set of rewrite rules, a set of reduction contexts in which they may be applied, and a
structural congruence. These define the atomic internal reduction steps of terms. To work with a
calculus, however, a compositional understanding of the behaviour of arbitrary subterms, as given by
some operational congruence relation, is usually required. The literature contains investigations of
such congruences for a large number of particular calculi. They are often given tractable definitions
or characterisations via labelled transition relations, capturing the potential external interactions
between subterms and their environments. Defining labelled transitions that give rise to satisfactory
operational congruences generally requires some mix of calculus-specific ingenuity and routine work.
In this paper the problem is addressed for arbitrary calculi of certain simple forms. We give
general definitions of labelled transitions that depend only on a reduction semantics, without requiring
any additional observations to be prescribed. We first consider term rewriting, with ground or
left-linear rules, over an arbitrary signature but without a structural congruence. We then consider
calculi with arbitrary signatures containing symbols 0 and j, a structural congruence consisting of
associativity, commutativity and unit, left-linear rules, and non-trivial sets of reduction contexts.
This suffices, for example, to express CCS-style synchronisation. It is essentially the same as the
Computer Laboratory, University of Cambridge. Email: Peter.Sewell@cl.cam.ac.uk
INTRODUCTION
class of Action Calculi in which all controls have some number of arguments of
In each case we define labelled transitions, prove that bisimulation is a congruence and
give some comparison results.
Background: From reductions to labelled transitions to reductions. Definitions of the
dynamics (or small-step operational semantics) of lambda calculi and sequential programming languages
have commonly been given as reduction relations. The -calculus has the rewrite rule
(-x:M)N \Gamma!M [N=x] of fi reduction, which can be applied in any context. For programming lan-
guages, some control of the order of evaluation is usually required. This has been done with abstract
machines, in which the states, and reductions between them, are ad-hoc mathematical objects. More
elegantly, one can give definitions in the structural operational semantics (SOS) style of Plotkin
[Plo81]; here the states are terms of the language (sometimes augmented by e.g. a store), the reductions
are given by a syntax-directed inductive definition. Explicit reformulations using rewrite rules
and reduction contexts were first given by Felleisen and Friedman [FF86]. (We neglect semantics in
the big-step/evaluation/natural style.)
In contrast, until recently, definitions of operational semantics for process calculi have been
primarily given as labelled transition relations. The central reason for the difference is not mathe-
matical, but that lambda and process terms have had quite different intended interpretations. The
standard interpretation of lambda terms and functional programs is that they specify computations
which may either not terminate, or terminate with some result that cannot reduce further. Confluence
properties ensure that such result terms are unique if they exist; they can implicitly be
examined, either up to equality or up to a coarser notion. The theory of processes, however, inherits
from automata theory the view that process terms may both reduce internally and interact with
their environments; labelled transitions allow these interactions to be expressed. Reductions may
create or destroy potential interactions. Termination of processes is usually not a central concept,
and the structure of terms, even of terms that cannot reduce, is not considered examinable.
An additional, more technical, reason is that definitions of the reductions for a process calculus
require either auxiliary labelled transition relations or a non-trivial structural congruence. For
example, consider the CCS fragment below.
Its standard semantics has reductions P \Gamma!Q but also labelled transitions P ff
\Gamma!Q and P -
ff
\Gamma!Q.
These represent the potentials that P has for synchronising on ff. They can be defined by an SOS
Out
ff
\Gamma!P
In
\Gamma!P
Com
ff
Par
\Gamma!Q
\Gamma!Q
\Gamma!Q
\Gamma! is either \Gamma!, ff
\Gamma! or -
ff
\Gamma!. It has been noted by Berry and Boudol [BB92], following work
of Ban-atre and Le M'etayer [BM86] on the \Gamma language, that semantic definitions of process calculi
could be simplified by working modulo an equivalence that allows the parts of a redex to be brought
syntactically adjacent. Their presentation is in terms of Chemical Abstract Machines; in a slight
variation we give a reduction semantics for the CCS fragment above. It consists of the rewrite rule
Q, the set of reduction contexts given by
and the structural congruence j defined to be the least congruence satisfying
and use of j on the right, this gives exactly the same reductions
as before. For this toy calculus the two are of similar complexity. For the -calculus ([MPW92],
building on [EN86]), however, Milner has given a reduction semantics that is much simpler that the
rather delicate SOS definitions of - labelled transition systems [Mil92]. Following this, more recent
name passing process calculi have often been defined by a reduction semantics in some form, e.g.
the HO- [San93], ae [NM95], Join [FG96], Blue [Bou97], Spi [AG97], dpi [Sew98b], D- [RH98] and
Ambient [CG98] Calculi.
Turning to operational congruences, for confluent calculi the definition of an appropriate operational
congruence is relatively straightforward, even in the (usual) case where the dynamics is
expressed as a reduction relation. For example, for a simple eager functional programming language,
with a base type Int of integers, terminated states of programs of type Int are clearly observable up
to equality. These basic observations can be used to define a Morris-style operational congruence.
Several authors have considered tractable characterisations of these congruences in terms of bisimulation
- see e.g. [How89, AO93, Gor95] and the references therein, and [GR96] for related work on
an object calculus.
For non-confluent calculi the situation is more problematic - process calculi having labelled transition
semantics have been equipped with a plethora of different operational equivalences, whereas
rather few styles of definition have been proposed for those having reduction semantics. In the
labelled transition case there are many more-or-less plausible notions of observation, differing e.g.
in their treatment of linear/branching time, of internal reductions, of termination and divergence,
etc. Some of the space is illustrated in the surveys of van Glabbeek [Gla90, Gla93]. The difficulty
here is to select a notion that is appropriate for a particular application; one attempt is in [Sew97].
In the reduction case we have the converse problem - a reduction relation does not of itself seem to
support any notion of observation that gives rise to a satisfactory operational congruence. This was
explicitly addressed for CCS and -calculi by Milner and Sangiorgi in [MS92, San93], where barbed
bisimulation equivalences are defined in terms of reductions and observations of barbs. These are
vestigial labelled transitions, similar to the distinguished observable transitions in the tests of De
Nicola and Hennessy [DH84]. The expressive power of their calculi suffices to recover early labelled
transition bisimulations as the induced congruences. Related work of Honda and Yoshida [HY95]
uses insensitivity as the basic observable.
.to labelled transitions Summarizing, definitions of operational congruences, for calculi having
reduction semantics, have generally been based either on observation of terminated states, in the
confluent case, or on observation of some barbs, where a natural definition of these exists. In
either case, characterisations of the congruences in terms of labelled transitions, involving as little
quantification over contexts as possible, are desirable. Moreover, some reasonable calculi may not
have a natural definition of barb that induces an appropriate congruence.
In this paper we show that labelled transitions that give rise to bisimulation congruences can
be defined purely from the reduction semantics of a calculus, without prescribing any additional
observations. It is preliminary work, in that only simple classes of reduction semantics, not involving
name or variable binding, will be considered. As a test of the definitions we show that they recover
the usual bisimulation on the CCS fragment above. We also discuss term rewriting and a fragment of
the Ambient calculus of Cardelli and Gordon. To directly express the semantics of more interesting
calculi requires a richer framework. One must deal with binding, with rewrite rules involving term or
name substitutions, with a structural congruence that allows scope mobility, and with more delicate
sets of reduction contexts. The Action Calculi of Milner [Mil96] are a candidate framework that
allows several of the calculi mentioned above to be defined cleanly; this work can be seen as a step
towards understanding operational congruences for arbitrary action calculi.
Labelled transitions intuitively capture the possible interactions between a term and a surrounding
context. Here this is made explicit - the labels of transitions from a term s will be contexts that,
when applied to s, create an occurrence of a rewrite rule. A similar approach has been followed by
Jensen [Jen98], for a form of graph rewriting that idealizes action calculi. Bisimulation for a particular
action calculus, representing a -calculus, has been studied by Mifsud [Mif96]. In the next three
sections we develop the theory for ground term rewriting, then for left-linear term rewriting, and
then with the addition of an AC1 structural congruence and reduction contexts. Section 5 contains
some concluding remarks. Most proofs are omitted, but can be found in the technical report
[Sew98a].
Ground term rewriting
In this section we consider one of the simplest possible classes of reduction semantics, that of ground
term rewriting. The definitions and proofs are here rather straightforward, but provide a guide to
those in the following two sections.
Reductions We take a signature consisting of a set \Sigma of function symbols, ranged over by oe, and
an arity function j j from \Sigma to N. Context composition and application of contexts to (tuples of)
terms are written A : B and A : s, the identity context as and tupling with +. We say an n-hole
context is linear if it has exactly one occurrence of each of its holes. In this section a; b; l;
range over terms, A; B; C; D;F; H range over linear unary contexts and E ranges over linear binary
contexts.
We take a set R of rewrite rules, each consisting of a pair hl; ri of terms. The reduction relation
is then
Labelled Transitions The transitions of a term s will be labelled by linear unary contexts. Transitions
s\Gamma!t labelled by the identity context are simply reductions (or -transitions). Transitions
\Gamma!t for F 6j indicate that applying F to s creates an instance of a rewrite rule, with target
instance t. For example, given the rule
we will have labelled transitions
for all C and
The labels are f F j 9hl; ri 2 R; s and the contextual labelled transition relations F
\Gamma!
are defined by:
\Gamma!t
ri
Bisimulation Congruence Let - be strong bisimulation with respect to these transitions. The
congruence proof is straightforward. It is given some detail as a guide to the more intricate
corresponding proofs in the following two sections, which have the same structure. Three lemmas
show how contexts in labels and in the sources of transitions interrelate; they are proved by
case analysis using a dissection lemma which is standard folklore.
then one of the following cases holds.
1. (b is in a) There exists D such that a
2. (a is properly in b) There exists D with D 6= such that D :
3. (a and b are disjoint) There exists E such that
then one of the following holds:
1. There exists some H such that
s.
2. There exists some - t, A 1 and A 2 such that
t.
Proof By the definition of reduction
ri
Applying the dissection lemma (Lemma 1) to A : l gives the following cases.
1. (l is in s) There exists B such that
Taking the second clause holds.
2. (s is properly in l) There exists B with B 6= such that
Taking the second clause holds.
3. (s and l are disjoint) There exists E such that
Taking r) the first clause holds.Lemma 3 If A : s F
\Gamma!t and F 6= then s F : A
\Gamma!t.
Proof By the definition of labelled transitions
ri
linear
\Gamma!t. 2
Lemma
\Gamma!t then A : s F
\Gamma!t.
Proof so the conclusion is immediate, otherwise by the definition of
transitions
ri
One then has A : s F
\Gamma!t by the definition of transitions, by cases for F 6= and
Proposition 5 - is a congruence.
Proof We show
is a bisimulation.
1. Suppose A : s\Gamma!t.
By Lemma 2 one of the following holds:
(a) There exists some H such that
s.
Hence
(b) There exists some - t, A 1 and A 2 such that
t.
By s - s 0 there exists - t 0 such that s 0 A2
t.
By the definition of reduction
2. Suppose A : s F
\gamma!t for F 6= .
\gamma!t.
By s - s 0 there exists t 0 such that s
t.
\gamma!t 0 .
alternative approach would be to take transitions
for unary linear contexts F . Note that these are defined using only the reduction relation, whereas
the definition above involved the reduction rules. Let - alt be strong bisimulation with respect to
these transitions. One can show that - alt is a congruence and moreover is unaffected by cutting
down the label set to that considered above. In general - alt is strictly coarser than -. For an
example of the non-inclusion, if the signature consists of constants ff; fi and a unary symbol fl with
reduction rules ff\gamma!ff, fi \gamma!fi and fl(fi)\gamma!fi, then ff 6- fi whereas ff - alt fi. This insensitivity to
the possible interactions of terms that have internal transitions suggests that the analogue of - alt ,
in more expressive settings, is unlikely to coincide with standard bisimulations for particular calculi.
Indeed, one can show that applying the alternative definition to the fragment of CCS
ff
ff
(with its usual reduction relation) gives an equivalence that identifies ff j -
ff with
fi.
Remark In the proofs of Lemmas 2-4 the labelled transition exhibited for the conclusion involves
the same rewrite rule as the transition in the premise. One could therefore take the finer transitions
F
annotated by rewrite rules, and still have a congruence result. In some cases this gives a
finer bisimulation relation.
Remark The labelled transition relation is linear in R, i.e. the labelled transitions generated by a
union of sets of rewrite rules are just the union of the relations generated by R 1 and R 2 .
rewriting with left-linear rules
In this section the definitions are generalised to left-linear term rewriting, as a second step towards
a framework expressive enough for simple process calculi.
Notation In the next two sections we must consider more complex dissections of contexts and
terms. It is convenient to treat contexts and terms uniformly, working with n-tuples of m-hole
contexts for m;n - 0. Concretely, we work in the category C \Sigma that has the natural numbers as
objects and morphisms
The identity on m is id m
composition is substitution, with
an [b strictly
associative binary products, written with +. If a : m! k and b : m! l we write a \Phi b for
l. Angle brackets and domain subscripts will often be
elided. We let a; b; e; q; range over 0 !m morphisms, i.e. m-tuples of terms,
range over m! 1 morphisms, i.e. m-hole contexts, and - over projections and permutations. Say a
morphism linear if it contains exactly one occurrence of each
if it contains at most one occurrence of each. We sometimes abuse notation in examples, writing
Remark Many slight variations of C \Sigma are possible. We have chosen to take the objects to be
natural numbers, instead of finite sets of variables, to give a lighter notation for labels. The concrete
syntax is chosen so that morphisms from 0 to 1 are exactly the standard terms over \Sigma, modulo
elision of the angle brackets and subscript 0.
Reductions The usual notion of left-linear term rewriting is now expressible as follows. We take
a set R of rewrite rules, each consisting of a triple hn; L; Ri where n - 0, linear and
1. The reduction relation over f s is then defined by
Labelled Transitions The labelled transitions of a term s again be of two forms,
s\gamma!t, for internal reductions, and s F
\gamma!T where F 6= is a context that, together with part of s,
makes up the left hand side of a rewrite rule. For example, given the rule
we will have labelled transitions
for all terms s Labelled transitions in which the label contributes the whole of the left hand
side of a rule would be redundant, so the definition will exclude e.g. s ffi(fl(
\gamma! ffl(s). Now consider the
rule
As before there will be labelled transitions
for all s. In addition, one can construct instances of the rule by placing the term ff in contexts
suggesting labelled transitions ff oe( ;fl(t))
\gamma! ffl(t) for any t. Instead, to keep the label sets
small, and to capture the uniformity in t, we allow both labels and targets of transitions to be
parametric in un-instantiated arguments of the rewrite rule. In this case the definition will give
In general, then, the contextual labelled transitions are of the form s F
\gamma!T , for
1. The first argument of F is the hole in which s can be placed to create an instance of a
rule L; the other n arguments are parameters of L that are not thereby instantiated. The transitions
are defined as follows.
, s\gamma!T .
\gamma!T , for linear and not the identity, iff there exist
permutation
linear and not the identity
such that
The definition is illustrated in Figure 1. The restriction to L 1 6= id 1 excludes transitions where the
label contributes the whole of L. The permutation - is required so that the parameters of L can be
divided into the instantiated and uninstantiated. For example the rule
F
R
nn
s
Figure
1: Contextual Labelled Transitions for Left-Linear Term Rewriting. Boxes with m input
wires (on their right) and n output wires (on their left) represent n-tuples of m-hole contexts. Wires
are ordered from top to bottom.
will give rise to transitions
(The last is redundant; it could be excluded by requiring - to be a monotone partition of m into
Bisimulation Congruence A binary relation S over terms f a j a is lifted to a relation
by A [S] A 0 def
Say S is a bisimulation if for any s S s 0
and write - for the largest such. As before the congruence proof requires a simple dissection lemma
and three lemmas relating contexts in sources and labels.
Lemma 6 (Dissection) If A :
then one of the following holds.
1. (a is not in any component of b) There exist
linear and not the identity
such that
i.e. there are m 1 components of b in a and m 2 in A.
2. (a is in a component of b) m - 1 and there exist
partition
such that
linear then one of the following holds.
1. There exists some
2.
Lemma 8 If A : s F
one of the following
holds.
1. There exists H : 1+n! 1 such that
id n ).
2. There exist
such that s F
Lemma 9 If s
\gamma! T for linear then for all
Theorem 1 - is a congruence.
Proof We show S , where
is a bisimulation. First note that for any A : To see this,
take
linear such that
An
Each A i is linear, so A
We now show that if A
\gamma!T then there exists T 0 such that
1. Suppose A : s\gamma!t.
By Lemma 7 one of the following holds:
(a) There exists some
Hence
(b) There exist
By s - s 0 there exists T 0 such that s 0 F
By the definition of reduction A : s
2. Suppose A : s F
linear and F 6= id 1 .
By Lemma 8 one of the following holds.
(a) There exists
Hence
(b) There exist
such that s F
By s - s 0 there exists -
Now if
for A i linear and s
F
then by the above there exists Tn such
that
F
definition reduces to that of Section 2 if all rules are ground. For open rules, instead
of allowing parametric labels, one could simply close up the rewrite rules under instantiation, by
apply the earlier definition. In general
this would give a strictly coarser congruence. For an example of the non-inclusion, take a
signature consisting of a nullary ff and a unary fl, with R consisting of the rules fl( )\gamma!fl( ) and
fl(fl(ff))\gamma!fl(fl(ff)). We have g. The transitions are
for m;n - 1, so fl(ff) 6- R fl(fl(ff)) but fl(ff) - Cl(R) fl(fl(ff)).
Proposition
Comparison Bisimulation as defined here is a congruence for arbitrary left-linear term rewriting
systems. Much work on term rewriting deals with reduction relations that are confluent and ter-
minating. In that setting terms have unique normal forms; the primary equivalence on terms is ',
have the same normal form. This is easily proved to be a congruence. In
general, it is incomparable with -. To see one non-inclusion, note that - is sensitive to atomic
reduction steps; for the other that - is not sensitive to equality of terms - for example, with only
nullary symbols ff; fi; fl, and rewrite rule fl \gamma!fi, we have ff - fi and fi ' fl, whereas ff 6' fi and
fi 6- fl. One might address the second non-inclusion by fiat, adding, for any value v, a unary test
operator H v and reduction rule H v (v)\gamma!v. For the first, one might move to a weak bisimulation,
abstracting from reduction steps. The simplest alternative is to take - to be the largest relation S
such that if s S s 0 then
and symmetric clauses.
Say the set R of rewrite rules is right-affine if the right hand side of each rule is affine. Under
this condition - is a congruence; the result without it is left open.
Theorem 2 If R is right-affine then - is a congruence.
Example - Integer addition For some rewrite systems - coincides with '. Taking a signature
comprising nullary z for each integer z and binary plus and ifzero, and rewrite rules
for all integers x and z gives labelled transitions
x plus( ;z)
together with the reductions \gamma!. Here the normal forms are simply the integers; - and ' both
coincide with integer equality.
In general, however, - is still incomparable with '. For example, with unary ffi; fl, nullary ff, and
rules fl(ff)\gamma!ff, ffi(ff)\gamma!ff, and ffi(fl( ))\gamma! , we have ff 6- fi(ff). This may be a pathological rule set;
one would like to have conditions excluding it under which - and ' coincide.
Example - SKI Combinators Taking a signature \Sigma comprising nullary I ; K;S and binary ffl,
and rewrite rules
gives labelled transitions
I ffl 1
together with some permutation instances of these and the reductions \gamma!. The significance of -
and - here is unclear. Note that the rules are not right-affine, so Theorem 2 does not guarantee
that - is a congruence. It is quite intensional, being sensitive to the number of arguments that can
be consumed immediately by a term. For example, K
rewriting with left-linear rules, parallel and boxing
In this section we extend the setting to one sufficiently expressive to define the reduction relations
of simple process calculi. We suppose the signature \Sigma includes binary and nullary symbols j and 0,
for parallel and nil, and take a structural congruence j generated by associativity, commutativity
and identity axioms. Parallel will be written infix. The reduction rules R are as before. We now
allow symbols to be boxing, i.e. to inhibit reduction in their arguments. For each oe 2 \Sigma we suppose
given a set B(oe) ' defining the argument positions where reduction may take place. We
require 2g. The reduction contexts C ' f C linear are generated by
Formally, structural congruence is defined over all morphisms of C \Sigma as follows. It is a family of
relations indexed by domain and codomain arities; the indexes will usually be elided.
Reductions The reduction relation over f s is defined by s\gamma!t iff
This class of calculi is essentially the same as the class of Action Calculi in which there is no
substantive name binding, i.e. those in which all controls K have arity rules of the form
(here the a i are actions, not morphisms from C \Sigma ). It includes simple process calculi. For example,
the fragment of CCS in Section 1 can be specified by taking a signature \Sigma CCS consisting of unary
ff: and -
ff: for each ff 2 A, with 0 and j, and rewrite rules
Notation For a context f : m!n and i 2 1::m say f is shallow in argument i if all occurrences of
in f are not under any symbol except j. Say f is deep in argument i if any occurrence of i in f
is under some symbol not equal to j. Say f is shallow (deep) if it is shallow (deep) in all i 2 1::m.
Say f is i-separated if there are no occurrences of any j in parallel with an occurrence of i .
Labelled Transitions The labelled transitions will be of the same form as in the previous section,
with transitions s F
non-trivial label F may either
contribute a deep subcontext of the left hand side of a rewrite rule (analogous to the non-identity
labels of the previous section) or a parallel component, respectively with F deep or shallow in its
first argument. The cases must be treated differently. For example, the rule
TERM REWRITING WITH LEFT-LINEAR RULES, PARALLEL AND BOXING
will generate labelled transitions
As before, transitions that contribute the whole of the left hand side of a rule, such
as s j ff j fi
are redundant and will be excluded. It is necessary to take labels to be subcontexts of
left hand sides of rules up to structural congruence, not merely up to equality. For example, given
the rule
we need labelled transitions
Finally, the existence of rules in which arguments occur in parallel with non-trivial terms means
that we must deal with partially instantiated arguments. Consider the rule
The term -) j ae could be placed in any context oe( j s; t) to create an instance of the left hand side,
with - (from the term) instantiating 1 , t (from the context) instantiating 2 , and ae j s (from both)
instantiating 3 . There will be a labelled transition
parametric in two places but partially instantiating the second by ae. The general definition of
transitions is given in Figure 2. It uses additional notation - we write par n for h 1
and ppar n for n!n. Some parts of the definition are illustrated
in
Figure
3, in which rectangles denote contexts and terms, triangles denote instances of par, and
hatched triangles denote instances of ppar.
To a first approximation, the definition for F deep in 1 states that s F
\gamma!T iff there is a rule
L\gamma!R such that L can be factored into L 2 (with m 2 arguments) enclosing L 1 (with m 1 arguments)
in parallel with m 3 arguments. The source s is L 1 instantiated by u, in parallel with e; the label F is
roughly the target T is R with instantiated by u and m 3 partially instantiated by
e. It is worth noting that the non-identity labelled transitions do not depend on the set of reduction
contexts.
The intended intuition is that the labelled transition relations provide just enough information so
that the reductions of a term A : s are determined by the labelled transitions of s and the structure
of A, which is the main property required for a congruence proof. A precise result, showing that
the labelled transitions provide no extraneous information, would be desirable.
Bisimulation Congruence Bisimulation - is defined exactly as in the previous section. As
before, the congruence proof requires dissection lemmas, analogous to Lemmas 1 and 6, lemmas
showing that if A : s has a transition then s has a related transition, analogous to Lemmas 2,3 and
7,8, and partial converses to these, analogous to Lemmas 4 and 9. All except the main dissection
lemma are omitted here, but can be found in the long version.
Lemma 11 (Dissection) If m - 0,
with A and B linear, and A : a one of the following hold
Transitions s F
\gamma!T , for are defined by:
ffl For F j id
ffl For F deep in argument 1: s F
\gamma!T iff there exist
permutation
linear and deep
linear, deep in argument 1 and 1-separated
such that
ffl For F shallow in argument 1 and F 6j id
\gamma!T iff there exist
permutation
linear and deep
linear and deep
such that
Figure
2: Contextual Labelled Transitions
em 3
e
e
R
s
F
e
R
Deep Shallow
Figure
3: Contextual Labelled Transitions Illustrated
1. (a is not deeply in any component of b) There exist
linear and 1-separated
linear and deep
such that
a j par 1+m3
There are m 1 of the b in a, m 2 of the b in A and m 3 of the b potentially overlapping A and a.
The latter are split into e 1 , in a, and e 2 , in A.
2. (a is deeply in a component of b) m - 1 and there exist
partition
linear and deep
such that
The first clause of the lemma is illustrated in Figure 4. For example, consider A : a
Clause 1 of the lemma holds, with
This dissection should give rise to a transition
Theorem 3 - is a congruence.
Remark The definitions allow only rather crude specifications of the set C of reduction contexts.
They ensure that C has a number of closure properties. Some reduction semantics require more
delicate sets of reduction contexts. For example, for a list cons constructor one might want to
allow is taken from some given set of values. This would require a
non-trivial generalisation of the theory.
A a
Figure
4: Clause 1 of Dissection Lemma
Example - CCS synchronization For our CCS fragment the definition gives
\gamma!
\gamma!
together with structurally congruent transitions , i.e. those generated by
and the reductions.
Proposition 12 - coincides with bisimulation over the labelled transitions of Section 1.
Proof Write - std for the standard bisimulation over the labelled transitions of Section 1. To show
- std is a bisimulation for the contextual labelled transitions, suppose
\gamma! T . There
must exist u and r such that P j but then P ff
so there exists Q 0
such that P 0 ff
There must then exist u 0 and r 0 such that
hence
\gamma! . Using the fact that - std is a congruence we have
so
For the converse, suppose
\gamma!Q. There must exist u and r such that P j ff:u j r
and
\gamma! so there exists T 0 such that P
There must then exist u 0 and r 0 such that
the definition of [
The standard transitions coincide (modulo structural congruence) with the contextual labelled transitions
with their parameter instantiated by 0. One might look for general conditions on R under
which bisimulation over such 0-instantiated transitions is already a congruence, and coincides with
-.
Example - Ambient movement The CCS fragment is degenerate in several respects - in the
left hand side of the rewrite rule there are no nested non-parallel symbols and no parameters in
parallel with any non-0 term, so there are no deep transitions and no partial instantiations. As a
less degenerate example we consider a fragment of the Ambient Calculus [CG98] without binding.
The signature \Sigma Amb has unary m[ ] (written outfix), in m:, out m: and open m:, for all m 2 A. Of
these only the m[ ] allow reduction. The rewrite rules RAmb are
open
The definition gives the transitions below, together with structurally congruent transitions, permutation
instances, and the reductions.
in m:s j r n[
out
open
5 Conclusion
We have given general definitions of contextual labelled transitions, and bisimulation congruence
results, for three simple classes of reduction semantics. It is preliminary work - the definitions
may inform work on particular interesting calculi, but to directly apply the results they must be
generalized to more expressive classes of reduction semantics. Several directions suggest themselves.
Higher order rewriting Functional programming languages can generally be equipped with
straightforward definitions of operational congruence, involving quantification over contexts. As
discussed in the introduction, in several cases these have been given tractable characterisations in
terms of bisimulation. One might generalise the term rewriting case of Section 3 to some notion of
higher order rewriting [vR96] equipped with non-trivial sets of reduction contexts, to investigate the
extent to which this can be done uniformly.
Name binding To express calculi with mobile scopes, such as the -calculus and its descendants,
one requires a syntax with name binding, and a structural congruence allowing scope extrusion.
Generalising the definitions of Section 4 to the class of all non-higher-order action calculi would take
in a number of examples, some of which currently lack satisfactory operational congruences, and
should show how the indexed structure of - labelled transitions arises from the rewrite rules and
structural congruence.
Ultimately one would like to treat concurrent functional languages. In particluar cases it has
been shown that one can define labelled transitions that give rise to bisimulation congruences, e.g.
by Ferreira, Hennessy and Jeffrey for Core CML [FHJ96]. To express the reduction semantics of
such languages would require both higher order rules and a rich structural congruence.
Colouring The definition of labelled transitions in Section 4 is rather intricate - for tractable
generalisations, to more expressive settings, one would like a more concise characterisation. A
promising approach seems to be to work with coloured terms, in which each symbol except j and 0
is given a tag from a set of colours. This gives a notion of occurrence of a symbol in a term that is
preserved by structural congruence and context application, and hence provides a different way of
formalising the idea that the label of a transition s F
\gamma!T must be part of a redex within F : s.
Observational congruences We have focussed on strong bisimulation, which is a very intensional
equivalence. It would be interesting to know the extent to which congruence proofs can be
given uniformly for equivalences that abstract from branching time, internal reductions etc. More
particularly, one would like to know whether Theorem 2 holds without the restriction to right-affine
rewrite rules. One can define barbs for an arbitrary calculus by s # () 9F 6j id
\gamma!T , so s #
iff s has some potential interaction with a context. Conditions under which this barbed bisimulation
congruence coincides with - could provide a useful test of the expressiveness of calculi.
Structural operational semantics Our definitions of labelled transition relations are not inductive
on term structure. Several authors have considered calculi equipped with labelled transitions
defined by an SOS in some well-behaved format, e.g. [dS85, BIM95, GV92, GM98, TP97, Ber98].
The relationship between the two is unclear - one would like conditions on rewrite rules that ensure
the labelled transitions of Section 4 are definable by a functorial operational semantics [TP97].
Conversely, one would like conditions on an SOS ensuring that it is characterised by a reduction
semantics.
Acknowledgements
I would like to thank Philippa Gardner, Ole Jensen, S-ren Lassen, Jamey
Leifer, Jean-Jacques L'evy, and Robin Milner, for many interesting discussions and comments on
earlier drafts, and to acknowledge support from EPSRC grant GR/K 38403.
--R
A calculus for cryptographic protocols: The spi calculus.
Full abstraction in the lazy lambda calculus.
The chemical abstract machine.
A congruence theorem for structured operational semantics of higher-order languages
Bisimulation can't be traced.
A new computational model and its discipline of programming.
Mobile ambients.
Testing equivalences for processes.
A calculus of communicating systems with label-passing
Control operators
The reflexive CHAM and the join-calculus
A theory of weak bisimulation for core CML.
The linear time - branching time spectrum
The linear time - branching time spectrum II
The tile model.
Bisimilarity as a theory of functional programming.
Bisimilarity for a first-order calculus of objects with subtyping
Structured operational semantics and bisimulation as a congruence.
Equality in lazy computation systems.
On reduction-based process semantics
PhD thesis
Control Structures.
Functions as processes.
Calculi for interaction.
A calculus of mobile processes
Barbed bisimulation.
Constraints for free in concurrent computation.
A structural approach to operational semantics.
A typed language for distributed mobile processes.
Expressing Mobility in Process Algebras: First-Order and Higher-Order Paradigms
On implementations and semantics of a concurrent programming language.
From rewrite rules to bisimulation congruences.
Global/local subtyping and capability inference for a distributed
Towards a mathematical operational semantics.
Confluence and Normalisation for Higher-Order Rewriting
--TR
Equality in lazy computation systems
The linear time-branching time spectrum (extended abstract)
The chemical abstract machine
Dynamic congruence vs. progressing bisimulation for CCS
Structured operational semantics and bisimulation as a congruence
A calculus of mobile processes, I
Full abstraction in the lazy lambda calculus
Turning SOS rules into equations
Bisimulation can''t be traced
On reduction-based process semantics
A theory of weak bisimulation for core CML
The reflexive CHAM and the join-calculus
Bisimilarity for a first-order calculus of objects with subtyping
The MYAMPERSANDpgr;-calculus in direct style
A calculus for cryptographic protocols
A typed language for distributed mobile processes (extended abstract)
rewriting and all that
The tile model
Constraints for Free in Concurrent Computation
Barbed Bisimulation
Global/Local Subtyping and Capability Inference for a Distributed pi-calculus
From Rewrite to Bisimulation Congruences
A Categorical Axiomatics for Bisimulation
The Linear Time - Branching Time Spectrum II
On Implementations and Semantics of a Concurrent Programming Language
Mobile Ambients
Towards a Mathematical Operational Semantics
A Congruence Theorem for Structured Operational Semantics of Higher-Order Languages
--CTR
Davide Grohmann , Marino Miculan, Directed Bigraphs, Electronic Notes in Theoretical Computer Science (ENTCS), 173, p.121-137, April, 2007
Henrik Pilegaard , Flemming Nielson , Hanne Riis Nielson, Active Evaluation Contexts for Reaction Semantics, Electronic Notes in Theoretical Computer Science (ENTCS), v.175 n.1, p.57-70, May, 2007
Vladimiro Sassone , Pawe Sobociski, Locating reaction with 2-categories, Theoretical Computer Science, v.333 n.1-2, p.297-327, 1 March 2005
Ole Hgh Jensen , Robin Milner, Bigraphs and transitions, ACM SIGPLAN Notices, v.38 n.1, p.38-49, January
Hartmut Ehrig , Barbara Knig, Deriving bisimulation congruences in the DPO approach to graph rewriting with borrowed contexts, Mathematical Structures in Computer Science, v.16 n.6, p.1133-1163, December 2006
Massimo Merro , Francesco Zappa Nardelli, Behavioral theory for mobile ambients, Journal of the ACM (JACM), v.52 n.6, p.961-1023, November 2005 | term rewriting;bisimuation;operational congruences;labelled transition systems;process calculi;operational sematics |
507274 | Communication complexity method for measuring nondeterminism in finite automata. | While deterministic finite automata seem to be well understood, surprisingly many important problems concerning nondeterministic finite automata (nfa's) remain open. One such problem area is the study of different measures of nondeterminism in finite automata and the estimation of the sizes of minimal nondeterministic finite automata. In this paper the concept of communication complexity is applied in order to achieve progress in this problem area. The main results are as follows:(1) Deterministic communication complexity provides lower bounds on the size of nfa's with bounded unambiguity. Applying this fact, the proofs of several results about nfa's with limited ambiguity can be simplified and presented in a uniform way. (2) There is a family of languages KONk2 with an exponential size gap between nfa's with polynomial leaf number/ambiguity and nfa's with ambiguity k. This partially provides an answer to the open problem posed by B. Ravikumar and O. Ibarra (1989, SIAM J. Comput. 18, 1263-1282) and H. Leung (1998, SIAM J. Comput. 27, 1073-1082). | Introduction
In this paper the classical models of one-way nite automata (dfa's) and their
nondeterministic counterparts (nfa's) [RS59] are investigated. While the
structure and fundamental properties of dfa's are well understood, this is not
the case for nfa's. For instance, we have ecient algorithms for constructing
minimal dfa's, but the complexity of approximating the size of a minimal
nfa is still unresolved (whereas nding a minimal nfa solves a PSPACE
complete problem). Hromkovic, Seibert and Wilke [HSW97] proved that
the gap between the length of regular expressions and the number of edges
of corresponding nfa's is between n log 2 n and n log n, but the exact relation
is unknown. Another principal open question is to determine whether there
is an exponential gap between two-way deterministic nite automata and
two-way nondeterministic ones. The last partially successful attack on this
problem was done in the late seventies by Sipser [S80], who established
an exponential gap between determinism and nondeterminism for so-called
sweeping automata (the property of sweeping is essential [M80]). The largest
known gap for the general case is quadratic [HS99].
Our main goal is to contribute to a better understanding of the power of
nondeterminism in nite automata (see [RS59], [MF71], [Mo71], [Sc78] for
very early papers on this topic). We focus on the following problems:
1. The best known method for proving lower bounds on the size of minimal
nfa's is based on nondeterministic communication complexity
[Hr97]. All other known methods are special cases of this method.
Are there methods that provide better lower bounds at least for some
languages? How can one prove lower bounds on the size of unambiguous
nfa's (unfa's), that is nfa's which have at most one accepting
computation for every word?
2. It is a well known fact [MF71], [Mo71] that there is an exponential gap
between the sizes of minimal dfa's and nfa's for some regular languages.
This is even known for dfa's and unfa's [Sc78], [SH85], [RI89], for unfa's
and nfa's with constant ambiguity [Sc78], [RI89], and for ufa's with
polynomial ambiguity and nfa's [HL98]. 1 But, it is open [RI89], [HL98]
whether there exists an exponential gap between the sizes of minimal
nfa's with constant ambiguity and nfa's with polynomial ambiguity.
3. The degree of nondeterminism is measured in the literature in three different
ways. Let A be an nfa. The rst measure advice A (n) equals the
number of advice bits for inputs of length n, i.e., the maximum number
of nondeterministic guesses in computations for inputs of length
n. The second measure leaf A (n) determines the maximum number of
We apologize for claiming the above results as our contribution in the extended abstract
of this paper [HKK00] instead of referring to [Sc78], [SH85], [RI89], [HL98]
computations for inputs of length n. ambig A (n) as the third measure
equals the maximum number of accepting computations for inputs of
length at most n. Obviously the second and third measure may be
exponential in the rst one. The question is whether the measures are
more specically correlated.
To attack these problems we establish some new bridges between automata
theory and communication complexity. The communication complexity
of two-party protocols was introduced by Yao [Y79] (and implicitly
considered by Abelson [Ab78], too). The initial goal was to develop
a method for proving lower bounds on the complexity of distributive and
parallel computations (see, for instance, [Th79, Th80, Hr97, KN97]). Due
to the well developed, nontrivial mathematical machinery for determining
the communication complexity of concrete problems (see, for instance
[AUY83, DHS96, Hr97, Hr00, KN97, L90, NW95, PS82]), communication
complexity has established itself as a sub-area of complexity theory. The
main contributions of the study of communication complexity lie especially
in proving lower bounds on the complexity of specic problems, and in comparing
the power of dierent modes of computation.
Here, for the rst time, communication complexity is applied for the
study of nondeterministic nite automata, with the emphasis on the tradeo
between the size and the degree of nondeterminism of nfa's. Our procedure
is mainly based on the following facts:
(i) The theory of communication complexity contains deep results about
the nature of nondeterminism (see, e.g. [KNSW94, HS96]) that use the
combinatorial structure of the communication matrix as the computing
problem representation.
(ii) In [DHRS97, Hr97, HS00], the non-uniform model of communication
protocols for computing nite functions was extended to a uniform
model for recognizing languages in such a way that several results
about communication complexity can be successfully applied for uniform
computing models like automata.
Combining (i) and (ii) with building of new bridges between communication
complexity and nfa's we establish the following main results.
1. Let cc(L) resp. ncc(L) denote the deterministic resp. nondeterministic
communication complexity of L. It is well known that 2 cc(L) and
2 ncc(L) are lower bounds on the sizes of the minimal dfa for L and a
minimal nfa for L respectively. First we show that there are regular
languages L for which there is an exponential gap between 2 ncc(L) and
the minimal size of nfa's for L. This means, that the lower bound
method based on communication complexity may be very weak. Then
we show as a somewhat surprising result that 2
cc(L)=k 2 is a lower
bound on the size of nfa's with ambiguity k for L. We furthermore show
that Rank(M) 1=k 1 is a lower bound for the number of states for
nfa's with ambiguity k, where M is a communication matrix associated
with L. It is possible that this lower bound is always better than the
rst one (see [KN97] for a discussion of the quality of the so-called
rank lower bound on communication complexity).
As a corollary we present a sequence of regular languages NIDm such
that the size of a minimal nfa is linear in m, while the size of every
unfa for NIDm is exponential in m. This substantially simplies the
proofs of similar results in [Sc78], [SH85].
2. We establish the relation
advice A (n); ambig(n) A leaf A (n) O(advice A (n) ambig A (n))
for any minimal nfa A. Observe that the upper bound on leaf A (n) implies
that minimal unambiguous nfa's may have at most O(advice A (n))
O(n) dierent computations on any input of size n, and an exponential
gap between advice A (n) and leaf A (n) is possible only if the
degree of ambiguity is exponential in n.
Furthermore we show that leaf A (n) is always either bounded by a
constant, or at least linear but polynomially bounded, or otherwise at
least exponential in the input length.
3. We present another sequence of regular languages than in [HL98] with
an exponential gap between the size of nfa's with exponential ambi-
guity, and nfa's with polynomial ambiguity. This result is obtained
by showing that small nfa's with polynomial ambiguity for the Kleene
closure (L#) imply small unfa's that work correctly on a polynomial
fraction of inputs. Our technique is more general than the proof
method of Hing Leung [HL98] and provides an essentially shorter
proof.
Furthermore we describe a sequence of languages KON k 2 such that
there is an exponential gap between the size of nfa's with polynomial
ambiguity and nfa's with ambiguity k. This provides a partial answer
to the open question [RI89], [HL98] whether there is an exponential
gap between minimal nfa's with constant ambiguity and minimal nfa's
with polynomial ambiguity.
This paper is organized as follows. In section 2 we give the basic deni-
tions and x the notation. In order to increase the readability of this paper
for readers who are not familiar with communication complexity theory,
we give more details about communication protocols and build the basic
intuition of their relation to nite automata. Section 3 is devoted to the
investigation of the relation between the size of nfa's and communication
complexity. Section 4 studies the relation between dierent measures of
nondeterminism in nite automata, and presents the remaining results.
Denitions and Preliminaries
We consider the standard one-way models of nite automata (dfa's) and
nondeterministic nite automata (nfa's). For every automaton A, L(A)
denotes the language accepted by A. The number of states of A is called
the size of A and denoted size A . For every regular language L we denote
the size of the minimal dfa for L by s(L) and the size of minimal nfa's
accepting L by ns(L). For every alphabet , ng and
ng.
For any nfa A and any input x we use the computation tree T A;x to
computations of A on x. Obviously the number of leaves of
T A;x is the number of dierent computations of A on x.
The ambiguity of an nfa A on input x is the number of accepting computations
of A on x, i.e., the number of accepting leaves of T A;x . If the nfa
A has ambiguity one for all inputs, then A is called an unambiguous nfa
(unfa) and uns(L) denotes the size of a minimal unfa accepting L. More
generally, if an nfa A has ambiguity at most k for all inputs, then A is called
a k-ambiguous nfa and ns k (L) denotes the size of a minimal k-ambiguous
nfa accepting L.
For every nfa A we measure the degree of nondeterminism as follows.
Let denote the alphabet of A. For every input x 2 and for every computation
C of A on x we dene advice(C) as the number of nondeterministic
choices during the computation C, i.e., the number of nodes on the path of
C in T A;x which have more than one successor. Then
advice A is a computation of A on xg
and advice A
For every x 2 we dene leaf A (x) as the number of leaves of T A;x and
set
leaf A
For every x 2 we dene ambig A (x) as the number of accepting leaves
of T A;x and set
ambig A
Since a language need not contain words of all lengths we dene ambiguity
over all words of length at most n which makes the measure monotone.
Observe that the leaf and advice measures are monotone as well.
Note that different definitions have been used by other authors; see e.g.
[GLW92], where the number of advice bits is maximized over all
inputs and minimized over all accepting computations on those inputs. In
this case there are nfa's which use more than constant but less than linear (in
the input length) advice bits, but this behavior is not known to be possible
for minimal nfa's.
To prove lower bounds on the size of finite automata we shall use two-party
communication complexity. This widely studied measure was introduced
by Yao [Y79] and is the subject of two monographs [Hr97], [KN97].
First, we introduce the standard, non-uniform model of (communication)
protocols for computing finite functions. A (two-party communication)
protocol $P = (C_I, C_{II})$ consists of two computers $C_I$ and $C_{II}$ of unbounded
computational power (sometimes called Alice and Bob in the literature)
and a communication link between them. $P$ computes a finite function
$f : U \times V \to Z$ in the following way. At the beginning $C_I$ gets an input
$\alpha \in U$ and $C_{II}$ obtains an input $\beta \in V$. Then $C_I$ and $C_{II}$ communicate
according to the rules of the protocol by exchanging binary messages until
one of them knows $f(\alpha, \beta)$. $C_I$ and $C_{II}$ may be viewed as functions in this
communication, where the arguments of $C_I$ ($C_{II}$) are its input $\alpha$ ($\beta$) and
the whole previous communication history (the sequence $c_1, c_2, \ldots, c_i$ of
all messages exchanged between $C_I$ and $C_{II}$ up until now), and the output
is the new message submitted. We also assume that $C_I$ ($C_{II}$) completely
knows the behavior of $C_{II}$ ($C_I$) in all situations (for all arguments). Another
important assumption is that every protocol has the prefix-freeness
property. This means that for any $\alpha, \alpha' \in U$ [$\beta, \beta' \in V$] and any communication
history $c$, the message $C_I(\alpha, c)$ is not a proper prefix of $C_I(\alpha', c)$
[$C_{II}(\beta, c)$ is not a proper prefix of $C_{II}(\beta', c)$]. Informally, this means that the messages
are self-delimiting and we do not need any special symbol marking the end
of the message.
Formally, the computation of a protocol $(C_I, C_{II})$ on an input $(\alpha, \beta)$ is a sequence
$c_1, c_2, \ldots, c_k, r$, where $c_1, \ldots, c_k$ are the
messages and $r \in Z$ is the result of the computation. The communication
complexity of the computation of $P$ on an input $(\alpha, \beta)$ is the sum
of the lengths of all messages exchanged in the communication. The communication
complexity of the protocol $P$, $cc(P)$, is the maximum of
the communication complexities over all inputs from $U \times V$.
Due to the prefix-freeness property of messages we have that if, for two
computations $c_1, \ldots, c_k, r$ and $c_1', \ldots, c_{k'}', r'$, the concatenations
$c_1 c_2 \cdots c_k$ and $c_1' c_2' \cdots c_{k'}'$ coincide,
then $k = k'$ and $c_i = c_i'$ for all $i$. Consequently, if a protocol
allows $m$ different computations, then its communication complexity must
be at least $\lceil \log_2 m \rceil - 1$.
The communication complexity of $f$, $cc(f)$, is the communication
complexity of the best protocol for $f$, i.e.,
$cc(f) = \min\{cc(P) \mid P$ is a protocol computing $f\}$.
The protocols whose computations consist of one message only (i.e., $C_I$
sends a message to $C_{II}$ and then $C_{II}$ must compute the result) are called
one-way protocols. For every finite function $f$,
$cc_1(f) = \min\{cc(P) \mid P$ is a one-way protocol computing $f\}$
is the one-way communication complexity of $f$.
The representation of a finite function $f : U \times V \to \{0,1\}$ by the so-called
communication matrix is very helpful for investigating the communication
complexity of $f$. The communication matrix of $f$ is the $|U| \times |V|$ Boolean
matrix $M_f$ defined by $M_f[u, v] = f(u, v)$
for all $u \in U$ and $v \in V$. So, $M_f$ consists of $|U|$ rows and $|V|$ columns.
If one wants to fix this representation (which is not necessary for the relation
to the communication complexity of $f$), one can consider some kind
of lexicographical order for elements in $U$ and $V$. But the special order of
rows and columns does not matter for our applications.
Figure 1 presents the communication matrix $M_f$ for a Boolean function
$f : \{0,1\}^3 \times \{0,1\}^3 \to \{0,1\}$ defined in terms of $\oplus$, where $\oplus$ is addition modulo 2.
Definition 1. Let $U, V$ be two sets and $f : U \times V \to \{0,1\}$. For every $\alpha \in U$, the row
of $\alpha$ in $M_f$ is $\mathrm{row}_\alpha = (f(\alpha, v))_{v \in V}$.
For every $\beta \in V$, the column of $\beta$ in $M_f$ is $\mathrm{column}_\beta = (f(u, \beta))_{u \in U}$.
$\mathrm{Row}(M_f)$ is the number of different rows of $M_f$.
A submatrix of $M_f$ is any intersection of a non-empty set of rows with
a non-empty set of columns. A $\delta$-monochromatic submatrix, $\delta \in \{0,1\}$,
of $M_f$ is any submatrix of $M_f$ whose elements are all equal to $\delta$ (Figure 1
depicts the 1-monochromatic submatrix that is the intersection of rows 001,
010, 100 and 111 with the columns 000, 011, 101 and 110).
Let $S = \{M_1, \ldots, M_k\}$ be a set of monochromatic submatrices of a
Boolean matrix $M_f$. We say that $S$ is a cover of $M_f$ if, for every element
$a$ of $M_f$, there exists an $m \in \{1, \ldots, k\}$ such that $a$ is an element of
$M_m$. We say that $S$ is an exact cover of $M_f$ if $S$ is a cover of $M_f$ and
the submatrices $M_1, \ldots, M_k$ are pairwise disjoint. The tiling complexity
of $M_f$ is
$T(M_f) = \min\{|S| \mid S$ is an exact cover of $M_f\}$.
$\square$
The work of a protocol $(C_I, C_{II})$ for $f$ can be viewed as a game on the
communication matrix $M_f$. $C_I$ with input $\alpha$ knows the row $\mathrm{row}_\alpha$, $C_{II}$ with
input $\beta$ knows the column $\mathrm{column}_\beta$, and they have to determine $f(\alpha, \beta)$. 2
2 Note that they do not need to estimate the coordinates of the intersection of $\mathrm{row}_\alpha$ and $\mathrm{column}_\beta$.
A communication message $c_1$ submitted from $C_I$ to $C_{II}$ can be viewed as
the reduction of $M_f$ to a submatrix $M_f(c_1)$ consisting of the rows for which $C_I$
sends $c_1$, because $C_{II}$ knows the behavior of $C_I$. Similarly, the second message
$c_2$ sent from $C_{II}$ to $C_I$ restricts $M_f(c_1)$ to the submatrix consisting
of the columns of $M_f(c_1)$ for which $C_{II}$ with the second argument $c_1$ sends
$c_2$, and so on, until one of the computers knows the result. So, every computation of $(C_I, C_{II})$ that finishes
with 1 (0) defines a 1-monochromatic (0-monochromatic) submatrix of $M_f$.
This means that all inputs $(\alpha, \beta)$ contained in this monochromatic submatrix
have the same computation of the protocol $(C_I, C_{II})$. So, $(C_I, C_{II})$
unambiguously determines an exact cover of $M_f$ by monochromatic submatrices.
More precisely, a protocol with $k$ different computations determines
an exact cover of cardinality $k$. The immediate consequence is:
Fact 1. For every finite function $f$, $cc(f) \ge \lceil \log_2(T(M_f)) \rceil - 1$.
Another important consequence is the following fact.
Fact 2. For every finite function $f$, $cc_1(f) = \lceil \log_2(\mathrm{Row}(M_f)) \rceil$.
Proof: For no two different rows $\mathrm{row}_\alpha$ and $\mathrm{row}_{\alpha'}$ can a one-way protocol
computing $f$ send the same message $c$, because $C_{II}$ cannot determine the
result for any $\beta$ such that $\mathrm{column}_\beta$ has different values on the intersections
with $\mathrm{row}_\alpha$ and $\mathrm{row}_{\alpha'}$. On the other hand, $\mathrm{Row}(M_f)$ different messages are
enough (one message for a group of identical rows) to construct a one-way
protocol for $f$. $\square$
Since the number of 1-monochromatic matrices in any exact cover of all
ones in $M_f$ is a trivial upper bound on the rank of $M_f$, Fact 1 implies:
Fact 3. For every finite function $f$ and every field $F$ with
neutral elements 0 and 1,
$cc(f) \ge \lceil \log_2(\mathrm{Rank}_F(M_f)) \rceil$.
Let $Q$ be the set of rational numbers. Since it is well-known that
$Q$ with the usual addition and multiplication is a field with neutral elements 0 and 1,
we formulate Fact 3 as
$cc(f) \ge \lceil \log_2(\mathrm{Rank}_Q(M_f)) \rceil$
for every finite function $f$.
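The two combinatorial quantities behind Facts 2 and 3 are easy to compute exactly for small matrices. The following sketch (ours) counts distinct rows and computes the rank over $Q$ with exact rationals, for the "complement of the identity" matrix that reappears later in the proof of Theorem 2.

```python
from fractions import Fraction

def row_count(M):
    return len({tuple(r) for r in M})

def rank_Q(M):                      # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    rank = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((r for r in range(rank, len(M)) if M[r][c] != 0), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c] != 0:
                f = M[r][c] / M[rank][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

n = 3
M = [[0 if i == j else 1 for j in range(2 ** n)] for i in range(2 ** n)]
print(row_count(M), rank_Q(M))      # 8 and 8: cc >= ceil(log2 Rank) by Fact 3
```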
Now, we consider nondeterministic communication complexity and its
relation to some combinatorial properties of $M_f$. A nondeterministic
protocol $P$ computing a finite function $f : U \times V \to \{0,1\}$ consists of two
nondeterministic computers $C_I$ and $C_{II}$ that have a nondeterministic choice
from a finite number of messages for every input argument. For any input
$(\alpha, \beta)$ we say that $P$ accepts $(\alpha, \beta)$ if
there exists a computation of $P$ on $(\alpha, \beta)$ that ends with the result 1. So,
$P$ computes 0 for an input $(\alpha, \beta)$ (rejects $(\alpha, \beta)$) if all computations of $P$
on $(\alpha, \beta)$ end with the result 0. The nondeterministic communication
complexity of $P$, denoted $ncc(P)$, is the maximum of the communication
complexities of all accepting computations of $P$. The nondeterministic
communication complexity of $f$ is
$ncc(f) = \min\{ncc(P) \mid P$ is a nondeterministic protocol computing $f\}$.
Let $ncc_1(f)$ denote the one-way nondeterministic communication
complexity of $f$.
Similarly as in the deterministic case, every accepting computation of $P$
for $f$ unambiguously determines a 1-monochromatic submatrix of $M_f$, and
the union of all such 1-monochromatic submatrices must cover all the 1's of
$M_f$ but no 0 of $M_f$. The difference to the deterministic case is that these
1-monochromatic submatrices may overlap, which corresponds to the fact
that $P$ may have several different accepting computations on a given input.
Definition 2. Let $M_f$ be a Boolean matrix, and let $S = \{M_1, \ldots, M_k\}$
be a set of 1-monochromatic submatrices of $M_f$. We say that $S$ is a 1-cover
of $M_f$ if every 1 of $M_f$ is contained in at least one of the 1-submatrices of
$S$. We define
$\mathrm{Cov}(M_f) = \min\{|S| \mid S$ is a 1-cover of $M_f\}$.
$\square$
Fact 4. For every finite function $f$,
$\lceil \log_2(\mathrm{Cov}(M_f)) \rceil - 1 \le ncc(f) \le ncc_1(f) \le \lceil \log_2(\mathrm{Cov}(M_f)) \rceil$.
Proof: The above consideration, showing that a nondeterministic protocol
with $m$ accepting computations determines a 1-cover of $M_f$ of cardinality at most
$m$, implies the lower bound.
Since $ncc(f) \le ncc_1(f)$ for every $f$, it is sufficient to prove $ncc_1(f) \le \lceil \log_2(\mathrm{Cov}(M_f)) \rceil$.
Let $S = \{M_1, \ldots, M_m\}$ with $m = \mathrm{Cov}(M_f)$ be a 1-cover of $M_f$. A one-way
nondeterministic protocol $(C_I, C_{II})$ can work on an input $(\alpha, \beta)$ as
follows. $C_I$ with input $\alpha$ nondeterministically chooses one of the matrices
of $S$ with a non-empty intersection with $\mathrm{row}_\alpha$ and sends the binary code
of its index $i$ to $C_{II}$. If $\mathrm{column}_\beta$ has a non-empty intersection with $M_i$,
then $C_{II}$ accepts. Since message length $\lceil \log_2 m \rceil$ suffices to code $m$ different
messages, the claim follows. $\square$
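The cover-based protocol from this proof is mechanical enough to write down directly. The sketch below (ours; the toy equality matrix and its diagonal cover are assumptions for illustration) tests whether some accepting computation exists for a given input pair.

```python
# One-way nondeterministic protocol from a 1-cover: C_I guesses a cover matrix
# M_i meeting its row and sends the index i; C_II accepts iff its column meets M_i.
def ndet_protocol_accepts(cover, alpha, beta):
    # cover: list of (rows, cols) pairs describing 1-monochromatic submatrices;
    # message length ceil(log2(len(cover))) suffices to transmit the index.
    for i, (rows, cols) in enumerate(cover):   # one iteration per nondeterministic guess
        if alpha in rows and beta in cols:
            return True
    return False

# Toy function f(a, b) = 1 iff a == b on U = V = {0, 1, 2}, covered by diagonal cells.
cover = [({u}, {u}) for u in range(3)]
assert ndet_protocol_accepts(cover, 2, 2) and not ndet_protocol_accepts(cover, 0, 1)
```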
The first trivial bridge [Hr86] between automata and communication
complexity says that
$cc_1(f_{2n,L}) \le \lceil \log_2 s(L) \rceil$ and $ncc_1(f_{2n,L}) \le \lceil \log_2 ns(L) \rceil$ (1)
for every regular language $L$ and every positive integer $n$, where
$f_{2n,L} : \Sigma^n \times \Sigma^n \to \{0,1\}$ is defined by $f_{2n,L}(\alpha, \beta) = 1$ iff $\alpha\beta \in L$. The argument for this lower
bound is very simple. Let $A$ be a dfa (nfa) accepting $L$ with $s(L)$ ($ns(L)$)
states. Then a one-way protocol can compute $f_{2n,L}$ as follows. For an input
$(\alpha, \beta)$, $C_I$ simulates the work of $A$ on $\alpha$ and sends the name of the state $q$ reached
by $A$ after reading $\alpha$ to $C_{II}$. $C_{II}$ continues the simulation on the suffix
$\beta$ from the state $q$. If $A$ accepts $\alpha\beta$, then $(C_I, C_{II})$ accepts $(\alpha, \beta)$.
Unfortunately, the lower bound (1) may be arbitrarily bad for both $s(L)$
and $ns(L)$ because this non-uniform approach cannot completely capture
the complexity of the uniform acceptance of $L$. We shall overcome this
difficulty in the next section.
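The simulation behind bound (1) amounts to passing one state name across the cut. Here is a sketch (ours; the two-state parity dfa is an assumed example) of the deterministic case.

```python
# One-way protocol from a dfa: C_I runs the dfa on alpha and sends the reached
# state; C_II finishes the run on beta -- so s(L) many messages always suffice.
def dfa_protocol(delta, start, accepting, alpha, beta):
    q = start
    for symbol in alpha:              # C_I's half of the input
        q = delta[(q, symbol)]
    message = q                       # the only communication
    for symbol in beta:               # C_II's half, starting from the message
        message = delta[(message, symbol)]
    return message in accepting       # accept iff alpha.beta is in L

# dfa for "even number of 1's": two states, hence two possible messages.
delta = {("e", "0"): "e", ("e", "1"): "o", ("o", "0"): "o", ("o", "1"): "e"}
assert dfa_protocol(delta, "e", {"e"}, "10", "01")
```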
3 Communication Complexity and Finite
Automata
To improve lower bounds on $s(L)$ and $ns(L)$ by communication complexity,
Duris, Hromkovic, Rolim, and Schnitger [DHRS97] (see also [Hr86, HS00])
introduced uniform protocols and communication matrices of regular languages
as follows. For every regular language $L \subseteq \Sigma^*$, we define the infinite
Boolean matrix
$M_L = [a_{u,v}]_{u,v \in \Sigma^*}$ with $a_{u,v} = 1$ iff $uv \in L$.
Since every regular language has a finite index (Myhill-Nerode theorem), the
number of different rows of $M_L$ is finite. So, we can again use the protocols
as finite devices for accepting $L$.
Definition 3. Let $\Sigma$ be an alphabet and let $L \subseteq \Sigma^*$. A one-way uniform
protocol over $\Sigma$ is a pair $D = (C_I, C_{II})$, where
$C_I : \Sigma^* \to \{0,1\}^*$ is a function with the prefix-freeness property, and
$\{C_I(\alpha) \mid \alpha \in \Sigma^*\}$ is a finite set, and
$C_{II} : \Sigma^* \times \{0,1\}^* \to \{\mathrm{accept}, \mathrm{reject}\}$ is a function.
We say that $D$ accepts $L$ if, for all $\alpha, \beta \in \Sigma^*$: $C_{II}(\beta, C_I(\alpha)) = \mathrm{accept} \iff \alpha\beta \in L$.
The message complexity of the protocol $D$ is $mc(D) = |\{C_I(\alpha) \mid \alpha \in \Sigma^*\}|$
(i.e., the number of the messages used by $D$), and the message complexity
of $L$ is
$mc(L) = \min\{mc(D) \mid D$ is a one-way uniform protocol accepting $L\}$.
The communication complexity of $D$ is $cc(D) = \max\{|C_I(\alpha)| \mid \alpha \in \Sigma^*\}$,
and the one-way communication complexity of $L$ is
$cc_1(L) = \min\{cc(D) \mid D$ is a one-way uniform protocol accepting $L\}$.
$\square$
If one wants to give a formal definition of a one-way nondeterministic
protocol over $\Sigma$, it is sufficient to consider $C_I$ as a function from $\Sigma^*$ to the
finite subsets of $\{0,1\}^*$. The acceptance criterion for $L$ changes to
($\exists c \in C_I(\alpha)$ such that $\mathrm{accept} \in C_{II}(\beta, c)$) $\iff \alpha\beta \in L$.
Let $nmc(L)$ [$ncc_1(L)$] denote the one-way nondeterministic message
[communication] complexity of $L$. We observe that the main difference
between uniform protocols and (standard) protocols is the way the input
is partitioned between $C_I$ and $C_{II}$. If a protocol $D$ computes a Boolean
function $f : \{0,1\}^r \times \{0,1\}^s \to \{0,1\}$, one can view this as the partition of
inputs of $f$ (from $\{0,1\}^{r+s}$) into a prefix of $r$ bits and a suffix of $s$ bits (i.e.,
assigning the first $r$ bits to $C_I$ and the rest to $C_{II}$), and a communication
between $C_I$ and $C_{II}$ in order to compute the value of $f$. A uniform protocol
over $\Sigma$ considers, for every input $w \in \Sigma^*$, all partitions of $w$ into a prefix and a suffix, and
for each of these partitions it must accept (reject) if $w \in L$ ($w \notin L$). This
means that the matrices $M_L$ are special Boolean matrices, with rows and
columns indexed by the words of $\Sigma^*$ in some fixed order, and a uniform protocol $D$ for $L$
must recognize the membership of $w$ in $L$ for every partition of $w$ between
$C_I$ and $C_{II}$.
The following result from [DHRS97, HS00] shows in fact that one-way
uniform protocols are nothing else but deterministic finite automata.
Fact 5. Let $\Sigma$ be an alphabet. For every regular language $L \subseteq \Sigma^*$,
$mc(L) = s(L)$.
The idea of the proof: Fact 5 is just a reformulation of
the Myhill-Nerode theorem. In Section 2 we have already observed that
$\mathrm{Row}(M_L)$ is exactly the number of different messages used by an optimal
one-way protocol. 3 $\square$
Following the idea of the simulation of a finite automaton by a protocol
in the nondeterministic case, we have the following obvious fact [Hr97].
Fact 6. For every alphabet $\Sigma$ and every regular language $L \subseteq \Sigma^*$,
$nmc(L) \le ns(L)$.
Fact 6 provides the best known lower bound proof technique for the size of
minimal nfa's. All previously known techniques like the fooling set approach
are special cases of this approach. Moreover the fooling set method, which
covers all previous efforts in proving lower bounds on $ns(L)$, can (for some
languages) provide exponentially smaller lower bounds than the method
based on nondeterministic communication complexity [DHS96].
The first question is therefore whether $nmc(L)$ can be used to approximate
$ns(L)$. Unfortunately this is not possible. Note that a result similar
to Lemma 1 was also independently established by Jiraskova [Ji99].
Lemma 1. There exists a sequence of regular languages $\{PART_n\}_{n=1}^{\infty}$ such
that $nmc(PART_n) = O(n^2)$, but $ns(PART_n) \ge 2^{n/2}$.
Proof: Let $PART_n := \{xyz \mid x, y, z \in \{0,1\}^n$ and $(x = z \Rightarrow x = y = z)\}$.
For the next considerations it is important to observe that this condition is
equivalent to the condition $x \ne z \lor x = y = z$. First we
describe a nondeterministic uniform protocol $(C_I, C_{II})$ for $PART_n$ which
uses $O(n^2)$ messages.
Players $C_I$ and $C_{II}$ compute the lengths $l_I, l_{II}$ of their inputs. $C_I$ communicates
$l_I$ and $C_{II}$ rejects when $l_I + l_{II} \ne 3n$. So we assume that
$l_I + l_{II} = 3n$ in the following.
Case 1: $l_I \le n$.
$C_I$ chooses a position $1 \le i \le l_I$ and communicates $(i, x_i)$. $C_{II}$
accepts if and only if $x_i \ne z_i$, or $x_i = z_i$ and the input is consistent with $x = y = z$.
Observe that if $x \ne z$, then there is an accepting computation because
there exists $i$ such that $x_i \ne z_i$. If however $x = z$, then $C_{II}$ accepts
if and only if $x = y = z$.
Case 2: $n < l_I \le 2n$.
$C_I$ chooses a position $1 \le i \le n$ and communicates $(i, x_i)$. Furthermore,
$C_I$ compares the parts of $x$ and $y$ it holds and sends the bit 1 if the
strings are equal and the bit 0 if the strings are different. $C_{II}$ accepts if $x_i \ne
z_i$. Otherwise (if $x_i = z_i$), $C_{II}$ checks the parts of $y$ and $z$ it holds:
3 The fact that $M_L$ is infinite does not matter because $M_L$ has a finite number of
different rows. Moreover, it would work for an infinite number of different rows (i.e., for
automata with an infinite number of states), too [Eil74].
if everything is consistent with $x = y = z$ and the bit 1 was received, then $C_{II}$ accepts, and
rejects otherwise.
Note that if $x \ne z$ then there is an accepting computation. If not, then
$C_{II}$ accepts if and only if $x = y = z$.
Case 3: $2n < l_I \le 3n$.
$C_I$ chooses a position $l_I - 2n < i \le n$ and communicates $(i, x_i)$. Furthermore,
$C_I$ compares $x$ with $y$ and with the prefix of $z$ it holds. If everything it sees is consistent with $x = y = z$,
then it signals this and $C_{II}$ accepts if and only if the remainder of $z$ agrees as well. Otherwise $C_{II}$ accepts if and only if $x_i \ne z_i$.
The protocol uses $O(n^2)$ messages, so $nmc(PART_n) = O(n^2)$.
Now we prove that $ns(PART_n) \ge 2^{n/2}$. Obviously, every nfa $B$ accepting
$PART_n$ must have the following properties:
(i) there is an accepting computation
of $B$ on every word $xxx$ for $x \in \{0,1\}^n$, and
(ii) there is
no accepting computation of $B$ on any word $xyx$ with $x \ne y$.
We prove that every nfa satisfying (i) and (ii) must have at least $2^{n/2}$
states. Let us assume the opposite. Let $B$ be an nfa with fewer than $2^{n/2}$
states that satisfies (i) and (ii). By (i), there exists an accepting
computation $C_x$ on $xxx$ for every $x \in \{0,1\}^n$. Let $Pattern(C_x) := (p, q)$,
where $p$ is the state of $C_x$ after reading $x$ and $q$ is the state of $C_x$ after
reading $xx$. Since the number of states is smaller than $2^{n/2}$, the number of
different patterns is smaller than $2^n$. So, there exist two different words $u \ne v$
such that $Pattern(C_u) = Pattern(C_v) = (r, s)$ for
some states $r, s$. This means that starting to work from $r$ on $u$ as well as on
$v$ one can reach $s$ after reading $u$ or $v$. The immediate consequence is that
there are accepting computations of $B$ on $uvu$ and $vuv$ as well. Since $u \ne v$,
neither $uvu$ nor $vuv$ belongs to $PART_n$, a contradiction with condition (ii). $\square$
To find lower bound methods for $ns(L)$ that provide results at most
polynomially smaller than $ns(L)$ is one of the central open problems on
finite automata. In the following, we concentrate on lower bounds for nfa's
with constant ambiguity. Even for unambiguous automata no nontrivial
general method for proving lower bounds has been known up to now.
To introduce our method for proving lower bounds on nfa's with bounded
ambiguity we have to work with the communication matrices for regular
languages. In Fact 5 we have observed that every matrix $M_L$ has a finite
number of different rows, which is the index $s(L)$ of the regular language
$L$ (this means that there exists an $s(L) \times s(L)$ (finite) submatrix $M$
of $M_L$ such that $\mathrm{Row}(M) = \mathrm{Row}(M_L)$ and $\mathrm{Rank}_F(M) = \mathrm{Rank}_F(M_L)$ for
every field $F$ with neutral elements 0 and 1). Thus, instead of introducing general
two-way uniform communication protocols, we define the communication
complexity of $L$, denoted $cc(L)$, as the communication complexity of the
best protocol for the communication matrix $M_L$. Because of the definition
of $M_L$, this approach covers the requirement that the protocol correctly decides
membership of any input to $L$ for any prefix-suffix partition of the
input.
Before formulating the main result of this section we build our intuition
about the connection between $cc(L)$ and $uns(L)$. If one simulates an unambiguous
automaton by a nondeterministic one-way protocol in the standard
way described above, then the resulting protocol is unambiguous, too. This
means that every 1 in $M_L$ is covered by exactly one accepting computation,
i.e., the unfa $A$ determines an exact cover of all 1's in $M_L$ of cardinality at most
$\mathrm{size}_A$. The similarity to the deterministic communication complexity is that
any such protocol determines an exact cover of all elements of the communication
matrix by monochromatic submatrices. Some nontrivial results from
communication complexity theory [KNSW94] are needed to relate $cc(L)$ and
$uns(L)$ via the outlined connection.
Theorem 1. For every regular language $L \subseteq \Sigma^*$,
a) $uns(L) \ge \mathrm{Rank}_Q(M_L)$,
b) $ns_k(L) \ge \mathrm{Rank}_Q(M_L)^{1/k} - 1$,
c) $ns_k(L) \ge 2^{cc(L)/k^2} - 2$.
Proof: Let $A$ be an optimal unfa for $L$. $A$ can be simulated by a
one-way nondeterministic protocol as follows: $C_I$ simulates $A$ on its input
and communicates the obtained state. $C_{II}$ continues the simulation and
accepts/rejects accordingly. Obviously the number of messages is equal to
$\mathrm{size}_A$ and the protocol works with unambiguous nondeterminism.
It is easy to see that the messages of the protocol correspond to $\mathrm{size}_A$
many submatrices of the matrix $M_L$ covering all ones exactly once. Hence
the rank is at most $\mathrm{size}_A$ and we have shown a), which is the rank lower
bound on communication complexity [MS82] (see Fact 3 in Section 2).
For b) observe that the above simulation induces a cover of the ones in
$M_L$ so that each one is covered at most $k$ times. By the following fact from
[KNSW94] we are done:
Fact 7. Let $\sigma_r(M)$ denote the minimal size of a set of submatrices covering
the ones of a Boolean matrix $M$ so that each 1 is covered at most $r$ times.
Then $\mathrm{Rank}_Q(M) \le (\sigma_r(M) + 1)^r - 1$.
For the other claim again simulate $A$ by a one-way $k$-ambiguous nondeterministic
protocol with $\mathrm{size}_A$ messages.
The results of [KNSW94] (see also [L90], [Y91]) imply that a $k$-ambiguous
nondeterministic one-way protocol with $m$ messages can be simulated
by a deterministic two-way protocol with communication at most $k^2 \log(m + 1)$. Hence
$cc(L) \le k^2 \log(\mathrm{size}_A + 1)$,
and c) follows. $\square$
Before giving an application of the lower bound method we point out
that neither $2^{cc(L)}$ nor $\mathrm{Rank}_Q(M_L)$ is a lower bound method capable of
proving polynomially tight lower bounds on the minimal size of unfa's for all
languages. In the first case this is trivial, in the second case it follows from a
modification of a result separating rank from communication complexity (see
[KN97]). But the gap between $\mathrm{Rank}_Q(M_L)$ and $uns(L)$ may be bounded
by a pseudo-polynomial function.
Now we apply Theorem 1 in order to present an exponential gap between
$ns(L)$ and $uns(L)$ for a specific regular language. Let, for every positive
integer $m$, $NID_m := \{uv \mid u, v \in \{0,1\}^m, u \ne v\}$.
Theorem 2. For every positive integer $m$:
(i) $NID_m$ can be recognized by an nfa $A$ with ambiguity $O(m)$ and size
$O(m^2)$.
(ii) Any nfa with ambiguity $k$ for $NID_m$ has size at least $2^{m/k} - 1$, and
in particular any unfa for $NID_m$ must have at least $2^m - 1$ states.
(iii) No nfa with ambiguity $o(m/\log m)$ for $NID_m$ has polynomial size
in $m$.
Proof:
(i) First the nfa guesses a residue $i$ modulo $m$, and then checks whether
there is a position $p \equiv i \pmod m$ on which the two halves of the input differ.
(ii) Observe that the submatrix spanned by all words $u$ and $v$ with
$|u| = |v| = m$ is the "complement" of the $2^m \times 2^m$ identity matrix. The
result now follows from assertions a) and b) of Theorem 1.
(iii) is an immediate consequence of (ii). $\square$
We see that the proof of Theorem 2 is a substantial simplification of the
proofs of similar results presented in [Sc78], [SH85].
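The upper bound (i) is easy to realize explicitly. The following sketch is our own variant of the construction (guessing the mismatch position directly rather than a residue class); it uses $O(m^2)$ states, has ambiguity at most $m$, and is verified by brute force against the definition of $NID_m$ for a small $m$.

```python
from itertools import product

def nid_nfa(m):
    # ("pre", i): i symbols of u read, mismatch position not yet guessed;
    # ("mid", p, b, j): guessed position p with u_p = b, skipped j of the m-1
    # symbols before v_p; ("post", j): mismatch verified, j symbols left to read.
    delta = {}
    def add(q, a, r):
        delta.setdefault((q, a), set()).add(r)
    for i in range(m):
        for b in "01":
            if i < m - 1:
                add(("pre", i), b, ("pre", i + 1))
            add(("pre", i), b, ("mid", i + 1, b, 0) if m > 1 else ("cmp", 1, b))
    for p in range(1, m + 1):
        for b in "01":
            for j in range(m - 1):
                for c in "01":
                    add(("mid", p, b, j), c,
                        ("mid", p, b, j + 1) if j < m - 2 else ("cmp", p, b))
            for c in "01":
                if c != b:                      # v_p differs from u_p: success
                    add(("cmp", p, b), c, ("post", m - p))
    for j in range(1, m):
        for c in "01":
            add(("post", j), c, ("post", j - 1))
    return delta, ("pre", 0), {("post", 0)}

def accepts(delta, start, accepting, w):
    cur = {start}
    for a in w:
        cur = {r for q in cur for r in delta.get((q, a), set())}
    return bool(cur & accepting)

m = 3
d, q0, F = nid_nfa(m)
for u, v in product(product("01", repeat=m), repeat=2):
    assert accepts(d, q0, F, "".join(u + v)) == (u != v)
```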
4 Degrees of Nondeterminism in Finite Automata
It is easy to see that $\mathrm{advice}_A(n) \le \mathrm{leaf}_A(n) \le 2^{O(\mathrm{advice}_A(n))}$ and also that
$\mathrm{ambig}_A(n) \le \mathrm{leaf}_A(n)$ for every nfa $A$. The aim of this section is to investigate
whether stronger relations between these measures hold.
Lemma 2. For all nfa $A$ either
a) $\mathrm{advice}_A(n) \le \mathrm{size}_A$ and $\mathrm{leaf}_A(n) \le \mathrm{size}_A^{\mathrm{size}_A}$, or
b) $\mathrm{advice}_A(n) \ge n/\mathrm{size}_A - 1$ and $\mathrm{leaf}_A(n) \ge n/\mathrm{size}_A - 1$.
[Figure 2: a computation tree in which a single path carries all nondeterministic branchings]
Proof: If some reachable state $q$ of $A$ belongs to a cycle in $A$ and if
$q$ has two edges with the same label originating from it such that one of
these edges belongs to the cycle, then $\mathrm{advice}_A(n) \ge (n - \mathrm{size}_A)/\mathrm{size}_A \ge
n/\mathrm{size}_A - 1$. Otherwise, on every word all states with a nondeterministic
decision are traversed at most once. $\square$
Our next lemma relates the leaf function to ambiguity. The initial idea
is that a computation tree of any minimal unfa $A$ on any input $w$ could
look like the tree from Figure 2. There is exactly one path $P$ from the
root to a leaf (a computation) with several nondeterministic guesses, and
all paths having only one vertex in common with $P$ do not contain any
nondeterministic branching. In other words, if a computation branches into
two computations $P_1$ and $P_2$, then at least one of $P_1$ and $P_2$ should be
completely deterministic. We are not able to verify this nice structure, but
the next result shows that any computation tree of a minimal unfa $A$ is very
thin, because every level of this tree can contain at most $\mathrm{size}_A + 1$ different
computations.
In what follows a state $q$ of an nfa $A$ is called terminally rejecting if there
is no word and no computation of $A$ such that $A$ accepts when starting in
$q$, i.e., $\delta(q, v)$ contains no accepting state for any word $v$. Clearly there
is at most one terminally rejecting state in a minimal automaton, because
otherwise these states can be joined, reducing the size. Call all other states
of $A$ undecided.
Lemma 3. Every nfa $A$ with at most one terminally rejecting state satisfies
$\mathrm{leaf}_A(x) \le \mathrm{ambig}_A(|x| + \mathrm{size}_A) \cdot \mathrm{size}_A \cdot (|x| + 1)$
for all $x$.
Proof: Set $k := \mathrm{ambig}_A(|x| + \mathrm{size}_A)$. If the computation tree consists
only of nodes marked with the terminally rejecting state, then the tree has
just one leaf and the claim is trivial. For the general case, consider a level
of the computation tree of $A$ on $x$ that is not the root level. Assume that
the level contains more than $k \cdot \mathrm{size}_A$ nodes labeled with undecided states
(called undecided nodes). Then one undecided state $q$ must appear at least
$k + 1$ times on this level. Hence there are $k + 1$ computations of $A$ on a prefix of
$x$ such that $q$ is reached. If $q$ is accepting, then the prefix of $x$ is accepted
with ambiguity at least $k + 1$, a contradiction, since $\mathrm{ambig}_A$ is monotone. If $q$ is
rejecting, but undecided, then there is a word $v$ of length at most $\mathrm{size}_A$ such
that $v$ is accepted by some computation of $A$ starting in $q$. But then the
prefix of $x$ concatenated with $v$ is accepted by at least $k + 1$ computations,
again a contradiction.
Thus each level of the tree that is not the root level contains at most
$k \cdot \mathrm{size}_A$ undecided nodes. Overall there are at most $|x| \cdot k \cdot \mathrm{size}_A$
undecided nodes outside the root level.
Observe that each node has at most one terminally rejecting child. Thus
the number of terminally rejecting leaves is equal to the number of undecided
nodes that have a terminally rejecting child. Hence the number of terminally
rejecting leaves is at most the number of undecided nodes minus the number
of undecided leaves. Thus the overall number of leaves is at most the number
of terminally rejecting leaves plus the number of undecided leaves, which
is at most the number of undecided nodes. So overall there are at most
$k \cdot \mathrm{size}_A \cdot (|x| + 1)$ leaves. $\square$
Theorem 3. Every nfa $A$ with at most one terminally rejecting state satisfies
$\mathrm{advice}_A(n), \mathrm{ambig}_A(n) \le \mathrm{leaf}_A(n) \le O(\mathrm{ambig}_A(n) \cdot \mathrm{advice}_A(n))$.
Especially, for any such unfa, $\mathrm{advice}_A(n) = \Theta(\mathrm{leaf}_A(n))$.
Proof: Observe that for all $n$: $\mathrm{ambig}_A(n + \mathrm{size}_A) = O(\mathrm{ambig}_A(n))$,
since $\mathrm{ambig}_A$ is monotone and at most exponential. $\square$
Next we further investigate the growth of the leaf function. Lemma 4 is
a variation of a result in [IR86].
Lemma 4. For every nfa $A$, either $\mathrm{leaf}_A(n) \le (n \cdot \mathrm{size}_A)^{\mathrm{size}_A}$ or $\mathrm{leaf}_A(n) = 2^{\Theta(n)}$.
Proof: Assume that an nfa $A$ contains some state $q$ such that $q$ can
be reentered on two different paths starting in $q$, where each path is labeled
with the same word $w$. It is not hard to show that in this case there are
two different paths from $q$ to $q$ labeled with a common word $w$ of length at most $\mathrm{size}_A^2$.
Then the computation tree of $u w^m$ (where $u$ leads from the starting state
to $q$) has at least $2^m \ge 2^{(n - \mathrm{size}_A)/\mathrm{size}_A^2}$ leaves, where $n = |u w^m|$.
Now assume that $A$ does not contain such a state. Then, for each nondeterministic
state $q$ (i.e., a state with more than one successor for the same
letter) and any computation tree, the following holds: if $q$ is the label of a
vertex $v$, then $q$ appears in each level of the subtree of $v$ at most once.
We prove by induction on the number $k$ ($k \le \mathrm{size}_A$) of different nondeterministic
states in a computation tree that the number of leaves is at most
$(n \cdot \mathrm{size}_A)^k$. The claim is certainly true if there are no nondeterministic
states.
Assume that there are $k$ nondeterministic states, with some state $q_1$
appearing first in the tree. Observe that no level in the entire computation
tree contains $q_1$ more than once.
For each occurrence of $q_1$ in the computation tree fix some child, so
that the overall number of leaves is maximized. We get a tree with one
nondeterministic state less, and by the inductive hypothesis this tree has at
most $(n \cdot \mathrm{size}_A)^{k-1}$ leaves.
Since $q_1$ appears at most once on each level and since there are at most
$\mathrm{size}_A$ children of $q_1$ on each level, there are at most $(n \cdot \mathrm{size}_A)^k$ leaves. $\square$
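The case distinction driving this proof is effectively testable: leaf growth is exponential iff some state admits two distinct equally-labeled cycles back to itself. The sketch below (ours) checks this on the product automaton $A \times A$, looking for a pair $(p, p')$ with $p \ne p'$ on a cycle through some $(q, q)$; for brevity it ignores reachability from the start state.

```python
def has_exponential_leaf_growth(states, alphabet, delta):
    pairs = [(p, q) for p in states for q in states]
    def succ(pair):                       # one synchronized step of A x A
        p, q = pair
        return {(p2, q2) for a in alphabet
                for p2 in delta.get((p, a), set())
                for q2 in delta.get((q, a), set())}
    reach = {x: succ(x) for x in pairs}   # close under composition (>= 1 step)
    changed = True
    while changed:
        changed = False
        for x in pairs:
            new = set().union(*(reach[y] for y in reach[x])) if reach[x] else set()
            if not new <= reach[x]:
                reach[x] |= new
                changed = True
    # two different equally-labeled q->q paths exist iff (q,q) ->+ (p,p') ->+ (q,q)
    return any((p, q) in reach[(s, s)] and (s, s) in reach[(p, q)]
               for s in states for (p, q) in pairs if p != q)

# Word "aa" labels two different paths from state 1 back to 1 => exponential growth.
delta = {(1, "a"): {1, 2}, (2, "a"): {1}}
print(has_exponential_leaf_growth({1, 2}, {"a"}, delta))   # True
```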
Lemmas 2 and 4 give us:
Theorem 4. For every nfa $A$: $\mathrm{leaf}_A(n)$ is either bounded by a constant, or
in between linear and polynomial in $n$, or otherwise $2^{\Theta(n)}$.
Now we consider the difference between polynomial and exponential
ambiguity resp. polynomial and exponential leaf number. We show that
languages which have small automata of polynomial ambiguity are related
to the concatenation of languages having small unfa's. If the language is
a Kleene closure, then one unfa accepts a large subset. Compare this to
the situation for constant advice, where Kleene closures are shown to be recognizable as efficiently
by nfa's with constant advice as by dfa's.
Theorem 5. a) Let $L$ be an infinite regular language and $A$ some nfa for
$L$ with polynomial ambiguity. Then there are $d \le \mathrm{size}_A$ languages $L_i$ such
that $L_1 \cdots L_d \subseteq L$, each $L_i$ is recognizable by an unfa with $O(\mathrm{size}_A)$ states, and
$|L_1 \cdots L_d \cap \Sigma^n| \ge |L \cap \Sigma^n| / \mathrm{poly}(n)$ for infinitely many $n$.
b) Let $L = (K\#)^*$ for a regular language $K$ not using the letter $\#$, and
let $A$ be some nfa for $L$ with polynomial ambiguity. Then, for all $m$, there is
an unfa $A'$ with $O(\mathrm{size}_A)$ states that decides an $L' \subseteq L$ such that for infinitely
many $n$, $L'$ contains at least a $1/\mathrm{poly}(n)$ fraction of $((\Sigma^m \cap K)\#)^n \cap L$.
Proof: a) Define the ambiguity graph of $A$ in the following way: the
nodes are the (reachable) states of $A$, and there is an edge from $q_i$ to $q_j$ if
there are two different paths from $q_i$ to $q_j$ in $A$ with the same label sequence. Note
that the ambiguity graph is acyclic iff the ambiguity of $A$ is polynomially
bounded, as we have seen in the proof of Lemma 4.
Now we construct an unfa $A_{i,j,k}$ which accepts those words that lead in
$A$ from $q_i$ to $q_j$ and then via one edge to $q_k$. Here, we assume that the
longest path from $q_i$ to $q_k$ in the ambiguity graph consists of one edge and
$q_j$ is reachable from $q_i$ in $A$, but not in the ambiguity graph. Moreover, we
demand that there is an edge in $A$ from $q_j$ to $q_k$.
The states of $A_{i,j,k}$ are the states reachable in $A$ from $q_i$, but not reachable
in the ambiguity graph from $q_i$, plus the state $q_k$. The edges are as in
$A$ except that the only edges to $q_k$ come from $q_j$. $q_i$ is the start state. The accepting
state is $q_k$. $L_{i,j,k}$ is the language accepted by $A_{i,j,k}$.
Now consider the words $w \in L \cap \Sigma^n$. Each such word is accepted on
some path in $A$ leading from $q_0$ to some accepting state $q_a$. Fix one such
accepting state so that a constant fraction of all words $w$ is accepted, and
make the other accepting states rejecting. On an accepting path for $w$
the states appear without violating the topological ordering of the ambiguity
graph. So, we may fix a sequence of states $q_0, q_{i_1}, \ldots, q_{i_d} = q_a$ such that
a $1/\mathrm{poly}(n)$ fraction of the words is accepted consistently with this sequence. Since there are only finitely many such
sequences, we are done.
b) Similar to a), we get $k$ languages $L_1, \ldots, L_k$ decidable by small unfa's
$A_i$, such that $L_1 \cdots L_k$ contains at least a $1/\mathrm{poly}(n)$ fraction of
$((\Sigma^m \cap K)\#)^n \cap L$ for infinitely many $n$.
A partition of the letters of words in $(\Sigma^m \#)^n$ is given by mapping the
$nm$ letters to the $k$ unfa's. There are at most $n^{O(k)}$ possible
partitions. So some partition must be consistent with accepting paths for
a fraction of $1/\mathrm{poly}(n)$ of $((\Sigma^m \cap K)\#)^n$. Fix one such partition. Then for
each word $w$ an unfa is responsible for some prefix $u$, followed
by a concatenation of words of the form $\# \sigma^m$, and finally a word of the
form $\# v$. For all $i$ we fix a prefix $u_i$, a suffix $v_i$, and states $q_i, q_i'$ entered
when reading the first and final occurrence of $\#$, such that as many words
from $((\Sigma^m \cap K)\#)^n$ as possible are accepted under this fixing. At least a
fraction of $\mathrm{size}_A^{-2k} \cdot 1/\mathrm{poly}(n)$ of $((\Sigma^m \cap K)\#)^n$ has accepting paths
consistent with this fixing.
If any $A_i$ accepts less than a polynomial fraction (compared to the projection
of $((\Sigma^m \cap K)\#)^n$ to the responsibility region of $A_i$), then overall less
than a polynomial fraction is accepted. Hence one $A_i$ can be found where,
from $q_i$, a polynomial fraction of the words in $((\Sigma^m \cap K)\#)^{n/k}$ leads to non-terminally
rejecting states in $A_i$. Making one non-terminally rejecting state
reached by a $\#$ edge accepting and removing the original accepting states
yields an unfa that accepts the desired subset for infinitely many $n$. $\square$
Applying Theorem 5 we can prove an exponential gap between general nfa's and
nfa's with polynomial ambiguity. This proof is also substantially simpler 4
than the proof of an exponential gap between polynomial ambiguity and
exponential ambiguity for the language family considered in [HL98].
Theorem 6. There is a family of languages $KL_m$ such that $KL_m$ can be
recognized by an nfa with advice $\Theta(n)$, leaf number $2^{\Theta(n)}$ and size $\mathrm{poly}(m)$, while
every nfa with polynomial leaf number/ambiguity needs size at least $2^{\Omega(\sqrt{m})}$
to recognize $KL_m$.
Proof: Let $LNDISJ_m := \{xy \mid x, y$ are incidence vectors of subsets
of a fixed-size universe with $\Theta(m)$ elements, and the sets intersect non-trivially$\}$.
Moreover, let $KL_m := (LNDISJ_m \#)^*$.
Given a polynomial ambiguity nfa for $KL_m$, we get an unfa accepting
a fraction of $1/\mathrm{poly}(n)$ of $(LNDISJ_m \#)^n$ for infinitely many $n$ by Theorem
5 b). Then we simulate the unfa by a nondeterministic communication
protocol, where player $C_I$ receives all $x$ inputs and player $C_{II}$ all $y$ inputs. The protocol
needs $O(n \log \mathrm{size}_A)$ bits to work correctly on a $1/\mathrm{poly}(n)$ fraction of
$(LNDISJ_m \#)^n$ and has unambiguous nondeterminism. A result from [HS96]
implies that this task needs communication $\Omega(n \sqrt{m})$,
and thus $\mathrm{size}_A = 2^{\Omega(\sqrt{m})}$.
$\square$
Thus, we have another strong separation between the size of automata
with polynomial ambiguity and the size of automata with exponential ambiguity.
The situation seems to be more complicated if one compares constant
and polynomial ambiguity. Ravikumar and Ibarra [RI89] and Hing Leung
[HL98] considered it as the central open problem related to the degree of
ambiguity of nfa's. Here, we can only show that there is a family $KON_m$ of
languages with small size nfa's of polynomial ambiguity, while nfa's of ambiguity
$k$ are exponentially larger. In the following theorem we describe a
candidate for a language that has efficient nfa's only when ambiguity is polynomial.
Furthermore the language exhibits an almost optimal gap between
the size of unfa's and polynomial ambiguity nfa's. In the proof the rank of
the communication matrix of $KON_m$ is shown to be large by a reduction
from the disjointness problem.
Theorem 7. Let $KON_m \subseteq \{0,1\}^*$ contain all
words with a sub-word enclosed by two 0's whose number of 1's is positive and divisible by $m$. $KON_m$ can be
recognized by an nfa $A$ with $\mathrm{ambig}_A(n), \mathrm{leaf}_A(n) = \mathrm{poly}(n)$ and size $O(m)$,
while any nfa with ambiguity $k$ for $KON_m$ needs at least $2^{(m-1)/k} - 2$ states.
Proof: Since the upper bound of Theorem 7 is obvious, we focus on
proving the lower bound.
Consider the communication problem for the complement of the disjointness
predicate, $NDISJ_l$. The inputs are of the form $x, y \in \{0,1\}^l$, where $x$
and $y$ are interpreted as incidence vectors of subsets of a size-$l$ universe.
4 If the known results about communication complexity are for free (i.e., not included
in the measurement of the proof difficulty).
The goal is to find out whether the two sets have a nontrivial intersection.
Note that the rank of the communication matrix $M_{NDISJ_l}$ is $2^l - 1$. We
reduce $NDISJ_{m-1}$ to $KON_m$, i.e., we identify a submatrix of $M_{KON_m}$ that is
the communication matrix $M_{NDISJ_{m-1}}$.
Consider inputs to $KON_m$ of the form $x_s = 01^{a_1} 0 1^{a_2} 0 \cdots$, with the
exponents chosen with respect to addition over $\mathbb{Z}_m$. For any subset $s \subseteq \{1, \ldots, m-1\}$ one can find such an input $x_s$.
These inputs correspond to the rows of our submatrix.
For each subset $r \subseteq \{1, \ldots, m-1\}$ one similarly fixes an input $y_r$ of the
same form. These $2^{m-1}$ inputs correspond to the columns of our
submatrix.
Now consider the obtained submatrix: if $s$ and $r$ intersect non-trivially,
then $x_s y_r \in KON_m$. On the other hand, if $s$ and $r$ are disjoint, then there
is no sub-word which has a number of 1's divisible by $m$, so $x_s y_r$ is not in $KON_m$.
We have identified a submatrix of rank $2^{m-1} - 1$.
Applying Theorem 1 b) we obtain our lower bound. $\square$
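The rank claim underlying this reduction can be confirmed numerically for a small $m$. The sketch below (ours) builds the intersection matrix over all subsets of $\{1, \ldots, m-1\}$ and computes its rank over $Q$ with exact rationals.

```python
from fractions import Fraction
from itertools import chain, combinations

def rank_Q(M):
    M = [[Fraction(x) for x in row] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                f = M[r][c] / M[rank][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

m = 4
universe = range(1, m)
subsets = list(chain.from_iterable(combinations(universe, k) for k in range(m)))
M = [[1 if set(s) & set(r) else 0 for r in subsets] for s in subsets]
print(len(subsets), rank_Q(M))   # 8 subsets, rank 7 = 2^(m-1) - 1
```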
For every constant $m$, the language $KON_{m^2}$ of Theorem 7 can be recognized
with size $O(m^2)$, leaf number and ambiguity $\Theta(n)$, and advice $\Theta(n)$,
while every $m$-ambiguous nfa has size $2^{\Omega(m)}$. Jurdzinski [Ju00] observed
that $KON_{m^2}$ can be computed by nfa's with constant ambiguity and size
$\mathrm{poly}(m)$. Therefore the analysis of Theorem 7 cannot be improved substantially.
Jurdzinski's observation also applies to the language $\{0,1\}^* 0^k \{0,1\}^*$,
which was proposed in [RI89] for separating constant from polynomial ambiguity.
5 Conclusions and Open Problems
We have shown that communication complexity can be used to prove lower
bounds on the size of nfa's with small ambiguity. This approach is limited,
because for nontrivial bounds the ambiguity has to be smaller than the size
of a minimal nfa. Is it possible to prove lower bounds for automata with
arbitrarily large, but constant, ambiguity when equivalent automata of small
size and polynomial ambiguity exist?
In this context it would also be of interest to investigate the fine structure
of languages with regard to constant ambiguity. At best one could show
exponential differences between the number of states for ambiguity $k$ and
the number of states for ambiguity $k + 1$. Observe however, that such an
increase in power is impossible provided that the size of unfa's does not
increase substantially under complementation [K00]. Analogous questions
apply to polynomial and exponential ambiguity.
Are there automata with non-constant but sub-linear ambiguity? A
negative answer would establish Theorem 3 also for ambiguity as complexity
measure.
Other questions concern the quality of communication as a lower bound
method. How far can $\mathrm{Rank}_Q(M_L)$ resp. $2^{cc(L)}$ be from the actual size of minimal
unfa's? Note that the bounds are not polynomially tight. Are there
alternative lower bound methods?
Finally, what is the complexity of approximating the minimal number of
states of an nfa?
--R
Lower bounds on information transfer in distributed computations.
On notions of information transfer in VLSI circuits.
On measuring nondeterminism in regular languages.
On the relation between ambiguity and nondeterminism in finite automata.
Separating exponentially ambiguous finite automata from polynomially ambiguous finite automata.
On sparseness, ambiguity and other decision problems for acceptors and transducers.
Personal communication.
Lower bounds for computation with limited nondeterminism.
On automata with constant ambiguity.
Communication Complexity.
Las Vegas is better than determinism in VLSI and distributed computing.
Economy of description by automata, grammars, and formal systems.
On the bounds for state-set size in the proofs of equivalence between deterministic, nondeterministic, and two-way finite automata.
On ranks vs. communication complexity.
Communication complexity.
Communication Complexity.
Relating the type of ambiguity of finite automata to the succinctness of their representation.
Finite automata and their decision problems.
Lower bounds on the size of sweeping automata.
Succinctness of descriptions of context-free, regular and finite languages.
On the equivalence and containment problems for unambiguous regular expressions.
A complexity theory for VLSI.
Expressing combinatorial optimization problems by linear programs.
Some complexity questions related to distributed computing.
nondeterminism;limited ambiguity;descriptional complexity;communication complexity;finite automata
507384 | On first-order topological queries. | One important class of spatial database queries is the class of topological queries, that is, queries invariant under homeomorphisms. We study topological queries expressible in the standard query language on spatial databases, first-order logic with various amounts of arithmetic. Our main technical result is a combinatorial characterization of the expressive power of topological first-order logic on regular spatial databases. | Introduction
The expressive power of first-order logic over finite relational
databases is now well understood [AHV95, EF95].
Much less is known in spatial databases (also called constraint
databases), where the relations are no longer finite
but finitely represented [KLP99].
The notion of genericity (invariance of queries under
isomorphisms), fundamental for the relational database
model, can be generalized to spatial databases in various
ways [PVV94]. Given a group $G$ of transformations (translations,
affinities, isometries, similarities, homeomorphisms,
etc.), a query $Q$ is $G$-generic if for all database instances
$I$ and each transformation $g \in G$ we have $Q(g(I)) = g(Q(I))$. By
$\mathrm{FO}_G$ we denote the set of $G$-generic first-order queries. The
genericity of a first-order query is undecidable [PVV94],
but the expressive power of FO G can be understood via
sound and complete (decidable) languages. A language is
said to be sound for G if it contains only FO G queries. It is
complete for G if it expresses all FO G queries. The choice of
the group G depends on which information one is interested
in. [GVV97] gives sound and complete languages for several
natural groups of transformations (translations, affinities,
isometries, similarities). The case of the group of homeomorphisms
was left open.
To appear in Proceedings of the 15th IEEE Symposium on Logic in
Computer Science. c
IEEE 2000.
Queries invariant under homeomorphisms, which are also
called topological queries, are of fundamental importance
in various applications of spatial databases. For ex-
ample, in geographical databases, queries like "Is region A
adjacent to region B?", "Is there a road from A to B?", or "Is
A an island?" come up very naturally. Therefore, topological
queries have received a lot of attention in the literature
(e.g. [KPV97, PSV99, SV98, KV99]). A basic result known
about topological queries is that connectivity of a region is
not expressible in first-order logic [GS99, GS97, BDLW96].
Thinking of geographical databases again, planar (or 2-
dimensional) database instances, where all relations are embedded
in the plane R 2 , are of particular importance. In
[PSV99] it has been proven that all topological properties
of a planar spatial database can be represented in a finite
structure called the topological invariant of the instance.
In [SV98] it has been shown how this topological invariant
can be used to answer topological queries. In particular,
[SV98] have proven that first-order topological queries on
a spatial database can be automatically translated into fixpoint
queries on the topological invariant. The translation
of first-order topological queries on the spatial database into
first-order queries on the topological invariant was proven
possible only in the special case of a single relation representing
a closed region. It was left open in [SV98] whether
this translation could be extended to the case of several re-
gions. We answer this question negatively.
The idea of representing the topological information of
a spatial database instance by the topological invariant has
two important drawbacks: In a sense, the topological invariant
contains too much information; ideally we would
just want to store the information that is actually accessible
by the query language (which is usually FO). Furthermore,
the topological invariant has no straightforward generalization
to higher dimensions. The issue of finding an invariant
more suitable for FO (and computable in any dimension)
was raised in [KPV97].
In the special case of one single relation representing a
closed planar region, a cone structure was given in [KPV97]
capturing precisely the first-order topological information.
Intuitively, the cone structure is a finite set containing all
the possible small neighborhoods of a point. The results of
[KPV97] show that, in this context, first-order topological
queries could express only local properties, which is a situation
known to be true in the finite case. [KPV97] asked
whether their results generalize to database instances with
a region that is not necessarily closed; we give a negative
answer to this question. For instances with one closed region
that satisfy the additional technical condition of being
fully two dimensional, [KV99] introduced a cone logic CL
and proved that it is sound and complete for topological FO.
They asked if their results generalize to instances with not
necessarily closed regions or several regions, again we give
a negative answer.
[KPV97] introduced two local operations on spatial
database instances that preserve the equivalence under first-order
topological queries (called topological elementary equivalence).
We call two instances $\approx$-equivalent if they can
be transformed into instances homeomorphic to each other
by applying the operations of [KPV97] finitely often. Our
main technical result, from which all the rest easily follows,
is that on especially simple instances that we call regular,
$\approx$-equivalence and topological first-order equivalence coincide.
The paper is organized as follows: After recalling a few
basic definitions on spatial databases in Section 2, in Section
3 we discuss the topology of planar spatial databases
and the topological invariant in detail. In Section 4 we introduce
topological first-order queries and review some results
of [KPV97]. In Section 5, we prove that $\approx$-equivalence is
decidable in PSPACE. Our main result on regular instances
is proved in Section 6. In Section 7, we derive that not
all first-order topological queries can be translated to first-order
queries on the topological invariant, and in Section 8
we briefly discuss the problem of finding a language that is
sound and complete for topological FO.
2. Preliminaries
Spatial databases. We fix an underlying structure $\mathcal{R}$ over
the reals; either we let $\mathcal{R} = \mathcal{R}_{lin} :=
(\mathbb{R}, <, +, 0, 1)$ or $\mathcal{R} = \mathcal{R}_{poly} := (\mathbb{R}, <, +, \times, 0, 1)$. 1 Let $\Lambda$
be the vocabulary of $\mathcal{R}$ (i.e. either $\{<, +, 0, 1\}$ or $\{<, +, \times, 0, 1\}$).
For a point $a \in \mathbb{R}^2$ and $r > 0$ let $B_r(a) := \{b \in \mathbb{R}^2 \mid
\|a - b\| < r\}$ be the open ball with radius $r$ around $a$. 2
1 As a matter of fact, we could let $\mathcal{R}$ be any o-minimal structure over
the reals, and the main results would remain true.
2 For $\mathcal{R} = \mathcal{R}_{poly}$ we may also let $B_r(a)$ be the Euclidean
open disk of radius $r$ around $a \in \mathbb{R}^n$, and we will
assume we have done so in our figures - it just looks better. Since we are
only interested in topological queries, this makes no difference.
A subset $S \subseteq \mathbb{R}^n$, for some $n \ge 1$, is $\mathcal{R}$-definable if
there is a first-order formula $\varphi(x_1, \ldots, x_n)$ of vocabulary $\Lambda$ such
that $S = \{a \in \mathbb{R}^n \mid \mathcal{R} \models \varphi(a)\}$.
A schema $\Sigma$ is a finite collection of region names. Let
$n \ge 1$. An $n$-dimensional spatial database instance $I$ over $\Sigma$
associates an $\mathcal{R}$-definable set $R^I \subseteq \mathbb{R}^n$ with every $R \in
\Sigma$. The sets $R^I$ are called the regions of $I$. Formally, we
may interpret the region names as $n$-ary relation symbols
and view an instance $I$ over $\Sigma$ as a first-order structure of
vocabulary $\Lambda \cup \Sigma$, obtained by expanding the underlying
structure $\mathcal{R}$ by the relations $R^I$, for $R \in \Sigma$.
In this paper, we only consider 2-dimensional (or planar)
spatial database instances. For convenience, we also
assume that all regions are bounded, i.e. that for every instance
$I$ over a schema $\Sigma$ and for every $R \in \Sigma$ there exists a
$b \in \mathbb{R}$ such that $\|a\| \le b$ for all $a \in R^I$. The boundedness
assumption is inessential and can easily be removed, but it
occasionally simplifies matters.
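In code, an instance amounts to a finitely represented membership predicate per region name. The following sketch (ours; names and representation are assumptions) stores a semi-algebraic region as a Python predicate, using the half-open annulus that reappears in Example 5.3 below.

```python
# A 2-dimensional instance over schema {"R"}: each region is given by an
# R-definable (here semi-algebraic) membership predicate.
instance = {
    "R": lambda x, y: 0.25 < x * x + y * y <= 1.0,   # the annulus 1/2 < ||a|| <= 1
}

def point_in_region(inst, name, a):
    return inst[name](*a)

assert point_in_region(instance, "R", (0.9, 0.0))
assert not point_in_region(instance, "R", (0.0, 0.0))
```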
Queries. An n-ary query (for an n 0) of schema is
a mapping Q that associates an R-definable subset Q(I)
R n with every instance I over . Here we consider R 0 as
a one point space and let R
query is usually called Boolean query.
As our basic query language we take first-order logic FO.
of vocabulary R [
defines the n-ary query
I 7! '(I) := f(a an an )g
of schema .
3. The topology of planar instances
$\mathbb{R}^2$ is equipped with the usual topology.
The interior of a set $S \subseteq \mathbb{R}^2$ is denoted by $\mathrm{int}(S)$, the
closure by $\mathrm{cl}(S)$, and the boundary by $\mathrm{bd}(S)$. We say that
a set $S \subseteq \mathbb{R}^2$ touches a point $a \in \mathbb{R}^2$ if
$a \in \mathrm{cl}(S)$. Two
sets $S_1, S_2$ touch if $S_2$ touches a point $a \in S_1$ or vice
versa.
Stratifications. Stratification is the fundamental fact that
makes the topology of our instances easy to handle.
A stratification of an instance $I$ over $\Sigma$ is a finite partition
$\mathcal{S}$ of $\mathbb{R}^2$ such that
(1) For all $S \in \mathcal{S}$, either $S$ is a one point set, or $S$ is homeomorphic
to the open interval $(0, 1)$, or $S$ is homeomorphic
to the open disk $D$.
(2) For all $S, S' \in \mathcal{S}$,
$\mathrm{cl}(S) \cap \mathrm{cl}(S')$ is the union of elements of $\mathcal{S}$.
(3) For all $R \in \Sigma$ and $S \in \mathcal{S}$ we either have $S \subseteq R^I$ or $S \cap R^I = \emptyset$.
The following lemma follows from the fact that all regions
of an instance are R-definable. A proof can be found
in [vdD98].
Lemma 3.1. For every instance I there exists a stratification
of I .
Colors and cones. Let $I$ be an instance over $\Sigma$. The pre-color
of a point $a \in \mathbb{R}^2$ is the mapping $\iota(a) : \Sigma \to
\{\mathrm{int}, \mathrm{bdi}, \mathrm{bde}, \mathrm{ext}\}$ defined by $\iota(a)(R) =$
$\mathrm{int}$ if $a \in \mathrm{int}(R^I)$;
$\mathrm{bdi}$ if $a \in \mathrm{bd}(R^I) \cap R^I$;
$\mathrm{bde}$ if $a \in \mathrm{bd}(R^I) \setminus R^I$;
$\mathrm{ext}$ if $a \in \mathbb{R}^2 \setminus \mathrm{cl}(R^I)$.
A pre-cell is a maximal connected set of points of the same
pre-color. The cone of a point a 2 R 2 , denoted by cone(a),
is the circular list of the pre-colors of all pre-cells touching
a. Lemma 3.1 implies that cones are well-defined and finite.
A point $a \in \mathbb{R}^2$ is regular if for every neighborhood $U$ of
$a$ there is a point $a' \in U$ with $a' \ne a$ such that $\lambda(a') = \lambda(a)$.
Otherwise $a$ is singular. It follows from Lemma 3.1 that
an instance has only finitely many singular points. We call
an instance regular if it has no singular points. The cones
Figure 1. Two singular and two regular cones
of regular (singular) points are also called regular (singular,
resp.) (cf. Figure 1).
The cone-type of I , denoted by ct(I), is a list of all cones
appearing in I . Furthermore, for every singular cone this list
also records how often it occurs.
The color $\lambda(a)$ of a point $a \in \mathbb{R}^2$ is the pair $(\iota(a), \mathrm{cone}(a))$.
Cells. A cell of color $\lambda$ of $I$ is a maximal connected set
of points of color $\lambda$. The color of a cell $C$ is denoted by
$\lambda(C)$. Lemma 3.1 implies that there are only finitely many
cells. Our assumption that all regions are bounded implies
that there is precisely one unbounded cell, which we call
the exterior of $I$. Lemma 3.1 implies that every cell has a
well defined dimension, which is either 0, 1, or 2. The 0-dimensional
cells are precisely the sets $\{a\}$, where $a$ is a
singular point.
Let $\mathcal{C}^I$ be the set of all cells of an instance $I$. We define
a binary adjacency relation $E^I$ on $\mathcal{C}^I$ by letting two cells
be adjacent if, and only if, they touch. We call the graph
$G^I := (\mathcal{C}^I, E^I)$ the cell graph of $I$. We can partition $\mathcal{C}^I$ into
three subsets $\mathcal{C}^I_0$, $\mathcal{C}^I_1$, and $\mathcal{C}^I_2$ consisting of the 0-, 1-, and 2-dimensional
cells, respectively. Observe that the graph $G^I$ is
tri-partite with partition $(\mathcal{C}^I_0, \mathcal{C}^I_1, \mathcal{C}^I_2)$ and that $G^I$ is planar.
Lemma 3.2. Let $I$ be an instance and $C \in \mathcal{C}^I$.
(1) If $C \in \mathcal{C}^I_2$, then either $C$ is homeomorphic to the open
disk $D := B_1(0)$, or there exists an $m \ge 1$ such that $C$
is homeomorphic to the open disk $D_m$ with $m$ holes.
To be definite, we let
$D_m := D \setminus \bigcup_{i=1}^{m} \mathrm{cl}(B_i)$ for suitable pairwise disjoint closed balls $\mathrm{cl}(B_i) \subseteq D$.
(2) If $C \in \mathcal{C}^I_1$, then $C$ is homeomorphic to the sphere $S^1$
or to the open interval $(0, 1)$.
Proof: This follows easily from Lemma 3.1. 2
The skeleton S I of an instance I is the set of all 0-
dimensional cells and all 1-dimensional cells homeomorphic
to (0; 1). Note that the skeleton of a regular instance is
empty.
Lemma 3.3. Let I be an instance. Then every connected
component of the graph G I n S I is a tree.
In particular, if I is regular then G I is a tree.
Proof: Follows from the Jordan Curve Theorem and Lemma
3.2. 2
Figure 2 illustrates a typical connected component of an
instance $I$ after removing the skeleton. Note that every connected
component of the graph $G^I \setminus S^I$ has a unique "exterior"
cell, which we may consider as the root of the tree.
Having the tree directed by fixing this root, we may speak
of the parent and the children of a node.
[Figure 2: a typical tree component of the cell graph after removing the skeleton]
The following observation will be useful later.
Lemma 3.4. Let $I, I'$ be instances and $C \in \mathcal{C}^I_2$, $C' \in \mathcal{C}^{I'}_2$ such that
$\lambda(C) = \lambda(C')$. Then for every cell $B \in \mathcal{C}^I$ that is
adjacent to $C$ there exists a cell $B' \in \mathcal{C}^{I'}$ that is adjacent to $C'$
such that $\lambda(B) = \lambda(B')$.
The topological invariant. Two instances I ; J over a
schema are homeomorphic if there is a homeomorphism
such that for all a 2 R 2 and R 2 we
have a 2 R I () h(a) 2 R J .
The topological invariant of an instance $I$ over $\Sigma$ is an
expansion $Y^I$ of the cell graph that carries enough information
to characterize an instance up to homeomorphism. The
vocabulary of $Y^I$ is $\hat\Sigma := \{E, \mathrm{dim}_0, \mathrm{dim}_1, \mathrm{dim}_2, X, O\} \cup \{\hat{R} \mid R \in \Sigma\}$,
where $E$ is binary, $O$ is 8-ary, and
$\mathrm{dim}_0, \mathrm{dim}_1, \mathrm{dim}_2, X$, and $\hat{R}$ for $R \in \Sigma$ are unary. The
restriction of $Y^I$ to $\{E\}$ is the cell graph $G^I$ of $I$. $\mathrm{dim}_i$
consists of the $i$-dimensional cells, for $i \in \{0, 1, 2\}$. $X$ only
contains the exterior of $I$ (the unique unbounded cell). For
every $R \in \Sigma$, the unary relation $\hat{R}$ consists of all cells that
are subsets of $R^I$.
$O$ gives the orientation. It is an equivalence relation on
the quadruples $(C, B, B_1, B_2)$ of cells where $B, B_1, B_2$ are adjacent to $C$. Two such quadruples
$(C_1, B^1, B^1_1, B^1_2)$ and $(C_2, B^2, B^2_1, B^2_2)$ are equivalent if either
$B^i_2$ appears directly after $B^i_1$, seen from $B^i$, in the clockwise order of
the cells adjacent to $C_i$ for both $i \in \{1, 2\}$, or $B^i_2$ appears
directly after $B^i_1$, seen from $B^i$, in the anti-clockwise order of the cells
adjacent to $C_i$ for both $i \in \{1, 2\}$. 3 Note that $O$ is empty
in regular instances.
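Since the invariant is just a finite relational structure, it is easy to materialize. The sketch below (ours; cell names are assumptions) records the invariant of the annulus instance used later in Example 5.3, for which $O$ is empty because the instance is regular.

```python
# Topological invariant of the annulus instance: cells with dimension,
# region membership, the exterior, and the symmetric adjacency relation E.
cells = ["C1", "C2", "C3", "C4", "C5"]    # inner disk, circle 1/2, annulus,
dim   = {"C1": 2, "C2": 1, "C3": 2, "C4": 1, "C5": 2}   # unit circle, exterior
in_R  = {"C3", "C4"}                      # cells contained in R^I
exterior = "C5"
E = {("C1", "C2"), ("C2", "C3"), ("C3", "C4"), ("C4", "C5")}
E |= {(b, a) for (a, b) in E}             # adjacency is symmetric

# The cell graph of this regular instance is a tree (here a path), as Lemma 3.3 predicts.
assert len(E) // 2 == len(cells) - 1
```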
It is proven in [PSV99] that I and J are homeomorphic
iff Y I and Y J are isomorphic and that Y I is computable
from I in time polynomial in the size of I . Since G I is
a planar graph and canonization of planar graphs is in P-
TIME, we can actually assume that $Y^I$ is canonical in the
sense that for homeomorphic instances $I$ and $J$ we have $Y^I = Y^J$. 4
In [SV98] it is proven that FO-queries over $I$ can be
translated in linear time into fixpoint+counting queries over
$Y^I$. Furthermore, it is shown there that FO-queries over $I$ can
be translated to FO-queries over $Y^I$ on instances with just
one closed region. (More precisely, this means that there is
a recursive mapping that associates with every $\varphi \in \mathrm{FO}$ of
vocabulary $\Lambda \cup \{R\}$ a $\varphi' \in \mathrm{FO}$ over the vocabulary of the invariant such that
for all instances $I$ over $\{R\}$, where $R^I$ is a closed set, we
have $I \models \varphi \iff Y^I \models \varphi'$.)
The question was left open whether this result extends to
instances with one arbitrary region or with several regions.
In Section 6, we give a negative answer to this question.
3 There are various ways to define the orientation, ours is equivalent to
[SV98].
4 More precisely, there is a PTIME algorithm that, given an instance
I, computes Y I and a one-to-one mapping I : C I
(a canonical numbering) such that for homoemorphic instances I; J the
mapping is an isomorphism from Y I to Y J .
4. Topological queries and topological elementary
equivalence
Topological queries. A query $Q$ is topological if for every
homeomorphism $h$ of $\mathbb{R}^2$ and for all instances $I$ we have
$Q(h(I)) = h(Q(I))$. $\mathrm{FO}_{top}$ denotes the set of all first-order
formulas defining a topological query.
It is well-known (and easy to see) that the set $\mathrm{FO}_{top}$ is not
decidable.
The following lemma collects a few basic FO-queries.
Its proof is an easy exercise.
Lemma 4.1. (1) For every color $\lambda$ there is a first-order
formula $\varphi_\lambda(x)$ such that for every instance $I$ and for
every $a \in \mathbb{R}^2$ we have $I \models \varphi_\lambda(a) \iff \lambda(a) = \lambda$.
(2) For every $\psi(y) \in \mathrm{FO}$ there is a formula $\varphi_{\mathrm{bd}(\psi)}(x) \in$
$\mathrm{FO}$ such that for every instance $I$ we have $\varphi_{\mathrm{bd}(\psi)}(I) = \mathrm{bd}(\psi(I))$.
(3) There is a formula $\varphi_1(x) \in \mathrm{FO}$ such that for every
instance $I$ we have $\varphi_1(I) = \{1\}$.
Note that for every color $\lambda$ the formula $\varphi_\lambda$ is in $\mathrm{FO}_{top}$.
Moreover, for $\psi \in \mathrm{FO}_{top}$ the formula $\varphi_{\mathrm{bd}(\psi)}$ is in $\mathrm{FO}_{top}$. In
particular, this is the case for $\varphi_{\mathrm{bd}(R)}$ for an $R \in \Sigma$.
On the other hand, the formula $\varphi_1$ is not in $\mathrm{FO}_{top}$.
Topological elementary equivalence. Two instances $I, J$
are (topologically) elementary equivalent (denoted $I \equiv_t J$)
if they satisfy the same topological first-order sentences.
It is proven in [KPV97] that if $\Sigma = \{R\}$ then for all
instances $I, J$ in which $R^I$, $R^J$ are closed sets we have:
$I \equiv_t J \iff \mathrm{ct}(I) = \mathrm{ct}(J)$.
We will see in the next section that this equivalence cannot
be extended to instances with one arbitrary region or with
several regions.
To prove this result, [KPV97] introduced two simple local
operations transforming an instance into an elementary
equivalent one. Their straightforward extension to several
regions is depicted in Figure 3, which is to be read as follows:
Suppose we have an instance $I$ that contains an open
subset $O \subseteq \mathbb{R}^2$ homeomorphic to one of the left hand sides
of Figure 3. The different shades of grey display different
colors. Then it can be replaced by the corresponding subset
on the right hand side (cf. [KPV97] for details). Note that
both operations are symmetric; we can go from the right to
the left by applying the same operation again.
Let $\omega_1$ denote the first and $\omega_2$ the second of the two
operations in Figure 3. For instances $I$ and $J$ we write
$I \leadsto_{\omega_i} J$ if $I$ can be transformed into an instance homeomorphic
to $J$ by an application of $\omega_i$ (for $i \in \{1, 2\}$). We write
$I \approx J$ if $I$ and $J$ can be transformed into each other
by a finite sequence of operations $\leadsto_{\omega_1}$ and $\leadsto_{\omega_2}$.
Figure 3. Operations preserving $\equiv_t$.
Then the proof of [KPV97] easily yields:
Lemma 4.2. For all instances $I$, $J$ we have: $I \approx J \Rightarrow I \equiv_t J$.
It is an open question whether the converse of Lemma
4.2 holds. In particular, this is interesting because it is not
known whether $\equiv_t$ is decidable or not, whereas $\approx$ is decidable
in PSPACE (this is Proposition 5.5 of the next section).
[KPV97] have shown that $\equiv_t$ and $\approx$ coincide
on instances with only one closed region. We can extend
their result to several regions, but only on instances with
only regular cones.
5. Minimal instances
Definition 5.1. An instance $I$ is minimal if it satisfies the
following two conditions:
(M1) If $C \in \mathcal{C}^I_1$ is homeomorphic to $S^1$ and $B_1, B_2 \in \mathcal{C}^I_2$
are adjacent to $C$ and homeomorphic to $D_m$, for some $m \ge 1$, then $\lambda(B_1) \ne \lambda(B_2)$.
(M2) If $B \in \mathcal{C}^I_2$ and $C_1 \ne C_2$ are adjacent to $B$ and
homeomorphic to $S^1$, then $\lambda(C_1) \ne \lambda(C_2)$.
Lemma 5.2. There is a PTIME algorithm that associates
with every instance $I$ a minimal instance $M(I)$ such that
$I \approx M(I)$.
This can be done in such a way that for homeomorphic
instances $I, J$ we have $M(I) = M(J)$.
Proof: Suppose first that $I$ does not satisfy (M1). We show
that $I$ can be transformed into an instance $J$ with fewer cells
violating (M1) by two applications of $\omega_1$.
Let $C$ be a 1-dimensional cell homeomorphic to $S^1$ such
that both neighbors of $C$ have the same color, but
neither is homeomorphic to $D$. Then the instance $I$ locally
looks like Figure 4(1). We apply $\omega_1$ twice (to the dashed
boxes) and obtain an instance that locally looks like Figure
4(3). We have obviously reduced the number of cells
violating (M1).
(1) (2) (3)
Figure 4.
Suppose now that $I$ does not satisfy (M2); then it can
be transformed into an instance $J$ with fewer cells violating
(M2) by an application of $\omega_1$, without violating (M1).
To see this, let $B$ be a 2-dimensional cell in $I$ that is adjacent
to two cells $C_1 \ne C_2$ homeomorphic to $S^1$ of the same
color. A cell homeomorphic to $S^1$ is adjacent to two 2-dimensional
cells. Let $B_1, B_2$ be the other neighbors of $C_1, C_2$ of
the same color. We have to distinguish between two cases:
Case 1: Both $C_1$ and $C_2$ are children of $B$. Then we can
reduce the number of cells violating (M2) by an application
of $\omega_1$ (cf. Figure 5).
(1) (2)
Figure 5.
Case 2: C 1 is the parent of B and C 2 its child. Figure 6
shows how to proceed.
(1) (2)
Figure 6.
A PTIME algorithm transforming a given instance $I$ into a
minimal instance $M(I)$ may proceed as follows: Given $I$,
the algorithm first computes the invariant $Y^I$; this is possible
in PTIME [PSV99]. The operations $\omega_1$ and $\omega_2$ translate
to simple local operations on $Y^I$. Our algorithm first applies
pairs of $\omega_1$ until (M1) holds (as in Figure 4), and then
applies $\omega_1$ until (M2) holds. This can be done by a simple greedy
strategy. The result is a structure $Y$ that is the topological invariant
of a minimal instance $M(I)$. It is shown in [PSV99]
that, given an invariant $Y$, an instance $J$ such that $Y^J = Y$
can be computed in polynomial time.
Because $Y^I$ is canonical (cf. Page 4), this algorithm also
guarantees that for homeomorphic instances $I, J$ we have $M(I) = M(J)$.
The language of an instance. A fundamental curve
in an instance I is an R-definable continuous mapping
f : R → R² with lim_{a→±∞} ‖f(a)‖ = ∞. If f is a fundamental curve in I,
then for every 2-dimensional cell C the set f⁻¹(C) is a finite
union of open intervals (one of which is of the form
(−∞, a) and one of the form (b, ∞)), and for every 0- and
1-dimensional cell C the set f⁻¹(C) is a finite union of
closed intervals (some of which may just be single points).
This follows from the fact that f is R-definable.
We are interested in the finite sequence of colors appearing
on a fundamental curve, i.e. in a finite word W(f, I)
over the alphabet Σ_I consisting of all colors appearing in I.
We say that a word W ∈ Σ_I* is realized in I if there is a fundamental
curve f such that W(f, I) = W. The language
L(I) is the set of all words realized in I.
Example 5.3. Let I be the instance with
R^I := {a ∈ R² | 1/2 < ‖a‖ ≤ 1}.
I has five cells: C₁ := {a | ‖a‖ < 1/2}, C₂ := {a | ‖a‖ = 1/2},
C₃ := {a | 1/2 < ‖a‖ < 1}, C₄ := {a | ‖a‖ = 1}, and the
exterior C₅. Let a, b, c, d be the colors of C₁, C₂, C₃, C₄,
respectively, and note that C₅ has the same color as C₁.
Figure 7.

Figure 8.
Then, for example, the words adcda
and adcbabcda are realized in I (cf. Figure 7). It is not hard
to see that L(I) can be described by the finite automaton
displayed in Figure 8.
From this example it is easy to see that for every I the
language L(I) is regular; it is accepted by an automaton
that is essentially the cell graph.
More formally, let I be an instance. We define a finite
automaton A_I whose states are the cells of I together with
a new symbol ∞ that does not denote a cell. The transitions
are the pairs (C, C′) of adjacent cells, reading the color of C′,
together with transitions between ∞ and the exterior; the state
∞ serves as both the initial and the accepting state.
Note that the graph underlying A_I is the cell graph G_I extended
by one additional vertex ∞ that is only adjacent to
the exterior. The proof of the following lemma is straightforward.
Lemma 5.4. For every instance I, the automaton A_I accepts
L(I). Thus L(I) is a regular language.
A walk in a graph G = (V, E) is a sequence w₁ ··· wₙ
such that (wᵢ, wᵢ₊₁) ∈ E for 1 ≤ i < n. For a mapping
γ on V we let γ(w₁ ··· wₙ) := γ(w₁) ··· γ(wₙ). Then it is almost immediate
that for every instance I we have

L(I) = {γ(w₁ ··· wₙ) | w₁ ··· wₙ a walk in G_I
with w₁ = wₙ = EXT},   (5.1)

where as usual EXT denotes the exterior.
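Equation (5.1) turns membership in L(I) into a walk-search over the cell graph, decidable by an NFA-style simulation. The OCaml sketch below is an illustration only: the names realizes, adj, color and ext, and the array encoding of G_I, are assumptions, not constructions from the text.

(* Decide whether a word is realized, following (5.1): cells are
   numbered 0..n-1, adj.(v) lists the neighbours of cell v in G_I,
   color.(v) is its color, ext is the exterior cell. *)
let realizes adj color ext word =
  let rec step states = function
    | [] -> List.mem ext states
    | c :: rest ->
        let next =
          List.concat_map
            (fun v -> List.filter (fun w -> color.(w) = c) adj.(v))
            states
          |> List.sort_uniq compare
        in
        next <> [] && step next rest
  in
  match word with
  | [] -> false
  | c :: rest -> color.(ext) = c && step [ext] rest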
Proposition 5.5. ∼ is decidable in PSPACE.
Proof:
Given two instances I and J, we want to check in
PSPACE whether it is possible to go from I to J using
only homeomorphisms and the operations ω₁ and ω₂. Given an instance
I it is possible to compute its invariant Y_I in PTIME
[PSV99]. Thus the problem reduces to checking in PSPACE
whether Y_J can be derived from Y_I using these operations.
There are at most polynomially many different ways to
apply ω₁ or ω₂ to Y_I (one just has to consider tuples
of at most 5 cells and check if an ωᵢ can be applied
to this tuple).
Let Δ(Y_I) be the set of invariants that can be
obtained from Y_I by applying one of the operations. The
previous remarks show that Δ(Y_I) contains at most polynomially
many elements and can be computed in PTIME. It
is therefore possible to enumerate all topological invariants
that can be derived from Y_I by applying the operations
and to check at each step whether the invariant obtained corresponds to Y_J or not.
The latter can be done in PTIME, because this is deciding
whether two planar graphs are isomorphic.
We now give a strategy that ensures that this process will
stop. Because M(I) and M(J) are computable in PTIME
(Lemma 5.2), we can assume I and J minimal.
It is easy to see that the operations ω₁ and ω₂ do not change the
cones of the instances involved. Therefore I ∼ J implies
that I and J have the same cone-type (cf. Page 3) and the
same orientation relation O. This can be checked in PTIME.
As the cones determine the number of 1-dimensional cells
homeomorphic to (0, 1), this implies that all instances J
such that I ∼ J are such that their respective skeletons
verify |S_I| = |S_J|.
We may view the skeletons S_I and S_J as embedded planar
graphs with the 0-dimensional cells as vertices and the
1-dimensional cells homeomorphic to (0, 1) as edges. Let
F_I and F_J, respectively, denote the faces of these embedded
graphs. Note that these faces correspond precisely
to the connected components of G_I \ S_I and G_J \ S_J, respectively.
The number of connected components of S_J is
bounded by c_I < |I|, the number of cones of I. Therefore
the number of faces of S_J is given by the Euler formula:
we have |F_J| = e − v + 1 + c, where c is the number of
connected components of S_J, and e, v its numbers of edges
and vertices. As each of these is bounded by |I|, so is |F_J|. This gives
a linear (in |I|) bound on the number of connected components
of G_J \ S_J. We would also like to bound the size of
each connected component of G_J \ S_J (or equivalently the
number of 1-dimensional cells homeomorphic to S¹). Unfortunately,
repeated application of the operations ω₁ and ω₂
may produce arbitrarily large such components.
Nevertheless, it is possible to decide in PSPACE whether
I ∼ K for an instance K such that there is an isomorphism
from S_K to S_J that preserves the coloring γ and the
orientation O. Furthermore, if there is such a K, it can
be computed in PSPACE. The complexity is PSPACE because
there is no need to produce extra 1-dimensional cells
homeomorphic to S¹ during this process: these would only
restrict the number of possible connections between the
connected components of the skeleton.
So without loss of generality we can now assume that I
and J are minimal and have isomorphic skeletons. Recall
that every face f ∈ F_I (f ∈ F_J) corresponds to a connected
component T_f of G_I \ S_I (G_J \ S_J, respectively).
By Lemma 3.3, T_f is a tree with a canonical root E_f
determined by the exterior cell.
To find out whether I ∼ J, for each isomorphism i between
S_I and S_J that preserves the coloring and orientation we
check whether there are instances I′ ∼ I and J′ ∼ J
such that S_{I′} = S_I, S_{J′} = S_J and, for every face f ∈ F_{I′},
we have T_f ≅ T_{i(f)} (where i(f) denotes the face
corresponding to f under the isomorphism i).
It suffices to prove the following for every f ∈ F_I:
Claim 1. Either I ≁ J, or I and J can be transformed to
instances I′ and J′, respectively, with S_{I′} = S_I and S_{J′} = S_J,
such that in I′ the component T_f is isomorphic to T_{i(f)} in
J′; for all g ∈ F_I \ {f}, the component T_g in I′ has
remained the same as in I, and for all h ∈ F_J \ {i(f)}, the
component T_h in J′ has remained the same as in J.
It can be decided in PSPACE which of the two alternatives
holds; for the latter, I′ and J′ can also be computed in
PSPACE.
We can go through all isomorphisms i in PSPACE, so we
fix one. We also fix a face f ∈ F_I.
Before going on we need the following definitions. A
branch of a tree T is a minimal walk in T going from the
root of T to one of its leaves. A walk in G_I is said to be
regular if it never goes through a singular cell. Let Σ be the
set of colors of I and J. For a word W ∈ Σ*, we define
M(W) as the word of Σ* computed using rules in the same
spirit as in Lemma 5.2; more precisely, this means rewriting
the word W using reduction rules, derived from ω₁ and ω₂,
for all colors α, β.
Fact 1. Let x̄ = x₁ ··· xₙ be a regular walk in G_I which starts in
E_f. Then it is possible to transform I in such a way that T_f
contains a branch t_{x̄} such that M(γ(t_{x̄})) = M(γ(x̄)).
This can be proved by a single loop on the length of
x̄, starting from the end. Without loss of generality
we can assume that xₙ is a 2-dimensional cell. We
start by constructing in x_{n−2} a new ball v_{n−1}vₙ of colors
γ(x_{n−1})γ(xₙ). This is easily done by applying ω₁ or ω₂
once. Assume next that we have constructed a subinstance
v_{i+1} ··· vₙ whose cell graph is a path attached to xᵢ. Again
we apply ω₁ or ω₂ once in order to get vᵢ
surrounding v_{i+1} ··· vₙ.
The same kind of induction (but starting from the beginning
of the word this time) shows that the converse also
holds:
Fact 2. If it is possible to construct, using the operations
ω₁ and ω₂, a
new branch t in T_f then there exists a walk x̄_t in G_I starting
from E_f such that M(γ(x̄_t)) = M(γ(t)).
We say that a walk x̄ realizes a word W in I if
M(γ(x̄)) = M(W). It can be checked in PSPACE whether
a given word is realized by a walk in G_I starting in E_f; one
way to see this is to reduce the problem to the question of
whether two regular languages have a non-empty intersection.
Now we can prove Claim 1. We first transform I to I′.
For each branch x̄ of T_{i(f)} starting from E_{i(f)}, let W_{x̄} :=
γ(x̄) and check whether there is a walk x̄′ that realizes W_{x̄}.
If this is not the case, Fact 2 shows that I ≁ J. If it is the
case, use Fact 1 to construct the corresponding branch in I.
To construct J′, do the same after reversing the roles of I
and J. It is clear that if the algorithm does not find out that
I ≁ J on its way, after minimizing the resulting instances
they satisfy the claim. □

In the next section, it will be convenient to work with
a slight simplification of the cell graph. We call a 2-dimensional
cell B ≠ EXT of an instance I inessential if
B is homeomorphic to a disk D (and thus has precisely one
neighbor in G_I), and the neighbor C of B in G_I has another
neighbor B′ ≠ B with γ(B′) = γ(B). Let H_I be the graph
obtained from G_I by deleting all vertices that are inessential
cells. We call H_I the reduced cell graph of I. Then (5.1)
actually holds with H_I instead of G_I:

L(I) = {γ(w₁ ··· wₙ) | w₁ ··· wₙ a walk in H_I
with w₁ = wₙ = EXT}.   (5.2)
6. Regular instances
Recall that an instance I is regular if all points a ∈ R²
are regular. The main result of this section is that ∼ and
≡t coincide on regular instances. As a corollary, we will see
that the equivalence (4.1) does not extend beyond instances
with one closed region.
To illustrate where the problems are, let us start with a
simple example:
Example 6.1. Let us consider the two instances
I, J over {R, S} with R^I := {a | ‖a‖ ≤ 1}, S^I := {a |
2 ≤ ‖a‖ ≤ 3}, R^J := S^I, and S^J := R^I
(cf. Figure 9). Obviously, I and J have the same cone-type.

Figure 9.
Let us try to find a sentence φ ∈ FO_top such that I ⊨ φ
and J ⊭ φ. At first glance this looks easy: just take the
sentence saying "every horizontal line that intersects region
R intersects region S before". Then every instance homeomorphic
to I satisfies φ, and every instance homeomorphic
to J does not satisfy φ.
Unfortunately, φ ∉ FO_top. Figure 10 shows why: all
three instances displayed are homeomorphic, but only the
last one satisfies φ.
We will see later that there is a sentence φ ∈ FO_top that
distinguishes I from J, but such a φ is quite complicated.
For now, let us just note that I ≁ J.

Figure 10.
Recall that by Lemma 3.3, the cell graph and thus the
reduced cell graph of a regular instance is a tree. We think
of this tree as being directed with the exterior as its root.
The leaves are the 2-dimensional cells homeomorphic to the
disk D.
For instances I, J we write H_I ⪯ H_J if there is an
embedding h of H_I into H_J that preserves γ. We define
G_I ⪯ G_J accordingly.
Lemma 6.2. For all regular instances I, J we have H_I ≅
H_J if, and only if, I and J are homeomorphic.
Proof: The backward direction is trivial. For the forward
direction, let I and J be regular instances with H_I ≅ H_J.
Then it follows from Lemma 3.4 that G_I ≅ G_J. Since for
regular instances the orientation O is empty, this implies
Y_I = Y_J. Thus I and J are homeomorphic by [PSV99]. □
Recall the definition of the minimal instance M(I) associated
with an instance I. An inspection of the proof of Lemma
5.2 shows that for a regular instance I the instance M(I)
is again regular. (6.1)
If M is a minimal regular instance, then the reduced cell
graph H_M has the following nice property:
If C is a vertex of H_M, then all neighbors
of C in H_M have different colors. (6.2)
Lemma 6.3. Let M, N be minimal regular instances. Then
L(N) ⊆ L(M) if, and only if, H_N ⪯ H_M.
Proof: This is an easy consequence of (5.2) and (6.2). □
Recall that a regular language L ⊆ Σ* is aperiodic
if there exists an n ∈ N such that for all u, v, w ∈ Σ*
such that uvⁿw ∈ L we also have uvⁿ⁺¹w ∈ L. By a
well-known theorem of McNaughton, Papert [MP71] and
Schützenberger [Sch65], the aperiodic languages are precisely
the languages that are definable in first-order logic.⁵
The following lemmas will be useful later. They are all
based on (6.2) and the fact that the reduced cell graphs of
regular instances are trees.
⁵ Furthermore, these are precisely the star-free regular languages.
Lemma 6.4. Let M be a minimal regular instance. Then
L(M) is aperiodic.
Proof:
Let M be a minimal regular instance.
The crucial step is to prove the following claim:
Let ȳ = y₁ ··· y_{nl} be a walk in H_M such that γ(yᵢ) = γ(y_{i+l})
for all i ≤ (n−1)l; then yᵢ = y_{i+l} for all such i. (6.3)
Suppose for contradiction that (6.3) is wrong. Choose
l minimal such that there is a v ∈ Σˡ, an n ≥ 2,
and a walk ȳ = y₁ ··· y_{nl} violating (6.3).
Since adjacent vertices in a cell graph (and thus in a
reduced cell graph) have different colors and
(y l+1 ), we have l 2.
For notational convenience, we let y 0 := y nl , y
We choose an
such that there is no nlg such that y j is in the
subtree below y i . Then y is the parent of y i .
Let :=
Then for 0 k n we have
(y
By (6.2), this implies y
y kl+j+1 . If l = 2, this implies y
tion. If l 3 we can define a walk y 0 from
y by deleting
y kl+j and y kl+j+1 , for 0 k n 1. Then (6.3) holds
with in contradiction to the minimality
of l.
This proves (6.3).
Now let n > |H_M| and u, v, w ∈ Σ* be such that uvⁿw ∈ L(M);
we shall prove that uvⁿ⁺¹w ∈ L(M).
Let k, l, m be the lengths of u, v, w, respectively, and let x̄ :=
x₁ ··· x_{k+nl+m} be a walk in H_M with γ(x̄) = uvⁿw. Since
n > |H_M|, there exist i < j
such that x_{k+il} = x_{k+jl}. Applying (6.3) to the walk
x_{k+il} ··· x_{k+jl} yields x_{k+il} = x_{k+(i+1)l}. But then
repeating the segment x_{k+il} ··· x_{k+(i+1)l} once
gives a walk with image uvⁿ⁺¹w. □
Lemma 6.5. Let M be a minimal regular instance and
J, J′ instances such that J ∼ J′. Then L(J) ⊆ L(M)
if, and only if, L(J′) ⊆ L(M).
Proof: Recall the definitions of the operations ω₁ and ω₂
from Figure 3. It suffices to prove the statement for instances
J′ with J →ω₁ J′ or J →ω₂ J′. Let W ∈ L(J) \ L(M). Note that there is a word
W′ ∈ L(J′) obtained from W by replacing some letters
by short subwords according to ω₁ (if J →ω₁ J′) or
according to ω₂ (if J →ω₂ J′).
Suppose for contradiction that W′ ∈ L(M). Then
there is a walk x̄′ in H_M such that γ(x̄′) = W′. By (6.2), whenever
a replaced letter is read, the corresponding detour must
return to the vertex it started from; contracting these detours
yields a walk in H_M with image W,
a contradiction.
This shows that W′ ∉ L(M). □
Lemma 6.6. Let M be a minimal regular instance and J
a regular instance with L(J) ⊄ L(M). Then there is a
b ∈ R such that for the curve f_b : x ↦ (x, b) we have
W(f_b, J) ∉ L(M).
Proof: We assume that J and M are instances over the
same schema. Then, denoting by EXT and EXT′ the exteriors
of J and M, respectively, we have γ(EXT) = γ(EXT′).
For every walk
define a sequence
n as follows: We let x 0
i be the (by (6.2) unique)
neighbor of x 0
1 that has the same color as x i , if x 0
and such a neighbor exists, and x 0
H J is a tree and H M satisfies (6.2), x
j or x 0
m). It follows
that for any walk
or y 0
.
Note that actually the sequence x 0 only depends on the
word
(x). This if
then
only if, x 0
Now suppose that
with x 0
that xm+1 must
be a child of xm in the tree H J .
Choose b ∈ R such that the curve f_b intersects the cell
x_{m+1}. Let ȳ = y₁ ··· y_l be a walk in H_J that corresponds
to the curve f_b and let j < l be minimal such that y_j =
x_{m+1}. Then y_{j−1} = x_m, because x_{m+1} is a child of x_m.
Then either y′_j is undefined or y′_j ≠ x′_{m+1}; in both cases we have
W(f_b, J) ∉ L(M). □
Theorem 6.7. Let I be a regular instance of schema σ.
Then there is a sentence φ_I ∈ FO_top of vocabulary {<} ∪ σ
such that an instance J of the same schema satisfies φ_I
if, and only if, J ∼ I.
Proof: Let Σ be the set of colors appearing in I. For every
aperiodic language L ⊆ Σ* there is a formula ψ_L(y) ∈
FO of vocabulary {<} ∪ σ such that a regular instance I
satisfies ψ_L(b) for a b ∈ R if, and only if, W(f_b, I) ∈ L.
This is an easy consequence of the theorem of McNaughton,
Papert and Schützenberger that the aperiodic languages are
precisely the first-order definable languages.
Let M := M(I) and L := L(M). By Lemma 6.4, L is
aperiodic. Let φ_M be the conjunction of a sentence saying
that an instance realizes exactly the cones of M with the
sentence ∀y ψ_L(y).
Clearly, M satisfies φ_M.
We claim that for all instances J we have:
J ⊨ φ_M if, and only if, M(J) ⊨ φ_M. (6.4)
To prove this claim, note first that every instance satisfying
φ_M realizes the same cones as I and thus is regular.
Let J be a regular instance. Assume first that J ⊨ φ_M.
Then by Lemma 6.6, L(J) ⊆ L(M). Because J ∼
M(J), by Lemma 6.5 we have L(M(J)) ⊆ L(M). Thus
M(J) ⊨ φ_M. Conversely, if M(J) ⊨ φ_M then, again by
Lemma 6.5, L(J) ⊆ L(M) and thus J ⊨ φ_M.
This proves (6.4).
It follows easily that φ_M ∈ FO_top. Indeed, assume that
J ⊨ φ_M. For every J′ homeomorphic to J the instances M(J′)
and M(J) are homeomorphic, hence J′ ⊨ φ_M by (6.4).
We let
φ_I := ⋁ {φ_N | N a minimal regular instance
with H_N ≅ H_M}.
We shall prove that for any instance J of schema σ we
have J ⊨ φ_I if, and only if, J ∼ I.
Suppose that J ⊨ φ_I. Then M(J) ⊨ φ_I by (6.4).
By Lemma 6.6, this implies L(M(J)) ⊆ L(N) and
L(N) ⊆ L(M(J)) for some minimal regular N with H_N ≅
H_M. By Lemma 6.3, this implies H_{M(J)} ≅ H_N ≅ H_M. By Lemma
6.2, it follows that M and M(J) are homeomorphic.
Thus J ∼ I.
Conversely, suppose that J ∼ I. Then by Lemma 4.2,
J ≡t I. Thus J ⊨ φ_I, because φ_I ∈ FO_top and I ⊨ φ_I
(the latter follows easily from the previous paragraph and
(6.4)). □
Corollary 6.8. For all regular instances I, J we have I ∼ J
if, and only if, I ≡t J.
Finally, we are ready to prove that the equivalence (4.1)
does not extend beyond instances with one closed region.
Corollary 6.9. The two instances of Example 6.1 are not
elementary equivalent. Neither are the instances I, J over
{R} defined by:
R^I := {a | … }, R^J := {a | … }.
7. Translating sentences to the topological invariant
Recall that it is proven in [SV98] that there is a recursive
mapping that associates with every φ ∈ FO of vocabulary
{<, R} a φ′ ∈ FO over the vocabulary of the topological
invariant such that for
all instances I over {R}, where R^I is a closed set, we have
I ⊨ φ if, and only if, Y_I ⊨ φ′.
The purpose of this section is to prove that this does not
extend to arbitrary instances.
Proposition 7.1. There is a sentence φ ∈ FO_top of vocabulary
{<, R} such that for every sentence φ′ ∈ FO over the
vocabulary of the invariant there is an instance I for which it
is not the case that: I ⊨ φ if, and only if, Y_I ⊨ φ′.
Proof: Let I₀, J₀ be the instances defined by R^{I₀} := {a | … }
and R^{J₀} := {a | … }.
For n ≥ 1, let Iₙ, Jₙ be defined by
R^{Iₙ} := R^{I₀} ∪ … ,
R^{Jₙ} := R^{J₀} ∪ …
(cf. Figure 11).

Figure 11. The instances I₃ and J₃.
Note that for all n, m ≥ 1 we have Iₙ ∼ Iₘ and Jₙ ∼
Jₘ, but Iₙ ≁ Jₙ. Corollary 6.8 implies that there is a
sentence φ ∈ FO_top such that for all n ≥ 1 we have Iₙ ⊨ φ
and Jₙ ⊭ φ.
Let n ≥ 1. The graph G_{Iₙ} is just a path; say its vertices
are C₁, C₂, …. Denoting the colors suitably,
the colors on this path form a sequence in which one block of
colors is repeated n times. G_{Jₙ} is the same, except that
the repeated blocks are arranged differently. The other relations of the topological invariant
are identical in Y_{Iₙ} and Y_{Jₙ}: dim₀ is empty, dim₁
consists of all Cᵢ with even i, and dim₂ consists of the Cᵢ
with odd i. The orientation O is empty.
Standard Ehrenfeucht-Fraïssé techniques show that for
every sentence φ′ ∈ FO there is an n ≥ 1 such that Y_{Iₙ} ⊨
φ′ if, and only if, Y_{Jₙ} ⊨ φ′.
The statement of the proposition follows. □
8. On completeness of languages
An open problem that we have not considered so far is
to find a (recursive) language that expresses precisely the
first-order topological queries. Although this is certainly
an interesting question, we doubt that, even if theoretically
such a language exists, it would be a natural language that
may serve as a practical query language. Our results show
that first-order topological queries are not local; on the other
hand, it is known that first-order logic fails to express the
most natural non-local topological queries such as connectivity
of a region.
[KV99] have introduced a topological query language,
the cone-logic CL, expressing only local properties. This
language is a two-tier language that allows one to build first-order
expressions whose atoms are again first-order expressions
talking about the cones and colors of points. [KV99]
have proven that CL captures precisely the first-order topological
properties of instances with only one closed region
that is "fully 2-dimensional". If the underlying structure is
⟨R, <⟩, it follows from [SV98] that the condition of being fully
2-dimensional is not needed. Corollary 6.9 shows that this
result does not extend to instances with several closed regions
or one arbitrary region.
We propose to extend CL by a path operator, as it has
been introduced in [BGLS99]. Let us call the resulting
topological query language PCL. The results of [BGLS99]
show that this language has the basic properties expected
from a reasonable spatial database query language. In addition,
it admits efficient query evaluation (the cost of evaluating
a PCL query is not substantially higher than the cost
of evaluating a first-order query).
An example of a PCL query not expressible in FO is
connectivity of regions. We conjecture that conversely every
FO_top query is expressible in PCL. The idea behind this
conjecture is that local properties (expressible in CL) together
with the language of an instance, which can be described
by the path operator, seem to capture the first-order
topological properties of an instance. As a first step towards
proving this conjecture, let us remark that Corollary 6.8
implies that on regular instances every FO_top sentence is equivalent
to a set of PCL sentences.
9. Conclusions
The results of this paper give a good understanding of
first-order topological queries on regular instances. Of
course one may argue that regular instances are completely
irrelevant: look at any map and you will find singular
points. However, we could use our results to answer several
open questions concerning topological queries on arbitrary
instances.
As a matter of fact, we have shown that the previous understanding
of first-order topological queries, viewing them
as "local" in the sense that they can only speak about the
colors of points, is insufficient; this may be our main contribution.
The problem of a characterization of topological elementary
equivalence on arbitrary planar instances remains open.
We conjecture that Corollary 6.8 generalizes, i.e. that ≡t is
the same as ∼ on arbitrary instances. If this were true, by
Proposition 5.5 topological elementary equivalence would
be decidable in PSPACE. Let us remark that we do not believe
that the PSPACE bound of Proposition 5.5 is optimal:
we see no reason why ≡t should not be decidable in NP or
even PTIME.
--R
Foundations of Databases.
Relational expressive power of constraint query lan- guages
Reachability and Connectivity Queries in Constraint Databases.
Languages for relational databases over interpreted structures.
Safe constraint queries.
Finite Model Theory.
Queries with arithmetical constraints.
Finitely representable databases.
Constraint Databases.
Bart Kuijpers and Jan Van den Bussche.
Topological queries in spatial databases.
On finite monoids having only trivial subgroups.
Querying spatial databases via topological invariants.
--TR
Towards a theory of spatial database queries (extended abstract)
Queries with arithmetical constraints
Finitely representable databases
Relational expressive power of constraint query languages
Topological queries in spatial databases
Complete geometric query languages
Reachability and connectivity queries in constraint databases
Querying spatial databases via topological invariants
Foundations of Databases
Constraint Databases
On Capturing First-Order Topological Properties of Planar Spatial Databases
Fixed-Point Logics on Planar Graphs
--CTR
Michael Benedikt , Jan Van den Bussche , Christof Löding , Thomas Wilke, A characterization of first-order topological properties of planar spatial data, Proceedings of the twenty-third ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 14-16, 2004, Paris, France
Michael Benedikt , Bart Kuijpers , Christof Löding , Jan Van den Bussche , Thomas Wilke, A characterization of first-order topological properties of planar spatial data, Journal of the ACM (JACM), v.53 n.2, p.273-305, March 2006 | constraint databases;first-order logic;topological queries
507385 | Probabilistic game semantics. | A category of HO/N-style games and probabilistic strategies is developed where the possible choices of a strategy are quantified so as to give a measure of the likelihood of seeing a given play. A two-sided die is shown to be universal in this category, in the sense that any strategy breaks down into a composition between some deterministic strategy and that die. The interpretative power of the category is then demonstrated by delineating a Cartesian closed subcategory that provides a fully abstract model of a probabilistic extension of Idealized Algol. | Introduction
Consider the fixpoint expression Y(λx. (1 or x)). This can
have two very different meanings attached. If, by or, it is
meant that both sides can happen with, say, equal chances,
then almost surely we'll get the result 1. If, on the other
hand, it is meant only that either can happen, but we don't
know which, then the only consistent answer is to say that
we may get 1 but we also may diverge. This latter meaning
was investigated in a recent games model for nondeterminism
by Harmer & McCusker [12]. In this paper, we construct
a games model of the former by developing a suitable
notion of probabilistic strategy.
One motivation for doing this is to give a means of understanding
quantitative aspects of uncertainty that might
arise in the description of a process. Dually, inherently
probabilistic processes-such as randomized optimization
procedures, optimal strategies in classical game theory or
more general adaptive behavior-can be accounted for in
this framework.
Another motivation is purely mathematical. Extending
games semantics to the probabilistic case is a natural step to
take, just as probabilistic strategies are extremely natural in
von Neumann matrix games. Indeed, our category smoothly
extends the deterministic games world charted out by Hyland
& Ong [13] and Nickau [20] and further developed by
McCusker [2] and Laird [15].
The category also fits in very nicely with the basic concepts
of probability theory. The interaction of a strategy and
a counter-strategy on a game gives rise to a Borelian sub-
probability on the infinite plays. The category has a tensor
product which corresponds to the classical product measure
construction, and composition of strategies is defined using
the image measure under the composition function (which
in our case is always measurable). A factorization result is
shown, saying that any probabilistic strategy can be broken
down into the composition of a deterministic one and the 2-
sided die. In that sense, the 2-sided die is shown to be uni-
versal. This is a discrete version of the basic fact that any
probability on [0; 1] can be reconstructed as the law of some
random variable defined on the Lebesgue probability triple.
In short, we give what seems the fundamentals of a higher-order
probability theory in "interactive" form.
A case study We eventually zero in on a Cartesian closed
subcategory which we show provides a fully abstract model
of a probabilistic extension of Idealized Algol. The exact
choice of probabilistic primitive used to extend Algol is not
overly important since, by our factorization result, any probabilistic
process can be programmed using only a fair coin.
The recent shifting of conditions from plays to strategies and
other advances in the field allow, at this point, the use of
a pretty standard machinery: we restrict to single-threaded
strategies-those that are copyable and discardable with respect
to the tensor product-and show that homsets admit
an ω-continuous CPO structure of which the basis is easily
seen to be Algol-definable thanks to the factorization theorem
and a previous theorem of Abramsky & McCusker [3].
For general reasons, this suffices to show full abstraction.
Generality Our construction in no way depends on a particular
games model: we give it in sufficient generality so
that it can be adapted to whatever particular language we
already know a deterministic games model of. And that is
already quite a lot since, apart from the purely functional
PCF, there are models for control, ground and general ref-
erences. That games semantics concepts adapt in this way
gives a sense of the homogeneity of this semantic universe,
in due contrast with traditional denotational semantics.
Related work The most obvious precursor to this work is
the study of probabilistic powerdomains and their application
to denotational semantics as in the early work of Saheb-
Djahromi [22, 23] and of Jones & Plotkin [14] amongst oth-
ers. More recent work [7, 8] has investigated bisimulation
in a probabilistic context. It would be interesting to consider
more closely the connection between that work and our own
as this remains rather unclear for now.
Another possible connection is with exact real arithmetic
[9]. Intriguingly, several of the issues raised in our work,
particularly those surrounding the factorization result, have
a similar flavour to those raised in real arithmetic [21]. It
would be interesting to pursue this line of investigation further
and, indeed, to consider using game semantics to model
a language such as Real PCF [10].
Future work One challenging application could be in
probabilistic program analysis. Hankin and Malacaria's
static analysis techniques derived from games semantics
[16, 17, 18] could get on well with probabilities, and give
tools in code optimization for instance. Less an application
maybe, but still sort of practical, is the tempting question of
whether the probabilistic observational equivalence can be
shown to be decidable for our extension of Idealized Algol
since, in that particular case, the so-called intrinsic equivalence
in the intensional model has a simple concrete characterization.
If solved in the positive, that could relate to formal
methods in programming.
A more theoretical question is that of relating our model
to Harmer & McCusker's model [12]. There is an obvious
embedding of theirs into ours, based on an equal chances interpretation
of non-determinism, but it isn't fully abstract,
as the opening example clearly shows. Another possibility
is the investigation of a "timeless" version of our model
in the spirit of the Baillot, Danos, Ehrhard & Regnier [6]
construction starting from a symmetrized games model of
Linear Logic and ending in a polarized variant of relational
semantics. The concurrent games of Abramsky & Melliès
[1] and their polarized version by Melliès could prove
useful here, since they are an intermediate step between
games and hypercoherences. Current investigation with
Ehrhard shows that there are probabilistic coherence spaces
that could bridge the gap, at least in the intuitionistic case.
Game semantics represents computation with "execution
traces" that describe the interaction between a System and
its Environment. The System and Environment are modelled
as the protagonists, respectively Player and Opponent,
in a game; an execution trace by a sequence of moves played
by the two protagonists.
To make these ideas precise, we need a few bits of notation.
The set of strings over an alphabet Σ is written Σ*. If
s, t ∈ Σ*, we write s ⊑ t if s is a prefix of t, denote the
length of s by |s| and, for 1 ≤ i ≤ |s|, denote the ith symbol
of s by sᵢ. The unique string of length 0 is written ε. If
Σ′ ⊆ Σ, we write s↾Σ′ for the restriction of s to Σ′, i.e. the
subsequence obtained by erasing all symbols of s not in Σ′.
2.1 Arenas & legal plays
The basic ground rules of a game are determined by the
arena in which it is played. We formally define an arena
as a triple ⟨M_A, λ_A, ⊢_A⟩ where:
• M_A is a countable set of moves.
• λ_A : M_A → {O, P} × {Q, A} × {I, n} is a labelling
function designating each move m as an Opponent or a
Player move, a Question or an Answer and as an Initial
or non-initial move.
We define λ_A^{OP}, λ_A^{QA} and λ_A^{In} as the first, second and
third projections respectively of λ_A.
• ⊢_A is a binary relation, known as enabling, on M_A
such that: if m ⊢_A n then λ_A^{OP}(m) ≠ λ_A^{OP}(n) and λ_A^{In}(n) ≠ I;
if λ_A^{In}(m) = I then λ_A^{OP}(m) = O and λ_A^{QA}(m) = Q;
if m ⊢_A n and λ_A^{QA}(n) = A then λ_A^{QA}(m) = Q.
These conditions say that the protagonists enable each
other's moves, not their own; that initial moves can't be enabled;
that only Opponent can have initial moves and, moreover,
they must be Questions; and that Answers must be enabled
by Questions.
Flat arenas If X is a countable set, we define the flat
arena over X by setting M_X to be X ∪ {q}, where q is an
initial Opponent Question, each x ∈ X is a Player Answer, and
q ⊢_X x for every x ∈ X. We denote by C, B and N respectively
the flat arenas over the one-point set {a}, the two-point
set {tt, ff} and the countably infinite set {0, 1, 2, …}.
Legal plays A string s ∈ M_A* can be endowed with a "justification
structure" by equipping each non-initial move in s
with a "pointer" pointing back to an earlier enabling move.
This can be formalized with a partial function f_s on
{1, …, |s|} such that, whenever f_s(i) is defined, f_s(i) < i and
s_{f_s(i)} ⊢_A s_i; if λ_A^{In}(s_i) = I then f_s(i) is undefined and we
call s_i an initial occurrence. If f_s(i) = j we say that s_j
justifies s_i, and if a chain of pointers leads from s_i back
to an initial occurrence s_j, then we say that s_i is
hereditarily justified by s_j. A string equipped with such
an f_s is called a justified string. A legal play is a justified
string that strictly alternates between Opponent and Player:
λ_A^{OP}(s_i) ≠ λ_A^{OP}(s_{i+1})
for all i, with Opponent moving first.
We write L_A for the set of all legal plays in the arena A;
L_A^{even}
and L_A^{odd} denote the obvious subsets, so that Player
made the last move in s ∈ L_A^{even}, etc.
If s ∈ L_A, we write ie(s) for the set of all immediate
extensions of s, i.e. those legal plays t such that s ⊑ t and
|t| = |s| + 1. The current thread of sa ∈ L_A^{odd}, written
⌈sa⌉, is defined as the subsequence of sa consisting of all
moves hereditarily justified by the hereditary justifier of a.
Product & arrow If A and B are arenas, we define their
product A × B by:
• M_{A×B} := M_A + M_B, the disjoint union.
• λ_{A×B} := [λ_A, λ_B], the copairing.
• ⊢_{A×B} := ⊢_A + ⊢_B.
This places A and B "side by side" with no chance of any
interaction between them. The empty arena 1,
with sole legal play ε, is the unit for this constructor.
Our other constructor is the arrow, defined by:
• M_{A⇒B} := M_A + M_B.
• λ_{A⇒B} reverses the Opponent/Player labelling on A and
leaves B unchanged; a move is initial in A ⇒ B iff it is
an initial move of B.
• m ⊢_{A⇒B} n iff m ⊢_A n, or m ⊢_B n, or
λ_B^{In}(m) = I and λ_A^{In}(n) = I.
In other words, the initial moves of A ⇒ B are the initial
moves of B, the roles of Opponent and Player are reversed
in A and the (formerly) initial moves of A are now enabled
by the (still) initial moves of B.
2.2 Strategies
A strategy is a kind of "rule book" saying which moves may
be made by Player and with what probability. First of all, a
prestrategy σ on an arena A is just a function σ : L_A^{even} → [0, 1].
The traces of σ, which we denote by T_σ, are those
even-length legal plays assigned non-zero weight by σ, i.e.
T_σ := {s ∈ L_A^{even} | σ(s) > 0}.
The domain of σ, written dom(σ), is those odd-length
plays that are "reachable" by σ, i.e. {sa ∈ L_A^{odd} | s ∈ T_σ}.
Finally, given sa ∈ dom(σ), we define the range of σ at sa,
written rng_σ(sa), to be those immediate extensions of sa that
are in T_σ; note that this can be empty.
A prestrategy σ for A is a strategy iff
(p1) σ(ε) = 1;
(p2) if sa ∈ dom(σ) then σ(s) ≥ Σ_{t∈ie(sa)} σ(t).
Note that (p2) implies that σ is order-reversing with respect
to the prefix ordering on T_σ and the usual ordering on [0, 1].
This means that T_σ is even-length prefix-closed. Furthermore,
by (p1), we get σ(s) ≤ 1 for all s ∈ L_A.
Basic constraints A strategy σ is deterministic iff for all
sa ∈ dom(σ) there is at most one extension sab ∈ rng_σ(sa),
and then σ(sab) = σ(s).
Equivalently, σ takes values in {0, 1}.
Note that (p2) asserts only an inequality. If σ is a strategy
for which this is always in fact an equality, we say that it's a
total strategy. This point is further discussed below.
Local probabilities Given sa ∈ dom(σ), define the conditional
probability of sab given sa by:
σ(sab/sa) := σ(sab)/σ(s).
Since s ∈ T_σ we have σ(s) > 0, so this is well-defined. By
(p2), this gives a subprobability on rng_σ(sa) for each sa ∈
dom(σ). Intuitively speaking, σ(sab/sa) is the "die" that σ
rolls for each sa ∈ dom(σ).
2.3 Examples of strategies
The fair coin The simplest example of a probabilistic
strategy is probably the "fair coin" coin on the Boolean arena
B. Its trace set is just the set of even-length sequences of
the form qb₁ ··· qbₙ where each bᵢ is in {tt, ff} and is justified
by the immediately preceding move. We assign a probability
to such an s ∈ T_coin by coin(s) := 2⁻ⁿ, e.g. coin(qtt) =
coin(qff) = 1/2 and coin(qttqff) = 1/4,
etc. All local probabilities, such as coin(qffqtt/qffq), are 1/2.
The polydie Another useful probabilistic strategy is
polydie on N ⇒ N. A typical play of polydie is
q · q · n · i,
and the probability assigned to this play is
1/n. In other words, polydie asks for a number n and then
rolls an n-sided die, returning some i between 0 and
n − 1 with uniform probability.
Strategies on flat arenas as probability measures Plays
in B of the form qb₁ ··· qbₙ are in a 1-1 correspondence,
denoted by B(·), with finite binary strings over
{0, 1}. Given such a binary string w, we set V_w := {x ∈
{0, 1}^ω | w ⊑ x}. These V_w s are the basic open sets of
the Cantor topology on infinite binary strings.
Given a total strategy σ on B, we define p_σ(V_{B(s)}) :=
σ(s), which standardly extends to a unique probability measure
on the Borelian σ-algebra B generated by the V_w s. We
can now see that (p1) says that the measure of the whole
space is 1, while (p2) says that p is countably additive (or
subadditive if σ isn't assumed total). Conversely, any probability
measure on B defines a total strategy on B.
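For instance, the fair coin above induces the uniform Bernoulli measure: by the definitions,

p_coin(V_w) = coin(B⁻¹(w)) = 2^{−|w|},

so every basic open set receives exactly the weight of an unbiased run of flips.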
Such a probability measure could be any discrete-time
stochastic process with values in {0, 1} and, in particular,
it needn't be a Markov process. An example of this is
the damped deviation die, denoted 3d, a strategy where the
larger the deviation between trues and falses so far, the likelier
it is to return the value which diminishes the deviation.
For any play s, let d(s) be the difference
between the number of trues and falses in s, and let the
deviation-diminishing answer be returned with local probability
(|d(s)| + 1)/(|d(s)| + 2). Hence, 3d must
be able to look arbitrarily far back in time.
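A small OCaml simulation of one such die, with the deviation-diminishing value returned with probability (|d| + 1)/(|d| + 2) (this weighting is one choice among many, and the function names are ours):

(* d is the running count of trues minus falses over the whole
   history; the value that diminishes |d| is answered with
   probability (|d|+1)/(|d|+2). *)
let roll_3d d =
  let dev = abs d in
  let diminish = Random.int (dev + 2) < dev + 1 in
  if d > 0 then not diminish        (* diminishing here means ff *)
  else if d < 0 then diminish       (* diminishing here means tt *)
  else Random.int 2 = 0             (* no deviation: fair flip *)

(* Simulate n rounds, threading the deviation; returns the final d. *)
let rec run_3d d n =
  if n = 0 then d
  else
    let b = roll_3d d in
    run_3d (if b then d + 1 else d - 1) (n - 1)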
2.4 Composing strategies
Let u be a finite string of moves from arenas A, B
and C equipped with candidate "justification pointers",
i.e. a strictly decreasing partial function f_u on
{1, …, |u|}. We define u↾B,C to be the subsequence
of u where we delete all moves from A and suitably "renormalize"
f_u to preserve the "pointers". We define u↾A,B
similarly. We define u↾A,C by removing all moves from B
and "pointers" to B, except in the case where a ∈ M_A points
to b ∈ M_B which, in turn, points to c ∈ M_C, whence we
make a point directly to c in u↾A,C.
A legal interaction of arenas A, B and C is such a u
satisfying u↾A,B ∈ L_{A⇒B}, u↾B,C ∈ L_{B⇒C} and
u↾A,C ∈ L_{A⇒C}. We write int(A, B, C) for the set of
all legal interactions of A, B and C.
If s ∈ L_{A⇒C}^{even}, we define its set of witnesses by
wit(s) := {u ∈ int(A, B, C) | u↾A,C = s},
i.e. all legal interactions playing s "on the outside". Given
strategies σ and τ for A ⇒ B and B ⇒ C respectively,
we define σ ; τ for A ⇒ C by assigning a probability to
each such s with the following composition formula:
(σ ; τ)(s) := Σ_{u∈wit(s)} σ(u↾A,B) · τ(u↾B,C).
We sometimes write (σ‖τ)(u) for σ(u↾A,B) · τ(u↾B,C).
Note that σ and τ are "rolling" independent dice and that the
above sum is always countable just because int(A, B, C) is.
Well-defined composition It's clear that the composite of
two prestrategies is itself a prestrategy (with the convention
that an empty sum is 0); it's also easy to see that (p1) is preserved.
The lemma below does most of the rest of the work.
Lemma 2.1 (probabilistic flow) If sa ∈ L_{A⇒C}^{odd} and ua ∈
int(A, B, C) are such that u ∈ wit(s), let W_{ua} be the set of
all u′ ∈ int(A, B, C) extending ua and witnessing some t ∈
ie(sa), i.e. all legal interactions starting like ua and witnessing
an immediate extension of sa. If σ : A ⇒ B and
τ : B ⇒ C then Σ_{u′∈W_{ua}} (σ‖τ)(u′) ≤ (σ‖τ)(u).
Proof We prove by induction that the interactions obtained
by truncating W_{ua} at any depth d, i.e. W_{ua}/d := {u′ ∈
W_{ua} | |u′| ≤ |ua| + d}, satisfy the inequality. Applying
(p2) to a's component gives the base case. Otherwise,
by the inductive hypothesis, Σ_{u′∈W_{ua}/d} (σ‖τ)(u′) ≤
(σ‖τ)(u). Each u′ ∈ W_{ua}/d which hasn't yet reached the
"outside" can have many extensions in W_{ua}/(d+1); by
(p2) again, their sum cannot exceed the probability of u′.
Hence the bound is preserved at depth d + 1. □
Proposition 2.2 If σ and τ are strategies for A ⇒ B and
B ⇒ C respectively then σ ; τ is a strategy for A ⇒ C.
Proof We just check (p2). If sa ∈ L_{A⇒C}^{odd}, let W_{sa} :=
⋃_{t∈ie(sa)} wit(t). For each u ∈ wit(s), we have W_{ua} as
defined in lemma 2.1; clearly W_{ua} ⊆ W_{sa}. Moreover, the
sets W_{ua} for distinct witnesses u are pairwise disjoint and cover
W_{sa}. We now apply lemma 2.1 to each witness u and its associated
W_{ua} so that
Σ_{t∈ie(sa)} (σ ; τ)(t) = Σ_{u′∈W_{sa}} (σ‖τ)(u′)
≤ Σ_{u∈wit(s)} (σ‖τ)(u) = (σ ; τ)(s),
and the desired
result follows. □
Remark If both strategies are deterministic, i.e. with values
in {0, 1}, then (σ ; τ)(s) is an integer which, by (p1) and
(p2), can only be 0 or 1, so there can be at most one witness
of s. In general, a play can have (infinitely) many witnesses.
A category of arenas & strategies The above result tells
us that we have a good notion of composition and it's routine
to verify that this is associative. The usual copycat strategies
are our identities:
id_A(s) := 1 iff t↾A_l = t↾A_r for every even-length prefix t of s
(and 0 otherwise), where A_l and A_r are the two copies of A in A ⇒ A.
In summary, we have a category where objects are arenas
and an arrow from A to B is a strategy for A ⇒ B. We
also have a subcategory of deterministic strategies since this
class of strategies is closed under composition.
2.5 Examples of composing strategies
Consider composing coin with the "wait for true" strategy
defined as the semantics of
Y(λf. λx. if x then skip else f(x)).
Some typical interactions are
q tt a,   q ff q tt a,   q ff q ff q tt a, …
The first witness of qa has probability 1/2, the second has
probability 1/4, etc., so the overall probability of qa is 1/2 +
1/4 + 1/8 + ··· = 1.
If we use a biased coin instead, where the more time
that's passed, the less likely it is to return true, the probability
For instance, set
coin
1+jsj=2
1. The overall probability of qa
is now
which can be anything between 0
and 1. Of course, for coin 0 to satisfy (p2) we must have, for
all
but this is the case since, by
assumption,
We can easily make coin 0 into a total strategy so that the
class of total strategies can't be closed under composition:
pieces of chance can get lost in infinite chatters. This is the
reason why (p2) is an inequality.
Another variation on this example uses the "count for
true" strategy on B ⇒ N defined as the semantics of
Y(λf. λx. if x then 0 else 1 + f(x)).
In the composite with (the usual) coin, we get q0 with probability
1/2, q1 with probability 1/4, q2 with probability 1/8
and so on. In other words, the result is a geometric distribution
on the natural numbers: we output n with probability 2^{−(n+1)}.
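Operationally, this composite behaves like the following small OCaml loop (a sketch, with our own names and the obvious encoding of flips as booleans):

(* Fair coin: true and false with equal chances. *)
let coin () = Random.int 2 = 0

(* "Count for true" composed with coin: count the flips that come
   up false before the first true.  The count is n with probability
   2^-(n+1), the geometric distribution described above. *)
let rec count_for_true () =
  if coin () then 0 else 1 + count_for_true ()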
Variations on a λ-scheme Consider the strategy mu₂, of
type (Nat → Nat) → Com, defined as the semantics of a
λ-scheme term that takes an input f
and feeds it successive arguments, beginning with 2, until it
gives an output 0, at which point it just terminates.
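A concrete OCaml rendering of this behaviour (the function names are ours, and the dice are drawn with the standard Random module):

(* mu2: feed f successive arguments from 2 onwards until it
   returns 0. *)
let mu2 f =
  let rec go x = if f x = 0 then () else go (x + 1) in
  go 2

(* Composing with the polydie amounts to rolling an x-sided die at
   each step; Random.int x is uniform on 0..x-1. *)
let run_mu2_polydie () = mu2 (fun x -> Random.int x)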
If we compose polydie with mu₂, the behaviour is as follows.
Initially, a 2-sided die is rolled. If it turns up a 0, we're
done; otherwise, a 3-sided die is rolled. If a 0 turns up this
time, we're done; otherwise a 4-sided die is rolled, etc. The
trace qa appears in the composite strategy with probability
1 − Π_{n≥2} (1 − 1/n) = 1.
An interesting variation on this is to stop once a 0 has
been rolled, but only continue with the larger die if a 1 was
rolled; otherwise, we just give up. This can be done by replacing
the 'if0' subterm with
case f(x) of 0 ↦ skip | 1 ↦ …
If we compose polydie with the strategy rr associated with
this modified λ-scheme term, the probability of qa in the
composite is 1/2 + 1/6 + 1/24 + ··· = e − 2 ≈
0.718. This shows that the class of strategies where all probabilities
are rational isn't closed under composition.
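For the record, the value follows from the telescoping products of the successive die rolls (assuming the uniform dice described above):

Pr[qa] = Σ_{k≥2} Π_{n=2}^{k} (1/n) = Σ_{k≥2} 1/k! = e − 2 ≈ 0.71828,

an irrational number, so no strategy with only rational weights can coincide with the composite.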
2.6 SMCC structure
If σ and τ are strategies for A ⇒ C and B ⇒ D respectively
then we define σ × τ for (A × B) ⇒ (C × D) in the following
manner. Given s ∈ T_{σ×τ}, we define its probability simply
as the product of its probabilities in the two components:
(σ × τ)(s) := σ(s↾A,C) · τ(s↾B,D).
Note that there is no interaction between σ and τ: they each
have their own die. This makes × into a bifunctor on our category.
In fact, the evident copycat strategies provide us with
a symmetric monoidal structure. In the usual way, a simple
relabelling of moves gives rise to a natural isomorphism between
strategies σ on (A × B) ⇒ C and their curried versions on
A ⇒ (B ⇒ C), so that
we have a symmetric monoidal closed category. The tensor
unit 1 is the terminal object.
Finally, for an arena A, the unique strategy !_A on A ⇒ 1
and the diagonal strategy Δ_A on A ⇒ (A × A) equip
A with a cocommutative comonoid structure: for s ∈
L_{A⇒(A×A)}^{even}, Δ_A assigns s weight 1 (and otherwise 0) iff
s copies the A side identically into both components,
where s↾A_l means the subsequence of s consisting of all
moves hereditarily justified by an initial occurrence in the left copy of A,
and s↾A_r is defined similarly.
2.7 Ordering strategies
Given strategies σ and τ for some arena A, we set σ ≤ τ
iff, for all s ∈ L_A, we have σ(s) ≤ τ(s). This is clearly a
partial order. It's straightforward to show that composition
is monotone with respect to this.
CPO-enrichment In fact, we can show that each homset
of our category is a CPO and that composition is continuous,
i.e. our category "is" CPO-enriched.
Proposition 2.3 Given a ≤-directed set of strategies Δ, if
we define ⊔Δ by
(⊔Δ)(s) := sup {σ(s) | σ ∈ Δ}
then it's the least upper bound (lub) of Δ.
Proof We show only that ⊔Δ is a well-defined strategy;
that it is the lub follows easily. It's clear that ⊔Δ is a prestrategy
satisfying (p1). For (p2), it suffices to consider
sa ∈ dom(⊔Δ). If rng_{⊔Δ}(sa) = {sab₁, sab₂, …} then, for
any n ∈ N and ε > 0, we can find strategies σ₁, …, σₙ ∈ Δ
such that σᵢ(sabᵢ) ≥ (⊔Δ)(sabᵢ) − ε/n;
the σᵢ s have an upper bound σ in Δ so that
Σ_{i≤n} (⊔Δ)(sabᵢ) − ε ≤ Σ_{i≤n} σ(sabᵢ) ≤ σ(s) ≤ (⊔Δ)(s)
and the result follows. □
An argument with a similar flavour establishes that composition
respects lubs of directed sets, i.e. if σ : A ⇒ B
and Δ is a directed set of strategies for B ⇒ C,
then σ ; (⊔Δ) = ⊔ {σ ; τ | τ ∈ Δ}.
A countable basis For any arena A, the collection of
strategies with finite trace set and where all probabilities are
rational forms a basis for the CPO of all strategies, i.e. for
any σ on A, the set Δ_σ of such strategies approximating σ
is directed and has σ as its least upper bound.
The only tricky point to check is that Δ_σ is indeed directed.
Given σ₁, σ₂ ∈ Δ_σ, define σ₃ with trace set
the union of T_{σ₁} and T_{σ₂}, assigning probabilities as follows.
If s is a maximal trace, we set σ₃(s) := max {σ₁(s), σ₂(s)}.
Otherwise, set σ₃(s) to be the maximum of σ₁(s), σ₂(s) and
the sums of the σ₃(t) over the immediate extensions t of s in
the new trace set, working upwards from the maximal traces. Then, by construction,
σ₃ is a valid strategy and is an upper bound for σ₁
and σ₂ in Δ_σ.
This basis is always countable so that the set of all strategies
for A is an ω-continuous CPO. This contrasts with all
previous HO-games models, including [12], where strategies
form ω-algebraic CPOs, the compact strategies being those
with a finite number of traces.
2.8 Deterministic factorization
The technique of factorization has become standard in game
semantics. The underlying idea is similar to that of [12]
where a nondeterministic strategy is reduced to a deterministic
strategy that has access to a nondeterministic "oracle".
In our case, the "oracle" strategy is a die. It suffices to use a
2-sided die but, for convenience, we use the polydie defined
above.
We first present a simplified factorization that works for
basis strategies. Afterwards, we sketch a more general factorization
that allows us to simulate any probabilistic strategy
with a deterministic strategy and the polydie.
Basis factorization Let σ be a basis strategy for arena A
and consider sa ∈ dom(σ). Since T_σ is finite, we know
that rng_σ(sa) must be finite; we'll write it as rng_σ(sa) =
{sab₁, …, sab_k}. Since all probabilities in σ are rational,
the local probability σ(sabᵢ/sa), for each sabᵢ ∈ rng_σ(sa),
is too.
Theorem 2.4 (deterministic factorization) If σ is a basis
strategy for the arena A then there exists a deterministic
strategy Det(σ) for (N ⇒ N) ⇒ A such that polydie ; Det(σ) = σ,
and Det(σ) is compact.
Proof The above remarks imply that we can find a common
denominator d for the local probabilities σ(sabᵢ/sa).
Let p₁, …, p_k be the numerators such that pᵢ/d =
σ(sabᵢ/sa). The strategy Det(σ) proceeds in the following
fashion: at sa it interrogates its die component with d; if the
answer is below p₁ it plays b₁, if it is below p₁ + p₂ it plays b₂, and
so on. In other words,
we "slice up" the interval from 0 to d − 1 according to the
numerators p₁, …, p_k.
This clearly defines a deterministic strategy and it's easy
to check that supplying the polydie as input achieves a correct
simulation of the original strategy so that polydie ; Det(σ) = σ.
It's obvious that, since σ has a finite trace
set, Det(σ) must have too. □
Conversely, a small calculation with the composition formula
shows that any strategy that can be factorized as
polydie ; σ where σ is deterministic and compact must belong
to the basis.
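The "slicing" step can be pictured by the following OCaml sketch (the names select, numerators and roll are ours): given the numerators over a common denominator d and the die's answer in 0..d−1, it selects the branch whose slice contains the roll.

(* Pick the branch whose slice of [0, d-1] contains the roll; None
   means the roll falls outside all slices, matching a subprobability
   whose weights do not sum to 1. *)
let select numerators roll =
  let rec go i acc = function
    | [] -> None
    | p :: rest ->
        if roll < acc + p then Some i else go (i + 1) (acc + p) rest
  in
  go 0 0 numerators

(* E.g. with local probabilities 2/3 and 1/6 (d = 6, numerators
   [4; 1]): rolls 0..3 give Some 0, roll 4 gives Some 1, roll 5
   gives None. *)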
General factorization To extend this idea to the general
case, consider first a strategy σ all of whose probabilities
are rational (but not necessarily having a finite trace set). If
sa ∈ dom(σ), write the local probabilities as fractions nᵢ/dᵢ.
If σ(sab₁/sa) = n₁/d₁, we roll a
d₁-sided die. If the result is less than n₁, we play b₁; otherwise,
we renormalize the remaining probabilities by dividing
through by 1 − n₁/d₁ and then repeat this process for b₂, b₃, ….
For example, if the local probabilities of b₁ and b₂ are 2/3
and 1/6 respectively, the following play produces b₂:
a 3-sided die gives a result of at least 2, and then a 2-sided die gives 0.
We roll a 2-sided die the second time since the renormalized
probability of b₂ is (1/6)/(1 − 2/3) = 1/2.
Finally, if a play in σ has irrational probability p, we can
always find a sequence of rationals whose series of partial
sums converges to p. So, for each point sabᵢ in the range,
we either have a single rational or a sequence of rationals.
Using a standard trick (such as the one used to enumerate
N × N) we can reduce this to the previous case. The essential
point of this is that a play with irrational probability
is not factorized in "one go". In fact, it will have an infinite
number of witnesses: the desired probability gradually
builds up as we progress further and further down its
sequence of rationals.
3 A fully abstract model of PA
3.1 Probabilistic Algol
Our starting point is the language IA as defined in [3]. All we
need to do is add in some probabilistic primitive. But should
we add a coin or should we rather choose the polydie? Well,
the polydie might be more handy to program with; indeed,
the programming language CAML has a "probabilistic function"
Random.int which exactly implements the polydie.
But, for the contextual equivalence, it makes no difference
whatsoever.
In fact, the polydie can even be programmed inside Algol
using only a coin. Informally it goes this way: given an n,
choose a k such that 2^k ≥ n, then for a first round roll the
2-sided die k times. If the answer, which is a binary string
of length k, codes for some integer in {0, …, n − 1}, return
it; if not, go for another round, and so on, until you succeed.
This idea can be programmed in a purely functional style.
We give an example using CAML:
let rec roll k =
  if k = 0
  then 0
  else if coin ()
  then (2 * roll (k-1)) + 1
  else (2 * roll (k-1));;

let rec polydie n =
  let res = roll (findk n) in
  if res < n then res
  else polydie n;;
where we assume 'findk n' returns ⌈log₂ n⌉ and 'coin ()'
returns true or false with 50-50 odds.
This coin can, in fact, also be written in CAML using
the Random.int primitive mentioned above. The thunking
is just because CAML is call-by-value.

let coin () =
  if Random.int 2 = 0
  then true
  else false;;
So, for the sake of simplicity, we just add a single term
coin of type Bool. We call this extended language PA for
probabilistic Algol.
To complete the definition of the language, we need to
give its operational interpretation. We do this in the usual
"big step" style except that derivations must be decorated
with probabilities.
We have the obvious rules for the coin:
coin ⇓_{1/2} tt     coin ⇓_{1/2} ff
The other rules follow naturally; we just give a few examples.
(For the sake of readability, we're eliding the use of
stores; adding them poses no problem.)
If M ⇓_p tt and N ⇓_q V then if M N L ⇓_{pq} V;
if M ⇓_p ff and L ⇓_q V then if M N L ⇓_{pq} V.
Because of coin, there might be (countably) many evaluations
M ⇓_{tᵢ} V; letting t₁, t₂, … be the sequence of their respective
probabilities, it is easy to see that Σᵢ tᵢ ≤ 1.
Finally, let M and N be closed terms of the same type.
They are contextually equivalent iff, for all contexts C[−]
of ground type, C[M] and C[N] converge with the same
total probability. We denote this by M ≃ N.
As an example, let Mₙ be the nth "Church numeral"
'λh. hⁿ(tt)' and N be 'λx. if coin ff x', of types (Bool →
Bool) → Bool and Bool → Bool respectively.
We observe that
Mₙ N ⇓_{2⁻ⁿ} tt,
so that the context (−)N allows us to distinguish
between all the Mₙ s.
This demonstrates the discriminating power, even in the
purely functional world, of probabilistic contexts, for it is
easily seen that Mₙ and Mₙ₊₂ are contextually equivalent
(for n ≥ 2) in PCF.
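The distinguishing context can be tried out directly; in the OCaml sketch below (names are ours), m_n n yields true exactly when all n coin flips come up false, i.e. with probability 2⁻ⁿ:

let coin () = Random.int 2 = 0

(* N = fun x -> if coin then ff else x *)
let n_fun x = if coin () then false else x

(* church n h x applies h to x exactly n times *)
let rec church n h x = if n = 0 then x else h (church (n - 1) h x)

(* M_n applied to N: true with probability 2^-n *)
let m_n n = church n n_fun true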
3.2 The model
We now begin the process of building a model of PA by introducing
the class of single-threaded strategies.
Recall that the current thread of sa ∈ L_A^{odd} is written
as ⌈sa⌉. If we have some tab ∈ L_A^{even} such that ⌈ta⌉ =
⌈sa⌉ and the justifier of b occurs in ⌈ta⌉, we denote by
Match(tab, sa) the play sab where b is justified in the same
way as in tab.
We say that a strategy σ is single-threaded iff
• for all sab ∈ T_σ, the justifier of b occurs in ⌈sa⌉;
• the local probabilities depend only on the current thread:
if ⌈ta⌉ = ⌈sa⌉ then σ(Match(tab, sa)/sa) = σ(tab/ta).
Note that this condition is expressed in terms of the local
probabilities of σ. It is now a lengthy but routine matter to
verify the following result.
Proposition 3.1 A strategy σ for A ⇒ B is single-threaded
if, and only if, it's a comonoid homomorphism, i.e.
σ ; Δ_B = Δ_A ; (σ × σ) and σ ; !_B = !_A.
For general reasons [11], this shows that the single-threaded
strategies form a Cartesian closed category C. Furthermore,
the CPO-enriched structure all restricts down to
C, i.e. we have a CPO-enriched CCC.
Visibility & bracketing The remaining constraints required
to cut the model down to match well with Algol both
rely on the following definition of the view, written ⌜s⌝, of a
non-empty legal play s:
⌜s m⌝ := ⌜s⌝ m, if m is a P-move;
⌜s m⌝ := m, if m is an initial occurrence;
⌜s m t n⌝ := ⌜s⌝ m n, if n is an O-move and m justifies n.
A strategy σ satisfies the visibility condition iff, for all sab ∈
T_σ, the justifier of b occurs in ⌜sa⌝; and σ satisfies the bracketing
condition iff, for all sab ∈ T_σ where b is an Answer, b
is justified by the pending Question of ⌜sa⌝, i.e. the most recently
asked but, as yet, unanswered Question in ⌜sa⌝. Both
these constraints are preserved by composition.
Our interpretation of PA is in the sub-CCC of C, denoted
by C_vb, of (single-threaded) strategies subject to both these
constraints. The interpretation of IA, i.e. the deterministic
fragment of the language, follows that of Abramsky & McCusker
[3]; our probabilistic primitive coin is interpreted
by the strategy coin.
We have the following soundness result for our model of
PA in C_vb as a corollary to the usual consistency and adequacy
results.
Theorem 3.2 If M and N are closed terms of PA of type T
such that [[M]] = [[N]], then M ≃ N.
3.3 Full abstraction
With the soundness result in place, the path to full abstraction
is relatively standard. First of all, we prove a definability
result for basis strategies in the model.
Theorem 3.3 If T is some type of PA and σ is a basis strategy
on [[T]] then σ is definable by a term of PA.
Proof By the factorization theorem, σ = polydie ; Det(σ),
where Det(σ) is a compact deterministic strategy. By the
definability theorem of Abramsky & McCusker [3], this
strategy is definable in IA. But, since polydie is definable
in PA, we're done. □
We define an equivalence relation on each homset of C_vb
as follows. Given f, g : A → B, we set f ≈ g iff for all "test"
strategies α on (A ⇒ B) ⇒ C we have ⌜f⌝ ; α = ⌜g⌝ ; α,
where ⌜f⌝ is the name of f, defined by currying.
The so-called intrinsic quotient category E_vb obtained by
quotienting C_vb with this relation is also a CCC and, moreover,
the above soundness result survives the quotient. The
converse to soundness also holds there.
Theorem 3.4 (full abstraction) If M and N are closed
terms of type T then [[M]] = [[N]] in E_vb if, and only if, M ≃ N.
Proof We only need to prove the right-to-left direction.
Suppose that [[M]] ≠ [[N]] in E_vb, i.e. some test α
separates them. Since homsets in C_vb are continuous CPOs,
we can WLOG assume α to be a basis element. Hence, by
definability, α is the denotation of a context, and
it follows that this context distinguishes M and N. □
--R
Concurrent games and full completeness.
A fully abstract game semantics for general references.
Linearity, sharing and state: a fully abstract game semantics for Idealized Algol with active expressions.
Full abstraction for Idealized Algol with passive expressions.
games.
Bisimulation for labeled Markov processes.
A logical characterization of bisimulation for labeled Markov processes.
Domains for computation in mathematics
PCF extended with real numbers.
A fully abstract game semantics for finite nondeterminism.
On full abstraction for PCF: I
A probabilistic power- domain of evaluations
Full abstraction for functional languages with control.
Generalised flowcharts and games.
A new approach to control flow analysis.
Games and full abstraction for FPC.
Hereditarily sequential functionals.
Probabilistic LCF.
CPOs of measures for nondeter- minism
--TR
Rational probability measures
A probabilistic powerdomain of evaluations
PCF extended with real numbers
Full abstraction for idealized Algol with passive expressions
On full abstraction for PCF
Hereditarily Sequential Functionals
Generalised Flowcharts and Games
A New Approach to Control Flow Analysis
Call-by-Value Games
Reasoning about Idealized ALGOL Using Regular Languages
Games and Full Abstraction for FPC
Bisimulation for Labelled Markov Processes
Full abstraction for functional languages with control
Semantics of Exact Real Arithmetic
A Logical Characterization of Bisimulation for Labeled Markov Processes
A Fully Abstract Game Semantics for General References
A Fully Abstract Game Semantics for Finite Nondeterminism
Non-Deterministic Games and Program Analysis
--CTR
Pierre-Louis Curien, Definability and Full Abstraction, Electronic Notes in Theoretical Computer Science (ENTCS), 172, p.301-310, April, 2007 | probabilistic Idealized Algol;games semantics |
507388 | Back and forth between guarded and modal logics. | Guarded fixed-point logic μGF extends the guarded fragment by means of least and greatest fixed points, and thus plays the same role within the domain of guarded logics as the modal μ-calculus plays within the modal domain. We provide a semantic characterization of μGF within an appropriate fragment of second-order logic, in terms of invariance under guarded bisimulation. The corresponding characterization of the modal μ-calculus, due to Janin and Walukiewicz, is lifted from the modal to the guarded domain by means of model theoretic translations. Guarded second-order logic, the fragment of second-order logic which is introduced in the context of our characterization theorem, captures a natural and robust level of expressiveness with several equivalent characterizations. For a wide range of issues in guarded logics it may take up a role similar to that of monadic second-order logic in relation to modal logics. At the more general methodological level, the translations between the guarded and modal domains make the intuitive analogy between guarded and modal logics available as a tool in the further analysis of the model theory of guarded logics. | Introduction
Guarded logics generalise certain features and desirable model theoretic properties of modal logics to a much wider context. The concept of guarded quantification was introduced by Andréka, van Benthem, and Németi [1], who proposed and analysed the guarded fragment of first-order logic, GF. This fragment provides a very satisfactory basis for explaining much of the good behaviour of modal logics at the level of a rich fragment of classical first-order logic. Moreover, the robust decidability properties of modal logics with respect to natural extension mechanisms are also reflected in GF. Most notably, not only is GF decidable itself [1], but so is its canonical fixed point extension μGF [6]. μGF extends GF so as to render definable least (and greatest) fixed points of guardedly definable positive operations in arbitrary arities. In particular it extends the modal μ-calculus to the guarded domain. Unlike its modal companion, however, it no longer shares the finite model property, though it remains decidable. Another interesting feature, which again highlights the role of μGF as a high-level analogue of the μ-calculus, concerns its potential for model checking applications. The alternation-free fragment of μGF admits linear time model checking algorithms [4]. These and other results of recent research into guarded logics indicate that GF and its relatives provide a very interesting domain of logics, combining rich levels of expressiveness with a very good balance towards algorithmic issues.
Recall how modal logics, in the broad model theoretic
sense including extensions like CTL and the modal μ-calculus, are characterised by their invariance under bisimulation: they cannot distinguish between bisimilar structures.
tures. Moreover, invariance under bisimulation is at the root
of many of the successful model theoretic tools available for
modal logics, like e.g. the tree model property which paves
the way towards the use of automata theory. The eminent
role that bisimulation plays in the domain of modal log-
ics, is in the guarded domain taken by a similar but much
more wide-ranging and finer notion of equivalence induced
by guarded bisimulation.
The characteristic feature of modal quantification, which
is also at the heart of bisimulation, is that one can only
directly access nodes along basic edge relations. In the
guarded scenario, this is generalised to simultaneous direct
accessibility of all tuples that are covered (guarded)
by some ground atom. These are what are called guarded
tuples, and guarded quantification is quantification over
guarded tuples. Unlike the modal case, the notion of
guardedness does not impose any arity restrictions, so that
guarded logics are just as meaningful over structures with
higher-arity relations and can address properties of tuples.
One of the underlying technical themes of this paper,
also concerning a methodological point of wider interest,
is the potential of having certain reductions from the richer
scenario of guarded logics to the simpler and well understood
scenario of modal logics, or from guarded bisimulation
to ordinary bisimulation. Corresponding ideas have
been initially explored in [5] in the context of the satisfiability
problem for guarded fixed point logic. This method-
ology is carried much further here and applied to a characterisation
issue, concerning the semantic characterisation
of guarded fixed point logic within a suitable second-order
framework. Characterisation theorems of this type have a
strong tradition in the field. They are of particular interest
since they tighten the close connection between logics in a
certain family and characteristic equivalences. At the level
of basic modal logic and the guarded fragment of first-order
logic, the corresponding characterisation theorems are the
following, see [2] and [1].
Theorem 1.1 (van Benthem). A property of transition systems
is definable in propositional modal logic if and only if
it is first-order definable and invariant under bisimulation.
Theorem 1.2 (Andréka, van Benthem, Németi). The
guarded fragment GF can define precisely the model
classes that are first-order definable and invariant under
guarded bisimulation.
For the modal scenario, a highly non-trivial analogous characterisation was given for the associated
fixed point logic in [7].
Theorem 1.3 (Janin, Walukiewicz). A property of transition
systems is definable in the modal μ-calculus if and only
if it is definable in monadic second-order logic and invariant
under bisimulation.
Johan van Benthem has raised the question whether
μGF admits a similar characterisation, in terms of guarded
bisimulation invariance, within some suitable framework of
second-order logic. It should be noted that MSO is clearly
not the right framework, since monadic second-order quantification
does not suffice to simulate the fixed point definitions
in -GF. Full second-order logic, on the other hand,
is obviously too strong; there are simple examples showing
that even for bisimulation invariant properties, full second-order
logic goes far beyond the expressive power of μGF.
The resulting fragment is, for instance, no longer decid-
able. It turns out, however, that there is a natural analogue
for MSO, which we call guarded second-order logic
GSO. GSO is best characterised in semantic terms, as full
second-order logic with a semantics that restricts second-order
quantifiers to range over sets of guarded tuples, rather
than over arbitrary relations. The precise definition will be
given in section 3, where we also discuss several syntactic
variants which turn out to have exactly the expressive power
of GSO. And indeed we find the following.
Main Theorem. Guarded fixed point logic μGF can define precisely the model classes that are definable in guarded second-order logic and are invariant under guarded bisimulation.
2. Preliminaries
Transition systems and bisimulation. Transition systems
(or Kripke structures) are structures whose universe is
a set of states (worlds), labelled by unary predicates (atomic
propositions), and carrying binary transition relations labelled
by actions (accessibility relations). We typically write K = (V, (E_a)_{a∈A}, (P_b)_{b∈B}) for a transition system with state set V, based on a set B of atomic propositions and a set A of actions.
Definition 2.1. A bisimulation between two transition systems K = (V, (E_a), (P_b)) and K′ = (V′, (E′_a), (P′_b)) is a non-empty relation Z ⊆ V × V′ respecting the P_b in the sense that v ∈ P_b iff v′ ∈ P′_b for all b ∈ B and (v, v′) ∈ Z, and satisfying the following back and forth conditions.
Forth: for all (v, v′) ∈ Z, a ∈ A and every w such that (v, w) ∈ E_a, there exists a w′ such that (v′, w′) ∈ E′_a and (w, w′) ∈ Z.
Back: for all (v, v′) ∈ Z, a ∈ A and every w′ such that (v′, w′) ∈ E′_a, there exists a w such that (v, w) ∈ E_a and (w, w′) ∈ Z.
Two transition systems with distinguished nodes are bisimilar, K, u ∼ K′, u′, if there is a bisimulation Z between them with (u, u′) ∈ Z. We say that two trees are bisimilar if they are bisimilar at their roots, and then write just T ∼ T′.
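For finite transition systems, Definition 2.1 translates directly into an algorithm: the union of all bisimulations is itself a bisimulation, and it can be computed as a greatest fixed point by repeatedly deleting violating pairs. The following Python sketch illustrates this; the representation of a transition system as a triple (states, labelling map, edge map) and all names are our own illustrative assumptions.

from itertools import product

def largest_bisimulation(K1, K2):
    # K = (states, labels, edges): labels[v] is the set of atomic
    # propositions at v; edges[(v, a)] is the set of a-successors of v.
    V1, lab1, E1 = K1
    V2, lab2, E2 = K2
    actions = {a for (_, a) in E1} | {a for (_, a) in E2}
    Z = {(v, w) for v, w in product(V1, V2) if lab1[v] == lab2[w]}
    changed = True
    while changed:
        changed = False
        for (v, w) in list(Z):
            ok = True
            for a in actions:
                # Forth: every a-successor of v needs a Z-matching a-successor of w.
                if any(all((x, y) not in Z for y in E2.get((w, a), ()))
                       for x in E1.get((v, a), ())):
                    ok = False
                # Back: symmetrically from w back to v.
                if any(all((x, y) not in Z for x in E1.get((v, a), ()))
                       for y in E2.get((w, a), ())):
                    ok = False
            if not ok:
                Z.discard((v, w))
                changed = True
    return Z

def bisimilar(K1, u, K2, w):
    return (u, w) in largest_bisimulation(K1, K2)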
Definition 2.2. The unravelling T(K, u) of a transition system K from node u is the tree of all paths through K that start at u. More formally, given K = (V, (E_a), (P_b)), the unravelling is T(K, u) = (V^T, (E^T_a), (P^T_b)), where V^T is the set of all sequences v_0 a_1 v_1 . . . a_n v_n with v_0 = u and (v_i, v_{i+1}) ∈ E_{a_{i+1}} for all i; P^T_b contains the sequences v_0 a_1 . . . v_n with v_n ∈ P_b; and E^T_a contains the pairs (v, vav) in V^T × V^T.
It is easy to see that each pair (K, u) is bisimilar to its unravelling: K, u ∼ T(K, u), u. Hence, as far as bisimulation invariant properties are concerned, we can restrict our attention to trees.
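A bounded-depth cut of the (generally infinite) unravelling is easy to compute explicitly. The sketch below uses the same illustrative representation as above and encodes tree nodes as the paths themselves.

def unravel(K, u, depth):
    # Tree nodes are paths (v0, a1, v1, ..., an, vn) with v0 = u.
    V, lab, E = K
    nodes, tree_lab, tree_edges = {(u,)}, {(u,): lab[u]}, {}
    frontier = [(u,)]
    for _ in range(depth):
        next_frontier = []
        for path in frontier:
            v = path[-1]
            for (src, a), succs in E.items():
                if src != v:
                    continue
                for w in succs:
                    child = path + (a, w)
                    nodes.add(child)
                    tree_lab[child] = lab[w]
                    tree_edges.setdefault((path, a), set()).add(child)
                    next_frontier.append(child)
        frontier = next_frontier
    return nodes, tree_lab, tree_edges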
Bisimilar trees admit special minimal bisimulations, which may be constructed inductively, level by level starting from the pair of roots, in such a way that at each new level we add a pair (w, w′) to Z only where this is required by a back or forth requirement from the previous level. It is clear that the resulting bisimulation is minimal in the sense that no proper subset would still be a bisimulation. Note that minimal bisimulations are in general not unique. They have, however, the following useful properties.
Lemma 2.3. Let Z be a minimal bisimulation between T and T′. If (v, v′) ∈ Z and if u and u′ are the parent nodes of v and v′, respectively, then also (u, u′) ∈ Z and (u, v) ∈ E_a iff (u′, v′) ∈ E′_a. Therefore, a minimal bisimulation satisfies the back and forth requirements also with respect to the converses of the E_a. The converse forth property w.r.t. E_a, for instance, is the following. For every (v, v′) ∈ Z and every u such that (u, v) ∈ E_a, there exists a u′ such that (u′, v′) ∈ E′_a and (u, u′) ∈ Z.
Proof. It is straightforward to show that otherwise Z \ {(v, v′)} would still be a bisimulation.
It follows that w.r.t. a minimal bisimulation Z between trees T and T′ and for (v, v′) ∈ Z, any path from v in T can be lifted to a bisimilar path from v′ in T′. In particular, consider for any w in T the unique minimal connecting path v = z_0, z_1, . . . , z_r = w (from v up to the first common ancestor of v and w, and from there down to w). Then there is a path z′_0 = v′, z′_1, . . . , z′_r in T′ labelled exactly the same as the original path, and such that (z_i, z′_i) ∈ Z for all i.
2.1. Modal logics
We recall the definitions of propositional modal logic and the μ-calculus. The formulae of these logics are evaluated on Kripke structures at a particular state. Given a formula φ and a transition system K with state v, we write K, v ⊨ φ to denote that the formula holds in K at state v.
Propositional modal logic. We describe propositional (multi-)modal logic ML for several transition relations, i.e., for reasoning about transition systems K = (V, (E_a)_{a∈A}, (P_b)_{b∈B}) where A may have more than one element. In the literature on modal logic, this system is sometimes called K_n (where n = |A| is the number of actions or 'modalities').
Syntax of ML. The formulae of ML are defined by the following rules.
. Each atomic proposition P_b is a formula.
. If φ and ψ are formulae of ML, then so are (φ ∨ ψ), (φ ∧ ψ) and ¬φ.
. If φ is a formula of ML and a ∈ A is an action, then ⟨a⟩φ and [a]φ are formulae of ML.
If there is only one transition relation, i.e. |A| = 1, one simply writes □φ and ◇φ for [a]φ and ⟨a⟩φ, respectively.
Semantics of ML. Let φ be a formula of ML, K = (V, (E_a), (P_b)) a transition system and v a state. In the case of atomic propositions, K, v ⊨ P_b iff v ∈ P_b. Boolean connectives are treated in the natural way. Finally for the semantics of the modal operators we put
K, v ⊨ ⟨a⟩φ iff K, w ⊨ φ for some w with (v, w) ∈ E_a;
K, v ⊨ [a]φ iff K, w ⊨ φ for all w with (v, w) ∈ E_a.
The μ-calculus L_μ. The propositional μ-calculus L_μ is propositional modal logic augmented with least and greatest fixed points. It subsumes almost all of the commonly used modal logics, in particular LTL, CTL, CTL*, PDL and also many logics used in other areas of computer science, for instance description logics.
Syntax of L_μ. The μ-calculus extends propositional modal logic ML (including propositional variables X, Y, . . . , also viewed as monadic second-order variables) by the following rule for building fixed point formulae.
. If φ is a formula in L_μ, and X is a propositional variable that does not occur negatively in φ, then μX.φ and νX.φ are L_μ formulae.
Semantics of L_μ. The semantics of the μ-calculus is given as follows. A formula φ(X) with propositional variable X defines on every transition system K (with state set V, and with interpretations for other free second-order variables that φ may have besides X) an operator φ^K : P(V) → P(V) on the powerset P(V) of V assigning to every set X ⊆ V the set
φ^K(X) := {v ∈ V : K, v ⊨ φ(X)}.
As X occurs only positively in φ, the operator φ^K is monotone for every K, and therefore has a least and a greatest fixed point. Now we put K, v ⊨ μX.φ iff v is an element of the least fixed point of the operator φ^K. Similarly K, v ⊨ νX.φ iff v is an element of the greatest fixed point of φ^K.
Remark. By the well known duality between least and greatest fixed points, νX.φ(X) is equivalent to ¬μX.¬φ(¬X). Hence we could eliminate greatest fixed points. However, it will be more convenient to keep least and greatest fixed points and to work with formulae in negation normal form, where negations are applied only to atomic propositions.
There is a variant of L_μ which admits systems of simultaneous fixed points. These do not increase the expressive power but sometimes allow for more straightforward formalisations. Here one associates with any tuple X = (X_1, . . . , X_k) of propositional variables and any tuple φ = (φ_1, . . . , φ_k) of formulae, all positive in the X_i, a new formula ψ := μX.φ. The semantics of ψ is induced by the least fixed point of the monotone operator φ^K mapping X to X′ with X′_i := φ_i^K(X). More precisely, K, v ⊨ ψ iff v is an element of the first component of the least fixed point of the above operator. Similar conventions apply w.r.t. simultaneous greatest fixed points. It is well known that simultaneous fixed points can be uniformly eliminated in favour of individual, nested fixed points.
It is easy to see that all formulae of ML and L_μ are invariant under bisimulation.
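On finite transition systems the fixed point semantics is effective: since the operator is monotone and the powerset lattice is finite, iterating from the empty set (respectively from the full state set) reaches the least (greatest) fixed point in at most |V| steps. A minimal sketch, with our own names and representation:

def lfp(f):
    # least fixed point of a monotone operator f on finite sets
    X = set()
    while True:
        Y = f(X)
        if Y == X:
            return X
        X = Y

def gfp(f, universe):
    # greatest fixed point: iterate downwards from the full universe
    X = set(universe)
    while True:
        Y = f(X)
        if Y == X:
            return X
        X = Y

# Example: mu X.(p or <a>X) -- the states from which p is reachable.
def reachable_p(V, E_a, P_p):
    return lfp(lambda X: P_p | {v for v in V
                                if any(w in X for w in E_a.get(v, ()))})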
3. Guarded first and second-order logics
Definition 3.1. Let B be a structure with universe B and vocabulary τ.
(i) A set X ⊆ B is guarded in B if there exists a ground atom α = R(b_1, . . . , b_k) that holds in B such that X ⊆ {b_1, . . . , b_k}. A tuple (b_1, . . . , b_n) ∈ B^n is guarded in B if {b_1, . . . , b_n} is a guarded set.
(ii) (b_1, . . . , b_k) is a guarded list in B if its components are pairwise distinct and {b_1, . . . , b_k} is a guarded set. We admit the empty list λ as a guarded list.
(iii) A relation X ⊆ B^n is guarded if it only consists of guarded tuples.
Note that a singleton set {b} is always guarded, e.g. by the atom b = b. The cardinality of guarded sets in B is bounded by the maximal arity of the relations in τ, the width of τ. Guarded tuples, however, can have any length. Guarded lists will be of technical interest later as succinct tuple representations of guarded subsets.
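For a small finite structure, the guarded sets and guarded lists can be enumerated directly from its ground atoms. A sketch, under the illustrative assumption that a structure is given as a list of (relation name, tuple) atoms:

from itertools import combinations, permutations

def guarded_sets(atoms):
    gs = {frozenset()}
    for _, t in atoms:
        elems = set(t)
        for r in range(1, len(elems) + 1):
            for sub in combinations(elems, r):
                gs.add(frozenset(sub))   # subsets of guarded sets are guarded
    return gs

def guarded_lists(atoms):
    # repetition-free enumerations of guarded sets, cf. Definition 3.1 (ii)
    return {p for s in guarded_sets(atoms)
              for p in permutations(s)}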
The guarded fragment GF. The guarded fragment extends
modal logic to a richer fragment of first-order logic. Its
characteristic feature is a relativised pattern of quantifica-
tion, which generalises modal quantification.
Syntax of GF. The formulae of GF (in vocabulary τ) are defined by the following rules.
(i) All atomic formulae α(x_1, . . . , x_k) are formulae of GF.
(ii) If φ and ψ are formulae of GF, then so are (φ ∨ ψ), (φ ∧ ψ) and ¬φ.
(iii) If φ(x, y), with free variables among those listed, is a formula of GF and α(x, y) is a τ-atom in which all displayed variables actually occur, then ∃y (α(x, y) ∧ φ(x, y)) and ∀y (α(x, y) → φ(x, y)) are formulae of GF.
The atoms α relativising GF quantifications are called guards. We shall often also use the more intuitive notation (∃y . α)φ and (∀y . α)φ as shorthand for correspondingly relativised first-order quantification.
Semantics of GF. The semantics of GF is the usual one for first-order formulae.
As one simple example of a formula in GF, which will be useful later, consider the formula guarded(x) in variables x = (x_1, . . . , x_k), which defines in all τ-structures the set of all guarded k-tuples, cf. Definition 3.1. For any complete equality type on {1, . . . , k} specified by a quantifier-free formula ε(x) in the language of just =, let x_ε be the subtuple of x comprising precisely one variable from each =-class specified by ε. Let α(y, x_ε) be a τ-atom in which all variables in x_ε actually occur, the y new, i.e. disjoint from x. Put φ_{ε,α}(x) := ε(x) ∧ ∃y α(y, x_ε). For the degenerate case of an equality type ε_0 specifying the equality type of a singleton tuple, put φ_{ε_0}(x) := ε_0(x), as singletons are always guarded. It is easily checked that the disjunction over all these formulae φ_{ε,α}(x) is as desired.
Note that first-order quantification over an individual single free variable is always admissible in GF, since singletons are guarded (by an =-atom): ∃x φ is equivalent to ∃x (x = x ∧ φ).
Guarded fixed point logic μGF. Guarded fixed point logic μGF as introduced in [6] is the natural extension of GF by means of least and greatest fixed points (or corresponding systems of simultaneous fixed points).
Syntax of μGF. Starting from GF, with second-order variables X, Y, Z, . . . that are treated like predicates in τ but may not be used in guards, we augment the syntax of GF by the following rule for least and greatest fixed points.
. If φ(X, x) is a formula of μGF which is positive in X, with X k-ary and x = (x_1, . . . , x_k) a k-tuple such that all free (first-order) variables of φ are among these x_i, then μX.φ and νX.φ are also formulae of μGF.
Semantics of μGF. The semantics of μX.φ is the natural one associated with the least fixed point of the monotone operator φ^B : X ↦ {b ∈ B^k : B ⊨ φ(X, b)}. More precisely, B ⊨ (μX.φ)(b) iff b is an element of the least fixed point of the operator φ^B. Similarly for νX.φ and the greatest fixed point of φ^B.
One may also admit simultaneous least (and greatest) fixed points w.r.t. tuples of formulae φ = (φ_1, . . . , φ_k) in which the X_i have only positive occurrences, and in which the variable tuples x_i match the arities of the X_i. Then we obtain new formulae μX.φ and νX.φ whose semantics is the natural one associated with the first component X_1 of the least or greatest fixed point of the corresponding monotone operator. As with the μ-calculus, one finds that simultaneous fixed points can be eliminated in μGF, too.
Guarded second-order logic. We introduce the natural
second-order extension of the guarded fragment, or the
guarded fragment of second-order logic, which relative to
GF and μGF occupies a role analogous to that of MSO relative to ML and L_μ. The naturalness of this logic is further
demonstrated below, where we show that the corresponding
level of expressiveness is surprisingly robust under a
number of changes in the actual formalisation and syntax.
Indeed, we shall show that three natural candidates for a
second-order guarded logic all have the same expressive
power. It should be stressed that for all considerations,
guardedness (of sets, tuples, or relations) always refers to guardedness w.r.t. the underlying vocabulary τ; at no point will second-order variables be admitted as guards.
In our preferred definition of guarded second-order logic
we simply use the syntax of ordinary second-order logic,
but restrict it semantically by the stipulation that all second-order
quantifiers range just over guarded relations, rather
than over arbitrary relations. We refer to this semantic restriction
as guarded semantics for the second-order quanti-
fiers. It is clear, however, that this stipulation may alternatively
be captured purely syntactically, by only allowing occurrences
of atoms Xx in conjunction with the GF-formula
guarded(x), which says that x is a guarded tuple, thus effectively
restricting X to its guarded part.
Definition 3.2. Guarded second-order logic GSO is
second-order logic with guarded semantics for the second-order
quantifiers.
Note that GSO includes full first-order logic. Hence
GSO is undecidable and, unlike GF and μGF, not invariant
under guarded bisimulation (cf. Definition 4.2). Also
note that, as singletons are always guarded, the monadic
version of guarded second-order logic coincides with full
MSO. Consequently, since MSO is strictly more expressive
than FO, the same is true for GSO. Furthermore, we shall
see in Lemma 3.5 below that GSO collapses to MSO over
words. The robustness of GSO, and its place properly in between
MSO and full second-order SO, is underlined by the
following.
Lemma 3.3. The following fragments of second-order
logic are equally expressive (with respect to sentences):
(1) The extension of GF by full second-order quantification
(2) The extension of GF by second-order quantification
with guarded semantics.
(3) Guarded second-order logic GSO.
Proof. It suffices to argue for translations from (1) and (3) into (2).
For (1) → (2) consider a second-order variable X in a formula according to (1), which is meant to range over arbitrary rather than guarded relations. Any atom Xx occurring in a GF sentence necessarily is in the scope of a guarded quantification (Qy . α(y, z))φ where the occurrence of x in Xx is free in φ, whence the variables of x all occur in the guard α. It follows that the truth value of Xx for non-guarded tuples has no impact on the truth value of the sentence.
For (3) → (2) it suffices to show that unrestricted first-order quantification can be simulated by guarded (in fact: monadic) second-order quantification over GF. To this end, each element variable x is replaced by a set variable X, and we use the following rules for translating formulae:
∃x φ ↦ ∃X (singleton(X) ∧ φ*),
R x_1 . . . x_k ↦ (∃x_1 . . . x_k . R x_1 . . . x_k) ⋀_i X_i x_i,
where singleton(X) is a formula stating that X contains exactly one element:
singleton(X) := ∃x (Xx ∧ ∀y (Xy → y = x)).
Note that these translations and in particular singleton(X) are in GF, since first-order quantification over a single free first-order variable is always guarded.
The following two lemmas show that GSO lies strictly
between MSO and SO.
Lemma 3.4. GSO is strictly more expressive than MSO.
Proof. We show that the existence of a Hamiltonian cycle in an undirected graph (V, E) can be expressed in GSO. It is known, however, that Hamiltonicity is not expressible in MSO, see e.g. [3]. Here is a GSO-sentence expressing Hamiltonicity:
∃H [ ∀x ∃y Hxy ∧ ∀x∀y∀z ((Hxy ∧ Hxz) → y = z) ∧ ∀x∀y∀z ((Hyx ∧ Hzx) → y = z) ∧ ∀x∀y (Hxy → Exy) ∧ ∀X ((∃x Xx ∧ ∀x∀y ((Xx ∧ Hxy) → Xy)) → ∀x Xx) ].
It says that H is a set of edges forming the graph of a permutation of the vertex set with a single orbit, i.e. a Hamiltonian cycle; note that the second-order quantifier ∃H indeed ranges over guarded (edge-contained) binary relations only.
Lemma 3.5. SO is strictly more expressive than GSO. In
particular GSO collapses to MSO over words.
Proof. We can show that every guarded set over words can
be encoded into a series of monadic predicates. The edge or
successor relation is the predicate of maximal arity occurring
in structures encoding words. Hence any guarded set
contains at most two subsequent nodes.
Due to the directedness of the edges we can simply
choose the element a to represent the guarded set {a, b},
where b is the successor of a.
The encoding uses one monadic predicate for each
second-order predicate and each possibility of forming a tuple
of appropriate arity out of two variables. Hence GSO
is not more expressive than MSO over words, i.e. it can define exactly the regular languages. On the other hand full
second-order logic is known to capture the polynomial-time
hierarchy.
To summarise, we have the following hierarchy of logics: MSO ⊊ GSO ⊊ SO.
3.1. A normal form for guarded logics.
We present a normal form for GF and GSO that will be useful in the following. Let X = {x_1, x_2, . . .} and Y = {y_1, y_2, . . .} be two disjoint sets of variables. Let Z stand for either X or Y. GF_X and GF_Y are defined inductively as follows.
(1) Every relational atomic formula α(z_1, . . . , z_k) with variables from Z belongs to GF_Z.
(2) A boolean combination of formulae in GF_Z also belongs to GF_Z.
(3) Let σ be any partial bijection between {1, . . . , n} and {1, . . . , m}. Then, for every guard (1) α(y_1, . . . , y_n) and every ψ(y_1, . . . , y_n) in GF_Y, the formula
(∃y . ⋀_{σ(i)=j} y_i = x_j ∧ ⋀_{i<j} y_i ≠ y_j ∧ α(y)) ψ(y)
is in GF_X. By interchanging x- and y-variables we obtain an analogous rule for GF_Y.
((1) We require that all y_i occur in α; order and multiplicity, however, are arbitrary.)
It should be noted that the formulae in GF_X and GF_Y are syntactically not in GF. It is clear, however, that these are logically equivalent to guarded formulae. Let GF_0 := GF_X ∪ GF_Y. These syntactic stipulations extend from GF to GSO in the obvious way. We let GSO_0 be the extension of GF_0 by second-order quantification over guarded relations.
In the sequel, relativised quantifications of the type (∃y . ⋀_{σ(i)=j} y_i = x_j ∧ ⋀_{i<j} y_i ≠ y_j ∧ α(y)) as used in GF_0 will be abbreviated as (∃≠ y . σ(x, y) ∧ α(y)).
Proposition 3.6. Every sentence in GF is equivalent to a
sentence in GF 0 .
Corollary 3.7. Every sentence in GSO is equivalent to a
sentence in GSO 0 .
The proof is not difficult but a bit technical. We just explain the idea. First, it is well known that first-order sentences can be reformulated so that in all subformulae distinct variables are always to be interpreted by distinct elements. One way to make this precise is to use the quantifier ∃≠ rather than ∃, where ∃≠x φ(x, z_1, . . . , z_n) means that there exists an x that is distinct from z_1, . . . , z_n such that φ(x, z_1, . . . , z_n) holds. Obviously, this does not change the expressive power of first-order sentences, since ∃x φ(x, z_1, . . . , z_n) is equivalent to ⋁_{i≤n} φ(z_i, z_1, . . . , z_n) ∨ ∃≠x φ(x, z_1, . . . , z_n). We then combine this with the idea that any quantification over a part of the free variables in a formula should come with a complete renaming of all variables, so that we move, say, from x-variables to y-variables. Consider, for instance, a formula of the form ∃x_5 ψ(x_1, x_2, x_3, x_5). We then replace this by the equivalent formula (∃≠ y_1 · · · y_4 . y_1 = x_1 ∧ y_2 = x_2 ∧ y_3 = x_3) ψ(y_1, y_2, y_3, y_4). (Here the notation ∃≠ y means that the quantified y-variables must assume distinct values, but the value of a y-variable can be the same as that of an x-variable.)
To see why this might be useful (beyond the applications in this paper, actually) let us consider another simple example. In the evaluation game for a sentence of the form ∃≠x_1 · · · x_4 (ψ_1 ∧ ψ_2 ∧ ψ_3), the verifier first has to pick distinct elements a_1, . . . , a_4 and is then challenged by the falsifier to justify one of the three conjuncts. Suppose that she has to justify a subformula of the form ∃x_1∃x_5 φ(x_1, x_5, x_2). This means that she has to keep a_2 and extend it by some new a_1 and a_5 so that she can win the evaluation game for φ(a_1, a_5, a_2). Similarly, if she is challenged to verify the third conjunct, say ∃x_3 φ(x_3, x_4, x_3), she has to produce a new a_3 so that she wins the game for φ(a_3, a_4, a_3). This becomes more transparent if we reformulate the sentence in GF_0. Indeed, that formulation makes it apparent that the verifier moves from a tuple to a new tuple (y_1, y_2, y_3) or (y_1, y_2), subject to explicitly given equality constraints.
4. Guarded bisimulation and tree representation
Guarded bisimulation is for GF what bisimulation is for
ML.
Definition 4.1. A guarded bisimulation between two τ-structures A and B is a non-empty set I of finite partial isomorphisms f : X → Y from A to B, where X ⊆ A and Y ⊆ B are guarded sets, such that the following back and forth conditions are satisfied for every f : X → Y in I.
Forth: for every guarded set X′ ⊆ A there exists a partial isomorphism g : X′ → Y′ in I such that f and g agree on X ∩ X′.
Back: for every guarded set Y′ ⊆ B there exists a partial isomorphism g : X′ → Y′ in I such that f⁻¹ and g⁻¹ agree on Y ∩ Y′.
Two τ-structures A and B are guarded bisimilar (in symbols: A ∼_g B) if there exists a guarded bisimulation between them. We refer to the relation ∼_g, which obviously is an equivalence relation between structures, as guarded bisimulation equivalence.
Definition 4.2. We say that a sentence φ is invariant under guarded bisimulation if it does not distinguish between guarded bisimilar structures, i.e. if A ∼_g B and A ⊨ φ then also B ⊨ φ. A logic L is invariant under guarded bisimulation if all its sentences are.
Proposition 4.3. GF and μGF are invariant under
guarded bisimulation.
The guarded Ehrenfeucht-Fraïssé game. It is well understood how guarded bisimulation equivalence may be described by means of an associated Ehrenfeucht-Fraïssé game. We indicate the characteristic features of the guarded game, in a version directly relating to our GF_0 normal form.
Two players, player I and player II, take turns to place and
relocate two groups of corresponding, labelled pebbles on
elements of the underlying structures A and B, respectively.
The rules of the game are such that after each round the
groups of pebbles positioned in A and B, respectively, label
guarded lists a and b in such a way that the correspondence
a # b is a partial isomorphism. In each round, player I
has the choice in which of the two structures to play. In
the chosen structure, player I then may leave some pebbles
fixed, remove some from the board, and re-locate some oth-
ers. The only restriction is that the new pebble positions
again have to label a guarded list. Player II then has to re-position
the corresponding pebbles in the other structure so
as to produce a locally isomorphic configuration. Player II
loses if no such response is available. Now A and B are
guarded bisimilar iff player II can always respond indefi-
nitely, i.e. iff there is a winning strategy for player II in the
infinite guarded game. It is apparent that a guarded bisimulation
in the sense of Definition 4.1 is a formalisation of a
(non-deterministic) strategy for player II.
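For small finite structures, this characterisation again yields a naive decision procedure: prune candidate partial isomorphisms between guarded lists by a greatest-fixed-point iteration until the back and forth conditions of Definition 4.1 hold. The Python sketch below follows the definition literally and makes no attempt at efficiency; the representation of a structure as a dict from relation names to sets of tuples, and all names, are our own illustrative assumptions.

from itertools import permutations

def g_lists(S):
    out = {()}
    for ts in S.values():
        for t in ts:
            elems = set(t)
            for r in range(1, len(elems) + 1):
                out |= set(permutations(elems, r))
    return out

def partial_iso(A, B, a, b):
    # does a[i] |-> b[i] preserve and reflect all atoms among the listed elements?
    m, inv = dict(zip(a, b)), dict(zip(b, a))
    for R in set(A) | set(B):
        At, Bt = A.get(R, set()), B.get(R, set())
        if any(set(t) <= set(a) and tuple(m[x] for x in t) not in Bt for t in At):
            return False
        if any(set(t) <= set(b) and tuple(inv[y] for y in t) not in At for t in Bt):
            return False
    return True

def guarded_bisimilar(A, B):
    GA, GB = g_lists(A), g_lists(B)
    I = {(a, b) for a in GA for b in GB
         if len(a) == len(b) and partial_iso(A, B, a, b)}
    while True:
        def forth(a, b):
            f = dict(zip(a, b))
            return all(any(all(dict(zip(a2, b2))[x] == f[x]
                               for x in set(a) & set(a2))
                           for (aa, b2) in I if aa == a2)
                       for a2 in GA)
        def back(a, b):
            g = dict(zip(b, a))
            return all(any(all(dict(zip(b2, a2))[y] == g[y]
                               for y in set(b) & set(b2))
                           for (a2, bb) in I if bb == b2)
                       for b2 in GB)
        J = {(a, b) for (a, b) in I if forth(a, b) and back(a, b)}
        if J == I:
            return bool(I)   # a non-empty fixed point is a guarded bisimulation
        I = J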
We now use the intuition behind the game to introduce
the following structural transformations. One which abstracts
from a given τ-structure B a tree representation T(B), which fully describes as a transition system the behaviour of B in the guarded game, and thus characterises B up to guarded bisimulation equivalence. Another one which, conversely, associates with a tree T a τ-structure D(T), such that the game behaviour specified by T is realised in the guarded game on D(T).
The guiding idea behind these transformations is that guarded bisimulations at the level of the τ-structures lift to ordinary bisimulations at the level of the abstracted transition systems. In particular, B* := D(T(B)) ∼_g B will be a tree-like variant of B, intuitively corresponding to a guarded unravelling of B, as considered e.g. in [1] and [5].
From structures to trees. Recall from Definition 3.1 that a guarded list in B is a tuple (b_1, . . . , b_k) of distinct elements such that {b_1, . . . , b_k} is a guarded set. We regard such guarded lists as descriptions of positions over B in the restricted guarded game. The nodes of the induced tree are associated with guarded lists over B, and we symmetrise the description of the game so as to allow re-labellings (permutations) of the guarded list in conjunction with every move in the game. The information to be recorded in each node g = (b_1, . . . , b_k) is precisely the isomorphism type of the induced substructure on {b_1, . . . , b_k} in B. The information to be recorded concerning possible moves describes the constraints imposed on a move from g = (b_1, . . . , b_k) to g′ = (b′_1, . . . , b′_l): the choice of elements that remain fixed (according to player I's choice).
For a τ-structure B we work with the following vocabulary τ* for the associated tree. If m is the width (maximal arity of predicates) of τ, let S be the set of all τ-structures A with universe {1, . . . , k} for some k ≤ m, admitted as node labels. Let F be the set of all pairs (k, σ), where k ≤ m and σ is a partial bijection from {1, . . . , k} to {1, . . . , m}. Then τ* has monadic predicates P_A for all A ∈ S, and binary predicates E^k_σ for all (k, σ) ∈ F.
Let G be the set of all guarded lists over B. Define V as the set of all sequences v = g_1 g_2 . . . g_n of guarded lists g_i ∈ G, starting from the empty list g_1 = λ. A node v = g_1 . . . g_n of T(B) is naturally associated with the guarded list g_n in B, in which the sequence terminates. In particular we set |v| := |g_n| and let A_v be the unique A ∈ S that is isomorphic with the substructure induced on g_n in B. Then the tree associated with B is
T(B) := (V, (E^k_σ)_{(k,σ)∈F}, (P_A)_{A∈S}),
where P_A consists of the nodes v with A_v = A, and E^k_σ consists of the pairs (v, vg) for which |g| = k and, for all j ∈ dom(σ), the j-th element of g is the same as the σ(j)-th element of the guarded list at v.
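A bounded-depth cut of T(B) can be generated mechanically. In the sketch below (our own encoding) we record, for each step from one guarded list to the next, the maximal persistence map σ, i.e. which positions of the new list coincide with positions of the old one; the full construction has one edge per admissible (k, σ).

from itertools import permutations

def tree_rep(atoms, depth):
    # guarded lists of the structure given by its ground atoms
    glists = {()}
    for _, t in atoms:
        elems = set(t)
        for r in range(1, len(elems) + 1):
            glists |= set(permutations(elems, r))

    root = ((),)                     # the root carries the empty guarded list
    nodes, edges = {root}, {}
    frontier = [root]
    for _ in range(depth):
        nxt = []
        for v in frontier:
            last = v[-1]
            for g in glists:
                # sigma: positions of g that persist from the previous list
                sigma = tuple(sorted((j, last.index(x))
                                     for j, x in enumerate(g) if x in last))
                child = v + (g,)
                nodes.add(child)
                edges.setdefault((v, (len(g), sigma)), set()).add(child)
                nxt.append(child)
        frontier = nxt
    return nodes, edges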
The following is a direct consequence of the fact that the
transition system T (B) is precisely set up to capture the
behaviour of B w.r.t. the guarded game.
Lemma 4.4. For all τ-structures B and B′: B ∼_g B′ iff T(B) ∼ T(B′).
From trees to structures. Conversely, we would like to associate with a tree T of type τ* a τ-structure D for which T(D) ∼ T. This is clearly not possible for arbitrary T.
Definition 4.5. Let T be any τ*-tree. We call T consistent if the following are satisfied.
(a) For each node v there is a unique A ∈ S such that T ⊨ P_A(v); it is denoted A_v.
(b) If (u, v) ∈ E^k_σ then the partial bijection σ is an isomorphism between A_v ↾ dom(σ) and A_u ↾ im(σ), and for each node u, each (k, σ) ∈ F and each A ∈ S such that σ is an isomorphism from A ↾ dom(σ) to A_u ↾ im(σ), there exists a node w such that A_w = A and (u, w) ∈ E^k_σ.
Lemma 4.6. Within the class of τ*-trees, the class of consistent trees is bisimulation invariant and first-order definable.
Every tree T(B) is consistent. Conversely, if T = (V, (E^k_σ), (P_A)) is a consistent τ*-tree, then we may define an associated τ-structure D(T) as follows. Let U := {(v, i) : v ∈ V, 1 ≤ i ≤ |A_v|} and let ≈ be the reflexive and symmetric transitive closure of the following relation on U:
(v, i) ∼ (u, σ(i)) whenever (u, v) ∈ E^k_σ and i ∈ dom(σ).
The universe of D(T) is D := U/≈, the set of ≈-equivalence classes [v, i] in U. We say that an equivalence class d lives at node u if d = [u, i] for some i. We observe that if an equivalence class d lives at nodes u and v in T then it must also live at every node of the unique shortest path connecting u to v in T. It follows from consistency condition (b) that we may consistently interpret every k-ary predicate R ∈ τ over D(T) by putting
R^{D(T)} := {([v, i_1], . . . , [v, i_k]) : v ∈ V, A_v ⊨ R(i_1, . . . , i_k)}.
Note that d = (d_1, . . . , d_k) is a guarded list in D(T) if and only if there is a node w of size |w| = k with d_j = [w, j] for all j. We say that such a w represents the guarded list d.
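The quotient construction underlying D(T) is essentially a union-find computation over the pairs (v, i). A sketch for finite trees, with our own encoding of the input:

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def universe_of_D(nodes, size, edges):
    # size[v] = |A_v|; edges: set of (u, v, sigma) with sigma a dict
    # identifying element i at the child v with element sigma[i] at u
    uf = UnionFind()
    for (u, v, sigma) in edges:
        for i, j in sigma.items():
            uf.union((v, i), (u, j))
    classes = {}
    for v in nodes:
        for i in range(1, size[v] + 1):
            classes.setdefault(uf.find((v, i)), set()).add((v, i))
    return classes     # each class is one element [v, i] of D(T)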
Lemma 4.7. For all τ-structures B: D(T(B)) ∼_g B.
Proof. Let T := T(B) and D := D(T), and let K be the transition system on the guarded lists of B from which T arises. Recall that T is the unravelling of K, whence any node v of T may be associated with a unique node v_0 in K (the last node in the unravelled path that gives rise to v). Further recall that v_0, being a node in K, is a guarded list in B.
For any guarded list d in D and any node v of T representing this guarded list, let f_{d,v} : d ↦ v_0 be the bijection which maps d to the guarded list v_0 in B. Note that as a map f_{d,v} only depends on the guarded set {d_1, . . . , d_k} and not on the order of components or on the chosen representative v. If v and v′ represent the same guarded set, they are linked in T by a path along which all components always live together. The corresponding path in K similarly links v_0 to v′_0 via matching permutations along that path.
We claim that the set of all these f_{d,v} is a guarded bisimulation between D and B. It is obvious from the construction that the f_{d,v} are partial isomorphisms whose domains are guarded sets. The forth property is immediate from the construction. We indicate the argument for the back property. Let d and d′ be guarded lists in D, with f_{d,v} : d ↦ v_0. Let c be the tuple of common components in d and d′. Let v′ be any representative of the guarded list d′. The tuple c lives along the unique shortest path from v to v′ in T. Projecting this path down to K we see that f_{d′,v′} and f_{d,v} agree on their common domain.
The structure B* := D(T(B)) may be regarded as a guarded unravelling of B, analogous to the standard unravelling of transition systems. Indeed, the resulting guarded bisimilar structure B* is also tree-like in that it has a tree decomposition of width m − 1, where m is the width of τ. The naked tree T(B), stripped of its labels and regarded as a directed graph, together with the natural association of a node v of T(B) with the guarded set it represents in B*, induces a tree decomposition of B* in the usual sense. Tree unravellings, and the corresponding generalised tree model property for GF and some of its relatives, have been put to use e.g. in [1] and [5].
The following proposition extends the intuition that the bisimulation type of T(B) captures the guarded bisimulation type of B to the setting of all consistent trees. The proof is via a canonical lift of tree bisimulations.
Proposition 4.8. For any consistent τ*-trees T and T′: if T ∼ T′ then D(T) ∼_g D(T′).
Proof. Let Z ⊆ V × V′ be a minimal bisimulation between T and T′, cf. Lemma 2.3.
For each pair (v, v′) ∈ Z let f_{vv′} be the function that maps the guarded list represented by v to that represented by v′. Clearly, f_{vv′} is a partial isomorphism, since A_v = A_{v′}. For (u, u′) ∈ Z and (v, v′) ∈ Z with (u, v) ∈ E^k_σ and (u′, v′) ∈ E^k_σ, the maps f_{uu′} and f_{vv′} agree on their common domain. This follows from the construction of D(T) and D(T′), because elements represented at u and v are identified in D(T) via σ in exactly the same way as the corresponding elements at u′ and v′ are identified in D(T′).
We claim that the set of all f_{vv′}, for (v, v′) ∈ Z, is a guarded bisimulation between D(T) and D(T′).
To verify, for instance, the forth condition, let c and d be guarded lists in D(T), represented by u and v in T, respectively. Let (u, u′) ∈ Z and f_{uu′} : c ↦ c′. We need to find v′ such that (v, v′) ∈ Z and such that f_{vv′} agrees with f_{uu′} on their common domain. Let X be the set of common elements in c and d. Consider the unique shortest path from u to v in T. As Z is minimal, this path gives rise to a bisimilar path from u′ to some v′ in T′. Now f_{vv′} is as desired: the elements of X live at all nodes on the path from u to v, whence all the intermediate mappings f_{ww′} along the path, and in particular f_{vv′}, respect f_{uu′} ↾ X.
5. Back and forth
Recall the characterisation of the modal μ-calculus from
Theorem 1.3. We want to apply this characterisation in restriction
to trees, and therefore refer to the following vari-
ant, which is proved en route to Theorem 1.3 in [7].
Theorem 5.1 (Janin, Walukiewicz). A class of trees is definable
in the modal μ-calculus if and only if it is definable
in monadic second-order logic and closed under bisimulation
within the class of all trees.
Towards a reduction of our main theorem to Theorem 5.1, we define a "forth" translation that maps every sentence φ ∈ GSO[τ] to a formula φ*(x) ∈ MSO[τ*] with a single free variable, and a "back" translation that maps every formula ψ ∈ L_μ[τ*] to a sentence ψ° ∈ μGF[τ]. These translations will be such that
(1) If T is a consistent tree with root λ, then T ⊨ φ*(λ) iff D(T) ⊨ φ.
(2) If B is a τ-structure and λ is the root of T(B), then B ⊨ ψ° iff T(B) ⊨ ψ(λ).
It follows from (1) and Proposition 4.8 that GSO sentences
that are invariant under guarded bisimulation are
mapped to MSO formulae that are bisimulation invariant on
consistent trees.
Before giving the formal definitions, we informally discuss the main problems arising from the differences between the guarded and the modal viewpoint.
We wish to translate GSO sentences to MSO formulae and L_μ formulae back to μGF sentences. We want a modal formula to hold at some node v if and only if the corresponding guarded formula holds of the guarded list represented by v. For second-order variables we need to map sets of guarded tuples to sets of nodes and vice versa. For each node v of a consistent tree T, the associated structure A_v has in general many different guarded tuples that can occur in any single guarded set. Therefore a second-order variable Z in a GSO sentence will be translated into a sequence Z* of set variables Z_{i_1,...,i_r}, one for each (local) choice of elements. In the other direction we have to deal with monadic second-order variables, which in general range over sets of nodes of arbitrary size. Consequently a monadic second-order variable X is translated into a sequence of second-order variables X_i, i ≤ m, such that each X_i ranges over guarded tuples of length i.
5.1. From guarded to monadic second-order logic.
Without loss of generality we restrict attention to GSO sentences in GSO_0, cf. Corollary 3.7. Let m be the width of τ, i.e. the maximal arity of relations in τ.
Definition 5.2. Let T be a consistent τ*-tree, and let D(T) be the associated τ-structure. If Z is an r-ary second-order variable, then Z* := (Z_{i_1,...,i_r})_{1≤i_1,...,i_r≤m} is the corresponding sequence of monadic predicates. A tuple J* of monadic predicates J_{i_1,...,i_r} on T encodes an r-ary guarded relation J on D(T) iff J = {([v, i_1], . . . , [v, i_r]) : v ∈ J_{i_1,...,i_r}}.
Not all sequences J* of monadic predicates over T do indeed encode a guarded relation over D(T). To do so, they have to satisfy the following correctness conditions.
(a) J_{i_1,...,i_r} only contains nodes v where all i_j are in A_v, i.e. i_j ≤ |v|.
(b) J* is consistent on tuples living at different nodes, i.e. if in D(T) a tuple (d_1, . . . , d_r) is represented by (i_1, . . . , i_r) at node u and by (j_1, . . . , j_r) at node v, then u ∈ J_{i_1,...,i_r} iff v ∈ J_{j_1,...,j_r}.
Lemma 5.3. For each r ≤ m there exists a first-order formula correct(J*) that expresses the correctness conditions (a) and (b). These conditions are necessary and sufficient in the sense that a tuple J* encodes a guarded relation on D(T) if and only if T ⊨ correct(J*).
Proof. Note that it suffices to express condition (b) for adjacent nodes to enforce it globally. Thus the consistency requirement for Z* can be expressed by a first-order formula which states condition (a) in the obvious way and contains, for each pair (k, σ) ∈ F and each tuple (i_1, . . . , i_r) of indices in dom(σ), a clause asserting that for every edge (x, y) ∈ E^k_σ: y ∈ J_{i_1,...,i_r} iff x ∈ J_{σ(i_1),...,σ(i_r)}.
The proof of the adequacy claim is straightforward.
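In the converse direction, a guarded relation J on D(T) determines its monadic encoding J* by recording each tuple at every node where all of its components live together. A sketch, under our own illustrative encoding of the data:

from collections import defaultdict

def encode_relation(J, rep):
    # J: set of r-tuples of D(T)-elements; rep[d]: set of pairs (v, i)
    # with d = [v, i].  Returns the monadic predicates J_{i1,...,ir}.
    enc = defaultdict(set)
    for tup in J:
        nodes = set.intersection(*({v for (v, _) in rep[d]} for d in tup))
        for v in nodes:
            idx = tuple(next(i for (w, i) in rep[d] if w == v) for d in tup)
            enc[idx].add(v)
    return dict(enc)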
First-order quantifiers also require a special treatment.
Let us first consider the case where T is a tree representation T(B). By construction such trees satisfy a strong homogeneity condition, namely for all nodes u, u′ and all successors v of u there is a successor v′ of u′ such that the subtrees with roots v and v′ are isomorphic. To put it differently, an indistinguishable copy of any guarded list anywhere in D(T) is available locally, at some child of the current node. Therefore guarded first-order quantifications over D(T) can be simulated over T by moving to an immediate successor of the current node (i.e. by a modal quantifier ◇ or □). However, if T is an arbitrary consistent τ*-tree, this is no longer the case. To verify a formula of the form (∃≠ y . σ(x, y) ∧ α(y)) ψ(y) we want to move from the current tuple x to a new tuple y, guarded by α, such that ψ(y) is true and the overlap conditions for x and y as stated by σ(x, y) are satisfied. In arbitrary consistent trees, such a witness need not exist locally, but may only occur at a remote node, which is linked to the current node by a path along which the common components according to σ are kept.
To capture this situation in MSO we use a sequence W of monadic predicates W^k_σ, one for each (k, σ) ∈ F, that partition the set of nodes according to their size and their overlap with the current node. The proof of the following is then straightforward.
Lemma 5.4. There is a first-order formula F-part(z, W) expressing the following correctness conditions on partitions with respect to a node z. For every consistent tree T, every node u and every sequence W = (W^k_σ)_{(k,σ)∈F} of monadic predicates on T, we have T ⊨ F-part(u, W) iff each W^k_σ consists of exactly those nodes w with |A_w| = k whose guarded list overlaps the guarded list at u as prescribed by σ, i.e. [w, i] = [u, σ(i)] for precisely the i ∈ dom(σ).
Proof. As in the previous lemma, it suffices to express the required condition at the current node and for every edge. The formula F-part(z, W) states that
(a) the sets W^k_σ form a partition of the universe;
(b) the node z itself belongs to W^k_id, where k = |A_z| and id is the identity on {1, . . . , k};
(c), (d) along every edge of T, membership in the W^k_σ propagates consistently: the class of a node determines, via composition of its overlap map σ with the edge label, the class of its neighbour.
By induction on the distance from z one easily shows that F-part(z, W) expresses the right property.
The translation. Recall that formulae in GSO_0 either belong to GSO_X and have all their free first-order variables in X, or they belong to GSO_Y and have all their free first-order variables in Y, where X and Y are two disjoint sets of variables. Further, formulae in GSO_X that start with a quantifier are of the form φ(x) := (∃≠ y . σ(x, y) ∧ α(y)) ψ(y) with α(y), ψ(y) ∈ GSO_Y.
We inductively translate every GSO_X[τ] formula φ(x, Z_1, . . . , Z_r) into an MSO[τ*] formula φ*(x, Z*_1, . . . , Z*_r), with a single free first-order variable x and sequences of monadic second-order variables Z*_i that correspond to the second-order variables Z_i. Similarly, formulae in GSO_Y are translated into formulae with free first-order variable y. We just present the translation for formulae in GSO_X:
(1) For a relational τ-atom α(x_{i_1}, . . . , x_{i_k}), the translation asserts that the corresponding atom holds in the label structure: φ* := ⋁ {P_A x : A ∈ S, A ⊨ α(i_1, . . . , i_k)}.
(2) For an atom Z x_{i_1} . . . x_{i_r} with r-ary relation variable Z, φ* := Z_{i_1,...,i_r} x.
(3) Boolean combinations are translated componentwise.
(4) For φ(x) = (∃≠ y . σ(x, y) ∧ α(y)) ψ(y):
φ* := (∃W . F-part(x, W))(∃y . W^{|y|}_σ y ∧ (α ∧ ψ)*(y)).
(5) For φ = ∃Z ψ with r-ary relation variable Z, let
φ* := ∃Z* (correct(Z*) ∧ ψ*).
Theorem 5.5. Let T be a consistent τ*-tree with root λ, let D(T) be the associated τ-structure, and let φ be a sentence in GSO[τ]. Then D(T) ⊨ φ iff T ⊨ φ*(λ).
Proof. This theorem is a consequence of the following more general statement. Consider any formula φ(x, Z_1, . . . , Z_s) ∈ GSO_X with free first-order and second-order variables as displayed, and its translation φ*(x, Z*_1, . . . , Z*_s) into MSO[τ*]. Let d be a guarded list in D(T), and let v be a node of T such that d_i = [v, i] for all i. Let J_1, . . . , J_s be sets of guarded tuples in D(T), and let J*_1, . . . , J*_s be their representations according to Definition 5.2. Then D(T) ⊨ φ(d, J) iff T ⊨ φ*(v, J*).
Note that for sentences (i.e. for the empty tuple at the root) the claim implies the theorem. The claim itself is established inductively. The cases corresponding to (1) - (3) are immediate.
Consider case (4), φ* = (∃W . F-part(x, W))(∃y . W^{|y|}_σ y ∧ (α ∧ ψ)*(y)). As second-order variables play no role for this case, we suppress them. Suppose that D(T) ⊨ φ(d). Then there exists a guarded list e such that D(T) ⊨ α(e) ∧ ψ(e) and e overlaps d as prescribed by σ. As e is guarded, there exists a node w such that all e_i live together at w. Actually, due to the last of the conditions for consistent trees, we can assume that e_i = [w, i] and |A_w| = |e|. We know that there exists a (unique) W satisfying F-part(v, W), and that for all (i, j) with σ(i) = j we have [w, i] = [v, j]. It follows that T ⊨ W^{|e|}_σ(w) and, by induction hypothesis, T ⊨ (α ∧ ψ)*(w). Therefore T ⊨ φ*(v).
Conversely, suppose that T ⊨ φ*(v). Hence, for the (unique) tuple W satisfying F-part(v, W), there exists a node w ∈ W^k_σ such that T ⊨ (α ∧ ψ)*(w). For e_i := [w, i] we find that e overlaps d as prescribed by σ and, by induction hypothesis, D(T) ⊨ (α ∧ ψ)(e). Therefore D(T) ⊨ φ(d).
For (5), the claim is immediate from the induction hypothesis and from Lemma 5.3.
Corollary 5.6. If φ ∈ GSO[τ] is a sentence that is invariant under guarded bisimulation, then φ*(x) is bisimulation invariant on consistent τ*-trees.
Proof. Let T, T′ be two bisimilar, consistent τ*-trees. Then D(T) ∼_g D(T′) by Proposition 4.8. It follows that T ⊨ φ*(λ) iff D(T) ⊨ φ iff D(T′) ⊨ φ iff T′ ⊨ φ*(λ′).
5.2. From the μ-calculus to guarded fixed point logic.
The translation back from the modal into the guarded world again requires some preliminary discussion. Every formula ψ of the μ-calculus, evaluated on a tree T = T(B), defines the set ψ^T of all nodes v such that T, v ⊨ ψ. Recall how each node v of T represents a guarded list g_v in B. So the idea is to translate ψ into a guarded formula ψ°(x) defining in B a set of guarded lists, which should be equal to {g_v : v ∈ ψ^T}. The main problem is that guarded lists do not have a fixed length, so that ψ must actually be translated into a tuple of formulae ψ°_k(x_1, . . . , x_k), one for each k ≤ m.
Definition 5.7. Let X be a monadic second-order variable. Then X° := (X_0, . . . , X_m), where each X_i is an i-ary second-order variable. Let N be a set of nodes in a tree T(B). The representation of N in B is N° := (N_0, . . . , N_m) with N_i := {g_v : v ∈ N, |v| = i}. Note that, since the root λ is the only node of size 0, the 0-ary component N_0 is just a truth value: it holds iff λ ∈ N.
In the sequel we assume w.l.o.g. that L_μ formulae are written without ν-operators, that in any formula μX.φ the fixed point variable X occurs in φ only inside the scope of modal operators, and even that in each ψ ∈ L_μ the fixed point formulae μX.φ(X) themselves only occur inside the scope of modal operators, cf. [8].
The translation. For every formula ψ ∈ L_μ[τ*] we now define a tuple of formulae ψ°_k(x_1, . . . , x_k), one for each k ≤ m, in which each monadic second-order variable X of ψ is represented by a tuple X° of second-order variables (of arities 0, 1, . . . , m).
(1) (P_A)°_k := false if |A| ≠ k, and (P_A)°_k(x) := β_A(x) otherwise, where β_A(x) is the conjunction over all atomic and negated atomic τ-formulae α such that A ⊨ α(1, . . . , k), with each i replaced by x_i.
(2) X°_k(x) := X_k x for a monadic second-order variable X.
(3) Boolean connectives are translated componentwise.
(4) (⟨E^l_σ⟩ψ)°_k(x) := (∃≠ y . σ(x, y) ∧ guarded(y)) ψ°_l(y), i.e. a guarded quantification over guarded lists y of length l that overlap x as prescribed by σ; similarly for [E^l_σ].
(5) (μX.ψ)° is the following simultaneous least fixed point, which can be simulated by nested individual fixed points in the standard way:
μ(X_0, . . . , X_m).(ψ°_0, . . . , ψ°_m).
Theorem 5.8. Let B be a τ-structure, let λ be the root of T(B), and let ψ be a sentence in L_μ[τ*]. Then B ⊨ ψ° iff T(B), λ ⊨ ψ.
Proof. Again we prove a more general statement involving free variables. W.l.o.g., consider the case of just one monadic second-order variable Y. Let ψ(Y) be a formula in L_μ[τ*], N a set of nodes in T := T(B), and N° its representation in B.
Claim. (ψ°_0(N°), . . . , ψ°_m(N°)) is the representation of ψ(N)^T in B.
The claim is proved inductively. The cases corresponding to (1) - (3) are trivial.
Consider (4) and let d = g_v = (b_1, . . . , b_k). Suppose that T, v ⊨ ⟨E^l_σ⟩ψ. Then there is a node w such that (v, w) ∈ E^l_σ and T, w ⊨ ψ. Then g_w overlaps g_v as prescribed by σ and, by induction hypothesis, B ⊨ ψ°_l(g_w). Hence B ⊨ (⟨E^l_σ⟩ψ)°_k(b_1, . . . , b_k).
Conversely, suppose that B ⊨ (⟨E^l_σ⟩ψ)°_k(b_1, . . . , b_k). This means that there exists a guarded list e, overlapping (b_1, . . . , b_k) as prescribed by σ, with B ⊨ ψ°_l(e). By the construction of T(B), there exists a node w such that (v, w) ∈ E^l_σ and g_w = e. The induction hypothesis implies that T, w ⊨ ψ. It follows that T, v ⊨ ⟨E^l_σ⟩ψ.
For (5) finally let ψ = μX.φ. Consider the stages of the fixed point induction on T: X^0 := ∅, X^{β+1} := φ(X^β)^T, with unions at limit ordinals. Let Y^β, similarly, be the stages of the simultaneous fixed point induction on B (with componentwise unions in limits).
By induction hypothesis, if M° is the representation of M in B, then φ°(M°) is the representation of φ(M)^T. By induction on β it follows that Y^β is the representation of the set defined by X^β in T. Hence the same is true for the least fixed points.
We are now in a position to prove our main theorem.
Theorem 5.9. Every sentence in GSO that is invariant under guarded bisimulation is equivalent to a sentence in μGF.
Proof. Let φ ∈ GSO[τ] be invariant under guarded bisimulation and let φ*(z) be its translation into MSO[τ*]. By Corollary 5.6, φ*(z) is bisimulation-invariant on consistent trees. Recall that the consistency condition for trees can be formulated by a monadic second-order sentence χ, which is bisimulation invariant with respect to all trees. As a consequence, the formula (χ ∧ φ*)(x) is bisimulation invariant on arbitrary trees. By the Janin-Walukiewicz Theorem, Theorem 5.1 above, there exists an equivalent formula ψ in the μ-calculus. Let ψ° be its translation into μGF[τ]. Putting everything together, we have
B ⊨ φ iff D(T(B)) ⊨ φ iff T(B) ⊨ φ*(λ) iff T(B), λ ⊨ ψ iff B ⊨ ψ°.
The first equivalence uses guarded bisimulation invariance of φ and Lemma 4.7; the second one is an application of Theorem 5.5; the third equivalence reflects the input from the Janin-Walukiewicz Theorem; the fourth is an application of Theorem 5.8.
--R
Modal languages and bounded fragments of predicate logic.
Modal Logic and Classical Logic.
Finite Model Theory.
lite.
The decidability of guarded fixed point logic.
Guarded fixed point logic.
On the expressive completeness of the propositional mu-calculus with respect to monadic second order logic
Games for the mu-calculus
| modal logic;bisimulation;model theory;guarded logic
507449 | On the Quality of Service of Failure Detectors. | Editor's Note: This paper unfortunately contains some errors which led to the paper being reprinted in the May 2002 issue. Please see IEEE Transactions on Computers, vol. 51, no. 5, May 2002, pp. 561-580 for the correct paper (available without subscription). We study the quality of service (QoS) of failure detectors. By QoS, we mean a specification that quantifies 1) how fast the failure detector detects actual failures and 2) how well it avoids false detections. We first propose a set of QoS metrics to specify failure detectors for systems with probabilistic behaviors, i.e., for systems where message delays and message losses follow some probability distributions. We then give a new failure detector algorithm and analyze its QoS in terms of the proposed metrics. We show that, among a large class of failure detectors, the new algorithm is optimal with respect to some of these QoS metrics. Given a set of failure detector QoS requirements, we show how to compute the parameters of our algorithm so that it satisfies these requirements and we show how this can be done even if the probabilistic behavior of the system is not known. We then present some simulation results that show that the new failure detector algorithm provides a better QoS than an algorithm that is commonly used in practice. Finally, we suggest some ways to make our failure detector adaptive to changes in the probabilistic behavior of the network. | Introduction
Fault-tolerant distributed systems are designed to provide reliable and continuous service despite the
failures of some of their components. A basic building block of such systems is the failure detector.
Failure detectors are used in a wide variety of settings, such as network communication protocols [10],
computer cluster management [23], group membership protocols [5, 9, 7, 27, 22, 21], etc.
Roughly speaking, a failure detector provides some information on which processes have crashed.
This information, typically given in the form of a list of suspects, is not always up-to-date or correct:
a failure detector may take a long time to start suspecting a process that has crashed, and it may erroneously
suspect a process that has not crashed (in practice this can be due to message losses and
delays).
Chandra and Toueg [12] provide the first formal specification of unreliable failure detectors and
show that they can be used to solve some fundamental problems in distributed computing, namely,
consensus and atomic broadcast. This approach was later used and generalized in other works, e.g.,
[20, 16, 17, 1, 3, 2].
In all of the above works, failure detectors are specified in terms of their eventual behavior (e.g.,
a process that crashes is eventually suspected). Such specifications are appropriate for asynchronous
systems, in which there is no timing assumption whatsoever. 1 Many applications, however, have some
timing constraints, and for such applications, failure detectors with eventual guarantees are not suf-
ficient. For example, a failure detector that starts suspecting a process one hour after it crashed can
be used to solve asynchronous consensus, but it is useless to an application that needs to solve many
instances of consensus per minute. Applications that have timing constraints require failure detectors
that provide a quality of service (QoS) with some quantitative timeliness guarantees.
In this paper, we study the QoS of failure detectors in systems where message delays and message
losses follow some probability distributions. We first propose a set of metrics that can be used to specify
the QoS of a failure detector; these QoS metrics quantify (a) how fast it detects actual failures, and (b)
how well it avoids false detections. We then give a new failure detector algorithm and analyze its QoS in
terms of the proposed metrics. We show that, among a large class of failure detectors, the new algorithm
is optimal with respect to some of these QoS metrics. Given a set of failure detector QoS requirements,
we show how to compute the parameters of our algorithm so that it satisfies these requirements, and
we show how this can be done even if the probabilistic behavior of the system is not known. Finally,
we give simulation results showing that the new failure detector algorithm provides a better QoS than
an algorithm that is commonly used in practice. The QoS specification and the analysis of our failure
detector algorithm is based on the theory of stochastic processes. To the best of our knowledge, this
work is the first comprehensive and systematic study of the QoS of failure detectors using probability
theory.
1.1 On the QoS Specification of Failure Detectors
We consider message-passing distributed systems in which processes may fail by crashing, and messages
may be delayed or dropped by communication links. 2 A failure detector can be slow, i.e., it may
take a long time to suspect a process that has crashed, and it can make mistakes, i.e., it may erroneously
suspect some processes that are actually up (such a mistake is not necessarily permanent: the failure
1 Even though the fail-aware failure detector of [17] is implemented in the "timed asynchronous" model, its specification is for the asynchronous model.
2 We assume that process crashes are permanent, or, equivalently, that a process that recovers from a crash assumes a new identity.
Figure 1: Detection time T_D (timeline of process p, up and then crashing, and of the failure detector output at q, which alternates between trust and suspect).
detector may later stop suspecting this process). To be useful, a failure detector has to be reasonably
fast and accurate.
In this paper, we propose a set of metrics for the QoS specification of failure detectors. In general,
these QoS metrics should be able to describe the failure detector's speed (how fast it detects crashes)
and its accuracy (how well it avoids mistakes). Note that speed is with respect to processes that crash,
while accuracy is with respect to processes that do not crash.
A failure detector's speed is easy to measure: this is simply the time that elapses from the moment
when a process p crashes to the time when the failure detector starts suspecting p permanently. This
QoS metric, called detection time, is illustrated in Fig. 1.
How do we measure a failure detector's accuracy? It turns out that determining a good set of accuracy
metrics is a delicate task. To illustrate some of the subtleties involved, consider a system of two
processes p and q connected by a lossy communication link, and suppose that the failure detector at q
monitors process p. The output of the failure detector at q is either "I suspect that p has crashed" or "I
trust that p is up", and it may alternate between these two outputs from time to time. For the purpose
of measuring the accuracy of the failure detector at q, suppose that p does not crash.
Consider an application that queries q's failure detector at random times. For such an application, a
natural measure of accuracy is the probability that, when queried at a random time, the failure detector
at q indicates correctly that p is up. This QoS metric is the query accuracy probability. For example, in
Fig. 2, the query accuracy probability of FD 1 at q is 12/(12 + 4) = 0.75.
The query accuracy probability, however, is not sufficient to fully describe the accuracy of a failure
detector. To see this, we show in Fig. 2 two failure detectors FD 1 and FD 2 such that (a) they have the
same query accuracy probability, but (b) FD 2 makes mistakes more frequently than FD 1 . 3 In some
applications, every mistake causes a costly interrupt, and for such applications the mistake rate is an
important accuracy metric.
Note, however, that the mistake rate alone is not sufficient to characterize accuracy: as shown in
Fig. 3, two failure detectors can have the same mistake rate, but different query accuracy probabilities.
3 The failure detector makes a mistake each time its output changes from "trust" to "suspect" while p is actually up.
Figure 2: FD 1 and FD 2 have the same query accuracy probability of 0.75, but the mistake rate of FD 2 is four times that of FD 1.
up
Figure
3: FD 1 and FD 2 have the same mistake rate 1=16, but the query accuracy probabilities of FD 1
and FD 2 are :75 and :50, respectively
Even when used together, the above two accuracy metrics are still not sufficient. In fact, it is easy to
find two failure detectors FD 1 and FD 2 , such that (a) FD 1 is better than FD 2 in both measures (i.e.,
it has a higher query accuracy probability and a lower mistake rate), but (b) FD 2 is better than FD 1 in
another respect: specifically, whenever FD_2 makes a mistake, it corrects this mistake faster than FD_1;
in other words, the mistake durations in FD_2 are smaller than in FD_1. Having small mistake durations
may be important to some applications.
As can be seen from the above, there are several different aspects of accuracy that may be important
to different applications, and each aspect has a corresponding accuracy metric.
In this paper, we identify six accuracy metrics (since the behavior of a failure detector is probabilistic,
most of these metrics are random variables). We then use the theory of stochastic processes to quantify
the relation between these metrics. This analysis allows us to select two accuracy metrics as the primary
ones in the sense that: (a) they are not redundant (one cannot be derived from the other), and (b)
together, they can be used to derive the other four accuracy metrics.
In summary, we show that the QoS specification of failure detectors can be given in terms of three
basic metrics, namely, the detection time and the two primary accuracy metrics that we identified.
Taken together, these metrics can be used to characterize and compare the QoS of failure detectors.
1.2 The Design and Analysis of a New Failure Detector Algorithm
In this paper, we consider a simple system of two processes p and q, connected through a communication
link. Process p may fail by crashing, and the link between p and q may delay or drop messages.
Message delays and message losses follow some probabilistic distributions. Process q has a failure detector
that monitors p and outputs either "I suspect that p has crashed" or "I trust that p is up" ("suspect
p" and "trust p" in short, respectively).
A Common Failure Detection Algorithm and its Drawbacks. A simple failure detection algorithm,
commonly used in practice, works as follows: at regular time intervals, process p sends a heartbeat message
to q; when q receives a heartbeat message, it trusts p and starts a timer with a fixed timeout value TO;
if the timer expires before q receives a newer heartbeat message from p, then q starts suspecting p.
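For concreteness, the following is a minimal Python sketch of the monitoring side of this common
algorithm; the class, the timeout value and the transport hookup (how on_heartbeat gets invoked) are
illustrative assumptions, not part of the original protocol description.

import threading

class SimpleFD:
    # Minimal sketch of the common timeout-based failure detector at q.
    # on_heartbeat() is assumed to be invoked by some transport whenever
    # a heartbeat message from p arrives.
    def __init__(self, timeout):
        self.timeout = timeout   # the fixed timeout value TO
        self.suspects = True     # q suspects p until a heartbeat arrives
        self.timer = None

    def on_heartbeat(self):
        # Trust p and restart a timer with the fixed timeout TO.
        self.suspects = False
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(self.timeout, self._expire)
        self.timer.start()

    def _expire(self):
        # No newer heartbeat arrived before the timer expired: suspect p.
        self.suspects = True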
This algorithm has two undesirable characteristics; one regards its accuracy and the other its detection
time, as we now explain. Consider the i-th heartbeat message m_i. Intuitively, the probability of a
premature timeout on m_i should depend solely on m_i, and in particular on m_i's delay. With the simple
algorithm, however, the probability of a premature timeout on m_i also depends on the heartbeat m_{i-1}
that precedes it. In fact, the timer for m_i is started upon the receipt of m_{i-1}, and so if m_{i-1} is "fast",
the timer for m_i starts early and this increases the probability of a premature timeout on m_i. This
dependency on past heartbeats is undesirable.
To see the second problem, suppose p sends a heartbeat just before it crashes, and let d be the delay of
this last heartbeat. In the simple algorithm, q would permanently suspect p only d + TO time units after
p crashes. Thus, the worst-case detection time for this algorithm is the maximum message delay plus
TO. This is impractical because in many systems the maximum message delay is orders of magnitude
larger than the average message delay.
The source of the above problems is that even though the heartbeats are sent at regular intervals, the
timers to "catch" them expire at irregular times, namely the receipt times of the heartbeats plus a fixed
TO . The algorithm that we propose eliminates this problem. As a result, the probability of a premature
timeout on heartbeat m i does not depend on the behavior of the heartbeats that precede m i , and the
detection time does not depend on the maximum message delay.
A New Algorithm and its QoS Analysis. In the new algorithm, process p sends heartbeat messages
m_1, m_2, m_3, ... periodically, every η time units (just as in the simple algorithm). To determine whether
to suspect p, q uses a sequence τ_1, τ_2, τ_3, ... of times, called freshness points, obtained by
shifting the sending times of the heartbeat messages by a fixed parameter δ. More precisely, τ_i = σ_i + δ,
where σ_i is the time when m_i is sent. For any time t, let i be so that t ∈ [τ_i, τ_{i+1}); q trusts p at time
t if and only if q has received heartbeat m_i or higher.
Given the probabilistic behavior of the system (i.e., the probability of message losses and the distribution
of message delays), and the parameters η and δ of the algorithm, we determine the QoS of the
new algorithm using the theory of stochastic processes. Simulation results given in Section 7 are consistent
with our QoS analysis, and they show that the new algorithm performs better than the common
one.
In contrast to the common algorithm, the new algorithm guarantees an upper bound on the detection
time. Moreover, the new algorithm is optimal in the sense that it has the best possible query accuracy
probability with respect to any given bound on the detection time. More precisely, we show that among
all failure detectors that send heartbeats at the same rate (they use the same network bandwidth) and
satisfy the same upper bound on the detection time, the new algorithm has the best query accuracy
probability.
The first version of our algorithm (described above) assumes that p and q have synchronized clocks.
This assumption is not unrealistic, even in large networks. For example, GPS and Cesium clocks are
becoming accessible, and they can provide clocks that are very closely synchronized (see, e.g., [29]).
When synchronized clocks are not available, we propose a modification to this algorithm that performs
equally well in practice, as shown by our simulations. The basic idea is to use past heartbeat messages to
obtain accurate estimates of the expected arrival times of future heartbeats, and then use these estimates
to find the freshness points. This is explained in Section 6.
Configuring our Algorithm to Meet the Failure Detector Requirements of an Application. Given
a set of failure detector QoS requirements (provided by an application), we show how to compute the
parameters of our algorithm to achieve these requirements. We first do so assuming that one knows the
probabilistic behavior of the system (i.e., the probability distributions of message delays and message
losses). We then drop this assumption, and show how to configure the failure detector to meet the QoS
requirements of an application even when the probabilistic behavior of the system is not known.
1.3 Related Work
In [19], Gouda and McGuire measure the performance of some failure detector protocols under the
assumption that the protocol stops as soon as some process is suspected to have crashed (even if this
suspicion is a mistake). This class of failure detectors is less general than the one that we studied here:
in our work, a failure detector can alternate between suspicion and trust many times.
In [28], van Renesse et al. propose a scalable gossip-style randomized failure detector protocol.
They measure the accuracy of this protocol in terms of the probability of premature timeouts. 4 The
probability of premature timeouts, however, is not an appropriate metric for the specification of failure
detectors in general: it is implementation-specific and it cannot be used to compare failure detectors
that use timeouts in different ways. This point is further explained at the end of Section 2.3.
In [24], Raynal and Tronel present an algorithm that detects member failures in a group: if some
process detects a failure in the group (perhaps a false detection), then all processes report a group failure
and the protocol terminates. The algorithm uses a heartbeat-style protocol, and its timeout mechanism is
the same as the simple algorithm that we described in Section 1.2.
[Footnote 4: This is called "the probability of mistakes" in [28].]
In [29], Veríssimo and Raynal study QoS failure detectors - these are detectors that indicate when
a service does not meet its quality-of-service requirements. In contrast, this paper studies the QoS of
failure detectors, i.e., how well a failure detector works.
Heartbeat-style failure detectors are commonly used in practice. To keep both good detection time
and good accuracy, many implementations rely on special features of the operating system and communication
system to try to ensure that heartbeat messages are received at regular intervals (see discussion
in Section 12.9 of [23]). This is not easy even for closely-connected computer clusters, and it is very
hard in wide-area networks.
The probabilistic network model used in this paper is similar to the ones used in [14, 6] for probabilistic
clock synchronization. The method of estimating the expected arrival times of heartbeat messages
is close to the method of remote clock reading of [6].
The rest of the paper is organized as follows. In Section 2, we propose a set of metrics to specify
the QoS of failure detectors. In Section 3, we describe a new failure detector algorithm and analyze
its QoS in terms of these metrics; we also present an optimality result. We then explain how to set
the algorithm's parameters to meet some given QoS requirements - first in the case when we know
the probabilistic behavior of messages (Section 4), and then in the case when this is not known (Section 5).
In Section 6 we deal with unsynchronized clocks. We present the results of some simulations
in Section 7, and we conclude the paper with some discussion in Section 8. Appendix A lists the main
symbols used in the paper, and Appendices B to D give the proofs of the main theorems. More detailed
proofs can be found in [13].
2 On the QoS Specification of Failure Detectors
We consider a system of two processes p and q. We assume that the failure detector at q monitors p,
and that q does not crash. Henceforth, real time is continuous and ranges from 0 to ∞.
2.1 The Failure Detector Model
The output of the failure detector at q at time t is either S or T , which means that q suspects or trusts
p at time t, respectively. A transition occurs when the output of the failure detector at q changes: An
S-transition occurs when the output at q changes from T to S; a T-transition occurs when the output at
q changes from S to T . We assume that there are only a finite number of transitions during any finite
time interval.
Since the behavior of the system is probabilistic, the precise definition of our model and of our QoS
metrics uses the theory of stochastic processes. To keep our presentation at an intuitive level, we omit
the technical details related to this theory (they can be found in [13]).
We consider only failure detectors whose behavior eventually reaches steady state, as we now explain
informally. When a failure detector starts running, and for a while after, its behavior depends on the
initial condition (such as whether initially q suspects p or not) and on how long it has been running.
[Figure 4: Mistake duration T_M, good period duration T_G, and mistake recurrence time T_MR]
Typically, as time passes the effect of the initial condition gradually diminishes and its behavior no
longer depends on how long it has been running - i.e., eventually the failure detector behavior reaches
equilibrium, or steady state. In steady state, the probability law governing the behavior of the failure
detector does not change over time. The QoS metrics that we propose refer to the behavior of a failure
detector after it reaches steady state. Most of these metrics are random variables.
2.2 Primary Metrics
We propose three primary metrics for the QoS specification of failure detectors. The first one measures
the speed of a failure detector. It is defined with respect to the runs in which p crashes.
Detection time (T_D): Informally, T_D is the time that elapses from p's crash to the time when q starts
suspecting p permanently. More precisely, T_D is a random variable representing the time that elapses
from the time that p crashes to the time when the final S-transition (of the failure detector at q) occurs
and there are no transitions afterwards (Fig. 1). If there is no such final S-transition, then T_D = ∞; if
such an S-transition occurs before p crashes, then T_D = 0. 5
We next define some metrics that are used to specify the accuracy of a failure detector. Throughout
the paper, all accuracy metrics are defined with respect to failure-free runs, i.e., runs in which p does
not crash. 6 There are two primary accuracy metrics:
Mistake recurrence time (T_MR): this measures the time between two consecutive mistakes. More
precisely, T_MR is a random variable representing the time that elapses from an S-transition to the next
one (Fig. 4).
Mistake duration (T_M): this measures the time it takes the failure detector to correct a mistake. More
precisely, T_M is a random variable representing the time that elapses from an S-transition to the next
T-transition (Fig. 4).
As we discussed in the introduction, there are many aspects of failure detector accuracy that may be
important to applications. Thus, in addition to T_MR and T_M, we propose four other accuracy metrics
in the next section. We selected T_MR and T_M as the primary metrics because given these two, one can
compute the other four (this will be shown in Section 2.4).
[Footnote 5: We omit the boundary cases of other metrics since they can be similarly defined.]
[Footnote 6: As explained in [13], these metrics are also meaningful for runs in which p crashes.]
2.3 Derived Metrics
We propose four additional accuracy metrics:
Average mistake rate (λ_M): this measures the rate at which a failure detector makes mistakes, i.e., it is
the average number of S-transitions per time unit. This metric is important to long-lived applications
where each failure detector mistake (each S-transition) results in a costly interrupt. This is the case for
applications such as group membership and cluster management.
Query accuracy probability (P_A): this is the probability that the failure detector's output is correct at a
random time. This metric is important to applications that interact with the failure detector by querying
it at random times.
Many applications can make progress only during good periods - periods in which the failure
detector makes no mistakes. This observation leads to the following two metrics.
Good period duration (T_G): this measures the length of a good period. More precisely, T_G is a random
variable representing the time that elapses from a T-transition to the next S-transition (Fig. 4).
For short-lived applications, however, a closely related metric may be more relevant. Suppose that
an application is started at a random time in a good period. If the remaining part of the good period is
long enough, the short-lived application will be able to complete its task. The metric that measures the
remaining part of the good period is:
Forward good period duration (T_FG): this is a random variable representing the time that elapses
from a random time at which q trusts p, to the time of the next S-transition.
At first sight, it may seem that, on the average, T_FG is just half of T_G (the length of a good period).
But this is incorrect, and in Section 2.4 we give the actual relation between T_FG and T_G.
An important remark is now in order. For timeout-based failure detectors, the probability of premature
timeouts has sometimes been used as the accuracy measure: this is the probability that when the
timer is set, it will prematurely timeout on a process that is actually up. The measure, however, is not
appropriate because: (a) it is implementation-specific, and (b) it is not useful to applications unless it is
given together with other implementation-specific measures, e.g., how often timers are started, whether
the timers are started at regular or variable intervals, whether the timeout periods are fixed or variable,
etc. (many such variations exist in practice [10, 19, 28]). Thus, the probability of premature timeouts is
not a good metric for the specification of failure detectors, e.g., it cannot be used to compare the QoS
of failure detectors that use timeouts in different ways. The six accuracy metrics that we identified in
this paper do not refer to implementation-specific features, in particular, they do not refer to timeouts
at all.
2.4 How the Accuracy Metrics are Related
Theorem 1 below explains how our six accuracy metrics are related. We then use this theorem to justify
our choice of the primary accuracy metrics. Henceforth, Pr(A) denotes the probability of event A;
E(X), E(X^k), and V(X) denote the expected value (or mean), the k-th moment, and the variance of
random variable X, respectively.
Parts (2) and (3) of Theorem 1 assume that in failure-free runs, the probabilistic distribution of failure
detector histories is ergodic. Roughly speaking, this means that in failure-free runs, the failure detector
slowly "forgets" its past history: from any given time on, its future behavior may depend only on its
recent behavior. We call failure detectors satisfying this ergodicity condition ergodic failure detectors.
Ergodicity is a basic concept in the theory of stochastic processes [26], but the technical details are
substantial and outside the scope of this paper.
We have also determined the relations between our accuracy metrics in the case that ergodicity does
not hold. The resulting expressions are more complex (they are generalized versions of those given
below) and can be found in [13].
Theorem 1 For any ergodic failure detector, the following results hold: (1) T_MR = T_M + T_G. (2) If
E(T_MR) = ∞, then λ_M is always 0. If 0 < E(T_MR) < ∞, then λ_M = 1/E(T_MR) and
P_A = E(T_G)/E(T_MR). (3) If 0 < E(T_G) < ∞, then for all x ≥ 0,
    Pr(T_FG ≤ x) = (1/E(T_G)) ∫_0^x Pr(T_G > y) dy.
In particular, (3c) E(T_FG) = E(T_G)/2 + V(T_G)/(2·E(T_G)).
The fact that T_MR = T_M + T_G holds is immediate by definition. The proofs of parts (2) and (3) use the
theory of stochastic processes. Part (2) is intuitive, while part (3), which relates T_G and T_FG, is more
complex. In particular, part (3c) is counter-intuitive: one may think that E(T_FG) = E(T_G)/2, but
(3c) says that E(T_FG) is in general larger than E(T_G)/2 (this is a version of the "waiting time paradox"
in the theory of stochastic processes [4]).
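For example, if T_G happens to be exponentially distributed (an assumption made here purely for
illustration), then V(T_G) = E(T_G)^2 and (3c) gives E(T_FG) = E(T_G)/2 + E(T_G)/2 = E(T_G): an
application started at a random time in a good period still waits, on average, a full E(T_G) until the next
mistake, exactly as the memorylessness of the exponential distribution predicts.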
We now explain how Theorem 1 guided our selection of the primary accuracy metrics. Parts (2) and
(3) show that λ_M, P_A and T_FG can be derived from T_MR, T_M and T_G. This suggests that the primary
metrics should be selected among T_MR, T_M and T_G. Moreover, since T_MR = T_M + T_G, it is clear that
given the joint distribution of any two of them, one can derive the remaining one. Thus, two of T_MR,
T_M and T_G should be selected as the primary metrics, but which two? By choosing T_MR and T_M as
our primary metrics, we get the following convenient property that helps to compare failure detectors:
if FD_1 is better than FD_2 in terms of both E(T_MR) and E(T_M) (the expected values of the primary
metrics) then we can be sure that FD_1 is also better than FD_2 in terms of E(T_G) (the expected value
of the other metric). We would not get this useful property if T_G were selected as one of the primary
metrics.
[Footnote 7: For example, FD_1 may be better than FD_2 in terms of both E(T_G) and E(T_M), but worse than FD_2 in
terms of E(T_MR).]
3 The Design and QoS Analysis of a New Failure Detector Algorithm
3.1 The Probabilistic Network Model
We assume that processes p and q are connected by a link that does not create or duplicate messages, 8
but may delay or drop messages. Note that the link here represents an end-to-end connection and does
not necessarily correspond to a physical link.
We assume that the message loss and message delay behavior of any message sent through the link
is probabilistic, and is characterized by the following two parameters: (a) the message loss probability p_L,
which is the probability that a message is dropped by the link; and (b) the message delay D, which is a
random variable with range (0, ∞) representing the delay from the time a message is sent to the time
it is received, under the condition that the message is not dropped by the link. We assume that the
expected value E(D) and the variance V(D) of D are finite. Note that our model does not assume that
the message delay D follows any particular distribution, and thus it is applicable to many practical
systems.
Processes p and q have access to their own local clocks. For simplicity, we assume that there is no
clock drift, i.e., local clocks run at the same speed as real time (our results can be easily generalized to
the case where local clocks have bounded drifts). In Sections 3, 4 and 5, we further assume that clocks
are synchronized. We explain how to remove this assumption in Section 6.
For simplicity we assume that the probabilistic behavior of the network does not change over time.
In Section 8, we explain how to modify the algorithm so that it dynamically adapts to changes in the
probabilistic behavior of the system.
We assume that crashes cannot be predicted, i.e., the state of the system at any given time has no
information whatsoever on the occurrence of future crashes (this excludes a system with program-controlled
crashes [11]). Moreover, the delay and loss behaviors of the messages that a process sends
are independent of whether (and when) the process crashes.
3.2 The Algorithm
The new algorithm works as follows. The monitored process p periodically sends heartbeat messages
m_1, m_2, m_3, ... to q every η time units, where η is a parameter of the algorithm. Every heartbeat
message m_i is tagged with its sequence number i. Henceforth, σ_i denotes the sending time of message
m_i. The monitoring process q shifts the σ_i's forward by δ, the other parameter of the algorithm,
to obtain the sequence of times τ_i = σ_i + δ. Process q uses the τ_i's and the
times it receives heartbeat messages, to determine whether to trust or suspect p, as follows. Consider
[Footnote 8: Message duplication can be easily taken care of: whenever we refer to a message being received, we change it
to the first copy of the message being received. With this modification, all definitions and analyses in the paper go through,
and in particular, our results remain correct without any change.]
[Figure 5: Three scenarios of the failure detector output in one interval [τ_i, τ_{i+1})]
Process p:
1  for all i ≥ 1, at time σ_i = i·η: send heartbeat m_i to q
Process q:
3  for all i ≥ 1, at time τ_i = σ_i + δ:
4    if q did not receive any m_j with j ≥ i then output ← suspect  {suspect p if no fresh message is received}
5  upon receive message m_j at time t ∈ [τ_i, τ_{i+1}):
6    if j ≥ i then output ← trust  {trust p when some fresh message is received}
[Figure 6: Failure detector algorithm NFD-S with parameters η and δ (clocks are synchronized)]
the time period [τ_i, τ_{i+1}): at time τ_i, q checks whether it has received some message m_j with j ≥ i. If so,
q trusts p during the entire period [τ_i, τ_{i+1}) (Fig. 5 (a)). If not, q starts suspecting p. If at some time
before τ_{i+1}, q receives some message m_j with j ≥ i, then q starts trusting p from that time until τ_{i+1}
(Fig. 5 (b)). If by time τ_{i+1}, q has not received any message m_j with j ≥ i, then q suspects p during
the entire period [τ_i, τ_{i+1}) (Fig. 5 (c)). This procedure is repeated for every time period. The detailed
algorithm with parameters η and δ is denoted by NFD-S, and is given in Fig. 6. 9
Note that from time τ_i to τ_{i+1}, only messages m_j with j ≥ i can affect the output of the failure
detector. For this reason, τ_i is called a freshness point: from time τ_i to τ_{i+1}, messages m_j with j ≥ i
are still fresh (useful). With this algorithm, q trusts p at time t if and only if q received a message that
is still fresh at time t.
[Footnote 9: This version of the algorithm is convenient for illustrating the main idea and for performing the analysis. We
have omitted some obvious optimizations.]
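To make the freshness-point mechanism concrete, the following is a minimal Python sketch of the
monitoring side of NFD-S under the stated assumption of synchronized clocks; the class and method
names are ours, and delivery of heartbeats to on_heartbeat is assumed to be handled by some transport.

import math

class NFDS:
    # eta:   intersending interval (p sends m_i at time sigma_i = i * eta)
    # delta: shift defining the freshness points tau_i = i * eta + delta
    def __init__(self, eta, delta):
        self.eta = eta
        self.delta = delta
        self.highest = 0  # largest heartbeat sequence number received so far

    def on_heartbeat(self, j):
        # Only the largest sequence number received matters for freshness.
        self.highest = max(self.highest, j)

    def trusts(self, t):
        # Find i with t in [tau_i, tau_{i+1}), i.e. i = floor((t - delta)/eta);
        # q trusts p iff it has received some m_j with j >= i (Lemma 2 below).
        i = math.floor((t - self.delta) / self.eta)
        return self.highest >= i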
3.3 The QoS Analysis of the Algorithm
We now give the QoS of the algorithm (the analysis is given in Appendix B). We assume that the link
from p to q satisfies the following message independence property: the behaviors of any two heartbeat
messages sent by p are independent. 10 Henceforth, let τ_0 = δ, so that τ_i = iη + δ for all i ≥ 0 (as in line 3 of
the algorithm).
We first formalize the intuition behind freshness points and fresh messages:
Lemma 2 For all i ≥ 1 and all t ∈ [τ_i, τ_{i+1}), q trusts p at time t if and only if q has received some
message m_j with j ≥ i by time t.
The following definitions are for runs where p does not crash.
(1) For any i ≥ 1, let k be the smallest integer such that for all j ≥ k, m_{i+j} is sent at or after time τ_i.
(2) For any i ≥ 1, let p_j(x) be the probability that q does not receive message m_{i+j} by time τ_i + x,
for every j ≥ 0 and every x ≥ 0; let p_0 = p_0(0).
(3) For any i ≥ 2, let q_0 be the probability that q receives message m_{i-1} before time τ_i.
(4) For any i ≥ 1, let u(x) be the probability that q suspects p at time τ_i + x, for every x ∈ [0, η).
(5) For any i ≥ 2, let p_S be the probability that an S-transition occurs at time τ_i.
The above definitions are given in terms of i, a positive integer. Proposition 3, however, shows that
they are actually independent of i.
Proposition 3 (1) k = ⌈δ/η⌉. (2) For all j ≥ 0 and all x ≥ 0, p_j(x) = p_L + (1 − p_L)·Pr(D > δ + x − jη);
q_0 = (1 − p_L)·Pr(D < η + δ); for all x ∈ [0, η), u(x) = Π_{j=0}^{k} p_j(x); and p_S = q_0·u(0).
By definition, if p_0 = 0, then the probability that q receives m_i by time τ_i is 1. Thus, q never suspects p
in failure-free runs. Similarly, it is easy to see that if q_0 = 0, then q eventually suspects p forever. Both
are degenerated cases of no interest. We henceforth assume that p_0 > 0 and q_0 > 0.
The following theorem summarizes our QoS analysis of the new failure detector algorithm.
[Footnote 10: In practice, this holds only if consecutive heartbeats are sent more than some Δ time units apart, where Δ
depends on the system. So assuming that the behaviors of heartbeats are independent is equivalent to assuming that η > Δ.]
Theorem 4 Consider a system with synchronized clocks, where the probability of message losses is p_L
and the distribution of message delays is Pr(D ≤ x). The failure detector NFD-S of Fig. 6 with
parameters η and δ has the following properties.
(1) The detection time is bounded: T_D ≤ η + δ.
(2) The average mistake recurrence time is: E(T_MR) = η / p_S.
(3) The average mistake duration is: E(T_M) = (∫_0^η u(x) dx) / p_S.
From E(T_MR) and E(T_M) given in the theorem above, we can easily derive the other accuracy measures
using Theorem 1. For example, we can get the query accuracy probability P_A = 1 − (∫_0^η u(x) dx)/η.
Theorem 4 (1) shows an important property of the algorithm: the detection time is bounded, and the
bound does not depend on the behavior of message delays and losses.
In Sections 4, 5 and 6, we show how to use Theorem 4 to compute the failure detector parameters,
so that the failure detector satisfies some QoS requirements (given by an application).
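To illustrate how Proposition 3 and Theorem 4 combine, here is a small Python sketch that evaluates
the QoS of NFD-S numerically; it assumes exponentially distributed delays with mean ED and uses the
expressions for p_j(x), u(x) and p_S given in Proposition 3, with a simple midpoint rule for the integral.
The function name and the numerical choices are ours.

import math

def nfds_qos(eta, delta, p_L, ED, steps=1000):
    # Pr(D > x) for an exponential delay with mean ED (and 1 for x <= 0).
    pr_D_gt = lambda x: math.exp(-x / ED) if x > 0 else 1.0
    k = math.ceil(delta / eta)
    def p_j(j, x):
        # Probability that m_{i+j} is not received by time tau_i + x.
        return p_L + (1 - p_L) * pr_D_gt(delta + x - j * eta)
    def u(x):
        # Probability that q suspects p at time tau_i + x.
        prod = 1.0
        for j in range(k + 1):
            prod *= p_j(j, x)
        return prod
    q0 = (1 - p_L) * (1 - pr_D_gt(eta + delta))
    p_S = q0 * u(0.0)
    # Midpoint-rule approximation of the integral of u over [0, eta).
    integral = sum(u((i + 0.5) * eta / steps) for i in range(steps)) * eta / steps
    return {"T_D bound": eta + delta,
            "E(T_MR)": eta / p_S,
            "E(T_M)": integral / p_S,
            "P_A": 1.0 - integral / eta}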
3.4 An Optimality Result
Among all failure detectors that send heartbeats at the same rate and satisfy the same upper bound on
the detection time, the new algorithm provides the best query accuracy probability. More precisely, let
C be the class of failure detector algorithms A such that in every run of A, process p sends heartbeats
to q every η time units and A satisfies T_D ≤ T^U_D for some constant T^U_D. Let A* be the instance of the
new failure detector algorithm NFD-S with parameters η and δ = T^U_D − η. By part (1) of Theorem 4,
we know that A* ∈ C. We can show that
Theorem 5 For any A ∈ C, let P_A be the query accuracy probability of A. Let P_A* be the query
accuracy probability of A*. Then P_A* ≥ P_A.
The theorem is a consequence of the following important property of algorithm A*. Consider any
algorithm A ∈ C. Let r be any failure-free run of A*, and r' be any failure-free run of A in which the
heartbeat delays and losses are exactly as in r. We can show that if q suspects p at time t in r, then q
also suspects p at time t in r'. With this property, it is easy to see that the probability that q trusts p at
a random time in A* must be at least as high as the probability that q trusts p at a random time in any
A ∈ C. The detailed proof is given in Appendix C.
[Figure 7: Meeting QoS requirements with NFD-S. The QoS requirements (T^U_D, T^L_MR, T^U_M) and the
probabilistic behavior of heartbeats (p_L and Pr(D ≤ x)) are inputs to the configurator, which outputs the
parameters η and δ. The probabilistic behavior of heartbeats is given, and clocks are synchronized.]
4 Configuring the Failure Detector to Satisfy QoS Requirements
Suppose we are given a set of failure detector QoS requirements (the QoS requirements could be given
by the application that uses this failure detector). We now show how to compute the parameters η and δ
of our failure detector algorithm, so that these requirements are satisfied. We assume that (a) the local
clocks of processes are synchronized, and (b) one knows the probabilistic behavior of the messages,
i.e., the message loss probability p_L and the distribution of message delays Pr(D ≤ x). In Sections 5
and 6, we consider the cases when these assumptions do not hold.
We assume that the QoS requirements are expressed using the primary metrics. More precisely, a set
of QoS requirements is a tuple (T^U_D, T^L_MR, T^U_M) of positive numbers, where T^U_D is an upper bound on the
detection time, T^L_MR is a lower bound on the average mistake recurrence time, and T^U_M is an upper bound
on the average mistake duration. In other words, the requirements are that: 11
    T_D ≤ T^U_D,  E(T_MR) ≥ T^L_MR,  E(T_M) ≤ T^U_M.   (4.4)
Our goal, illustrated in Fig. 7, is to find a configuration procedure that takes as inputs (a) the QoS
requirements, namely T^U_D, T^L_MR and T^U_M, and (b) the probabilistic behavior of the heartbeat messages,
namely p_L and Pr(D ≤ x), and outputs the failure detector parameters η and δ so that the failure detector
satisfies the QoS requirements in (4.4). Furthermore, to minimize the network bandwidth taken by the
failure detector, we want a configuration procedure that finds the largest intersending interval η that
satisfies these QoS requirements.
Using Theorem 4, our goal can be stated as a mathematical programming problem:
[Footnote 11: Note that the bounds on the primary metrics E(T_MR) and E(T_M) also impose bounds on the derived metrics,
according to Theorem 1. More precisely, we have λ_M ≤ 1/T^L_MR, P_A ≥ 1 − T^U_M/T^L_MR, E(T_G) ≥ T^L_MR − T^U_M, and
E(T_FG) ≥ (T^L_MR − T^U_M)/2.]
    maximize η
    subject to  η + δ ≤ T^U_D,                          (4.5)
                E(T_MR) = η / p_S ≥ T^L_MR,              (4.6)
                E(T_M) = (∫_0^η u(x) dx) / p_S ≤ T^U_M,  (4.7)
where the values of u(x) and p_S are given by Proposition 3. Solving this problem is hard, so instead
we show how to find some η and δ that satisfy (4.5)-(4.7) (but the η that we find may not be the largest
possible). To do so, we replace (4.7) with a simpler and stronger constraint, and then compute the
optimal solution of this modified problem (see Appendix D). We obtain the following procedure to find
η and δ:
• Step 1: Compute η_max, the largest η allowed by the strengthened version of constraint (4.7). If η_max = 0, output
"QoS cannot be achieved" and stop; else continue.
• Step 2: Find the largest η ≤ η_max such that E(T_MR) ≥ T^L_MR. Such an η always exists. To find such an η, we
can use a simple numerical method, such as binary search (this works because when η decreases,
E(T_MR) increases exponentially fast).
• Step 3: Set δ = T^U_D − η.
Theorem 6 Consider a system in which clocks are synchronized, and the probabilistic behavior of
messages is known. Suppose we are given a set of QoS requirements as in (4.4). The above procedure
has two possible outcomes: (1) It outputs η and δ. In this case, with parameters η and δ the failure
detector NFD-S of Fig. 6 satisfies the given QoS requirements. (2) It outputs "QoS cannot be achieved".
In this case, no failure detector can achieve the given QoS requirements.
As an example of the configuration procedure of the failure detector, suppose we have the following
QoS requirements: (a) a crash failure is detected within 30 s, i.e., T^U_D = 30 s; (b) on average,
the failure detector makes at most one mistake per month, i.e., T^L_MR = 30 days = 2,592,000 s; (c) on
average, the failure detector corrects its mistakes within one minute, i.e. T^U_M = 60 s. Assume that
the message loss probability is p_L = 0.01, the distribution of message delay D is exponential, and the
average message delay E(D) is 0.02 s. By inputting these numbers into the configuration procedure,
we get η = 9.97 s and δ = 20.03 s; with these parameters, our failure detector satisfies the given QoS
requirements.
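A rough illustration of the shape of this procedure, using the nfds_qos sketch above: instead of the
closed-form bound of Appendix D, it simply binary-searches for the largest feasible η under δ = T^U_D − η,
so it mirrors the structure of Steps 1-3 rather than their exact computations, and its output need not
coincide exactly with the η = 9.97 s quoted above.

def configure_nfds(TD_U, TMR_L, TM_U, p_L, ED):
    # Binary search for the largest eta in (0, TD_U) such that, with
    # delta = TD_U - eta, both accuracy requirements hold. This assumes
    # feasibility is monotone in eta (smaller eta -> better accuracy).
    lo, hi, feasible = 0.01, TD_U, False
    for _ in range(60):
        eta = (lo + hi) / 2
        qos = nfds_qos(eta, TD_U - eta, p_L, ED)
        ok = qos["E(T_MR)"] >= TMR_L and qos["E(T_M)"] <= TM_U
        feasible = feasible or ok
        lo, hi = (eta, hi) if ok else (lo, eta)
    if not feasible:
        raise ValueError("QoS cannot be achieved")
    return lo, TD_U - lo  # (eta, delta)

# The example above: T_D^U = 30 s, one mistake per month, T_M^U = 60 s.
eta, delta = configure_nfds(30.0, 30 * 24 * 3600.0, 60.0, 0.01, 0.02)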
[Figure 8: Meeting QoS requirements with NFD-S. An estimator of the probabilistic behavior of
heartbeats supplies p_L, E(D) and V(D) to the configurator. The probabilistic behavior of heartbeats is
not known, and clocks are synchronized.]
5 Dealing with Unknown Message Behavior
In Section 4, our procedure to compute the parameters η and δ of NFD-S to meet some QoS requirements
assumed that one knows the probability p_L of message loss and the distribution Pr(D ≤ x) of
message delays. This assumption is not unrealistic, but in some systems the probabilistic behavior of
messages may not be known. In that case, it is still possible to compute η and δ, as we now explain.
We proceed in two steps: (1) we first show how to compute η and δ using only p_L, E(D) and V(D)
(recall that E(D) and V(D) are the expected value and variance of message delays, respectively); (2)
we then show how to estimate p_L, E(D) and V(D). In this section we still assume that local clocks are
synchronized (we drop this assumption in the next section). See Fig. 8.
Computing Failure Detector Parameters η and δ Using p_L, E(D) and V(D). With E(D) and
V(D), we can bound Pr(D > t) using the following One-Sided Inequality of probability theory (e.g.,
see [4], p. 79): for any random variable D with a finite expected value and a finite variance,
    Pr(D > t) ≤ V(D) / (V(D) + (t − E(D))^2)   for all t > E(D).
With this, we can derive the following bounds on the QoS metrics of algorithm NFD-S.
Theorem 7 Consider a system with synchronized clocks and assume δ > E(D). For algorithm NFD-S,
we have a lower bound on E(T_MR) and an upper bound on E(T_M); both are obtained from the
expressions of Theorem 4 by bounding each Pr(D > t) term with the One-Sided Inequality above, so
that they depend only on p_L, E(D) and V(D).
Note that in Theorem 7 we assume that δ > E(D), where δ is a parameter of NFD-S. This assumption
is reasonable because if δ ≤ E(D) then NFD-S would generate a false suspicion every time the
heartbeat message is delayed by more than the average message delay. But then, NFD-S would make
too many mistakes to be a useful failure detector.
Theorem 7 can be used to compute the parameters η and δ of the failure detector NFD-S, so that it
satisfies the QoS requirements given in (4.4). Recall that these QoS requirements are given as a tuple
(T^U_D, T^L_MR, T^U_M), where T^U_D is an upper bound on the worst-case detection time, T^L_MR is a lower bound on
the average mistake recurrence time, and T^U_M is an upper bound on the average mistake duration. The
configuration procedure is given below. This procedure assumes that T^U_D > E(D), i.e., the required
detection time is greater than the average message delay (a reasonable assumption).
• Step 1: Compute η_max, the largest η allowed by the bound of Theorem 7 on E(T_M). If η_max = 0, output "QoS
cannot be achieved" and stop; else continue.
• Step 2: Find the largest η ≤ η_max such that the lower bound of Theorem 7 on E(T_MR) is at least
T^L_MR. Such an η always exists.
• Step 3: Set δ = T^U_D − η.
Notice that the above procedure does not use the distribution Pr(D ≤ x) of message delays; it only
uses p_L, E(D) and V(D).
Theorem 8 Consider a system in which clocks are synchronized, and the probabilistic behavior of
messages is not known. Suppose we are given a set of QoS requirements as in (4.4), and suppose
T^U_D > E(D). The above procedure has two possible outcomes: (1) It outputs η and δ. In this case,
with parameters η and δ the failure detector NFD-S of Fig. 6 satisfies the given QoS requirements.
(2) It outputs "QoS cannot be achieved". In this case, no failure detector can achieve the given QoS
requirements.
The above configuration procedure works when the distribution of the message delay D is not known
(only E(D) and V(D) are known). To illustrate this procedure, we take the same example as in Section
4, except that we do not assume that the distribution of D is exponential. Specifically, suppose
that the failure detector QoS requirements are that: (a) a crash failure is detected within 30 seconds,
i.e., T^U_D = 30 s; (b) on average, the failure detector makes at most one mistake per month,
i.e., T^L_MR = 30 days = 2,592,000 s; (c) on average, the failure detector corrects its mistakes within one
minute, i.e. T^U_M = 60 s. Assume that the message loss probability is p_L = 0.01, the average message
delay E(D) is 0.02 s, and the variance V(D) is also 0.02. By inputting these numbers into the configuration
procedure, we get η = 9.71 s and δ = 20.29 s; with these parameters, failure detector NFD-S
satisfies the given QoS requirements. Note that when we go from the case that the distribution of D is
known (example of Section 4) to the case that D is not known, η decreases from 9.97 s to 9.71 s. This
corresponds to a slight increase in the heartbeat sending rate (in order to achieve the same given QoS).
Estimating p_L, E(D) and V(D). It is easy to estimate p_L, E(D) and V(D) using heartbeat messages.
For example, to estimate p_L, one can use the sequence numbers of the heartbeat messages to count the
number of "missing" heartbeats, and then divide this count by the highest sequence number received so
far. To estimate E(D) and V(D), we use the synchronized clocks as follows: when p sends a heartbeat
m, it timestamps m with the sending time S, and when q receives m, q records the receipt time A. In
this way, A − S is the delay of m. We then compute the average and variance of A − S for multiple
past heartbeat messages, and thus obtain accurate estimates for E(D) and V(D).
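A compact sketch of this estimator, assuming synchronized clocks and that each received heartbeat has
been recorded as a (sequence number, send time, receipt time) triple; the names are illustrative.

from statistics import mean, pvariance

def estimate_behavior(samples):
    # samples: list of (seq, S, A) triples for the heartbeats received so far.
    highest = max(seq for seq, _, _ in samples)
    p_L = 1.0 - len(samples) / highest       # missing heartbeats / highest seq
    delays = [A - S for _, S, A in samples]  # A - S is the delay of each message
    return p_L, mean(delays), pvariance(delays)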
6 Dealing with Unknown Message Behavior and Unsynchronized
Clocks
So far, we assumed that the clocks of p and q are synchronized. More precisely, in the algorithm NFD-S
of Fig. 6, q sets the freshness points τ_i by shifting the sending times of heartbeats by a constant. When
clocks are not synchronized, the local sending times of heartbeats at p cannot be used by q to set the τ_i's,
and thus q needs to do it in a different way. The basic idea is that q sets the τ_i's by shifting the expected
arrival times of the heartbeats, and q estimates the expected arrival times accurately (to compute these
estimates, q does not need synchronized clocks).
6.1 NFD-U: an Algorithm that Uses Expected Arrival Times
We now present NFD-U, a new failure detector algorithm for systems with unsynchronized clocks.
The new algorithm is very similar to NFD-S; the only difference is that q now sets the τ_i's by shifting
the expected arrival times of the heartbeats, rather than the sending times of heartbeats. We assume
that local clocks do not drift with respect to real time, i.e., they accurately measure time intervals (our
algorithm and results can be easily generalized to the case where local clocks have small bounded drifts
with respect to real time). Let σ_i denote the sending time of m_i with respect to q's local clock. Then,
the expected arrival time of m_i at q is EA_i = σ_i + E(D), where E(D) is the expected message delay.
Assume that q knows the EA_i's (we will soon show how q can accurately estimate them). To set the
τ_i's, q shifts the EA_i's forward by α time units (i.e., τ_i = EA_i + α); α is a new failure detector
parameter that replaces δ. The intuition here is that EA_i is the time when m_i is expected to be received,
and α is a slack added to EA_i to accommodate the possible extra delay or loss of m_i.
Figure 9 shows the whole algorithm, denoted by NFD-U. We restructured the algorithm a little, to
show explicitly when q uses the EA_i's.
Process p: {using p's local clock}
1   for all i ≥ 1, at time i·η, send heartbeat m_i to q;
Process q: {using q's local clock}
2   initialization:
3     ℓ ← 0;  {ℓ keeps the largest sequence number in all messages q received so far}
4     τ_1 ← EA_1 + α;
5   upon the current time = τ_{ℓ+1}:  {if the current time reaches τ_{ℓ+1}, then none of the messages received is still fresh}
6     output ← suspect;  {suspect p since no message received is still fresh at this time}
7   upon receive message m_j at time t:
8     if j > ℓ then  {received a message with a higher sequence number}
9       ℓ ← j;
10      τ_{ℓ+1} ← EA_{ℓ+1} + α;  {set the next freshness point τ_{ℓ+1} using the expected arrival time of m_{ℓ+1}}
11      if t < τ_{ℓ+1} then output ← trust;  {trust p since m_ℓ is still fresh at time t}
[Figure 9: Failure detector algorithm NFD-U with parameters η and α (clocks are not synchronized, but
the EA_i's are known)]
Variable ℓ keeps the largest heartbeat sequence number received so far, and τ_{ℓ+1} refers to the "next"
freshness point. Note that when q updates ℓ, it also changes τ_{ℓ+1}. If the local clock of q ever reaches
time τ_{ℓ+1} (an event which might never happen), then at this time none of the heartbeats received is
still fresh, and so q starts suspecting p (lines 5-6). When q receives m_j, it checks whether this is a new
heartbeat (j > ℓ) and in this case, (1) q updates ℓ, (2) q sets the next freshness point τ_{ℓ+1} to
EA_{ℓ+1} + α, and (3) q trusts p if the current time is less than τ_{ℓ+1} (lines 9-11).
Note that this algorithm is identical to NFD-S, except in the way in which q sets the τ_i's. In particular,
for any time t, let i be so that t ∈ [τ_i, τ_{i+1}); then q trusts p at time t if and only if q has
received heartbeat m_i or higher.
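An event-driven Python sketch of the monitoring side of NFD-U, following lines 5-11 of Fig. 9; ea(i)
stands for the known (or, as in Section 6.3, estimated) expected arrival time EA_i and now() for q's
local clock, both illustrative names supplied by the caller.

class NFDU:
    def __init__(self, alpha, ea, now):
        self.alpha, self.ea, self.now = alpha, ea, now
        self.ell = 0                    # largest sequence number received
        self.tau_next = ea(1) + alpha   # next freshness point tau_{ell+1}

    def trusts(self):
        # Lines 5-6: once the local clock reaches tau_{ell+1}, no received
        # message is still fresh, so q suspects p.
        return self.now() < self.tau_next

    def on_heartbeat(self, j):
        # Lines 8-10: only a message with a higher sequence number updates
        # ell and the next freshness point.
        if j > self.ell:
            self.ell = j
            self.tau_next = self.ea(j + 1) + self.alpha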
6.2 Analysis and Configuration of NFD-U
NFD-U and NFD-S differ only in the way they set the τ_i's: in NFD-S, τ_i = σ_i + δ, while in NFD-U,
τ_i = EA_i + α = σ_i + E(D) + α (the last equality holds because EA_i = σ_i + E(D)). Thus, the QoS
analysis of NFD-U is obtained by simply replacing δ with E(D) + α in Proposition 3, Theorem 4 and
Theorem 7.
To configure the parameters η and α of NFD-U to meet some QoS requirements, we use a method
similar to the one in Section 5. We proceed in two steps: (1) we first show how to compute η and α
using only p_L and V(D) (note that E(D) is not used); (2) we then show how to estimate p_L and V(D).
See Fig. 10.
Computing Failure Detector Parameters η and α using p_L and V(D). By replacing δ with E(D) +
α in Theorem 7, we obtain the following bounds on the accuracy metrics of NFD-U:
[Figure 10: Meeting QoS requirements with NFD-U. The probabilistic behavior of heartbeats is not
known; clocks are not synchronized, but they are drift-free.]
Theorem 9 Consider a system with drift-free clocks and assume α > 0. For algorithm NFD-U, we
have a lower bound on E(T_MR) and an upper bound on E(T_M), obtained from the bounds of Theorem 7
by replacing δ with E(D) + α; in the resulting expressions the E(D) terms cancel.
Note that the bounds given in Theorem 9 use only p_L and V(D); on the other hand, E(D) is not used.
Theorem 9 can be used to compute the parameters η and α of the failure detector NFD-U, so that it
satisfies some QoS requirements. We assume the QoS requirements are given as a tuple (T^u_D, T^L_MR, T^U_M)
of positive numbers. The requirements are that:
    T_D ≤ T^u_D + E(D),  E(T_MR) ≥ T^L_MR,  E(T_M) ≤ T^U_M.   (6.11)
Note that the upper bound on the detection time T_D is not T^u_D, but T^u_D plus the unknown average
message delay E(D). So, the actual upper bound T^U_D on the detection time is T^U_D = T^u_D + E(D). In other
words, the QoS requirement on detection time is not absolute as in (4.4), but relative to E(D). This
is justified as follows. Note that when local clocks are not synchronized and only one-way messages
are used, an absolute bound T^U_D on detection time cannot be enforced by any nontrivial failure detector.
Moreover, it is reasonable to specify an upper bound requirement relative to the average delay E(D)
of a heartbeat. In fact, a failure detector that guarantees to detect crashes faster than E(D) makes too
many mistakes to be useful.
The following is the configuration procedure for algorithm NFD-U, modified from the one in Section 5.
• Step 1: Compute η_max, the largest η allowed by the bound of Theorem 9 on E(T_M). If η_max = 0, output "QoS
cannot be achieved" and stop; else continue.
• Step 2: Find the largest η ≤ η_max such that the lower bound of Theorem 9 on E(T_MR) is at least
T^L_MR. Such an η always exists.
• Step 3: Set α = T^u_D − η.
Theorem 10 Consider a system with unsynchronized, drift-free clocks, where the probabilistic behavior
of messages is not known. Suppose we are given a set of QoS requirements as in (6.11). The above
procedure has two possible outcomes: (1) It outputs η and α. In this case, with parameters η and α the
failure detector NFD-U of Fig. 9 satisfies the given QoS requirements. (2) It outputs "QoS cannot be
achieved". In this case, no failure detector can achieve the given QoS requirements.
Estimating p_L and V(D). When local clocks are not synchronized, we can estimate p_L and V(D)
using the procedure of Section 5. To estimate p_L, this procedure did not use clocks, and so it works
just as before. For V(D), the procedure did use clocks, but it works even though the clocks are not
synchronized. To see why, recall that the procedure estimates V(D) by computing the variance of
A − S over multiple heartbeat messages, where A is the time (with respect to q's local clock) when
q receives a message m, and S is the time (with respect to p's local clock) when p sends m. When
clocks are not synchronized, A − S is not the actual delay of m, but rather the delay of m plus a
constant, namely, the skew between the clocks of p and q. Thus the variance of A − S is the same as
the variance V(D) of message delays.
6.3 NFD-E: an Algorithm that Uses Estimates of Expected Arrival Times
Failure detector NFD-U (Fig. 9) assumes that q knows the exact value of all the EA_i's (the expected
arrival times of messages). In practice, q may not know such values, and needs to estimate them. To do
so, every time q executes line 10 of algorithm NFD-U in Fig. 9, q considers the n most recent heartbeat
messages (for some n), denoted m'_1, ..., m'_n; let s_1, ..., s_n be the sequence numbers of such messages
and A'_1, ..., A'_n be their receipt times according to q's local clock. Then EA_{ℓ+1} can be estimated by:
    EA_{ℓ+1} ≈ (1/n) Σ_{i=1}^{n} (A'_i − η·s_i) + (ℓ+1)·η.
Intuitively, this formula first "normalizes" each A'_i by shifting it backward in time by η·s_i. Then it
computes the average of the normalized A'_i's. Finally, it shifts forward the computed average by (ℓ+1)·η.
It is easy to see that this is a good estimate of EA_{ℓ+1}. We denote by NFD-E the algorithm obtained
from Fig. 9 by replacing EA_{ℓ+1} with this estimate. Our simulations show that NFD-E and NFD-U are
practically indistinguishable for values of n as low as 30. Thus, for large values of n, the configuration
procedure for NFD-U can also be used to configure NFD-E. See Fig. 11.
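The estimate above translates directly into code; in this sketch, window is assumed to hold the
(s_i, A'_i) pairs of the n most recent heartbeats, with receipt times taken on q's local clock.

def estimate_next_arrival(window, eta, ell):
    # Normalize each receipt time back by eta * s_i, average the normalized
    # times, then shift the average forward by (ell + 1) * eta.
    avg = sum(a - eta * s for s, a in window) / len(window)
    return avg + (ell + 1) * eta

def next_freshness_point(window, eta, ell, alpha):
    # The next freshness point of NFD-E: tau_{ell+1} = EA_{ell+1} + alpha,
    # as in line 10 of Fig. 9 with EA replaced by its estimate.
    return estimate_next_arrival(window, eta, ell) + alpha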
[Figure 11: Meeting QoS requirements with NFD-E (same as with NFD-U, except that the expected
arrival times EA_i's of heartbeats are estimated).]
7 Simulation Results
We simulate both the new failure detector algorithm that we developed and the simple algorithm commonly
used in practice (as described in Section 1.2). In particular, (a) we simulate the algorithm NFD-S
(the one with synchronized clocks), and show that the simulation results validate our QoS analysis of
NFD-S in Section 3.3; (b) we simulate the algorithm NFD-E (the one without synchronized clocks that
estimates the expected arrival times), and show that it provides essentially the same QoS as NFD-S;
and (c) we simulate the simple algorithm and compare it to the new algorithms NFD-S and NFD-E,
and show that the new algorithms provide a much better accuracy than the simple algorithm.
The settings of the simulations are as follows. For the purpose of comparison, we normalize the
intersending time η of heartbeat messages in both the new algorithm and the simple algorithm to 1. The
message loss probability p_L is set to 0.01. The message delay D follows the exponential distribution
(i.e., Pr(D ≤ x) = 1 − e^{−x/E(D)} for all x ≥ 0). We choose the exponential distribution because of
the following two reasons: first, it has the characteristic that a large portion of messages have fairly
short delays while a small portion of messages have large delays, which is also the characteristic of
message delays in many practical systems [14]; second, it has a simple analytical representation which
allows us to compare the simulation results with the analytical results given in Theorem 4. The average
message delay E(D) is set to 0.02, which is a small value compared to the intersending time η. This
corresponds to a system in which message delays are in the order of tens of milliseconds (typical for
messages transmitted over the Internet), while heartbeat messages are sent every few seconds. Note
that since D follows an exponential distribution, the standard deviation is equal to E(D) = 0.02, and
the variance is E(D)^2 = 0.0004.
To compare the accuracy of different algorithms, we first set their parameters so that: (a) they send
messages at the same rate (recall that η is normalized to 1), and (b) they satisfy the same bound T^U_D
on the detection time.
[Figure 12: The average mistake recurrence times obtained by: (a) simulating the new algorithms NFD-S and
NFD-E (shown by + and ×), (b) simulating the simple algorithm SFD-L and SFD-S, and (c) plotting the
analytical formula for E(T_MR) of the new algorithm NFD-S (shown by the solid line). The x-axis is the
required bound T^U_D on the worst-case detection time; the y-axis is the average mistake recurrence time.]
We simulated runs for values of T^U_D ranging from 1 to 3.5, and for each value of T^U_D, we measured
the accuracy of the failure detectors in terms of the average mistake recurrence time E(T_MR) and the
average mistake duration E(T_M). For each value of T^U_D, we plotted E(T_MR) by considering a run
with 500 mistake recurrence intervals, and computing the average length of these intervals. We do not
show the plots for E(T_M) because the E(T_M) of all the algorithms were similar and bounded above by
approximately the average message delay E(D) = 0.02.
7.1 Simulation Results of NFD-S and NFD-E
To ensure that NFD-S meets the given upper bound T^U_D on the detection time, we set δ to T^U_D − η (as
prescribed by Theorem 4 (1)). In algorithm NFD-E, we choose to estimate the expected
arrival time using the n most recent messages. To ensure that NFD-E meets the given upper
bound T^U_D, we set α accordingly (here, α = T^U_D − η − E(D), since in the simulation E(D) is known).
In Fig. 12, we show the simulation results for algorithms NFD-S and NFD-E, together with the
analytical formula of E(T MR ) derived in Section 3.3. These results show that: (a) the accuracy of
algorithms NFD-S and NFD-E are very similar, and (b) the simulation results of both algorithms match
the analytical formula for E(T MR ).
7.2 Simulation Results of the Simple Algorithm
The simple algorithm has no upper bound on the detection time. However, such an upper bound can
be guaranteed with a simple modification: the general idea is to discard heartbeats which have very
large delays. More precisely, the modified algorithm has another parameter, the cutoff time c, such
that all heartbeats delayed by more than c time units, called slow heartbeats, are discarded. 12 With this
modification, the detection time T_D is bounded: T_D ≤ c + TO.
Given a bound T^U_D on the detection time, there is a tradeoff in setting the cutoff time c and the timeout
value TO: the larger the cutoff time c, the smaller the number of slow heartbeats being discarded, but
the shorter the timeout value TO, and vice versa. In our simulations, we choose two cutoff times, both
small multiples of the average message delay. The timeout TO is
set to T^U_D − c. The algorithm with the larger cutoff, c = 0.16, is denoted by SFD-L, and the one with
the smaller cutoff by SFD-S.
The simulation results on the average mistake recurrence times of SFD-L and SFD-S (Fig. 12) show
that the accuracy of the new algorithms (with or without synchronized clocks) is better - sometimes
by an order of magnitude - than the accuracy of the simple algorithm. Intuitively, this is because the
use of a cutoff time to bound the detection time in the simple algorithm is detrimental to its accuracy: if
the simple algorithm uses a large cutoff time, then it must use a small timeout value, and this decreases
the accuracy of the failure detector; if it uses a small cutoff time, then it discards more heartbeats, and
this is equivalent to an increase in the message loss probability; this in turn also decreases the accuracy
of the failure detector (a detailed explanation of the simulation results can be found in [13]).
8 Concluding Remarks
An Adaptive Failure Detector. In this paper, we assumed that the probabilistic behavior of heartbeat
messages does not change. In some networks, this may not be the case. For instance, a corporate network
may have one behavior during working hours (when the message traffic is high), and a completely
different behavior during lunch time or at night (when the system is mostly idle): During peak hours,
the heartbeat messages may have a higher loss rate, a higher expected delay, and a higher variance of
delay, than during off-peak hours. Such networks require a failure detector that adapts to the changing
conditions, i.e., it dynamically reconfigures itself to meet some given QoS requirements.
It turns out that our failure detectors can be made adaptive, as we now explain. For the case when
clocks are synchronized, we make NFD-S adaptive by periodically reexecuting the configuration outlined
in Fig. 8. The basic idea is to periodically run the estimator, which uses the n most recent
heartbeats to estimate the current values of p_L, E(D) and V(D). These estimates are then fed into the
configurator to recompute the new failure detector parameters η and δ.
[Footnote 12: This assumes that the algorithm can detect slow messages; this is not easy when local clocks are not
synchronized, but a fail-aware datagram service [18] may be used.]
Similarly, when clocks are not synchronized, we can make NFD-E adaptive by periodically reexecuting
the configuration outlined in Fig. 11. The only difference here is that the estimator also outputs EA_i,
the estimated arrival time of the next heartbeat, which is input into the failure detector NFD-E.
The above adaptive algorithms form the core of a failure detection service that is currently being implemented
and evaluated [15]. This service is intended to be shared among many different concurrent
applications, each with a different set of QoS requirements. The failure detector in this architecture dynamically
adapts itself not only to changes in the network condition, but also to changes in the current
set of QoS demands (as new applications are started and old ones terminate).
Acknowledgments
We would like to thank Carole Delporte-Gallet, Hugues Fauconnier and anonymous referees of the
conference version of this paper for their useful comments which helped us improve the paper.
--R
Using the heartbeat failure detector for quiescent reliable communication and consensus in partitionable networks.
Failure detection and consensus in the crash-recovery model
On quiescent reliable communication.
Transis: a communication sub-system for high availability
Probabilistic clock synchronization in distributed systems.
Relacs: a communications infrastructure for constructing reliable applications in large-scale distributed systems
Probability and Measure.
K. P. Birman and R. van Renesse, editors. Reliable Distributed Computing with the Isis Toolkit.
Requirements for Internet Hosts-Communication Layers
On the impossibility of group membership.
Unreliable failure detectors for reliable distributed systems.
On the Quality of Service of Failure Detectors.
Probabilistic clock synchronization.
Failure detector service for dependable computing (fast abstract).
Failure detectors in omission failure environ- ments
Accelerated heartbeat protocols.
Non blocking atomic commitment with an unreliable failure detector.
The Ensemble System.
A fault-tolerant multicast group communication system
In Search of Clusters.
Group membership failure detection: a simple protocol and its probabilistic analysis.
Stochastic Processes.
--TR
Probability, statistics, and queueing theory with computer science applications
Unreliable failure detectors for reliable distributed systems
Totem
Horus
On the impossibility of group membership
In search of clusters (2nd ed.)
Using the heartbeat failure detector for quiescent reliable communication and consensus in partitionable networks
On Quiescent Reliable Communication
Reliable Distributed Computing with the ISIS Toolkit
Probabilistic Clock Synchronization in Distributed Systems
Time in Distributed System Models and Algorithms
Non blocking atomic commitment with an unreliable failure detector
Fail-aware failure detectors
A Fail-Aware Membership Service
Accelerated Heartbeat Protocols
Failure Detectors in Omission Failure Environments
The ensemble system
On the quality of service of failure detectors
| probabilistic analysis;failure detectors;quality of service;fault tolerance;distributed algorithm |
507478 | Visual Input for Pen-Based Computers. | The design and implementation of a camera-based, human-computer interface for acquisition of handwriting is presented. The camera focuses on a standard sheet of paper and images a common pen; the trajectory of the tip of the pen is tracked and the contact with the paper is detected. The recovered trajectory is shown to have sufficient spatio-temporal resolution and accuracy to enable handwritten character recognition. More than 100 subjects have used the system and have provided a large and heterogeneous set of examples showing that the system is both convenient and accurate. | writing, and adopt the machine's ones: typing, mouse-clicking, knob-turning. Learning to
use a keyboard effectively requires time and patience. Ditto for menu-based mouse interfaces. Current interfaces were designed for habitual computer users and for a limited range of tasks. If the "computer revolution" is to reach and benefit the majority of the world population, more intuitive interfaces have to be designed. If machines are to become our helpers, rather than ever more complicated tools, they must be designed to understand us, rather than us having to learn how to use them.
The third shortcoming is inadequacy. Our machines and the rest of our world are not well integrated because machines lack a sensory system. A machine does not know what is happening in its neighborhood; rather, it sits and waits for a human to approach it and touch skillfully some of its hardware. Our desktop, our white-board, the visitor in our office are completely unknown to our office PC. There are many tasks that a machine will simply not do because its interfaces are inadequate.
One avenue towards improving human-machine interfaces is to imitate nature and develop 'senses' for machines. Take vision: cameras may be miniaturized, thus allowing the development of small and cheap hardware; humans can easily read the body language, sketches, and handwriting produced by other humans; if a machine could do the same, this would provide a natural, friendly, and very effective vision-based interface. This interface would allow capturing much information that current interfaces ignore.
The computer industry recognized the advantages of using handwriting as the human-machine communication modality. Pen-based interfaces provide convenience, flexibility, and small size. After the unsuccessful introduction of the visionary Apple Newton in the early '90s, a new generation of pen-based PDAs has established itself in the market. These PDAs (e.g. the popular PalmPilot) represent an interesting compromise. Their input device is the computer screen: the screen must be as large as possible for convenience of use, and as small as possible for portability. The optimal size, as identified by the market (approximately 12x8cm), makes PDAs acceptable but definitely not excellent on both counts.
Handwriting may also be captured using a video camera and computer vision techniques, rather than the traditional tablets and touch-sensitive screens. This is an attractive alternative because cameras may be miniaturized, thus making the interface much smaller. Furthermore, a vision-based system would allow the user to write at will on any convenient surface, e.g., write on a piece of paper with a normal pen, on a blackboard, etc., regardless of size and location.
In this article, we present the first fully on-line, vision-based interface for conveniently and accurately capturing both handwriting and sketching. The interface is designed to be small and simple to use. It is built with a single consumer-electronics video camera and captures handwriting at high temporal (60Hz) and spatial (about 6000x2500 samples) resolution without using a special writing instrument. It allows the user to write at normal speed within a large writing area (more than half a letter-size page) with an output quality that is sufficient for recognition. The input interface consists of a camera, a normal piece of paper, and a normal pen. The camera focuses on the sheet of paper and images the pen tip; computer analysis of the resulting images enables the trajectory of the pen to be tracked and contact of the pen with the paper to be detected.
The paper is organized as follows. In section 1.1 we summarize previous work in this area. This will allow us to motivate our approach and design, which are described in section 2. Section 3 presents a number of experiments that explore the performance of the system. A few concluding observations, as well as themes for future research, are collected in section 4.
1.1 Previous work
The literature on handwriting recognition (see [10, 22, 25] for very comprehensive surveys) is divided into two main areas of research: off-line and on-line systems. Off-line systems deal with a static image in which the system looks for the handwritten words before doing recognition. On-line systems obtain the position of the pen as a function of time directly from the interface. On-line systems have better information for doing recognition since they have timing information and since they avoid the initial search step of their off-line counterparts. The most popular input devices for handwriting are electronic tablets for on-line capturing and optical scanners for off-line conversion. We are of course interested in building on-line human-machine interfaces.
The integration of the electronic and physical aspects of an office has been explored by two ambitious experimental systems. The Digital Desk [30, 31] developed at Rank Xerox EuroPARC merges physical objects (paper documents and pencils) with their electronic counterparts using computer vision and video projection. A computer screen is projected onto a physical desk using a video projector, while a camera is set up to watch the workspace such that the surface of the projected image and the surface of the image area coincide. A tablet digitizer or a finger tracked by the camera, like the system developed at INPG, Grenoble [4, 5], are used to input mouse-type information into the system, allowing one to select or highlight words on paper documents, cut and paste portions of text, draw figures, etc. The Liveboard [6, 19] developed by Xerox is similar in concept to the digital desk. This device is the replacement for the pads of flip-chart paper used in meetings. A computer screen is projected onto a white-board and a cordless pen is used as input. The same image could be displayed onto boards placed at different locations and the input from each of the boards overlaid on all of them, allowing in this way for remote collaboration. The Digital Desk and the Liveboard are steps towards the integration of paper documents into the computing environment; these systems motivate the development of human-computer interfaces that can collect and interpret sketches and handwriting, and that do not require special hardware such as tablets and instrumented pens.
A few vision-based interfaces [2, 14, 15, 18, 32] for handwriting are described in the literature. The MEMO-PEN [18] consists of a special pen that carries a small CCD camera close to its tip, a stress sensor, a micro computer, and a memory. The camera captures a series of snapshots of the writing, while the stress sensor detects the pressure applied on the ballpoint to have a record of the pen-up/-down strokes. The images captured by the camera only include a partial portion of the writing, so the whole handwritten trace is recovered by overlaying successive snapshots. This system is quasi on-line since timing information is provided by the causality of image collection; however, the corresponding recognizer would need to look for the ink trace on the images before doing recognition. Also, the user is forced to write with a special-purpose stylus rather than with a common pen. Alternative approaches [2, 32] consist of a video camera aimed at a user writing on a piece of paper. The camera provides a sequence of images at a frequency of 19 Hz. The last image of the sequence is thresholded in order to segment out the written text. The temporal order of the handwriting is reconstructed with a batch process by detecting the trace of ink produced between each two successive images. This detection may be obtained by performing image differencing between successive images at the location of the segmented text. The user is required to write with a white pen under carefully controlled lighting conditions [2]. This system provides the location of the pen tip in each image, but it still requires batch processing after all the text has been written. Besides, the ink trace detection method is prone to errors due to changes in lighting conditions and small movements of the writing surface.
In contrast with the mentioned systems, our approach [14, 15] is fully on-line. It obtains data from a fixed video camera and it allows the user maximum flexibility in choosing virtually any pen and writing surface. We track the pen tip in real time in order to reconstruct its trajectory accurately and independently of changes in lighting. As we show in section 2.5 (see also fig. 5), our interface increases the spatial resolution by a factor of ten (as compared with the batch ink-trace approach [2, 32]) and improves robustness with respect to lighting and small motions of the writing surface. The pen tip is tracked continuously both when the user is writing and when the pen is traveling on top of the paper. The detection of the strokes corresponding to the ink trace is the added burden that our system pays for all the described improvements.
2 Vision System for Pen Tracking
Our design of the interface is subject to the following constraints: all components (camera, frame grabber, computer) must be cheap and readily available; the user has to be able to write in a comfortable position using a normal pen; the interface has to be simple and intuitive so that the user's training time and effort is minimal; the acquired handwritten trajectory has to have sufficient spatio-temporal information to enable recognition.
The first premise constrains the selection of the video camera to commercial consumer electronics devices. Typical low-cost cameras have spatial resolution of 480x640 pixels (rows x cols) at a frequency of 30 Hz. Most cameras are interlaced, so each frame is composed of two half-frames with a maximum resolution of 240x640 pixels at a frequency of 60 Hz. Given that the cut-off temporal frequency of handwriting is below 20 Hz [12, 24, 29], we are well above the Nyquist frequency of handwriting by working at 60 Hz, making sure that no frequency component of handwriting is lost. The spatial resolution of the interface should be such that it enables clear legibility of the acquired handwriting. Figure 1(c) presents one example image provided by a camera located above the writing hand, as shown in fig. 1(b). The resulting acquired trajectory is a signature, shown magnified in fig. 1(d). This sequence approximately occupies 20 image pixels per centimeter of writing; the spatial accuracy of the interface is 0.1 pixels; thus, the resolution of the system is about 200 samples per centimeter. This signature, as well as the other trajectories presented in the figure, is easily readable, showing that this ratio of image pixels per centimeter of writing provides sufficient information for a human to perform recognition. All handwriting examples shown in this paper follow a similar ratio of pixels per centimeter of writing.
In order to satisfy the other premises, our interface does not require calibration and provides the user with the flexibility of arranging the relative positions of the camera and the piece of paper. Only two conditions are imposed on the user: one is that the camera should be located so that it has a clear sight of the pen tip, and the other is that the writing implement should have sufficient contrast with the piece of paper.
Figure 1 shows the block diagram of the system, the experimental setup, an example of an image provided by the camera, and three pen tip trajectories captured with the interface along with their corresponding pen-down strokes. These examples show an important difference between our interface and conventional handwriting capture devices: we obtain a continuous trajectory by tracking the position of the pen tip in each of the images in the sequence; for some applications this trajectory must be segmented into strokes corresponding to ink trace (pen-down strokes) and strokes corresponding to movement above the paper (pen-up strokes). The method developed to detect pen-up/-down strokes as well as the design and prototyping of the interface are the main contributions of this paper.
2.1 Initialization and preprocessing
The detection and localization of the position of the pen tip in the first frame and the selection of the template to be used for detection in subsequent frames is the first problem to solve. There are two possible scenarios: (a) the user writes with a pen that is familiar to the system or (b) an unknown pen is used. The familiar-pen case is easy to handle: the system may use a previously stored template representing the pen tip and detect its position in the image by correlation.
There are a number of methods to initialize the system when the pen is unknown. Our initialization method is a semi-automatic one that requires a small amount of user cooperation. It is based on a few reasonable assumptions: we assume that the user is writing with a dark-colored pen on a light-colored piece of paper; we assume that the pen tip is conical in shape; and we assume that the edges between the pen tip and the paper have larger contrast than the edge between the pen tip and the finger (see fig. 2(i)). The first assumption restricts the pen to be used with the system to have a well-defined contrast with the paper. Hence, transparent pens or pens without contrast could not be used. The restriction is not severe since pens come in all sorts of colors and it is quite simple to get one that satisfies the requirement. The second assumption is true for most commercial pens. The third assumption restricts the paper to be lighter than human skin. The requirement is easily satisfied writing on a piece of common white paper.
We display the image captured by the camera on the screen of the computer. A rectangular box is overlaid on this image as shown in figure 2(a). The user is required to place the pen tip inside the displayed box, ready to start writing. The system watches for activity within this box, which is measured by image differencing between frames. After the pen tip enters the box, the system waits until there is no more activity within the box, meaning
[Figure 1 shows: (a) a block diagram with the modules Camera, Initialization & Preprocessing, Pen Tip Detector, Filter, Ballpoint Detector, and Pen-up/-down Classifier; (b) the experimental setup; (c) an image from the camera; and (d)-(i) captured trajectories with their pen-down strokes, as described in the caption below.]
Figure 1: Overview of the System. (a) Block diagram of the system. The camera feeds a sequence of images to the preprocessing stage (sec. 2.1). This block initializes the algorithm, i.e., it finds the initial position of the pen, and selects the template (rectangular subregion of the image) corresponding to the pen tip. In subsequent frames, the preprocessing stage has only the function of cutting a piece of image around the predicted position of the pen tip and feeding it into the next block. The pen tip detector (sec. 2.2) has the task of finding the position of the pen tip in each frame of the sequence. The filter (sec. 2.3) is a recursive estimator that predicts the position of the tip in the next frame based on an estimate of the current position, velocity, and acceleration of the pen. The filter also estimates the most likely position of the pen tip for missing frames. The ballpoint detector (sec. 2.5) finds the position of the very end of the pen tip, i.e., the place where the pen is in contact with the paper when the user is writing. Finally, the last block of our system checks for the presence of ink on the paper at the positions where the ballpoint of the pen was detected (sec. 2.6). (b) Experimental setup. The system does not require any calibration. The user has the flexibility of arranging the relative positions of the camera and the piece of paper in order to write comfortably, as long as the system has a clear sight of the pen tip. (c) Image provided by the camera. The user has a writing area larger than half a letter-size page. This image is the last frame corresponding to the trajectory shown in (d). The pen tip is tracked continuously both when the user is writing (pen-down strokes) and when the pen is moving on top of the paper (pen-up strokes). The complete tracked trajectory is shown in (d). (e) Pen-down strokes corresponding to trajectory (d). (f),(h) Two more examples of handwritten sequences acquired with the interface. (g),(i) Corresponding pen-down strokes.
that the user has taken a comfortable position to start writing. When the activity within the box has returned to low for a period of time (greater than 200 ms), the system acquires the pen tip template, sends an audible signal to the user, and starts tracking.
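As an illustration, such an activity trigger can be sketched with simple frame differencing; the function names and the frame source are placeholders, not the authors' code, while the difference threshold (15), the pixel count (20), and the 200 ms wait follow table 1.

```python
import numpy as np

def box_activity(prev_frame, frame, box, diff_thresh=15, count_thresh=20):
    """Count pixels inside `box` whose brightness changed by more than
    `diff_thresh` between two frames; `box` = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = box
    diff = np.abs(frame[r0:r1, c0:c1].astype(int) -
                  prev_frame[r0:r1, c0:c1].astype(int))
    return int((diff > diff_thresh).sum()) >= count_thresh

def wait_for_pen(frames, box, quiet_frames=12):
    """Scan an iterable of grayscale frames: wait until activity appears in
    the box, then until `quiet_frames` consecutive frames show no activity
    (roughly 200 ms at 60 half-frames/s); return that frame index."""
    prev, seen_activity, quiet = None, False, 0
    for i, frame in enumerate(frames):
        if prev is not None:
            if box_activity(prev, frame, box):
                seen_activity, quiet = True, 0
            elif seen_activity:
                quiet += 1
                if quiet >= quiet_frames:
                    return i
        prev = frame
    return None
```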
Figure 2(i) shows the pen tip, whose conical shape projects onto the image plane as a triangle. One of the borders of this triangle corresponds to the edge between the pen tip and the user's finger and the two other boundaries correspond to the edges between the pen tip and the piece of paper. Detection and extraction of the pen tip template is reduced to finding the boundary points of the pen tip, computing the corresponding centroid, and selecting a portion of the image around the centroid. The edges between the pen tip and the paper have bigger contrast than the edge between the pen tip and the finger; thus, we only look for these two boundaries in the detection and extraction of the template. The boundaries of the pen tip are located using Canny's edge detector [3] as shown in figure 2(c). Since detection and extraction of the pen tip from a single frame is not very reliable due to changes in illumination, the system collects information about the pen tip for a few frames before extracting the template. The algorithm is summarized in figure 2.
The selection of the pen tip template is performed only at the beginning of the acquisition.
The function of the initialization and preprocessing module in subsequent frames is only to
extract a region of interest centered around the predicted position of the pen tip. The region
of interest is used by the following block of the system to detect the actual position of the
centroid of the pen tip in the current image.
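A minimal sketch of the template-extraction idea, assuming OpenCV and NumPy; the 25x25 template size follows table 1, but the Canny thresholds, the single-frame centroid (the paper averages over several frames), and the omission of the quadrant clustering of figure 2(e) are simplifications of the authors' algorithm.

```python
import cv2
import numpy as np

def extract_pen_tip_template(gray, box, template_size=25):
    """Locate high-contrast pen-tip boundary pixels inside the initialization
    box with Canny's edge detector, take their centroid, and cut a square
    template around it. `gray` is a grayscale uint8 frame; `box` = (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = box
    edges = cv2.Canny(gray[r0:r1, c0:c1], 50, 150)   # keep only strong edges
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return None, None
    cy, cx = int(ys.mean()) + r0, int(xs.mean()) + c0   # centroid of edge pixels
    h = template_size // 2
    template = gray[cy - h:cy + h + 1, cx - h:cx + h + 1]
    return template, (cy, cx)
```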
2.2 Pen Tip Detection
The second module of the system has the task of detecting the position of the pen tip in the current frame of the sequence. The solution of this task is well known in the optimal signal detection literature [8, 26]. Assuming that the signal to be detected is known exactly except for additive white noise, the optimal detector is a matched filter, i.e., a linear filter that looks
[Figure 2 shows panels (a)-(i) of the initialization procedure; the labels include the boundary quadrants (most voted and second most voted), the estimated edge, the ballpoint, and the finger.]
Figure 2: Tracking Initialization. (a) Image provided to the user; the white rectangle is the initialization box. (b) The user has to place the pen tip inside the box so that the system can acquire the tracking template. Image differencing is used to detect when the pen tip gets inside the box. The figure shows the result of image differencing when the pen enters the tip acquisition area. (c) The boundaries of the pen tip are extracted using Canny's edge detector inside the initialization box. Only pixels with high contrast are selected. The dots display the boundary pixels and the cross indicates their centroid. Sub-pixel resolution in the location of edge elements is achieved by fitting a parabolic cylinder to the contrast surface in the neighborhood of each pixel. (d) Orientation of the boundary edge elements obtained with Canny's detector. (e) The different boundaries of the pen tip are obtained by clustering the orientation of the edge elements into the four quadrants and interpolating lines through the corresponding clustered pixels. (f) In the case in which only one of the boundaries is reliably detected, the other pen tip boundary is obtained by searching the image brightness profile along lines perpendicular to the detected boundary. Points of maximum contrast on these profiles define the missing boundary. The detection of the boundaries of the pen tip is performed on a sequence of frames in order to increase the robustness of the template extraction. The final centroid position is obtained as the mean of the location of the centroid in each individual frame. (g) The triangular model of the pen tip is completely specified with the location of the centroid of the tip, the orientation of the axis of the tip, and the positions of the finger and of the ballpoint. The pen tip axis is defined as the line passing through the centroid of the boundary pixels, whose orientation is the mean of the orientation of the boundary lines. (h) Image brightness profile across the estimated pen tip axis. The positions of the ballpoint and of the finger are extracted by performing a 1D edge detection on the profile. Subpixel accuracy is obtained by fitting a parabola to the edge detection result. (i) Final template of the pen tip automatically extracted by the interface.
[Figure 3 shows the pen tip template correlated against a region of interest centered on the predicted position of the pen tip; the most likely position of the pen tip is given by the location of maximum correlation.]
Figure 3: Pen Tip Detector. The detection of the pen tip is obtained in our system by locating the maximum of the normalized correlation between the pen tip template and a subimage centered on the predicted position of the pen tip. The system analyzes the values of maximum normalized correlation to detect whether the pen tip is within the predicted region of interest. If the value of maximum correlation is lower than a threshold, the system emits an audible signal and continues to look for the pen tip in the same place, waiting for the user to realize that tracking has been lost and that the pen tip must be returned to the region of interest. The system waits for a few frames; if the pen tip does not return to sight, then tracking stops.
like the signal to be detected. In our case, the signal consists of the pixels that represent the pen tip, and the noise has two components: one component is due to noise in the acquisition of the images; the other one is due to shadows, to pen markings on the paper, and to changes in the apparent size and orientation of the pen tip during the sequence of images. The acquisition noise is the result of a combination of many factors like changes in illumination due to light flickering or automatic gain of the camera, quantization noise, changes in gain of the frame grabber, etc., where not all these factors are additive. Changes in the apparent size and orientation of the pen while the user is writing significantly distort the pen tip image, as shown in figure 3. Clearly neither component of the noise strictly satisfies the additive white noise assumptions of the matched filter; however, as a first approximation, we will assume that the pen tip can be detected in each frame using the matched filter. In our system, the final localization of the pen tip is performed by fitting a triangle to the image of the tip as described in section 2.5.
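The matched-filter detection can be sketched with normalized cross-correlation as follows; this is an illustrative version rather than the authors' code, with the 15x15 search window and the 0.75 match threshold taken from table 1.

```python
import numpy as np

def normalized_correlation(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def detect_pen_tip(gray, template, predicted, window=15, match_thresh=0.75):
    """Scan a neighborhood of the predicted position and return the location
    of maximum normalized correlation, or None if the best score falls
    below the threshold (tracking lost)."""
    th, tw = template.shape
    best, best_pos = -1.0, None
    pr, pc = predicted
    for r in range(pr - window, pr + window + 1):
        for c in range(pc - window, pc + window + 1):
            patch = gray[r - th // 2:r + th // 2 + 1,
                         c - tw // 2:c + tw // 2 + 1]
            if patch.shape != template.shape:
                continue                      # near the image border
            score = normalized_correlation(patch.astype(float),
                                           template.astype(float))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos if best >= match_thresh else None
```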
2.3 Filtering
The filter predicts the most likely position of the pen tip on the following frame based on the current predicted position, velocity, and acceleration of the pen tip, and on the location of the pen tip given by the pen tip detector. The prediction provided by the filter allows the interface to reduce the search area, saving computations while still keeping a good pen tip detection accuracy. The measurements are acquired faster and the measured trajectory is smoothed by the noise rejection of the filter. A Kalman Filter [1, 9, 11] is a recursive estimation scheme that is suitable for this problem. We tested several different first- and second-order models for the movement of the pen tip on the image plane. The model that provided the best performance with the easiest tuning was a simple random walk model for the acceleration of the pen tip on the image plane. The model is given by equation 1:

    x(k+1) = x(k) + v(k)
    v(k+1) = v(k) + a(k)
    a(k+1) = a(k) + n_a(k)                                        (1)
    y(k)   = x(k) + n_y(k)

where x(k), v(k), and a(k) are the two-dimensional components of the position, velocity, and acceleration of the pen tip, and n_a(k) and n_y(k) are additive zero-mean, Gaussian, white noise processes. The output of the model y(k) is the position of the pen tip corrupted by additive noise. The filter parameters used in the real-time implementation of the system are listed in table 1.
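To make the model concrete, the following NumPy sketch implements one predict/update cycle of a Kalman filter for the six-dimensional state (2D position, velocity, acceleration). The process-noise matrix Q mirrors the diagonal structure of table 1 (noise enters only on the acceleration); the measurement covariance R and the initial covariance are illustrative values, since the corresponding entries of table 1 are incomplete in the source.

```python
import numpy as np

# State s = [x1, x2, v1, v2, a1, a2]; random-walk acceleration model (eq. 1).
I2, Z2 = np.eye(2), np.zeros((2, 2))
F = np.block([[I2, I2, Z2],
              [Z2, I2, I2],
              [Z2, Z2, I2]])          # s(k+1) = F s(k) + state noise
H = np.block([[I2, Z2, Z2]])          # y(k) = x(k) + measurement noise
Q = np.diag([0, 0, 0, 0, 1e4, 1e4])   # process noise only on acceleration
R = np.diag([10.0, 10.0])             # assumed measurement noise covariance

def kf_step(s, P, y=None):
    """One Kalman predict/update cycle. If y is None (missing frame),
    only the prediction is performed."""
    s_pred = F @ s
    P_pred = F @ P @ F.T + Q
    if y is None:
        return s_pred, P_pred
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    s_new = s_pred + K @ (y - H @ s_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return s_new, P_new

# Usage: update with a measured tip position, then predict the next search center.
s, P = np.zeros(6), np.eye(6) * 1e2
s, P = kf_step(s, P, y=np.array([120.0, 240.0]))
next_search_center = (F @ s)[:2]
```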
2.4 Missing frames
The algorithm described in section 2.2 detects the position of the pen tip in each frame of
the sequence. Unfortunately, some intermediate frames could be missing due to problems in
image acquisition, or, in the case of the real-time implementation, due to synchronization
problems between the host computer and the frame grabber. It is desirable to sample the
handwritten trajectory at a constant rate; hence, there is a need for estimating the most
likely position of the pen tip for the missing frames. The Kalman smoother [1, 9] is the
scheme used in our system to solve this estimation problem (for more information see [14]).
2.5 Ballpoint detection
The pen tip detector finds the most likely position of the centroid of the pen tip, a point that will be close to the center of gravity of the triangular model of the pen tip (see 2.1). The position of the ballpoint¹ is obtained using an algorithm similar to the one used in the initialization; the major difference is that the pen is now in movement, so we need to compute one ballpoint position for each frame.
Using Canny's edge detector, we find the position and orientation of the boundary edges of the pen tip. The edge detector is only applied to small windows in order to save computations and to speed up the processing of the current frame. We calculate the expected position of the boundaries using the orientations of the boundaries in the previous frame, the distance from the ballpoint and the finger to the centroid of the pen tip, as well as the current detected position of the centroid. A few points on these boundaries (in the case of the real-time system, we use five points) are chosen as the centers of the edge detection windows; we look for points in each window that have maximum contrast; the edges are found by interpolating lines through these points; the axis of the pen tip is computed as the mean line
¹ The term ballpoint is loosely used to indicate the actual ballpoint of pens and the pencil lead of pencils.
Figure 4: Fine localization of the ballpoint. (a) Image of the pen tip displaying the elements used to detect the ballpoint. The cross '+' in the center of the image shows the centroid of the pen tip provided by the pen tip detector. The points marked with a star '*' show the places where the boundaries of the pen were found using edge detection. The lines on the sides of the pen tip are the boundary edges and the line in the middle is the pen tip axis. The other two crosses '+' show the estimated positions of the ballpoint and of the finger. (b) Brightness profile along the axis of the pen tip. The positions of the ballpoint and of the finger are obtained by performing a 1D edge detection on the profile. This 1D edge detection is computed by correlating the profile with a derivative of a Gaussian function. The spatial resolution of the interface is defined by the accuracy of the localization of the ballpoint. The desired locations are extracted with subpixel resolution by fitting a parabola to the correlation peaks. (c) Result of correlating the image profile with a derivative of a Gaussian function. (d) Blow-up of the region between the dotted vertical lines in (c). The parabolic fit of the peak identifies the position of the ballpoint. The vertex of the parabola plotted with a cross 'x' corresponds to the estimated sub-pixel position of the ballpoint.
defined by the pen boundary edges; the image brightness profile through the axis of the tip is extracted in order to find the positions of the ballpoint and of the finger (see figure 4).
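The 1D edge detection with subpixel refinement can be sketched as follows; the Gaussian scale of 3 pixels follows table 1, while the kernel radius and the synthetic example are illustrative choices.

```python
import numpy as np

def gaussian_derivative(scale=3.0, radius=9):
    """Sampled derivative of a Gaussian, used as a 1D edge detection kernel."""
    t = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-t**2 / (2 * scale**2))
    return -t * g / (scale**2)

def subpixel_edge(profile, scale=3.0):
    """Correlate a 1D brightness profile with a derivative of a Gaussian and
    refine the strongest peak by fitting a parabola through it and its two
    neighbors. Returns the edge position with subpixel accuracy."""
    response = np.convolve(profile, gaussian_derivative(scale), mode='same')
    k = int(np.argmax(np.abs(response)))
    if k == 0 or k == len(response) - 1:
        return float(k)
    y0, y1, y2 = np.abs(response[k - 1:k + 2])
    denom = y0 - 2 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return k + offset    # vertex of the parabola through the three samples

# Example: a smoothed step edge located near sample 20.5
x = np.arange(40, dtype=float)
profile = 1.0 / (1.0 + np.exp(-(x - 20.5)))
print(subpixel_edge(profile))   # close to 20.5
```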
2.6 Pen up detection
The trajectories obtained by tracking the ballpoint are not suitable for performing handwriting recognition using standard techniques; most of the recognition systems to date assume that their input is only formed by pen-down strokes, i.e., portions of the trajectory where the pen was in contact with the paper. Our interface has only one camera, from which we cannot detect the 3D position of the ballpoint; therefore, contact has to be inferred indirectly. A stereo system would solve this problem at a cost in additional hardware, calibration, and visibility of the pen tip.
The detection of the times when the pen is lifted, and therefore not writing, is accomplished in our system by using the additional information provided by the ink path on the paper. Given a particular position of the ballpoint, the system checks whether there is an
Figure 5: Difficulties in detecting the ink trace. (a) The plot shows one sequence acquired with the interface. The dots indicate the ballpoint position over time. (b) The image displays a portion of the last frame of the sequence (we can see part of the pen tip on the right side of the image) showing the corresponding ink trace deposited on the paper. (c) Recovered ballpoint trajectory overlaid on the image of the ink trace. The sample points land over the ink trace most of the time, with the exception of points at the beginning of the sequence (shown on the left side of the image). This happens because there might have been a displacement of the paper generated by one of the strokes (probably the long horizontal stroke between samples 20 and 40). (d) Each column of the picture shows the brightness profile of the image along lines that pass through each sample point and are perpendicular to the direction of motion. Brightness is measured at the position of the ballpoint and on five pixels on each side of the ballpoint along the mentioned perpendicular. We note that the ink trace is not always found at the ballpoint position (row 6 of the plot). We can see the ink trace being a few pixels off the ballpoint pixel at the beginning of the sequence (samples 1-20), then stabilizing on the ballpoint (samples 20-35) until the pen tip appears on the profile (samples 35-40), and later disappearing because of a pen-up stroke (samples 40-55). From this example, we observe that we cannot rely on the ink trace captured in the last image of the sequence, but we should rather detect the presence of ink as the ballpoint trajectory is being acquired (see sections 2.6.1-2.6.4).
ink trace on the paper at this place or not. The image brightness at any given place varies with illumination, writer's hand position, and camera gain; moreover, the image contrast could change from frame to frame due to light flickering and shadows. Hence, the detection of the ink trace on the paper using image brightness is quite difficult, as illustrated by the example of figure 5.
We can draw several observations from the simple example of figure 5. The ink trace is narrow (1-2 pixels), so even a small error in locating the ballpoint could lead to a mismatch between the ballpoint and the ink trace. The handwritten strokes are quite distorted due to
[Figure 6 shows: ink absence detection on the trajectory image feeding a confidence measure into a two-state Hidden Markov Model (pen up U / pen down D) that estimates the most likely state sequence, together with trajectory segmentation and classification blocks.]
Figure 6: Pen up/down classification. Block diagram of the pen-up/-down classification subsystem. We detect when the pen is up or down using a bottom-up approach. The brightness of each point in the trajectory is compared with the brightness of the surrounding pixels. This comparison provides a measure of the confidence of ink absence. A Hidden Markov Model is used to model the transition of the confidence measure between the states of pen up and pen down. Using the local confidence measure and the estimated HMM state sequence, the system classifies each point of the trajectory as pen up or pen down. The measure of ink absence is difficult to obtain and prone to errors, so it is better to divide the trajectory of the ballpoint into strokes and aggregate the point-wise classification into a stroke-wise classification.
the pixelization of the image, e.g., diagonal straight strokes present a staircase pattern. The value of brightness corresponding to the ink trace varies within the same image (and even more across images), so we need to detect the ink trace in a local and robust way. The local ink measurement should be performed as soon as possible since the paper might move in the course of the session. Working in this way, there would be a good fit of the sample points on top of the ink trace and the system would provide pen-up/-down information as the writing is produced, in an on-line fashion. However, the measurement has to be done after the pen tip moves away; otherwise, the pen tip will obstruct the paper and the ink trace. Figure 6 shows a block diagram of the pen-up/-down detection subsystem. The following sections describe in more detail each of the blocks presented in the figure.
Figure 7: Local ink absence detection. (a) Typical pen-down sample point. The center cross corresponds to the estimated position of the ballpoint of the pen. The detection of ink is performed locally by comparing the brightness at the ballpoint with the brightness of the pixels located on a circle centered at the ballpoint position. The brightness at each point is obtained by interpolation [13]. (b) Histogram of the brightness values measured on the circle and the corresponding Gaussian p.d.f. estimated from these values. The vertical line shows the value of brightness corresponding to the ballpoint's position. The ink absence confidence measure corresponds to the area below the Gaussian p.d.f. between -∞ and the ballpoint's brightness. This confidence measure is equal to 0.075 for this example, indicating that ink is likely to be present.
2.6.1 Local ink detection
The detection of the ink trace is performed locally for each point of the trajectory by comparing the brightness at the ballpoint with the brightness of surrounding pixels (see figure 7). A confidence measure of ink absence is obtained in a probabilistic setting. The brightness of ink-less pixels is assumed to be a Gaussian-distributed random variable. The parameters of the corresponding Gaussian p.d.f. are estimated locally using the brightness of points located on a circle centered at the ballpoint. We assume that all these points correspond to ink-less pixels. The ink absence confidence measure is computed as the probability of the brightness at the ballpoint pixel given that this pixel is ink-less. If there is ink present at the ballpoint pixel, this measure is low, close to zero; otherwise, the measure is high, close to one. The selection of this particular confidence measure is very convenient since it provides automatic scaling between zero and one.
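A compact sketch of this confidence measure, assuming bilinear interpolation for off-grid samples; the circle radius and the number of samples on the circle are illustrative choices, not values from the paper.

```python
import numpy as np
from math import erf, sqrt, cos, sin, pi

def bilinear(img, r, c):
    """Bilinearly interpolated brightness at the fractional point (r, c)."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0] + (1 - dr) * dc * img[r0, c0 + 1]
            + dr * (1 - dc) * img[r0 + 1, c0] + dr * dc * img[r0 + 1, c0 + 1])

def ink_absence_confidence(img, ballpoint, radius=4.0, n_samples=16):
    """Fit a Gaussian to the brightness of points on a circle around the
    ballpoint (assumed ink-less) and return the probability mass below the
    ballpoint's own brightness: near 0 if ink is present, near 1 otherwise."""
    br, bc = ballpoint
    ring = np.array([bilinear(img, br + radius * sin(2 * pi * k / n_samples),
                              bc + radius * cos(2 * pi * k / n_samples))
                     for k in range(n_samples)])
    mu, sigma = ring.mean(), max(ring.std(), 1e-6)
    b = bilinear(img, br, bc)
    # Gaussian c.d.f. evaluated at the ballpoint brightness
    return 0.5 * (1.0 + erf((b - mu) / (sigma * sqrt(2.0))))
```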
The measurements of brightness cannot be obtained until the pen tip has left the measurement area; otherwise, the ink trace will be covered by the pen tip, by the hand of the user, or by both. The system assumes a simple cone-shaped model for the area of the image covered by the pen and the hand of the user. The ballpoint is located at the vertex of the cone, the axis of the pen tip defines the axis of the cone, the position of the finger has to be inside the cone, and the aperture of the cone is chosen to be 90 degrees. This simple model allows the system to determine if the user is left-handed or right-handed and whether a particular ballpoint position is within the cone. The system waits until the cone is sufficiently far away from the area of interest before doing any brightness measurements. Left-handed users are challenging since they usually cover with their hands the most recently written pen strokes; hence, the system has to wait much longer than for right-handed users in order to perform ink detection. We have acquired data from left-handed users for the experiment of section 3.3, but we haven't compared the accuracy of pen-up/-down detection for left- and right-handed users.
2.6.2 Local pen-up/-down modeling
The ink absence confidence measure described in the previous section could be used to decide whether a particular sample point corresponds to pen up or pen down. However, making hard decisions based on a single measurement is likely to fail due to noise and errors in brightness measurements. A soft-decision approach that estimates the probability of each individual point being a pen up or a pen down is more robust. A further improvement is provided by modeling the probability of transition between these two states (pen up or pen down), given the current measurement and the previous state. A Hidden Markov Model (HMM) with two states, one corresponding to pen up and the other corresponding to pen down, is a suitable scheme to estimate these probabilities. The HMM learns the probabilities of moving from one state to the other and the probabilities of rendering a particular value of confidence measure from a set of examples, in an unsupervised fashion. The HMM used in our system has the topology presented in figure 8. We use the forward-backward algorithm [23] to train
Figure 8: Local model of pen up/down. Hidden Markov Model that models the transitions between pen-up and pen-down states. The observation of the model is the ink absence confidence measure, an intrinsically continuous variable as it was defined in section 2.6.1. The HMM output is a set of discrete symbols, so we need to quantize the value of the confidence measure in order to define the output symbols. The confidence measure is a probability, so it is scaled between zero and one. The interval [0, 1] is divided into sixteen equal intervals to quantize each confidence measure value and to translate it into observation symbols. The resulting HMM after training is shown in the figure. The bar plots display the output probability distributions of each state, which are learned purely from the examples.
the HMM using a training set of handwritten sequences collected with the system. The training set consists of examples of cursive handwriting, block letters, numbers, drawings, signatures, and mathematical formulas in order to sample the variability of the pen-up/-down transition for different types of writing. The most likely state of the system at each point in the handwritten trajectory is estimated using Viterbi's algorithm [7, 23].
2.6.3 Trajectory segmentation
The previous two sections describe local measures used to classify each sample of the handwritten trajectory as either pen up or pen down. The measurement of ink absence is subject to errors, so the performance may be improved by dividing the handwritten trajectory into different strokes and by aggregating the sample-wise classification into a stroke-wise classification.
The handwritten trajectory is segmented into strokes using two features, the curvilinear velocity of the pen tip and the curvature of the trajectory. Selection of these two features
Figure 9: Trajectory segmentation. Several examples of trajectories acquired with the interface and the corresponding strokes obtained after segmentation. Successive strokes are indicated alternately with solid and dashed lines. The threshold in curvilinear velocity was chosen so that points that remain within the same pixel in two consecutive frames are discarded (see table 1).
was inspired by the work of Viviani [27, 28] and Plamondon [20, 21], and also by the intuitive idea that at the limit points between two different handwriting strokes the velocity of the pen is very small, the curvature of the trajectory is very high, or both. The set of segmentation points is the result of applying a threshold on each of the mentioned features. These thresholds were obtained experimentally and their values are presented in table 1. Figure 9 shows several examples of trajectories and the corresponding segmented strokes.
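A sketch of this segmentation rule follows, using the thresholds of table 1 (0.75 pixels/frame and 0.05 pixels/frame²). Since the paper does not spell out the curvature estimator, finite differences stand in here, with the magnitude of the second derivative used as a crude curvature proxy matching the pixels/frame² units of table 1.

```python
import numpy as np

def segment_strokes(traj, v_min=0.75, c_max=0.05):
    """Split a trajectory (N x 2 array of ballpoint positions, one per frame)
    at samples with low curvilinear velocity or sharp turning.
    Returns a list of (start, end) index pairs, one per stroke."""
    d1 = np.gradient(traj, axis=0)          # velocity, pixels/frame
    d2 = np.gradient(d1, axis=0)            # second derivative, pixels/frame^2
    speed = np.hypot(d1[:, 0], d1[:, 1])
    bend = np.hypot(d2[:, 0], d2[:, 1])     # crude curvature proxy
    cut = (speed < v_min) | (bend > c_max)
    strokes, start = [], 0
    for i in range(1, len(traj)):
        if cut[i] and not cut[i - 1]:       # entering a segmentation point
            strokes.append((start, i))
            start = i
    strokes.append((start, len(traj)))
    return strokes

# Example: a right-angle corner drawn at constant speed yields two strokes.
leg1 = np.stack([np.arange(0, 30, 2.0), np.zeros(15)], axis=1)
leg2 = np.stack([np.full(15, 28.0), np.arange(2, 32, 2.0)], axis=1)
print(segment_strokes(np.vstack([leg1, leg2])))
```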
2.6.4 Stroke classification
Having divided the trajectory into strokes, we proceed to classify the strokes as either pen-up or pen-down. We experimented with two approaches, one based on the ink absence confidence measure and the other using the state sequence provided by the HMM. In the first approach, the mean of the ink absence confidence measures for all points in the stroke was used as the stroke confidence measure. In the second approach, a voting scheme was used to assess the likelihood of a particular stroke being a pen-up or pen-down; this likelihood provided the stroke confidence measure. If needed, hard classification of each stroke as pen up or pen down can be obtained by applying a threshold on the stroke confidence results. The hard classification, as well as the likelihood of pen up/down, are the stroke descriptors that our interface provides to a handwriting recognition system.
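Both aggregation schemes reduce to a few lines; the 0.2 thresholds follow the caption of table 3, and the per-point inputs are assumed to come from routines like the ones sketched earlier.

```python
import numpy as np

def classify_strokes_mean(conf, strokes, thresh=0.2):
    """Mean ink-absence confidence per stroke; a stroke is pen-down when the
    mean confidence of ink absence is below the threshold."""
    return [("down" if np.mean(conf[s:e]) < thresh else "up",
             float(np.mean(conf[s:e]))) for s, e in strokes]

def classify_strokes_vote(hmm_states, strokes, thresh=0.2):
    """Voting scheme: fraction of points labeled pen-down by the HMM;
    a stroke is pen-down when that fraction exceeds the threshold."""
    labels = np.asarray([1 if s == "down" else 0 for s in hmm_states])
    return [("down" if labels[s:e].mean() > thresh else "up",
             float(labels[s:e].mean())) for s, e in strokes]
```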
2.7 Stopping acquisition
We have mentioned that the system automatically stops if the value of maximum correlation is very low, since this would imply that the pen tip has moved outside the search window (or that there was such a change in illumination that the pen tip no longer matches the template). The user can exploit this behavior to stop the acquisition by taking the pen tip away from the search window. There is another stopping possibility offered to the user. The system checks whether the pen tip has moved at all between consecutive frames and counts the number of consecutive frames in which there is no movement; if this number reaches a predefined threshold, the system stops the acquisition. Thus, if the user wants to finish the acquisition at the end of a desired piece of handwriting, he can hold the pen tip still and the system will stop the acquisition.
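This stopping rule maps directly onto two parameters of table 1 (maximum velocity of 0.5 pixels per frame denoting no movement, and a 0.5 s / 30-frame wait); a minimal sketch:

```python
import numpy as np

def should_stop(positions, still_thresh=0.5, still_frames=30):
    """True when the last `still_frames` inter-frame displacements of the
    tracked pen tip are all below `still_thresh` pixels."""
    if len(positions) <= still_frames:
        return False
    recent = np.diff(np.asarray(positions[-(still_frames + 1):]), axis=0)
    return bool((np.hypot(recent[:, 0], recent[:, 1]) < still_thresh).all())
```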
2.8 Real-time implementation
The interface was implemented using a video camera, a frame grabber, and a Pentium II 230MHz PC. The camera was a commercial Flexcam ID, manufactured by Videolabs, equipped with manual gain control. It has a resolution of 480x640 pixels per interlaced image at 30Hz. The frame grabber was a PXC200, manufactured by Imagination. Figure 10 shows the graphical user interface (GUI) of the Windows-based application that runs our system.
Figure 10: Real-time application. This image shows the GUI (Graphical User Interface) of the Windows-based application that implements our system. The biggest window is a dialog box that allows the user to input parameters and run commands. The top-left window displays the image captured by the camera in order to provide visual feedback to the user. The bottom-left window shows the acquired trajectory after having done point-wise pen up/down classification with a hard threshold.
3 Experimental Results
3.1 System specifications
Temporal and spatial acquisition resolutions are key parameters that define the performance of the interface. The maximum working frequency provided by the camera is 60 Hz, so the temporal resolution of the system is at most 16.67 ms. The system is able to work at maximum frame rate since the total processing time per frame is 14 ms. However, some frames are missed due to a lack of synchronization between the CPU and the frame grabber. A component of the system (see 2.4) estimates the most likely state of the system in the case of missing frames. This scheme is useful if the number of missing frames is small; otherwise, the system would drift according to the dynamics of the model of equation 1.
We have used the system for acquiring hundreds of handwritten sequences in real time, experiencing a missing-frame rate of at most 1 out of every 200 frames. We have shown in references [14, 16, 17] the performance of a signature verification system in which signatures are captured in real time with our interface. Signatures are written at higher speeds than normal handwriting and therefore a bigger image neighborhood has to be searched in order to find the pen tip. We acquired signature sequences by enlarging the search area and turning off the pen-up detection block of the system. In these experiments, we experienced a missing-frame rate of at most 1 out of every 400 frames. We observe that the system occasionally loses track of the pen tip when the subject produces an extremely fast stroke. This problem of losing track of the pen tip could be solved in the future by using a more powerful CPU or dedicated hardware (that is able to process a larger search area). Nevertheless, after a few trials, the user learns how to utilize the system without exceeding its limits.
The spatial resolution of the system was estimated in static and dynamic conditions. We acquired a few sequences in which the pen tip was held fixed at the same location, so any differences in the acquired points were due to noise in the image acquisition and errors in pen tip localization. We repeated this experiment 10 times, placing the pen at different positions and using different illumination. The static resolution of the system was estimated by computing the average standard deviation of the points acquired in each of the sequences. We also acquired 10 sequences of a subject drawing lines of different orientations with the help of a ruler. The lines were carefully drawn to be straight, so any differences from a straight line would be due to noise in the image acquisition and errors in ballpoint localization. We fit a line through the acquired points and computed the distance between the points and the fitting line. The dynamic resolution of the system was estimated by computing the average standard deviation of the mentioned distance in each of the sequences. Figure 11 shows two sequences used to compute the spatial resolution and summarizes the resolution of the system.
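The dynamic-resolution computation is a simple total-least-squares residual; a sketch, with synthetic data standing in for the ruler-drawn sequences:

```python
import numpy as np

def line_fit_residual_std(points):
    """Fit a line to N x 2 points by total least squares (smallest principal
    component) and return the standard deviation of point-to-line distances."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                     # direction of least variance
    return float((centered @ normal).std())

# Synthetic ruler line with 0.05-pixel tracking noise
rng = np.random.default_rng(0)
t = np.linspace(0, 100, 200)
pts = np.stack([t, 0.3 * t + 5], axis=1) + rng.normal(0, 0.05, (200, 2))
print(line_fit_residual_std(pts))       # close to 0.05
```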
We note that the vertical resolution is almost the same for the two experiments, but the horizontal resolution varies by a factor of two from one experiment to the other. This difference is possibly due to the subject holding the pen mostly in a vertical writing position for the static resolution experiment. In any case, we observe that the system has quite a
[Figure 11 panels: a sequence used for static resolution computation; a sequence used for dynamic resolution computation; and a table of horizontal and vertical resolution (in pixels) for the static and dynamic cases.]
Figure 11: Spatial resolution of the interface. The first figure corresponds to points acquired while the pen tip was kept still at a fixed position. This sequence is used to estimate the static resolution of the system. The second figure shows a straight line drawn with a ruler that is used to estimate the dynamic resolution of the system. The standard deviations of the error from the ideal position are given in the table as the estimated static and dynamic resolution of the system. One could take two standard deviations (roughly 0.1 pixel) to obtain a more conservative value of the spatial resolution.
good resolution of less than one tenth of a pixel.
Table 1 summarizes all the parameters used in the implementation of the real-time system. Figure 12 shows several examples of complete handwritten sequences acquired in real time with our system. A few portions of one of the sequences are blown up in order to depict the level of acquisition noise.
3.2 Pen up detection experiments
Only the pen tracking and the local ink detection components of the system have been implemented in the real-time application. In order to evaluate the performance of the complete pen-up detection subsystem, we collected 20 sequences comprising various types of handwriting (cursive, block letters, printed letters, numbers, drawings, signatures, and mathematical formulas). We used half of these sequences for training the HMM and the other half for testing. We obtained ground truth by classifying by hand each of the points of the test sequences as pen up or pen down. We also classified by hand each of the strokes in which the test
Parameter: Value
Pen tip template size: 25x25 pixels
Initial dead time (given to the user to move the paper to find a clean area where to write): 2 sec. (120 frames)
Image difference threshold: 15 (3 bits of noise)
Number of pixels required to detect movement: 20 pixels
Number of pixels required to detect lack of movement:
Time of no pen tip movement waited before acquiring the pen tip template: 200 ms
Time used to acquire information on the pen tip: 1 sec. (60 frames)
Edge detector scale: 3 pixels
Contrast threshold (used with Canny's edge detector): 0.7
Distance from parabolic cylinder axis to center of pixel threshold (used with Canny's edge detector): 0.5 pixels
Correlation window size: 15x15 pixels
KF output noise covariance matrix (R): diag(10
KF state noise covariance matrix (Q): diag(0, 0, 0, 0, 10^4
KF initial estimation error covariance matrix (P_0):
Maximum normalized correlation value considered as a match: 0.75
Maximum velocity denoting pen not moving: 0.5 pixels per frame
Time waited before stopping: 0.5 sec (30 frames)
Minimum number of points in a sequence: 150 samples
Minimum velocity threshold used for trajectory segmentation: 0.75 pixels per frame
Maximum curvature threshold used for trajectory segmentation: 0.05 pixels per frame^2
Table 1: System parameters. System parameters used in the real-time implementation.
sequences were divided by the segmentation algorithm. Two types of error measurements were used to evaluate the performance of pen-down detection: the false acceptance rate (FAR), which measured the percentage of pen-up points (segments) that were classified as pen down by the system; and the false rejection rate (FRR), which provided the percentage of pen-down points (segments) that were classified as pen up by our system. The examples of figures 1, 9, and 12 were used for training the HMM.
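Computing these two rates from the hand-labeled ground truth is a one-liner each; a sketch with illustrative label arrays:

```python
import numpy as np

def far_frr(truth, predicted):
    """truth/predicted: sequences of 'up'/'down' labels per point (or stroke).
    FAR: percentage of true pen-up items classified as pen down.
    FRR: percentage of true pen-down items classified as pen up."""
    t, p = np.asarray(truth), np.asarray(predicted)
    far = np.mean(p[t == "up"] == "down") * 100
    frr = np.mean(p[t == "down"] == "up") * 100
    return float(far), float(frr)

print(far_frr(["up", "up", "down", "down"],
              ["down", "up", "down", "up"]))   # (50.0, 50.0)
```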
3.2.1 Point-wise classification results
All points in the test sequences were hard-classified as either pen down or pen up in this experiment. Two different approaches were compared: the first one used the value of the ink absence confidence measure as the classification parameter; the second approach used the HMM to classify each point. The hard classification was provided by the most likely HMM
Figure 12: Example sequences. The first row shows examples of sequences captured with the real-time system. We collected examples of cursive writing, block letters, printed letters, drawings, and mathematical symbols. The second row displays enlargements of portions of the sequence "Maria Elena". The dots represent the actual samples acquired with the interface. The sequences present a very low acquisition noise.
state sequence obtained with Viterbi's algorithm. Table 2 shows the resulting error rates. Figure 13 presents the results of these two approaches on three test sequences.
3.2.2 Stroke classification results
All test sequences were segmented into strokes and each stroke was classified as either pen down or pen up in this experiment. Two classification approaches were compared: the first one was based on the ink absence confidence measure; the second one was based on the
        local measurements (%)   HMM modeling (%)
FAR             24.6                 28.6
FRR             10.05                 5.33
Table 2: Point-wise classification results. Comparison of the error rates of point-wise ink detection obtained using the ink absence confidence measure and the HMM model. The classification threshold used for the ink absence confidence measure is 0.4. We observe that neither of the approaches is clearly better than the other. The HMM one has a lower FRR while the local-measurements one has a lower FAR. As we pointed out before, we have to wait until the pen tip is out of sight in order to measure brightness, so many pen-up points that correspond to a stroke that passes on top of a segment of ink trace were misclassified as pen-down points. This is the main reason for the apparently large value of the FAR.
Figure 13: Point-wise classification results. The first row shows three test sequences. The figures of the second row display each segment of the sequences with a thickness that is proportional to the average of the confidence of the endpoints. Most of the thicker segments correspond to portions of the trajectory that should be classified as pen down. The third row shows only points of the trajectories that have been classified as pen down using the ink absence confidence measure. The fourth row presents only points that have been classified as pen down by the HMM. We see that there are several segments that appear in areas where there should be no ink trace on the paper. This misclassification is due to a bad measurement of the confidence of ink absence. From the plots of the third and fourth rows, it seems that the HMM approach has a lower FRR at the cost of a higher FAR.
        local measurements (%)   HMM modeling (%)
FAR              9.27                11.22
FRR             24.55                 8.18
Table 3: Stroke classification results. Comparison of the error rates of stroke classification obtained using the ink absence confidence measure and the HMM model. For the first case, the average ink absence confidence measure was used as the classification parameter. The classification threshold was set to 0.2 (strokes with stroke confidence lower than the threshold were classified as pen down). For the second case, the percentage of points in the stroke classified as pen down by the HMM was used as the classification parameter. The classification threshold was also set to 0.2 in this case. We observe that the HMM has a much better FRR than the local measurements at the expense of a slightly worse FAR. We note that in most of the cases in which the stroke-up classification fails (reflected in the FAR), it is due to an incorrect segmentation, like the "C" in the sequence "PEDRO MUNICH" or the crossing stroke of the "x" in the mathematical formula of figure 14. These incorrectly segmented strokes were always classified as pen down in the ground truth. Leaving out these segments in the computation of the performance, we obtained a reduction in the FAR for both methods of approximately 1% (absolute error) while the FRR is unchanged.
HMM. In the first approach, the stroke confidence measure was computed as the average of the ink absence confidence measure of all points in the stroke. For the case of the HMM, the stroke confidence measure was calculated using a voting scheme. The ratio between the number of points classified as pen down by the HMM and the number of points in the stroke provided the stroke confidence measure. Table 3 shows the resulting error rates. Figure 14 presents the results of stroke classification on three test sequences.
3.3 Signature verification
As mentioned before, the real-time interface was used as a front-end for a signature verification system [14, 16, 17]. We acquired 25-30 true signatures and 10 forgeries from 105 subjects, adding up to an approximate total of 4000 signature samples. We collected data over the course of a few months in which subjects would provide signatures at different times during the day. The interface was placed next to a window, so natural sunlight was used for capturing signatures at day time, while electric lighting was used for acquiring signatures during the night. The subjects were asked to provide data in three different sessions in order to sample their signature variability. Given the number of subjects involved in the experiment, the
Figure 14: Stroke classification results. The corresponding strokes for the sequences of figure 13 are shown on the first row. Successive strokes are plotted alternately with solid or dashed lines. The figures of the second and third rows correspond to classification using the ink absence confidence measure; the figures of the fourth and fifth rows correspond to classification using the HMM. The figures of the second and fourth rows display each stroke with a thickness that is proportional to the stroke confidence measure. The plots of the third and fifth rows show only the strokes classified as pen down in each case. The classification based on the HMM seems to provide better results than the one based on the ink confidence measure.
[Figure 15 panels: true signatures s019010, s025005, s041010, s001010, s026005; forgeries s019001, s025000, s041006, s026000.]
Figure 15: Signature verification. The first row of the figure shows examples of true signatures acquired with the interface. The second row presents examples of corresponding skilled forgeries also captured with our system.
position and orientation of the camera was different from session to session and from subject to subject. Figure 15 shows some examples of acquired signatures. We achieved a verification error rate of less than 1.5% for skilled forgeries and a verification error rate of less than 0.25% for random forgeries. These rates correspond to the condition of equal false acceptance rate and false rejection rate. These results and the techniques used for verification will be reported in a forthcoming paper.
3.4 Discussion
The examples presented in section 3 show that the interface is quite convenient and accurate for acquiring short handwritten trajectories. The system has not been tested for acquiring long sentences or even full pages of text. The main difficulty in this case would be perspective and radial distortion. This is not a problem for some applications, e.g., our signature verification algorithm, which encodes handwriting in an affine-invariant parameterization. Perspective distortion of the image could be corrected easily if paper with a predefined pattern of symbols, e.g., a set of crosses located at a known distance from each other, was used; however, this would make the interface less convenient and general.
Besides signature verification, informal tests by human observers found the output of
the interface well within the resolution limits for easy reading and interpretation. However,
the interface has not been tested for handwriting recognition. The results of the pen-down
detection experiments are encouraging. The stroke confidence measure provides a soft classification
of the pen-down and pen-up strokes that could be used in a handwriting recognizer.
The usability of the interface has been tested by more than a hundred different subjects
during the signature verification experiment. The acquisition of signatures took place under
various lighting conditions and camera positions, showing the robustness of the interface with
respect to variability of the user's setup.
Handwriting was captured at different scales with the interface. Changes of scale were
introduced by the user when he adjusted the position and orientation of the camera to write
more comfortably. These scale changes were small enough to be handled with a fixed set of
system parameters. Larger scale changes would require adaptation of the system parameters
to the acquisition setup. An appropriate procedure may be designed for the user to help the
system in this task.
4 Conclusion and further work
The design and implementation of a novel human-computer interface for handwriting was
presented. A camera is focused on the user writing on a piece of paper with a normal pen.
We have shown that the handwriting trajectory is successfully recovered from its spatio-temporal
representation given by the sequence of images. This trajectory is composed of
handwritten strokes and pen movements between two strokes. The temporal resolution is
sufficient. The spatial resolution is approximately a tenth of a pixel, which allows capturing
handwriting at sufficient spatial resolution within an area corresponding to half a sheet of
letter paper using a cheap 480x640 pixel camera. The spatial resolution approximately
corresponds to 20 samples per millimeter of writing, a resolution that is five times lower than
that of commercial tablets (100 lines per millimeter), but that is obtained with a much
smaller and cheaper interface. The classification of pen-up and pen-down portions of the
trajectory of the pen is obtained by using local measurements of the brightness of the image
at the location in which the writing end of the pen was detected.
Several modules of the interface are susceptible to improvement. We used only one pen tip
template for the whole sequence acquisition. This template could be automatically updated
once the peak value of correlation fell below a certain threshold. Since the information
about the boundaries and the axis of the pen tip, as well as the position of the ballpoint and
the finger, are computed for each frame by the ballpoint detection module, the automatic
extraction of a new pen tip template involves no extra computational cost.
The region of interest used to detect the location of the pen tip has constant size in
the current implementation of the system. The size of this region could be driven by the
uncertainty on the predicted position of the pen tip, i.e., the size could depend on the
covariance of the predicted location of the pen tip. Smaller regions would be required in cases
of low uncertainty, reducing in this way the computational cost of performing correlation
between the region of interest and the pen tip template.
The ballpoint detection in the current frame of the sequence is based on the orientation
of the axis and of the boundaries of the pen tip in the previous frame. We could improve
the robustness of the ballpoint detection by modeling the change of axis and boundaries
orientations from frame to frame. A recursive estimation scheme could be used to predict
the desired orientations, allowing one to reduce the size of the windows used to perform edge
detection and to decrease the number of computations.
We used a Gaussian model for the brightness of ink-less pixels. The estimation of the
model parameters was performed using the brightness of points lying on a circle centered at
the ballpoint position, assuming that all the circle points are ink-less points. Clearly, this
model is not strictly adequate for a random variable which takes values on the interval [0, 255],
and the assumption is not completely valid since some circle points could correspond to the
ink trace. This model could be improved by using a probability density function suitable
for representing a random variable that takes values on a finite interval. However, as a first-order
approximation we have shown that this model provides good results in pen-up/pen-down
classification.
The classification of strokes into pen-up strokes and pen-down strokes is based on local
measurements of brightness. A few other local measurements, such as the local orientation
of the ink at the position of the ballpoint, the correlation of this orientation with the local
direction of the trajectory of the pen tip, etc., could be used in order to improve the classification
rates. These local measurements of direction would decrease the FAR since a sample
would be classified as "pen down" only if an ink trace with the corresponding direction is
found at the location of the sample. These additional local measures could be naturally
included in the system by increasing the dimensionality of the observation of the HMM.
The set of examples used to estimate the HMM parameters and to evaluate the pen-up/pen-down
classification performance included examples of different types of writing provided by
only one subject. More example sequences provided by different subjects should be acquired
in order to estimate this performance in a writer-independent setting. Also, a larger set of
examples should be used to obtain a more accurate HMM for pen-down detection.
Acknowledgements
We gratefully acknowledge support from the NSF Engineering Research Center on Neuromorphic
Systems Engineering at Caltech (National Science Foundation (NSF) Cooperative
Agreement No. EEC-9402726).
--R
Optimal Filtering.
On line handwriting data acquisition using a video camera.
A computational approach to edge detection.
Vision for man-machine interaction.
Finger tracking as an input device for augmented reality.
Liveboard: a large interactive display supporting group meetings.
The Viterbi algorithm.
Statistical Pattern Recognition.
Applied Optimal Estimation.
Optical character recognition.
A new approach to linear filtering and prediction problems.
Dynamic approaches to handwritten signature verification.
An iterative image registration technique with an application to stereo vision.
Visual Input for Pen-based Computers.
Visual input for pen-based computers.
Visual signature verification.
A new input device.
Interactive dynamic whiteboard for educational applications.
An evaluation of motor models of handwriting.
Fundamentals of Speech Recognition.
The state of the art in on-line handwriting recognition.
The relation between linear extent and velocity in drawing movements.
Trajectory determines movement dynamics.
Analysis and synthesis of handwriting.
Adaptive thresholding for the DigitalDesk.
Calibration for the DigitalDesk.
A new data tablet system for handwriting characters and drawing based on image processing.
--TR
A computational approach to edge detection
The State of the Art in Online Handwriting Recognition
Introduction to statistical pattern recognition (2nd ed.)
Liveboard
Fundamentals of speech recognition
On-Line and Off-Line Handwriting Recognition
Camera-Based ID Verification by Signature Tracking
Online Handwriting Data Acquisition Using a Video Camera
Visual Input for Pen-Based Computers
Visual input for pen-based computers
--CTR
Mario E. Munich , Pietro Perona, Visual Identification by Signature Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.2, p.200-217, February | pen-based interface;systems and applications;pen-based computing;active and real-time vision |
507649 | Generic validation of structural content with parametric modules. | We demonstrate a natural mapping from XML element types to ML module expressions. The mapping is inductive and definitions of common XML operations can be derived as the module expressions are composed. We show how to derive, in a generic way, the validation function, which checks an XML document for conformance to its DTD (Document Type Definition). One can view validation as assigning ML types to XML elements and the validation procedure as a pre-requisite for typeful XML programming in ML. Our mapping uses the parametric module facility of ML in a somewhat contrived way. For example, in validating WML (WAP Markup Language) documents, we need to use 36-ary type constructors, as well as higher-order modules that take in as many as 17 modules as input. That one can systematically model XML DTDs at the module level suggests ML-like languages are suitable for type-safe prototyping of DTD-aware XML applications. | 1. INTRODUCTION & MOTIVATION
XML (eXtensible Markup Language) is a language for tagging
documents for their structural content [2]. An XML document
is tagged into a tree of nested elements. XML is extensible
because each XML document can include a DTD (Document
Type Definition), which lists the tags of the elements
and specifies the tagging constraints. A central concept in
document processing is validation. An XML document
is valid if its content is tagged in accordance with the constraints specified
by its DTD. An XML document is well-formed if each of its
elements is enclosed with matching start-tag and end-tag. A
well-formed XML document is not necessarily valid.
(Submitted to the International Conference on Functional Programming,
2001, and available as Technical Report TR-IIS-001-005,
Institute of Information Science, Academia Sinica, Taipei, Taiwan
(http://www.iis.sinica.edu.tw). Comments and suggestions are most welcome.)
The following XML document contains a DTD that defines
two element types folder and record. The document contains
as its root a folder element, which has an empty record
element as its only child. It is a valid XML document.
<?xml version="1.0"?>
<!DOCTYPE folder [
<!ELEMENT folder ((record, (folder|record)*) | (folder, (folder|record)+))>
<!ELEMENT record EMPTY>
]>
<folder><record/></folder>
The DTD in the above XML document models the structure
where a record must contain no other element, and no folder
is empty or contains just another folder. One may think of
it as modeling a tidy bookmark file. Of the following three
elements, f3 is valid, but items f1 and f2 are not.
Note that <record/> is a shorthand for <record></record>.
The tag sequence <record><folder></record></folder> is
an example of non-well-formedness.
To simplify discussion, we may say that each element type
in the DTD is specified by its element content model (i.e.,
its tagging constraint) which is an unambiguous regular expression
with element type names as symbols. The content
model of an element type specifies what element sequences
are allowed as the children of the element. Naturally, when
coding XML programs, one needs to map the element types
in a DTD to the corresponding data types in the source programming
language. A further requirement of the mapping
is that content validation is translated into type correctness
in the programming language, so that well-typed programs
will always produce valid XML elements. Note that this
goes beyond what is required of the so-called "validating
XML processor", which need only report violations of element
content models in the input XML document but need
not impose restrictions on the output.
There have been several directions in programming language
support for writing XML applications. We can classify them
into the following three categories.
ADT for well-formed elements. Abstract data types and
the accompanying library routines are designed to traverse
and transform well-formed XML elements. The
XML data is assumed to be validated in a separate
phase, or its validation is a separate issue and may not
even be required. Examples in this category include
standard XML API in C++, Java, or other languages
(e.g., Document Object Model, DOM [1]) and a combinator
approach to writing XML processing functional
programs [3, 18].
Type translation of DTD. A strongly typed language is
used for XML programming, and the type system of
the language is used to embed DTDs. The embedding
is complete (every element type has a corresponding
data type in the embedding language) and sound
(an expression of the embedding language evaluates to
a valid XML element if the expression is well-typed
in the language). Examples in this category include
HaXml [3, 18] and XMLambda [14]. If the strongly
typed language is statically typed, then the soundness
proof is done by the type checker at compile-
time. Hence no type-correct program will produce
invalid XML elements. One can also use constraint-based
languages or logic programming languages to encode
content models in a similar way [19]. The
type translation approach is not completely satisfactory
for two reasons. One is that the type translation
may not be systematic and can be tedious if done
manually. The other inconvenience is that code for
generic XML processing operations need to be rewritten
for every DTD because they are translated into
different types. XML content validation, which check
well-formed XML documents for conformance to their
DTDs, is such a generic operation.
Native language support of DTD. New languages are
being designed with builtin XML support to help build
XML-related applications. XDuce is a functional language
with regular expression types, so as to allow
direct representations of DTDs and processing of valid
elements. Expressions in the language are evaluated
to valid XML elements, but variables must be
annotated with their element types. The concept of
validation is built into the language as type correct-
ness, and programs are type-checked at compile-time.
XDuce also provides regular expression patterns which
further help write concise XML programs. XDuce,
however, is currently a first-order and monomorphic
language, and lacks some language features (e.g., a
module system).
In this paper, we show how to use parametric modules in
ML-like languages to write XML-supporting program modules
that are both expressive and generic. It is expressive
because all XML DTDs can be constructed from the provided
parametric modules. It is generic because common
operations, including the validation function, are automatically
generated. As such, our approach has the advantages
of both the type translation approach and the native DTD
support approach, but without their disadvantages. There is
no need to recode generic operations, and no need to design
a new language.
2. AN ILLUSTRATING EXAMPLE
For the tidy bookmark example described in Section 1, the
following is the actual code we write in Objective Caml to
specify the DTD, and to produce the validation functions
for the two element types in the DTD.
module BookmarkTag =
struct
  type ('x0, 'x1) t = Folder of 'x0 | Record of 'x1
  let map (f0, f1) = function
      Folder x -> Folder (f0 x)
    | Record x -> Record (f1 x)
end
module TidySys =
struct
  module F0 = Alt (Seq (P1) (Star (Alt (P0) (P1))))
                  (Seq (P0) (Plus (Alt (P0) (P1))))
  module F1 = Empty
  module Tag = BookmarkTag
end
module TidyDtd = Mu (TidySys)
In the above, module TidySys contains two modules F0
and F1, which are translations, word for word, into the Objective
Caml module language of the XML element type declarations
of folder and record. The higher-order module Alt is for
"|", Seq for ",", Star for "*", and Plus for "+". Ideally,
we would like to define the two XML element types as two
mutually recursive ML modules T0 and T1 as follows.
module T0 = Alt (Seq (T1) (Star (Alt (T0) (T1))))
                (Seq (T0) (Plus (Alt (T0) (T1))))
and T1 = Empty
But Objective Caml, as most ML-like languages, does not
support recursive modules. Instead we use two "place holder"
modules P0 and P1 as the two parameters to the higher-order
modules (Alt, Seq, etc.), and use another higher-order module
Mu (pronounced as mu) to derive the two simultaneous
fixed points.
Module TidyDtd contains
- module U, which defines the type for well-formed elements,
- module V, which contains modules T0 and T1 that each
define the type for valid folder and record elements, and
- functions validate and forget, which provide mappings
between well-formed elements and valid elements.
It also defines exception Invalid, which may be raised
by function validate. Note that the following equations
always hold (provided validate u does not raise Invalid):
forget (validate u) = u
validate (forget v) = v
The sample element f3 as shown in Section 1 can now be
defined and validated by the following Objective Caml code
(f3_u is well-formed and f3_v is valid).
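A minimal sketch of such code, assuming the syntactic-sugar functions folder and record defined in Section 3:

(* f3_u is the well-formed element <folder><record/></folder>;
   validate gives it the type of valid elements. *)
let f3_u = folder [record []]
let f3_v = TidyDtd.validate f3_u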
In addition, the valid element returned by the validation
function is parsed and typed in the sense that all of its sub-structures
are given specific types and can be extracted by
using ML pattern-matching.
In this paper, we will use the above example to explain the
idea and describe the construction. However, the idea and
the construction can be systematically applied to DTDs with
n element types. One need only define an n-ary fixed-point
module Mu_n that will take a system of n n-ary higher-order
modules F0, F1, ..., F_{n-1} and produce the simultaneous
fixed points. The definition of Mu_n is symmetric and is similar
to Mu. We will later use WML (a markup language for
wireless applications whose DTD defines 35 element types)
as a benchmarking example to show the effectiveness of our
approach.
3. GENERIC PROGRAMMING WITH PARAMETRIC MODULES
The XML element types in the folder example can be translated
into Objective Caml using a series of type definitions
as shown below.
type ('a, 'b) alt = L of 'a | R of 'b
type ('a, 'b) seq = 'a * 'b
type 'a star = 'a list
type 'a plus = One of 'a | More of 'a * 'a plus
type folder = Folder of
    ((record, (folder, record) alt star) seq,
     (folder, (folder, record) alt plus) seq) alt
and record = Record of unit
One can abstract the right-hand-sides of the type equations
for folder and record into two binary type constructors
f0 and f1, and view folder and record as the least fixed
points of f0 and f1.
Functions folder and record, as used in the example of
Section 2, are syntactic sugar and can be defined by
let folder ulist = BookmarkTag.Folder (TidyDtd.U.up ulist)
let record ulist = BookmarkTag.Record (TidyDtd.U.up ulist)
type ('a, 'b) f0 =
    (('b, ('a, 'b) alt star) seq,
     ('a, ('a, 'b) alt plus) seq) alt
type ('a, 'b) f1 = unit
type folder = Folder of (folder, record) f0
and record = Record of (folder, record) f1
One can further rewrite f0 and f1 using the two projection
type constructors p0 and p1, and the empty type constructor.
type ('a, 'b) p0 = 'a
type ('a, 'b) p1 = 'b
type ('a, 'b) empty = unit
type ('a, 'b) f0 =
    ((('a,'b)p1, (('a,'b)p0, ('a,'b)p1) alt star) seq,
     (('a,'b)p0, (('a,'b)p0, ('a,'b)p1) alt plus) seq) alt
type ('a, 'b) f1 = ('a, 'b) empty
At this point, it is clear that one can program at the module
level, and define f0 and f1 as two module expressions using
a predefined set of constant modules (for p0, p1, and empty),
unary parametric modules (for star and plus), and binary
parametric modules (for alt and seq). This is shown in
Figure
1 where we also define the map function, inductively.
All XML element types can be defined using a fixed set of
parametric modules.
We may say that modules F0 and F1 are objects in a functor
category where each object has a type constructor t to map
types to types, and a function map to map typed functions to
typed functions. Parametric modules like Plus are arrows in
the functor category, i.e., natural transformations. We view
this definition of the map function as a generic one, as each
map instance is inductively indexed by its governing type
expression. We will later show definitions of other generic
values that are used in the definition of the validation function
(which itself is generic as well).
4. PARAMETRIC CONTENT MODELS AND SIMULTANEOUS FIXED POINTS
In Figure 1, modules F0 and F1 each define a binary type
constructor t, and the two type constructors are used
together to mutually define types folder and record. The
code is reproduced below.
module F0: FUN = Alt (Seq (P1) (Star (Alt (P0) (P1))))
                     (Seq (P0) (Plus (Alt (P0) (P1))))
module F1: FUN = Empty
type folder = Folder of (folder, record) F0.t
and record = Record of (folder, record) F1.t
The type constructors F0.t and F1.t are parametric content
models in the sense that each maps a tuple of type
instances to a content model. For example, given type instances
folder and record, the type expression (folder,
record) F0.t expands to
((record, (folder, record) alt star) seq,
 (folder, (folder, record) alt plus) seq) alt
which is exactly the XML content model for element type folder.
module type FUN =
sig
  type ('a, 'b) t
  val map: ('a -> 'x) * ('b -> 'y) ->
           ('a, 'b) t -> ('x, 'y) t
end
module type F2F = functor (F: FUN) -> FUN
module type F2F2F = functor (F0: FUN) -> functor (F1: FUN) -> FUN
module Empty: FUN =
struct
  type ('a, 'b) t = unit
  let map (f, g) () = ()
end
module P0: FUN =
struct
  type ('a, 'b) t = 'a
  let map (f, g) a = f a
end
module Plus: F2F = functor (F: FUN) ->
struct
  type ('a, 'b) t =
      One of ('a, 'b) F.t
    | More of ('a, 'b) F.t * ('a, 'b) t
  let rec map (f, g) t =
    match t with
      One s -> One (F.map (f, g) s)
    | More (v, w) ->
        More (F.map (f, g) v, map (f, g) w)
end
module Seq: F2F2F = functor (F0: FUN) -> functor (F1: FUN) ->
struct
  type ('a, 'b) t = ('a, 'b) F0.t * ('a, 'b) F1.t
  let map (f, g) (u, v) = (F0.map (f, g) u,
                           F1.map (f, g) v)
end
module P1: FUN = ...      (* similar to P0 *)
module Star: F2F = ...    (* similar to Plus *)
module Alt: F2F2F = ...   (* similar to Seq *)
module F0: FUN = Alt (Seq (P1) (Star (Alt (P0) (P1))))
                     (Seq (P0) (Plus (Alt (P0) (P1))))
module F1: FUN = Empty
type folder = Folder of (folder, record) F0.t
and record = Record of (folder, record) F1.t
Figure 1: Inductive definitions of XML element types using parametric modules.
Note: Module type annotations can be, and often are, omitted. We can take out the ": F2F" part in "module Plus: F2F = functor (F: FUN) ->", and at the same time expose the implementation of module Plus. The annotations are added for clarity and type-checking purposes.
The main idea is to use type constructors as parametric content
models, and view XML element types as simultaneous
fixed points of a set of parametric content models. This
viewpoint helps us develop primitive functions that are abstract
and applicable to different content models (that is, the
primitives are polymorphic). One of these primitives is the
simultaneous induction operator, the fold function. We
will later show that the validation procedure can be defined
by using the fold function.
We then model two recursively defined XML element types
by two interdependent ML modules T0 and T1. Their signatures
are the following.
module T0:
sig
  type ('x0, 'x1) cm
  type t
  val up: (T0.t, T1.t) cm -> T0.t
  val down: T0.t -> (T0.t, T1.t) cm
end
and module T1:
sig
  type ('x0, 'x1) cm
  type t
  val up: (T0.t, T1.t) cm -> T1.t
  val down: T1.t -> (T0.t, T1.t) cm
end
In the above, type constructor ('x0, 'x1) cm is for the
parametric content model, and type t is for the element
type. Functions up and down map between an element and
its content model, and together define their equivalence:
up (down t) = t and down (up c) = c.
Note that the above mutually defined signatures are not allowed
in Objective Caml (as in most ML-like languages).
However, one can use both auxiliary type names and additional
type sharing constraints to overcome the problem.
We can define a higher-order module MuValid that derives
modules T0 and T1, when given a module that specifies the
corresponding parametric content models and the tag set,
see Figure 2. In Figure 2, modules F0 and F1 of the input
module S specify the parametric content models, and
module Tag specifies the tag set.
Note that, in the module returned by MuValid, the type for
all valid elements is simply defined as the disjoint sum of
type T0.t and type T1.t:
type t = (T0.t, T1.t) Tag.t
Also note that the simultaneous fold function has type
val fold: (('a, 'b) T0.cm -> 'a) * (('a, 'b) T1.cm -> 'b) ->
          (T0.t -> 'a) * (T1.t -> 'b)
Function fold returns two reduction functions (whose
types are T0.t -> 'a and T1.t -> 'b) if given two properly
typed induction functions as bases (whose types are
('a, 'b) T0.cm -> 'a and ('a, 'b) T1.cm -> 'b).
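As an example of its use, a minimal sketch (not from the paper) that counts the record elements in a valid folder tree; the helper sum_f0, which adds up the integers embedded in a folder content-model value, is hypothetical and would be written by pattern matching on the content model:

(* Sketch: counting record elements with the simultaneous fold.
   The base for folders sums the counts already computed for
   the children; the base for records returns 1. *)
let count_records : TidyDtd.V.T0.t -> int =
  let f0 cm = sum_f0 cm   (* hypothetical helper *)
  and f1 _ = 1 in
  fst (TidyDtd.V.fold (f0, f1))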
Similarly, a higher-order module MuWF can be defined to derive
a module for all well-formed elements; see Figure 3. In
module MuWF, type constructor ('x0, 'x1) cm, the parametric
content model for well-formed elements, is defined
as a list of tagged values:
type ('x0, 'x1) cm = ('x0, 'x1) Tag.t list
and type u, the type for well-formed elements, is defined
as the fixed point of the parametric content model cm:
type u = U of (u, u) cm
Note as well that the type of all well-formed elements, type t, is
defined as the disjoint sum of u and u, representing elements
with two distinct tags. The definition of the simultaneous
fold function is the same as that in module MuValid.
In Figure 3, there are several functions in modules U2V and
V2U that are given their types but are left undefined. They
are used to specify functions validate and forget. Function
validate maps a well-formed element to a valid element,
while forget is the inverse function. Let us look at
functions cm0 and cm1 in module U2V first. Their types are
the following
val cm0: (V.T0.t, V.T1.t) U.cm ->
         (V.T0.t, V.T1.t) V.T0.cm
val cm1: (V.T0.t, V.T1.t) U.cm ->
         (V.T0.t, V.T1.t) V.T1.cm
Function cm0 maps a well-formed content, whose constituting
parts are valid elements already, into a valid content. If
function cm0 is composed with function V.T0.up, one gets
a function that returns a valid element of type V.T0.t as
result (we use $ as the function composition operator):
V.T0.up $ cm0: (V.T0.t, V.T1.t) U.cm -> V.T0.t
Given these two compositions as the inductive bases to the simultaneous
fold function, one derives the validation functions
for elements of types V.T0.t and V.T1.t.
module type TAG =
sig
  type ('x0, 'x1) t
  val x0: 'x0 -> ('x0, 'x1) t
  val x1: 'x1 -> ('x0, 'x1) t
  val map: ('x0 -> 'y0) * ('x1 -> 'y1) ->
           ('x0, 'x1) t -> ('y0, 'y1) t
  val fold: ('x0 -> 'a) * ('x1 -> 'a) -> ('x0, 'x1) t -> 'a
end
module type SYS =
sig
  module F0: FUN
  module F1: FUN
  module Tag: TAG
end
module MuValid = functor (S: SYS) ->
struct
  module Tag = S.Tag
  type t0 = V0 of (t0, t1) S.F0.t
  and t1 = V1 of (t0, t1) S.F1.t
  module T0 =
  struct
    type ('x0, 'x1) cm = ('x0, 'x1) S.F0.t
    type t = t0
    let up v = V0 v
    let down (V0 v) = v
  end
  module T1 =
  struct
    type ('x0, 'x1) cm = ('x0, 'x1) S.F1.t
    type t = t1
    let up v = V1 v
    let down (V1 v) = v
  end
  type t = (T0.t, T1.t) Tag.t
  let fold (f0, f1) =
    let rec g0 x = f0 (S.F0.map (g0, g1) (T0.down x))
    and g1 x = f1 (S.F1.map (g0, g1) (T1.down x))
    in (g0, g1)
end
Figure 2: Module MuValid derives element types as
simultaneous fixed points of a set of parametric content
models.
module MuWF = functor (S: SYS) ->
struct
  module Tag = S.Tag
  type ('x0, 'x1) cm = ('x0, 'x1) Tag.t list
  let map (f0, f1) l = List.map (Tag.map (f0, f1)) l
  type u = U of (u, u) cm
  type t = (u, u) Tag.t
  let up c = U c
  let down (U c) = c
  let fold (f0, f1) =
    let rec g0 x = f0 (map (g0, g1) (down x))
    and g1 x = f1 (map (g0, g1) (down x))
    in (g0, g1)
end
module Mu = functor (Sys: SYS) ->
struct
  module U = MuWF (Sys)
  module V = MuValid (Sys)
  let ($) f g x = f (g x)   (* function composition *)
  exception Invalid
  module U2V =
  struct
    let cm0: (V.T0.t, V.T1.t) U.cm ->
             (V.T0.t, V.T1.t) V.T0.cm = ...
    let cm1: (V.T0.t, V.T1.t) U.cm ->
             (V.T0.t, V.T1.t) V.T1.cm = ...
    let (t0, t1): (U.u -> V.T0.t) * (U.u -> V.T1.t) =
      U.fold (V.T0.up $ cm0, V.T1.up $ cm1)
    let t: U.t -> V.t = U.Tag.map (t0, t1)
  end
  module V2U =
  struct
    let cm0: (U.u, U.u) V.T0.cm -> (U.u, U.u) U.cm = ...
    let cm1: (U.u, U.u) V.T1.cm -> (U.u, U.u) U.cm = ...
    let (t0, t1): (V.T0.t -> U.u) * (V.T1.t -> U.u) =
      V.fold (U.up $ cm0, U.up $ cm1)
    let t: V.t -> U.t = V.Tag.map (t0, t1)
  end
  let validate = U2V.t
  let forget = V2U.t
end
Figure 3: Module MuWF derives the type for well-formed
elements. Module Mu uses simultaneous fold
to define the validation function.
Note: Type annotations for functions are added for clarity purposes.
U.fold (V.T0.up $ cm0, V.T1.up $ cm1)
  : (U.u -> V.T0.t) * (U.u -> V.T1.t)
Recall that the types for all well-formed elements and all
valid elements are defined by
type t = (U.u, U.u) Tag.t          (* in module U *)
type t = (V.T0.t, V.T1.t) Tag.t    (* in module V *)
It follows that the validation function is defined by
let validate: U.t -> V.t =
  U.Tag.map (U.fold (V.T0.up $ cm0, V.T1.up $ cm1))
As shown in Figure 3, one can define function forget in a
similar way. It remains to be shown how functions like cm0
and cm1 are defined for all content models. This is shown
next.
5. GENERIC VALIDATION OF CONTENT MODELS
Recall that, in Figure 1, a map function is defined in a
generic way for any module with signature FUN, as long as
the module is generated with the predefined set of parametric
modules (Empty, P0, P1, Star, etc.). The validation and
forgetting functions can be defined in a generic way as well.
First we define the validation functions for the inductive
bases. The validation function for any other content model
can then be derived, automatically, as the module expressions
for the content models are built.
There are two remaining details. The first is that at the
time of building the content model, one does not have access
to the tag module. This tag module is of signature TAG,
and defines the variant data type for tagging elements (e.g.,
module BookmarkTag in Section 2). Therefore the validation
and forgetting functions must reside in a higher-order
module that takes in a TAG module as input.
One also needs to maintain a nullable condition and a first
set of element tags. A content model is nullable if it accepts
the empty element sequence. The first set contains all tags
that can appear at the first position of a valid sequence.
It can be used to check if a content model is ambiguous,
e.g., when the first sets of the two input modules to Alt
overlap. When combined with a lookahead tag, it is used to
implement a non-backtracking validation procedure as well.
(More on this in Section 8.) Both nullable and first are
generic values. The module signature FUN for parametric
content model now consists of the following components.
module type FUN =
sig
  type ('x0, 'x1) t
  val map: ('x0 -> 'y0) * ('x1 -> 'y1) ->
           ('x0, 'x1) t -> ('y0, 'y1) t
  val nullable: bool
  val first: Natset.t
  module Content: functor (T: TAG) ->
  sig
    val validate: ('x0, 'x1) T.t list ->
                  (('x0, 'x1) t * ('x0, 'x1) T.t list) option
    val forget: ('x0, 'x1) t -> ('x0, 'x1) T.t list
  end
end
Function validate takes a list of tagged values and turns
a prefix of it into a value of the content model, returned
together with the remaining list. Note that the type of the
input, ('x0, 'x1) T.t list, is the same as the content
model of well-formed elements if the two share the same tag
set. Figure 4 illustrates
if the two share the same tag set. Figure 4 illustrates
the construction by showing the implementations of modules
P0 and Star.
The validation and forgetting functions are wrapped in module
Content. The definition of Content is inductive: it depends
on the Content module of the input module F (see,
e.g., the module expression module CM = F.Content (T) in module
Star). We can view this as constituting a generic definition
of the validation function, as each instance is systematically
generated by its module expression. As evident in module
Star, we adopt the longest-prefix matching rule in validating
the input element sequence against the "*" content model.
This longest-prefix matching rule is indeed required by XML.
Validation functions for other modules, i.e., Empty, P0, P1,
Plus, Seq, and Alt, can be similarly defined and are omitted
here.
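The first sets also make the ambiguity check mentioned in the previous section concrete; a minimal sketch (not from the paper), assuming Natset provides the usual inter and is_empty operations, with CheckedAlt a hypothetical wrapper around Alt:

(* Sketch: detecting an ambiguous alternation at module
   elaboration time.  If the first sets of the two branches
   overlap, one-symbol lookahead cannot select a branch. *)
module CheckedAlt = functor (F0: FUN) -> functor (F1: FUN) ->
struct
  let () =
    if not (Natset.is_empty (Natset.inter F0.first F1.first))
    then failwith "ambiguous content model"
  include Alt (F0) (F1)
end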
Now we return to Figure 3 to complete the definitions of
functions cm0 and cm1 in modules U2V and V2U. They are
defined as the following.
module U2V =
struct
  module CM0 = Sys.F0.Content (Sys.Tag)
  module CM1 = Sys.F1.Content (Sys.Tag)
  let cm0 ulist =
    match CM0.validate ulist with
      Some (v, []) -> v
    | _ -> raise Invalid
  let cm1 ulist =
    match CM1.validate ulist with
      Some (v, []) -> v
    | _ -> raise Invalid
  ...
end
module V2U =
struct
  module CM0 = Sys.F0.Content (Sys.Tag)
  module CM1 = Sys.F1.Content (Sys.Tag)
  let cm0 = CM0.forget
  let cm1 = CM1.forget
  ...
end
Function cm0 in module U2V needs to validate the input sequence
of tagged values against the content model of element
type V.T0.t, using the current tag set. This can be accomplished
by using the validation function in module
Sys.F0.Content(Sys.Tag). The only difference is that, if
there remains a non-empty sequence after a validated (longest)
prefix, the entire sequence is not valid with respect to the
content model of V.T0.t.
module P0: FUN =
struct
  type ('x0, 'x1) t = 'x0
  let map (f, g) a = f a
  let nullable = false
  let first = Natset.of_list [0]
  module Content = functor (T: TAG) ->
  struct
    let validate ulist =
      match ulist with
        [] -> None
      | h :: t ->
          T.fold ((fun x -> Some (x, t)),
                  (fun x -> None)) h
          (* if successful, return the untagged
             value along with the remaining
             list; otherwise return None. *)
    let forget a = [T.x0 a]  (* Tag with the first
                                variant of type T.t *)
  end
end
module Star: F2F = functor (F: FUN) ->
struct
  type ('x0, 'x1) t = ('x0, 'x1) F.t list
  let map (f, g) l = List.map (F.map (f, g)) l
  let nullable = true
  let first = F.first
  module Content = functor (T: TAG) ->
  struct
    module CM = F.Content (T)
    let rec validate ulist =
      match ulist with
        [] -> Some ([], ulist)
      | h :: _ ->
          if (* the tag of h is in first *)
          then (match CM.validate ulist with
                  Some (u, t) ->
                    (match validate t with
                       Some (us, s) -> Some (u :: us, s)
                     | None -> None)
                | None -> None)
          else Some ([], ulist)
    let rec forget t =
      match t with
        [] -> []
      | u :: us -> CM.forget u @ forget us
  end
end
Figure 4: Generic definition of the content validation functions.
6. TYPEFUL XML PROGRAMMING IN ML
One of the purposes of validation is to assign a type to an
XML element. Programming with validated XML elements
is now programming with typed values. Using a statically
typed language for such programming allows one to detect
type errors, hence expressions for invalid elements, at compile
time.
Our generic validation procedure gives types to valid elements,
and allows one to construct XML processors in a
typeful way. In the following illustrating diagram, let U be
the ML type for well-formed elements, and V and V' be the
ML types that correspond to specific XML element types.
[diagram: U --validate--> V --f--> V' --forget--> U]
We may say that functions in U -> U are untyped as they
may produce invalid elements. However, functions in V -> V'
are typed as they always output valid elements. Whenever
one is programming a function from U to U and expects
the output also to be valid, one can do so by programming
a typed function f in V -> V' and composing it with validate
and forget.
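A minimal sketch of this composition (not from the paper; Dtd1 and Dtd2 stand for any two modules produced by Mu, and f_v for a typed transformation between their valid types):

(* Sketch: lifting a typed transformation on valid elements
   to a function on well-formed elements. *)
let f_u : Dtd1.U.t -> Dtd2.U.t =
  fun u -> Dtd2.forget (f_v (Dtd1.validate u))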
In Figure 5, we show some ML code fragments to illustrate
the approach. The code maps a well-formed tidy bookmark
to a well-formed flat bookmark (function flatten_u). Because
the mapping is composed from a typed conversion
routine (function flatten_v), it will always output a valid
element if the input element is valid. Note that the types
of the functions below will be inferred by ML. The functions
are annotated with their types in Figure 5 for clarity only.
7. COMBINING GENERICITY WITH POLYMORPHISM
The generic modeling of XML DTDs can be combined with
type polymorphism for a better result. Indeed, we use
both genericity and polymorphism to model XML element
type declarations that are accompanied by attribute-list
declarations. We can extend the previous folder example
by requiring an optional subject attribute for each folder
element, and a pair of title and url attributes for each
record element. The following is a valid XML document
with the newly extended DTD.
<?xml version="1.0"?>
<!DOCTYPE folder [
<!ELEMENT folder ((record, (folder|record)*) | (folder, (folder|record)+))>
<!ELEMENT record EMPTY>
<!ATTLIST folder
  subject CDATA #IMPLIED>
<!ATTLIST record
  title CDATA #REQUIRED
  url   CDATA #REQUIRED>
]>
module TidyDtd = Mu (TidySys)
module FlatSys =
struct
  module F0 = ...
  module F1 = ...
  module Tag = Tag
end
module FlatDtd = Mu (FlatSys)
module TidyFolder = TidyDtd.V.T0
module TidyRecord = TidyDtd.V.T1
module FlatFolder = FlatDtd.V.T0
module FlatRecord = FlatDtd.V.T1
let t2f_folder: ... =
  fun fd -> match fd with
    ...  (* the case of a flat
            record r followed by a sequence
            t of flat records or folders *)
  | ...  (* the case of a flat folder
            f followed by a non-empty sequence
            t of flat records or folders *)
let t2f_record: ... = ...
let flatten_v: (TidyFolder.t, TidyRecord.t) Tag.t -> ... =
  Tag.map (TidyDtd.V.fold (t2f_folder, t2f_record),
           FlatRecord.up ...)
let flatten_u: TidyDtd.U.t -> ... =
  FlatDtd.forget $ flatten_v $ TidyDtd.validate
Figure 5: An example of typeful XML programming.
Note: Type annotations for functions are added for clarity purposes.
<folder subject="Research Institutes">
<record title="Academia Sinica"
        url="http://www.sinica.edu.tw"/>
</folder>
The original definitions of folder and record (Figure 1, last
two lines),
type folder = Folder of (folder, record) F0.t
and record = Record of (folder, record) F1.t
can now be replaced by the following
type ('u, 'v) folder = Folder of
    'u * (('u, 'v) folder, ('u, 'v) record) F0.t
and ('u, 'v) record = Record of
    'v * (('u, 'v) folder, ('u, 'v) record) F1.t
where 'u is instantiated to string option (for the optional
subject attribute) and 'v to string * string (for the required
title and url attributes).
In the above, attribute declarations are modeled at the type
level. They can be lifted to the module level if needed. Furthermore,
the generic definition of the validation function can
be modified accordingly to accommodate validation checks
for attribute formats and values.
8. MORE XML CONTENT VALIDATION
XML requires content models in element type declarations
be deterministic. Br-uggemann-Klein and Wood further clarified
the requirement as meaning 1-unambiguity [7, 8]. A
regular expression is 1-unambiguous if its sequence of symbols
can be recognized deterministically, with one-symbol
lookahead, by the corresponding nondeterministic finite-state
machine. For example, the content model ((b, c)-(b, d))
is not 1-unambiguous, because given an initial b, one cannot
know which b in the model is being matched without looking
further ahead to see what follows b. However, the equivalent
content model (b,(c-d)) is 1-unambiguous [2]. We can use
the nullable predicate and the first set to check whether
the content model as specified by a module expression is
1-unambiguous. The check is performed at module elaboration
time so that an ambiguous content model is detected
and an exception is raised as soon as possible. A content
model may also contain epsilon ambiguity which is allowed
by XML but demands additional work during validation. An
example of epsilon ambiguity is (a*-b*), when the empty
sequence is derivable from both a* and b*.
Besides element content models (i.e., regular expressions on
element type names), an XML element type may use other
content specifications. For example, the element type may
have an EMPTY or ANY specification, or a mixed content specification.
These specifications impose no additional difficulty
in the definition of the generic validation function. The
ANY specification means that the sequence of child elements
may contain elements of any declared element types, including
text, in any order. The mixed content specification allows
text data to be interspersed with elements of some prescribed
types. One may think of ANY as a special case of
mixed content.
One can view text data, which is denoted as #PCDATA ("Parsed
Character Data") in a mixed content specification, as elements
enclosed within a pair of implicit <text> start-tag
and </text> end-tag. A Pcdata module, similar to the
Empty module we already have, can be defined to help inductive
definitions of mixed content specifications. For example,
for DTDs with 2 element types, one can define an
Any module as follows by using a 3-ary alternative module
Alt3:
module Any = Star (Alt3 (P0) (P1) (Pcdata))
9. EXPERIENCE WITH LARGER DTDS
WML is a markup language for WAP applications. Its DTD
consists of 35 element type definitions. We have applied the
generic approach to validate WML documents. In order to
do so, we need to produce ML modules that include and
operate upon 36-ary type constructors (35 element types
plus 1 for #PCDATA). We also need to construct higher-order
modules that take in as many as 17 modules as input
(one of the element type definitions needs a 17-ary Alt mod-
ule). Our experience has been quite satisfactory: our code
compiles without problems under Objective Caml, but the
compilation time is not negligible (about 1 min. on a desktop
workstation). The validation time is negligible, however,
at least for the smallish examples we have tried (around
100 elements). We are working on both larger DTDs and
documents, and are collecting more performance data.
The size of the ML source code is quite large, however.
Take as an example a content model that specifies a sequence
of 10 elements, each of a different element type. One
needs a 10-ary module Seq10 to construct it. Code for
module Seq10 looks like the following:
module Seq10 = functor (F0: FUN) -> ... -> functor (F9: FUN) ->
struct
  type ('x0, 'x1, ..., 'x35) t =
      ('x0, 'x1, ..., 'x35) F0.t
    * ('x0, 'x1, ..., 'x35) F1.t
    * ...
    * ('x0, 'x1, ..., 'x35) F9.t
  ...
end
It is clear from the above that, for a DTD with n element
types, the source for module Seq m will have code size
O(mn). In the worst case, for a DTD of length n, our code
will need O(n) unique type variables, will contain type sharing
constraints of length O(n^2), and will have an overall code
size of O(n^2). The source code of all the necessary ML modules
for the 35-element WML DTD has a size of about 0.5
MB. When compiled, it produces a binary of size 175 KB
(*.cmo file in Objective Caml), and an interface of size 2.3
MB (*.cmi file in Objective Caml). ML code for the WAP
examples is accessible at the following URL:
http://www.iis.sinica.edu.tw/~trc/x-dot-ml.html
One can do a connected component analysis on the DTD
so that the set of element types are partitioned into disjoint
subsets where there is no type-dependency between the sub-
sets. A subset with k element types need only use k-ary type
constructors, and the overall code size for the modules used
for the subset can be reduced.
10. RELATED WORK AND CONCLUSION
In Section 1, we have introduced previous work that uses
existing or new functional languages to model and program
with XML DTDs. There is a wealth of research and system
work that is related to XML content modeling but is
not necessarily from the perspective of (functional) programming
languages. We list just a few here.
Brüggemann-Klein and Wood addressed the problem of ambiguous
XML (and SGML) content models, based on the theory
of regular languages and finite automata [7, 8]. In particular,
they showed that linear time suffices to decide whether
a content model is ambiguous. It is shown that regular
expressions in both "star normal form" and "epsilon normal
form" are always unambiguous [9]. The Glushkov automaton
that corresponds to a regular expression is used
for checking ambiguity and, if not unambiguous, for validation
as well. Murata has proposed a data model for XML
document transformation that is based on forest-regular language
theory [15, 16]. His model is a lightweight alternative
to XML Schema and provides a framework for schema trans-
formation. There is also work on type modeling for document
transformation in a structured editing systems using
data types [5]. However, none of the above work has used
specific programming language as a modeling language.
XML Schema is a maturing specification language for XML
content that is being developed at World Wide Web Consortium
[4]. XML Schema is more expressive than DTD
and the specification language itself uses XML syntax. The
difference between XML Schema and DTD seems to
be XML Schema's ability to derive new types by extending
or restricting the content models of existing types. XML
Schema also provides a "substitution groups" mechanism to
allow elements to be substituted for other elements. We are
investigating whether ML-like module languages are expressive
enough to model these mechanisms.
Backhouse, Jansson, Jeuring, and Meertens have written
a detailed introduction to generic programming [6]. See
also the introduction to fold/unfold by Meijer, Fokkinga,
and Paterson [13], as well as work on using fold/unfold for
structuring and reasoning about program semantics by Hutton
[12]. Our extension of simple fold to simultaneous fold
seems new. Most work about generic programming in the
functional programming research community seems to rely
on the mechanism of type classes to derive type-specific instances
of generic functions. The language of choice is often
Haskell. We have shown in this paper that the parametric
module mechanism in ML-like languages is suitable for
generic programming as well. In fact, we think that parametric
modules allow one to take finer control over the inductive
derivations of generic values. More powerful module
systems have been developed to allow mutually recursive
modules, as well as modules that depend on values and types
(see, e.g., Russo [17]). However, we showed here that the
lack of recursive modules need not be a problem as long as
the mutual dependency between the modules is only about
interdependent type definitions.
Viewed in the above context, our work can be thought of as
using the ML module facility to generate a deterministic automaton
that is specialized for the validation of elements of
a specific DTD. The validation automaton also gives types to the
elements (and their parts). In addition, the construction of
the validation automaton is entirely generic and can be automated.
Our work also serves as a usage case of ML parametric
modules, and can be used to stress-test current ML
implementations. It is a delight to see that our contrived code of
36-ary type constructors and 17-ary higher-order modules
compiles and executes without problems under Objective
Caml.
11. REFERENCES
--R
XML Schema Part 0: Primer.
Type modelling for document transformation in structured editing systems.
Generic programming: An introduction.
Anne Brüggemann-Klein and Derick Wood.
Anne Brüggemann-Klein.
XDuce: A typed XML processing language.
Regular expression types for XML.
Fold and unfold for program semantics.
Functional programming with bananas
Transformation of documents and schemas by patterns and contextual conditions.
Data models for document transformation and assembly.
Haskell and XML: Generic combinators or type-based translation? In Proceedings of the International Conference on Functional Programming.
A logic programming approach to supporting the entries of XML documents in an object database.
--TR
Functional programming with bananas, lenses, envelopes and barbed wire
Regular expressions into finite automata
One-unambiguous regular languages
The under-appreciated unfold
Fold and unfold for program semantics
Haskell and XML
Regular expression types for XML
Recursive structures for standard ML
A Logic Programming Approach to Supporting the Entries of XML Documents in an Object Database
Out-of-Core Functional Programming with Type-Based Primitives
Transformation of Documents and Schemas by Patterns and Contextual Conditions
XDuce
An Algebra for XML Query
Data Model for Document Transformation and Assembly
--CTR
Tyng-Ruey Chuang , Jan-Li Lin, On modular transformation of structural content, Proceedings of the 2004 ACM symposium on Document engineering, October 28-30, 2004, Milwaukee, Wisconsin, USA | fixed points;functional programming;XML;validation;modules and interfaces |
507673 | Probabilistic congestion control for non-adaptable flows. | In this paper we present a TCP-friendly congestion control scheme for non-adaptable flows. The main characteristic of these flows is that their data rate is determined by an application and cannot be adapted to the current congestion situation of the network. Typical examples of non-adaptable flows are those produced by networked computer games or live audio and video transmissions where adaptation of the quality is not possible (e.g., since it is already at the lowest possible quality level). We propose to perform congestion control for non-adaptable flows by suspending them at appropriate times so that the aggregation of multiple non-adaptable flows behaves in a TCP-friendly manner. The decision whether or not a flow is to be suspended is based on random experiments. In order to allocate probabilities for these experiments, the data rate of the non-adaptable flow is compared to the rate that a TCP flow would achieve under the same conditions. We present a detailed discussion of the proposed scheme and evaluate it through extensive simulation with the network simulator ns-2. | Introduction
CONGESTION control is a vital element of computer
networks such as the Internet. It has been widely
discussed in the literature, and experienced in reality,
that the lack of appropriate congestion control mechanisms
will lead to undesirable situations such as a congestion
collapse [1]. Under such conditions, the network
capacity is almost exclusively used up by traffic that never
reaches its destination.
In the current Internet, congestion control is primarily
performed by TCP. During recent years, new congestion
control schemes were devised, supporting networked applications
that cannot use TCP. Typical examples of such
applications are audio and video transmissions over the
Internet. One prime aim that these congestion control
schemes try to achieve is to share the available band-width
in a fair manner with TCP-based applications, thus
falling into the category of TCP-friendly congestion control
mechanisms.
TCP, as well as existing TCP-friendly congestion control
algorithms, requires that the data rate of an individual
flow can be adapted to network conditions. Using TCP,
it may take a variable amount of time to transmit a fixed
amount of data, or with TCP-friendly congestion control,
the quality of an audio or video stream may be adapted
to the available bandwidth.
While for a large number of applications this is not a limitation,
there are cases where the data rate of an individual
flow is determined by the application and cannot be adjusted
to the network conditions. Networked computer
games are a typical example, considering the fact that
players are very reluctant to accept the delayed transmission
of information about a remote player's actions. Live
audio and video transmissions with a fixed minimum quality,
below which reception is useless, fall into the same
category. For this class of applications there are only two
acceptable states: either a flow is on and the sender transmits
at the data rate determined by the application, or it
is off and no data is transmitted at all. We call network
flows produced by these applications non-adaptable flows.
In this paper we describe a TCP-friendly end-to-end
congestion control mechanism for non-adaptable unicast
flows called Probabilistic Congestion Control (PCC). The
main idea of PCC is
- to calculate a probability for the two possible states
(on/off) so that the expected average rate of the flow is
TCP-friendly,
- to perform a random experiment that succeeds with
the above probability to determine the new state of the
non-adaptable flow, and
- to repeat the previous steps continuously to account for
changes in the network conditions.
Through this mechanism it is ensured that the aggregate
of multiple PCC flows behaves in a TCP-friendly manner.
The remainder of this paper is structured as follows.
Section II summarizes related work. In Section III we
examine non-adaptable flows in more detail. A thorough
description of the PCC mechanism is given in Section IV.
The results of the simulation studies that were conducted
are presented in Section V, and we conclude the paper with
a summary and an outlook on future work in Section VI.
II. Related Work
Much work has been done on TCP-friendly congestion
control schemes for applications that cannot use TCP.
Prominent examples of these schemes are PGMCC [2],
TEAR [3], TFRC [4], and FLID-DL [5]. A discussion of
such TCP-friendly congestion control mechanisms can be
found in [6]. TCP, as well as all existing TCP-friendly
congestion control schemes, requires that the bandwidth
consumed by a flow be adapted to the level of congestion
in the network. By definition, non-adaptable flows cannot
use such congestion control mechanisms.
It is conceivable to use reservation mechanisms such as
IntServ/RSVP [7] or DiffServ [8] for non-adaptable flows
so as to prevent congestion altogether. However, these
mechanisms require that the network support the reservation
of resources or provide different service classes.
This is currently not the case for the Internet. In contrast,
PCC is an end-to-end mechanism that does not
require support from the network. With PCC it is possible
to "partly" admit a flow and to continuously adjust
the number of flows to network conditions.
We are not aware of any previous work that directly
matches the category of probabilistic congestion control.
III. Non-Adaptable Flows
For the remainder of this paper, a non-adaptable flow
is defined as a data flow with a sending rate that is determined
by an application and cannot be adjusted to the
level of congestion in the network. A non-adaptable flow
has exactly two states: either it is in the state on, carrying
data at the rate determined by the application, or
it is off, meaning that no data is transmitted at all. Any
data rate in between those two states is inefficient, since
the application is not able to utilize the offered rate.
Examples of applications using non-adaptable flows are
commercial network games such as Diablo II, Quake III,
Ultima Online, and Everquest. These games typically
employ a client-server architecture. The data rate of the
flows between client and server is determined by the fact
that the actions of the players must be transmitted instantaneously.
Similar restrictions hold for the flows between
participants of distributed virtual environments without
a centralized server. If a congestion control scheme delays
the transmission of actions for too long, the application
quickly becomes unusable. This can easily be experienced
by experimenting with a state-of-the-art TCP-based networked
computer game during peak hours. For this reason,
a number of applications resort to UDP and avoid
congestion control altogether.
A situation with either no congestion control at all or
vastly reduced utility in the face of moderate congestion
is not desirable. A much preferable approach is
to turn the flows of some participants off and to inform
the application accordingly. All other participants do not
need to react to the congestion. On average, all users
should be able to participate in the session for a reasonable
amount of time between off-periods to ensure utility
of the application. At the same time, off-periods should
be distributed fairly among all participants.
Other examples of applications with non-adaptable flows
are audio or video transmissions with a fixed quality.
There are two main reasons why it may not be possible to
scale down a media flow: either the user does not accept
a lower quality, or the quality is already at the lowest possible
level. The second reason indicates that a congestion
control mechanism for non-adaptable flows can complement
congestion control schemes that adapt the rate of a
flow to current network conditions.
IV. Probabilistic Congestion Control
The Probabilistic Congestion Control scheme (PCC)
provides congestion control for non-adaptable unicast
flows by suspending flows at appropriate times. PCC is an
end-to-end mechanism and does not require the support
of routers or other intermediate systems in the network.
The key aspect of PCC is that, as long as there is
a sufficiently high level of statistical multiplexing, it is
not important that each single non-adaptable flow behave
TCP-friendly at any specific point of time. What is important
is that the aggregation of all non-adaptable flows
on a given link behave as if the flows were TCP-friendly.
Due to the law of large numbers this can be achieved if
(a) each PCC flow has an expected average rate which is
TCP-friendly and if (b) each link is traversed by a sufficiently
large number of independent PCC flows.
At first glance (b) may be considered problematic, because
it is possible that a link is traversed only by a small
number of PCC flows. However, further reflection reveals
that in this case the PCC flows will only be significant
in terms of network congestion if each individual PCC
flow occupies a high percentage of the link's bandwidth.
We therefore relax (b) to the following condition (c): a
single PCC flow is expected to have a rate that is only
a small fraction of the available bandwidth on any link
that it crosses. Given the current development of available
bandwidth in computer networks, this is a condition
that is likely to hold true.
A. Requirements
There are a number of requirements that have to be
fulfilled in order for PCC to be applicable:
R1: High level of statistical multiplexing. Condition (c)
discussed above is met.
R2: No synchronization of PCC flows at startup. PCC
flows start up independently of each other.
R3: The average rate of a PCC flow can be predicted.
In order for PCC to work, it must be possible to
predict the average rate of a PCC flow.
R4: The average rate of a TCP flow under the same conditions
can be estimated. We expect that there is a
reasonably accurate method to estimate the average
bandwidth that a TCP flow would have under
the same network conditions.
B. Architecture
A simple overview of the PCC architecture is depicted
in Figure 1. A PCC sender transmits data packets at
the rate determined by the application, while the PCC
receiver monitors the network conditions by estimating
a TCP-friendly rate using a model of long-term TCP
throughput. Whenever a PCC receiver observes a degradation
in network conditions, it conducts a random experiment
to determine whether or not the flow should be
suspended. In the case of a negative result, a control
packet is sent to notify the sender that it is temporarily
required to stop. After a certain off-period, a sender may
then resume data transmission. For PCC, we chose to
allocate as much functionality to the receiver as possible
to facilitate a future extension of PCC to multicast.
[Figure 1 shows a sender and a receiver: the sender sends data packets (data, reflected timestamp, sequence number), starts/stops the flow, and reflects timestamps; the receiver sends control packets (flow state: on/off, timestamp) and performs the parameter measurements, the TCP-friendly rate and on/off-probability calculation, and the random experiment.]
Fig. 1. PCC Architecture
While a flow is in the on-state, control packets are sent
at certain time intervals. They allow to continuously measure
the round-trip time required to determine the TCP-friendly
rate, and they serve as a backup mechanism in
case of very heavy network congestion. In the absence of
these periodic control messages, the sender stops sending,
thus safeguarding against the loss of notifications to stop.
As long as the flow is in the on-state, the data packets
are transmitted at the rate determined by the application.
Each data packet includes the timestamp of the
most recent control packet that the sender has received
in order to be able to determine the round-trip time. Each
data packet also contains a sequence number to allow the
receiver to detect packet losses.
1 There are multiple ways in which this can be done, ranging from
a constant bit-rate flow, where this prediction is trivial, to the usage
of application-level knowledge or prediction based on past samples
of the data rate.
For the remainder of this work we use the TCP throughput
formula of Padhye et al. [9] to compute the TCP-friendly
rate. In order to determine the parameters required
for the formula, the current version of PCC uses
the measurement mechanisms proposed for the TCP-Friendly
Rate Control Protocol (TFRC) [4]. However,
it is important to note that PCC is independent of the
method used to estimate the throughput of a TCP flow
for given network conditions. A possible alternative, for
example, would be to use the rate calculation mechanism
of TCP Emulation At Receivers (TEAR) [3].
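For reference, a minimal sketch (not from the paper) of the widely used simplified form of the throughput model of Padhye et al., with s the packet size, rtt the round-trip time, t_rto the retransmission timeout, and p the loss event rate:

(* Sketch: estimated long-term TCP throughput in bytes/s. *)
let tcp_rate ~s ~rtt ~t_rto ~p =
  s /. (rtt *. sqrt (2.0 *. p /. 3.0)
        +. t_rto *. (3.0 *. sqrt (3.0 *. p /. 8.0))
             *. p *. (1.0 +. 32.0 *. p *. p))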
C. Continuous Evaluation
To determine the probability with which a PCC flow
is allowed to send for a certain time interval T, it is necessary
to compare the average rate r_NA of PCC to the
TCP-friendly rate r_TCP:

p = r_TCP / r_NA    (1)

where p denotes the ratio of r_TCP to r_NA. When solving
the equation, two outcomes are possible:
- p >= 1: The non-adaptable flow consumes less than or
the same amount of bandwidth that would be TCP-friendly
and should therefore stay on.
- p < 1: The non-adaptable flow consumes more
bandwidth than a comparable TCP-friendly flow. In
this case, p is taken as a probability and the non-adaptable
flow should be turned off randomly.
For uniformly distributed random number x
is drawn from the interval (0; 1]. If x > p holds, the PCC
ow is turned o for a time of T . After that time interval
the
ow may be turned on again. If x p, then the
ow
remains in the on-state. Since we require a sucient level
of statistical multiplexing (R1) and because of the law of
large numbers, the aggregation of all PCC
ows behaves
as if each of them were TCP friendly.
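As an illustration, the random experiment itself is only a few lines of code. The following C sketch is our own; the function name and the drand48-based draw are illustrative and not taken from the PCC implementation:

    #include <stdlib.h>

    /* One PCC random experiment: returns 1 if the flow stays on and
     * 0 if it must be suspended for the next interval T. */
    int pcc_experiment(double r_tcp, double r_na)
    {
        double p = r_tcp / r_na;        /* Equation 1 */
        if (p >= 1.0)
            return 1;                   /* no more than a TCP-friendly share */
        double x = 1.0 - drand48();     /* uniform draw from (0, 1] */
        return x <= p;                  /* stay on with probability p */
    }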
T is an application-specific parameter that is crucial for the utility of the protocol and thus for the user acceptance of the congestion control mechanism. For example, if short news clips are transmitted, T should be equal to the length of these clips. If a networked computer game is played, T should be determined so that in "normal" congestion situations the player is able to perform some meaningful tasks during the average time the flow stays on. If the network is designed to carry the required traffic (i.e., congestion is low), then the average on-time will be a large multiple of T.
Under the assumption of a relatively constant level of congestion, the further behavior of PCC is very simple. After a time of T, a flow that is in the on-state has to repeat the random experiment using the same r_TCP. However, in a real network the level of congestion is not constant but may change significantly within a time frame much shorter than T. There are two cases to consider: network conditions may improve (increasing r_TCP) or the congestion may get worse.
The first case is not problematic since it does not endanger the overall performance of the network. PCC flows may be treated unfairly in that they are turned off with a higher probability than they should be. However, after a time of T the decision will be reevaluated with the correct probability and PCC will adjust to the new level of congestion.
The second case is much more dangerous to the network. In order to prevent unfair treatment of competing adaptive flows or even a congestion collapse, it is very important that PCC flows respond quickly to an increase in congestion. Therefore, PCC continuously updates the value for p and performs further random experiments if necessary.
Obviously, it is not acceptable to simply recalculate p without accounting for the fact that the flow could have been turned off during one of the previous experiments. Without any adjustments, PCC would continue to perform the same random experiment again and again, and the probability to survive those experiments would drop to 0. The general idea of how to avoid this drop-to-zero behavior is to adjust the rate used in the equations to represent the current expected average data rate of the flow.
PCC modifies the value r_NA, taking into account the last random experiments that have been performed for the flow. To this end, PCC maintains a set P of the probabilities p_i with which the flow stayed on in the random experiments during the last T seconds. 2 The so-called effective rate r_EFF is determined according to the following formula:

    r_EFF = r_NA · ∏ p_i over all p_i ∈ P, and r_EFF = r_NA for empty P   (2)

For the continuous evaluation and the random experiments, r_EFF replaces r_NA in Equation 1.
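Computing the effective rate is then a simple product over the stored probabilities. A minimal C sketch, assuming P is kept as an array of the p_i recorded during the last T seconds:

    /* Effective rate (Equation 2): r_NA scaled by the product of the
     * on-probabilities of the recent experiments; equals r_NA if the
     * set P is empty (n == 0). */
    double effective_rate(double r_na, const double *P, int n)
    {
        double r_eff = r_na;
        for (int i = 0; i < n; i++)
            r_eff *= P[i];
        return r_eff;
    }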
D. Initialization
At the initial startup and after a suspended flow restarts, a receiver does not have a valid estimate of the current condition of the network and thus is not able to instantaneously compute a meaningful TCP-friendly rate. To avoid unstable behavior, a flow will stay in the on-state for at least the protected time T0, where T0 is the amount of time required to get the necessary number of measurements to obtain a sufficiently accurate estimate of the network conditions.
2 Note that p_i is set to 1 if the corresponding p ≥ 1.
After T0, PCC determines whether it should cease to send or may continue. In order to take the data transmitted during the protected time into account, the probability of turning the flow off is increased during the first interval of T so that the average amount of data transmitted during T0 + T is equal to that carried by a competing TCP flow. Let r′_NA denote the average rate of the non-adaptable flow during the protected time and r′_TCP the average rate a TCP flow would have achieved during the same time. The adjusted ratio p′ can then be calculated as

    p′ = (T0 · r′_TCP + T · r_TCP − T0 · r′_NA) / (T · r_NA)   (3)
Again, for 0 ≤ p′ ≤ 1 we use p′ as the probability for the random experiment. If the flow is turned off, the application may resume sending after it has been off for at least T seconds, starting again with the initialization step. 3 If the flow is not turned off, then the flow will stay on for at least T more seconds, provided that the congestion situation of the network does not get worse.
Note that it is now possible that p′ < 0 if the non-adaptable flow transmits more data during T0 than a TCP flow would carry during T0 + T. Obviously, in this case p′ cannot be used as a probability for the random experiment. Instead, it is necessary to turn the flow off and to increase T so that p′ = 0.
Through the above mechanism, the excess data transmitted during the protected time T0 is distributed over a time span of T. At time T0, r′_TCP equals r_TCP, but in contrast to r′_TCP, r_TCP continues to be updated after T0.
When a random experiment has to be conducted, it is necessary to calculate not only p′ but also the corresponding p. Each is included in its respective set P′ and P. As long as PCC is in the first T slot and the protected time has to be accounted for, the values in P′ are used to calculate the effective rate and thus the on-probability. Later on, the set P is used.
It may be considered problematic to let a flow send at its full rate for T0, as this violates the idea of exploring the available bandwidth as is done, e.g., by TCP slow-start. However, requirements R1 (high level of statistical multiplexing) and R2 (no synchronization at startup) prevent this from causing excessive congestion. In addition, the value of T0 will usually decrease the more congested the network is, since the actual measurement of the loss event rate makes up most of the time interval T0. Loss events become more frequent as congestion increases, and therefore the estimate of the network conditions converges faster to the real value. While r_TCP is determined, the receiver also calculates the average rate of the non-adaptable flow, r_NA. 4 Summing up, three important values are determined during initialization: r_TCP, r_NA, and T0.
3 T can be adjusted by some random offset to prevent synchronization in case several flows with the same value for T were forced to cease sending simultaneously due to heavy congestion.
A finite state machine of a PCC receiver is depicted in Figure 2.
[Figure 2 shows the receiver states INIT, First T, ON, and OFF; transitions are triggered by timer expiry and by the outcomes of the random experiments (comparisons of p and p′ with the random number x), with the timer being restarted on each transition.]
Fig. 2. Finite State Machine of a PCC Receiver
The runtime of the timer used in this state machine is always T.
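The following C sketch is our reading of Figure 2; the state names and the shape of the transition function are illustrative, not part of the PCC specification:

    enum state { INIT, FIRST_T, ON, OFF };

    /* One transition of the PCC receiver FSM. 'timeout' signals that
     * the current timer expired (T' during INIT, T otherwise); x is the
     * random number of the current experiment. */
    enum state transition(enum state s, int timeout, double x,
                          double p, double p_prime)
    {
        switch (s) {
        case INIT:    return timeout ? (p_prime >= x ? FIRST_T : OFF) : INIT;
        case FIRST_T: return timeout ? (p >= x ? ON : OFF)
                                     : (p_prime >= x ? FIRST_T : OFF);
        case ON:      return p >= x ? ON : OFF; /* continuous evaluation */
        case OFF:     return timeout ? INIT : OFF;
        }
        return s;
    }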
F. FEC
Since applications generating non-adaptable flows frequently have to obey real-time constraints, they benefit from forward error correction to compensate for packet loss. However, packet loss typically signals congestion. Therefore, it has long been considered unacceptable to compensate for congestion-based packet loss by increasing the data rate of a flow with redundant information for forward error correction.
PCC supports the use of forward error correction in a straightforward fashion: when an application decides to employ forward error correction, the new r_NA is simply set to the rate of the flow including the forward error correction information. From the perspective of PCC this is equivalent to an application increasing its sending rate and thus needs no special treatment. Increasing r_NA results in an appropriate decrease of p and is therefore fair towards competing flows.
4 In our implementation, we use an exponentially weighted moving average of past PCC rates, but as noted in requirement R3, other options are possible.
G. Example of PCC Operation
To provide a better understanding of the behavior of PCC, let us demonstrate how PCC operates by means of an example. As depicted in Figure 3, the sender starts transmitting at the rate determined by the application, r_NA = 100 KBit/s. After T0 = 10 seconds, the receiver arrives at an initial estimate of r_TCP = 80 KBit/s. Furthermore, let us assume that the application developer decided that T = 50 seconds is a good value for the given application. Now p can be calculated as:

    p = (80 KBit/s) / (100 KBit/s) = 0.8

The value of p is included in the set P, and p′ is calculated since we are in the first T interval and have to make up for the data transmitted during the protected time:

    p′ = ((10s + 50s) · 80 KBit/s − 10s · 100 KBit/s) / (50s · 100 KBit/s) = 0.76
[Figure 3 plots the rates r_NA and r_TCP over time for this example.]
Fig. 3. Example of PCC Operation
Now a random number is drawn from the interval (0, 1], deciding whether the flow will stay on or be turned off. Given a high level of statistical multiplexing, this will result in roughly 1 out of 4 PCC flows being turned off, with the aggregation of the remaining PCC flows using a fair, TCP-friendly share of the bandwidth.
Let us assume that the random number drawn is smaller than p′ and that the flow will stay in the on-state. As depicted in Figure 3, at some later point in time the bandwidth required by the application increases to r_NA = 200 KBit/s. A new value for p is then calculated as follows:

    p = (80 KBit/s) / (200 KBit/s · 0.8) = 0.5

This value for p is saved to the set P for later use. The adjusted probability p′ has to be calculated based on the past value of p′:

    p′ = ((10s + 50s) · 80 KBit/s − 10s · 100 KBit/s) / (50s · 200 KBit/s · 0.76) = 0.5
Let the random number drawn for this decision be below 0.5 so that the flow remains on. A few seconds after this decision, the rate a TCP flow would have under the same conditions drops to r_TCP = 40 KBit/s. Consequently, new values for p and p′ are calculated:

    p = (40 KBit/s) / (200 KBit/s · 0.8 · 0.5) = 0.5

    p′ = ((10s + 50s) · 40 KBit/s − 10s · 100 KBit/s) / (50s · 200 KBit/s · 0.76 · 0.5) ≈ 0.37

Again the value p is stored in P while the random number drawn is below p′, so the flow stays on. At the end of the first T interval, two things change. First, the data transmitted during the protected time need no longer be accounted for since PCC has made up for that during the past T interval. Therefore p′ is no longer calculated. Second, the first value within P times out and is removed from the set. If the network situation has not changed, this will result in the following new value for p:

    p = (40 KBit/s) / (200 KBit/s · 0.5 · 0.5) = 0.8
This time let the random number be larger than p. As a result, the flow is suspended for the next T interval before it may start again with a protected time. It should be noted that this example was designed to demonstrate how PCC works. In reality, a situation where the rate of the non-adaptable flow is five times the TCP-friendly rate indicates that the network resources are not sufficient for this application.
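For readers who want to verify the arithmetic of this example, the following C fragment recomputes the first four probabilities (the constants are the values used above):

    #include <stdio.h>

    int main(void)
    {
        double T0 = 10, T = 50;                      /* seconds */
        double r_tcp = 80, r_na = 100, r0_na = 100;  /* KBit/s  */

        double p1  = r_tcp / r_na;                                  /* 0.80 */
        double pp1 = ((T0 + T) * r_tcp - T0 * r0_na) / (T * r_na);  /* 0.76 */

        r_na = 200;                     /* the application rate increases */
        double p2  = r_tcp / (r_na * p1);                           /* 0.50 */
        double pp2 = ((T0 + T) * r_tcp - T0 * r0_na)
                     / (T * r_na * pp1);                            /* 0.50 */

        printf("%.2f %.2f %.2f %.2f\n", p1, pp1, p2, pp2);
        return 0;
    }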
H. Extensions
While the current version of PCC works as described above, there are a number of options and possible improvements that we have investigated. In the following we outline modifications that have not yet been incorporated into PCC.
H.1 Probe While Off
PCC flows on average may receive less bandwidth than competing TCP flows, since a flow that has been turned off may resume only after a time of T, even if network conditions improve beforehand. This degrades PCC's performance, particularly if T is large. In order to improve average PCC throughput, flows that are off could monitor network congestion by sending probe packets at a very low data rate from the sender to the receiver. The data rate r_OFF produced by the probe packets needs to be taken into account in Equations 1 and 3 by including an additional factor (1 − p) · r_OFF · T.
If the loss rate and the round-trip time of the probe packets signal that r_TCP has improved, a flow that has been turned off may be turned on again immediately, without waiting for the remainder of the T to pass, and without performing an initialization step. This may be done only if, under the new network conditions, all experiments within the last T interval had been successful.
If the congestion situation worsens later on, it must be checked whether any of the experiments during the last T interval had failed. If this is the case, the flow must be turned off again. Only after the last entry in set P has timed out may the flow resume normal operation. For Probe While Off to work correctly, it is of major importance that the estimate of the network parameters works independent of the data rate PCC is sending at.
The current version of PCC does not include Probe While Off, since it could lead to frequent changes between the states "on" and "off", which is likely to be distracting to the user of the application. Furthermore, probe packets waste bandwidth. Probe While Off may be included in a later version of PCC as an option for the application. The mechanism can be improved by including a threshold, so that the flow is turned on again only if the available bandwidth increases significantly. With this improvement, the number of state changes is reduced to improve stability.
H.2 Probe Before On
In PCC, a flow is turned on upon initialization. This has two drawbacks. First, it violates the idea of exploring the available bandwidth as in TCP slow-start. Second, the flow may be turned off immediately after the initialization is complete, so that the user perceives only a brief moment where the application seems to work before it is turned off. An alternative would be to send probe packets at an increasing rate before deciding whether or not to turn on the flow. Only after the parameters have been estimated and the random experiment has succeeded will real data for the flow be transmitted. The drawback to this method is that bandwidth is wasted by probe packets and that the initial startup of a flow is delayed.
H.3 Loss Rate Monitoring
PCC flows do not take into account the impact of their actions on the network conditions. Assume that the random experiments of a number of PCC flows fail due to increased congestion, but that the congestion was largely caused by these PCC flows. Then too many flows will be suspended, since it is impossible to include the expected improvement in the network conditions in the calculation of the on-probability. Similarly, when the bandwidth consumed by PCC flows during the protected time is a significant fraction of the bottleneck link bandwidth, severe congestion may be inevitable. Even after the protected time, the changes in network conditions caused by PCC flows that consume a large fraction of the bandwidth are undesirable.
For these reasons it is vital that the condition of a sufficient level of statistical multiplexing holds and that the PCC flows do not consume too large a fraction of the bandwidth of the bottleneck link. By continuously monitoring the packet loss rate (e.g., through probe packets) and correlating it with the on- and off-times of the PCC flow, it is possible to estimate the impact of the flow on the network conditions. If the PCC flow causes very large variations in the loss rate, the flow should be suspended permanently. With this extension it is possible to use PCC in environments where it is unclear whether the condition of a sufficient level of statistical multiplexing is fulfilled.
V. Simulations
In this section, we use network simulations to analyze PCC's behavior. Simulations are based on the dumbbell topology (Figure 4) since it is sufficient to analyze PCC fairness and the results can be compared to those of other congestion control protocols evaluated with it. For the same reason, simulations were carried out with the ns-2 network simulator [10], commonly used to evaluate such protocols. Drop-tail queuing (with a buffer size of 50 packets) was employed at the routers. We used the standard TCP implementation of ns for the flows competing with PCC.
[Figure 4 shows the dumbbell topology: senders and receivers connected through a single bottleneck link.]
Fig. 4. Simulation Topology
A. TCP-Friendliness
A typical example of PCC behavior is shown in Figure 5. For this simulation, 32 PCC flows and competing TCP flows were run over the same bottleneck link. At an application sending rate of 750 KBit/s, the PCC flows should ideally be in the on-state for two thirds of the time. In this example, T was set to 60s, leading to an expected average on-time of 120s. The graph depicts the throughput of one sample TCP flow and one sample PCC flow, as well as the average throughput of all flows. The starting time of the PCC flows is spread out over the first 50s to avoid synchronization.
[Figure 5 plots the throughput of the sample flows and the averages over time.]
Fig. 5. PCC and TCP throughput
The TCP rate shows the usual oscillations around the fair rate of 500 KBit/s. PCC's behavior is nearly perfect, with an average rate that closely matches the fair rate and an on-off ratio of two to one. Naturally, not all of the PCC flows achieve exactly this ratio; some stay on for more, some for less time.
B. Intra-Protocol Fairness
Usually, it is desirable to evenly distribute the necessary off-times over all PCC flows instead of severely penalizing only a few. To examine PCC's intra-protocol fairness, a simulation setup similar to the previous one was used, yet the number of concurrent PCC and TCP flows varied between 2 and 128. The probability density function of the throughput distribution from these simulations is shown in Figure 6. As expected, the throughput range is larger for PCC. The coefficient of variation (standard deviation over mean) for PCC throughput is 15%, compared to a TCP coefficient of variation of only about 3%.
This results from the time frame for changes in the states of the PCC flows being 60s instead of a few RTTs for TCP flows. There is a direct tradeoff between the parameter T and the intra-protocol fairness. Longer on-times, achieved by a larger T, come at the expense of the flows that are suspended for a longer time, thus decreasing intra-protocol fairness. Taken to the extreme, for very large T flows may stay on for the whole duration of the session or are not permitted at all, leading to a type of admission control scheme.
[Figure 6 shows the probability density function of per-flow average throughput (KBit/s) for PCC and TCP.]
Fig. 6. Distribution of Flow Throughput
C. Responsiveness
In addition to inter- and intra-protocol fairness, sufficient responsiveness of a flow to changes in the network conditions is important to ensure acceptable protocol behavior. TCP adapts almost immediately to an increase in congestion (manifest in the form of packet loss). Through the continuous evaluation at timescales of less than T, as described in Section IV-C, PCC can react nearly as fast as TCP to increased congestion; however, it will react to improved network conditions on a timescale of T. Figure 7 depicts the average throughput of the PCC flows, again with parameter T set to 60s, and of the competing TCP flows. A rather dynamic network environment was chosen, where the loss rate increases abruptly from 2.5% to 5% from time 200s to 300s and from time 400s to 420s.
[Figure 7 plots average TCP and PCC throughput over time during these loss bursts.]
Fig. 7. Loss Bursts
When the loss rate changes at time 200s, PCC does not adapt as fast as TCP but still achieves an overall average rate that is quite close to the TCP rate after only a few seconds. Some time later we can see a little spike in the average PCC rate, resulting from the PCC flows that reenter the protected time to probe for bandwidth once their off-time is over. Since the loss rate is still high, the average PCC rate settles at the appropriate TCP-friendly rate shortly thereafter. As soon as the loss rate is reduced to its original value, the probability that suspended flows reentering the protected time will immediately be suspended again (and the probability that the random experiment of flows in the on-state will fail) decreases. Thus, after time 300s, the random experiments of more and more flows succeed until, about 50 seconds later, the TCP-friendly rate is reached again. Although PCC reacts more slowly than TCP, the average throughput of TCP and PCC up to time 350s is very similar. In contrast to long high-loss periods, short loss spikes hurt PCC performance much more than TCP performance. When the loss rate increases again at time 400s, suspended PCC flows will stay in the off-state for at least 60s, while the actual congestion persists for only 20s. From the time the congestion ends until the time the PCC flows are allowed to reenter the protected time, TCP throughput is considerably higher than PCC throughput. However, we can also see from the graph that during periods of congestion PCC throughput does not quite drop to the level of TCP throughput but remains slightly higher. In the following we will analyze this effect in more detail.
D. PCC Throughput for Different Application Sending Rates
Ideally, no PCC flows would be suspended as long as the PCC application sending rate is below the TCP-friendly rate. For higher application sending rates, the average PCC rate should remain at exactly the fair rate through the use of the random experiments. From Figure 8 we take it that an average PCC rate of exactly the fair rate is not reached when the application sending rate equals the fair rate but for an application sending rate that is about 25% higher. The latter effect can be explained by PCC's susceptibility to dynamic network conditions. TCP's typical sawtooth-like sending rate results in variations in the network conditions which unduly cause suspension of PCC flows. When we compare the average PCC throughput to TCP throughput for high PCC application sending rates, we find that PCC throughput, and thus PCC's aggressiveness, continues to increase with the application sending rate once the fair rate has been reached.
The effect of increased aggressiveness at higher application sending rates can be attributed to the TCP model used by PCC. As stated in [9], the TCP model is based on the so-called loss event rate. A loss event occurs when one or more packets are lost within a round-trip time, and the loss event rate is consequently defined as the ratio of loss events to the number of packets sent. The denominator of the loss-event rate increases as more and more packets are sent during a round-trip time due to a higher application
sending rate. At the same time, the number of loss events does not increase to the same extent, since more and more lost packets are aggregated to a single loss event. An in-depth analysis of this effect can be found in [11]. When relating the estimated TCP-friendly rate at different application sending rates to the average PCC rate achieved in these simulations, it becomes obvious that PCC's aggressiveness is not caused by PCC's congestion control mechanism but by the dependence of the TCP model on the measurement of the loss event rate at sending rates close to the actual TCP rate (to ensure that for TCP and the TCP model the lost packets that constitute a loss event are the same). In addition to PCC's susceptibility to variations in the network conditions, the difference between the TCP-friendly rate and the average PCC rate is also caused by taking into account only the rate estimates of flows in the on-state.
[Figure 8 compares the average PCC rate with the rate estimated by the TCP model, both in percent of the fair rate, for different PCC application sending rates in percent of the fair rate.]
Fig. 8. Comparison with Estimated TCP-Friendly Rate
E. PCC Fairness for Different Combinations of Flows
Figure 9 shows the average throughput achieved by PCC for different combinations of PCC and TCP flows when the fair rate is 500 KBit/s and the application sending rate is 750 KBit/s. Generally, PCC throughput increases with the number of TCP flows, since the higher the level of statistical multiplexing, the lower the variations in the network conditions that degrade PCC performance. This effect is more pronounced the lower the number of PCC flows is.
[Figure 9 plots average PCC throughput in KBit/s over the number of PCC flows and the number of TCP flows.]
Fig. 9. Average PCC Throughput for Different Numbers of Flows
For a more detailed analysis of PCC and further network simulations we refer the reader to [12].
VI. Conclusions
In this paper we presented a congestion control scheme for non-adaptable flows. This type of flow carries data at a rate determined by the application. It cannot be adapted to the level of congestion in the network in any way other than by suspending the entire flow. Existing congestion control approaches therefore are not viable for non-adaptable flows.
We proposed to perform congestion control for such flows by suspending individual flows in such a way that the aggregation of all non-adaptable flows on a given link behaves in a TCP-friendly manner. The decision about suspending a given flow is made by means of random experiments.
In a series of simulations we have shown that PCC displays a TCP-friendly behavior under a wide range of network conditions. We identified the conditions under which PCC throughput does not correspond to the TCP-friendly rate. To some extent, these effects on the average PCC sending rate cancel each other out. Nevertheless, they may degrade PCC performance.
We intend to include Probe While Off as an optional element in PCC, which would improve PCC's behavior in highly dynamic network environments. Furthermore, we are currently investigating a method to perform a more accurate estimate of the fair TCP rate if the loss event rate is measured at a sending rate that differs considerably from the TCP-friendly rate. Finally, we plan to evaluate if and how PCC can complement congestion control for multicast transmissions.
References
John Heidemann
Promoting the use of end-to-end congestion control in the Internet
Modeling TCP Reno performance
pgmcc
Equation-based congestion control for unicast applications
FLID-DL
Advances in Network Simulation
Issues in Model-Based Flow Control

Keywords: TCP-friendliness; non-adaptable flows; congestion control
507677 | The design of a transport protocol for on-demand graphical rendering. | In recent years there have been significant advances in 3D scanning technologies that allow the creation of meshes with hundreds of millions of polygons. A number of algorithms have been proposed for the display of such large models. These advances, coupled with the steady growth in the speed of network links, have given rise to a new area of interest, the streaming transmission and visualization of three-dimensional data sets. We present a number of issues that must be addressed when designing a transport protocol for three-dimensional images as well as a method of transport. We then evaluate an implementation of an On-Demand Graphic Transport Protocol (OGP) that addresses these issues. | INTRODUCTION
With the enormous increase in computing power of today's home
computers, it is now possible for complex three-dimensional models
to be used in home applications. The growth of the Internet
introduces the possibility of using such three-dimensional models
on websites and online applications. The size of the models,
with the corresponding long download times, prohibits transferring
the models completely before rendering begins; therefore, efficient
methods for streaming three-dimensional models must be developed
if the models are to be used in online applications. The use
of a streaming protocol allows the user to begin viewing a portion
of the model before transmission is completed. This type of
streaming is, however, distinct from the problem of video streaming. Videos can be loss-tolerant and are linearly ordered. By contrast, three-dimensional images are not linearly ordered. Given this information, we observe that while video streams have time-dependencies between video segments, three-dimensional images have space-dependencies between picture segments. This leads to radically different methods for optimal transport.
(Work funded in part by the NSF under grant CCR-0086094.)
A common scheme for rendering three-dimensional models is
based on a bounding volume approach. The data structure containing
the models is a tree with certain properties, such as loss-
tolerance yielded by the partial ordering of the data structure, that
can be leveraged to facilitate efficient streaming. The nodes of the
tree represent bounding volumes of the model with the roots of the
tree representing spheres that encompass the entire model. Nodes
lower in the tree represent successively smaller volumes, the leaves
being the smallest resolution points. The dependencies in the tree
run along the branches. Each node is dependent only on its parent
node and there is no ordering among siblings in the tree. This
allows a certain flexibility in the reliable transport of the model because
losses only affect subtrees rooted at the lost nodes. Another
facet of the tree is that greater resolution is achieved by running
down the tree, therefore it is possible to achieve a lower resolution
version of the model by only streaming down to a certain level
within the tree. The ability to send only part of the tree for rendering
can be leveraged to control the quality of the model based
on the bandwidth available to the user. A further property of the
tree is that particular branches in the tree correspond to particular
spatial sections of the picture. It is possible to take advantage of
this fact and render exclusively, or more completely, the sections
of the image that are of interest to the user, perhaps by allowing
the user to mouse-over the portion of the model in which they are
interested [14].
We therefore contribute an analysis of the issues involved in developing
an efficient transport protocol for streaming three-dimensional
models and the design of the On-Demand Graphic Transport
Protocol (OGP). The standard Internet transport protocols are
not appropriate for streaming such models. TCP's [11] full reliability
can create unnecessary delays in the stream, but UDP [12]
does not provide the rendering application the reliability it requires
to maintain the partial ordering [3]. Therefore we present a protocol
that maintains the partial ordering required by the rendering
applications. We present a good node packing algorithm that
can maintain properties such as loss-tolerance constant throughout
transport, thereby allowing the transport protocol to continue to exploit
these properties. We demonstrate that better performance can
be obtained by a transport protocol by taking advantage of these
properties.
The rest of this paper is organized as follows. In Section 2,
we describe approaches to streaming media in general, discussing
their limitations. In Section 3, we present our analysis of the tree
structured three-dimensional models and the design of an efficient
streaming protocol for three-dimensional models. In Section 4, we
present the parameters used in designing an efficient protocol for
three-dimensional model transport. We then present the implementation
of the On-Demand Graphic Transport Protocol (OGP).
In Section 5 we present the results of the evaluation of OGP. In
Section 6, we present some conclusions about three-dimensional
model streaming.
2. STREAMING MEDIA
The sheer size of media, such as entire videos and three-dimensional
models, leads us to consider new representations that enable
intelligent partitioning of the data into smaller application-layer
data units [4]. Such partitioning supports the streaming of data,
providing the benefits of pipelining data transmission with data presentation. This new type of media is often progressive, with each additional piece of data adding to what has already been received.
Protocols such as RTP [15] make use of application level framing.
Many such protocols are integrated directly into the application,
which would then sit on top of UDP. This allows the applications
to be run without changes to the current Internet technologies. This
research in streaming media has mainly focused on video as the
media of choice. RTP, for example, is tailored to handle real-time
requirements that are inherent in video. The streaming of three-dimensional
models introduces different requirements for the transport
of model data based on the structure of the data.
2.1 Video
Since a video is the progressive representation of images, streaming
video is naturally represented by a sequence of frames or groups
of frames for optimization. These frames represent a fully linear se-
quence. Loss cannot be tolerated within a frame, but each of these
frames is an independent entity, enabling streaming video protocols
to tolerate the loss of some frames. Essentially, these protocols are
memory-less. For example, in MPEG encoding, the I-, P-, and B-frames
must be in order to be usable but the stream can tolerate
the loss of a B-frame affecting only the subsequent B-frames until
the reception of the next I-frame [6]. There are also strict time
considerations with video. Once the play-out time for the frame
following a lost frame expires, it is too late to do anything with that
lost frame. Such partitioning of video divides it into application
data units that have temporal locality. Each group of pictures is
related to the others in time, not space.
The usefulness of designing image models that allow losses in particular parts of the image, thereby allowing the continued transfer and display of the rest of the image with the lost piece appearing as a small blank, has been shown to help with intra-frame loss [16].
This can be seen as a motivating step toward the structuring of models
using spatial partitioning as opposed to temporal partitioning.
Three-dimensional modeling uses this concept extensively.
2.2 Three-Dimensional Models
The design goal of three-dimensional models is to provide a representation
of an object for presentation to the user. In order to
provide a complete representation, all data must be available for
presentation. In order to support more flexible presentation of data,
these models have been designed in a progressive manner; the more
of the data that is available, the better the quality of the presenta-
tion. The three-dimensional models are divided into application
data units that have spatial locality. Each node is related to the others
with respect to their relative positions in the actual model. Such
models inherently require memory of all available data. Model representations
based on the bounding volume approach have a tree-based
structure. This structure provides a partial ordering on the
data. Each node that refines a particular spacial location on the
model is dependent on all previous nodes that involve that location.
If a node from one part of the model is lost, this does not affect
other sections of the model. In this way, the data is non-linear. The
partial ordering can be leveraged to make significant performance
gains in the face of loss [2, 3, 8]. It is possible to maintain the
stream efficiently by carefully keeping track of the dependencies
existing between parts of the data structure [9].
When a loss is encountered in a model, any received children of the
lost node cannot be rendered. If the rendering is on-demand, then
the time delay in rendering caused by the transmission of nodes
that cannot be immediately rendered will be perceived. In order
to avoid this, nodes that are dependent on the loss should not be
transmitted until the lost node is re-transmitted and received. This
implies that rapid loss detection is essential to efficiently stream
models for on-demand rendering. To this end, there has been some
work in finding effective ways to detect losses early. Papadopoulos,
et al. [10] use gap-based loss detection as opposed to timer-based
loss detection. This minimizes the latency for loss detection. While
the time constraints for three-dimensional models are different than
for video, it will still prove to be very useful to minimize the time
it takes to detect a loss. By detecting losses early and transmitting
only sections of the data structure that can be rendered, the door
is opened for the possibility of not re-transmitting lost data immediately
if it is not desired. This amounts to varying the level of
reliability for the transfer of each packet. Varying the reliability
levels dynamically during transmission of data has been shown to
be useful in creating better efficiency in the transfer of data that can
sustain some loss [7].
One approach to streaming three-dimensional models is to transmit
data progressively with lower resolution data being transmitted
first and then streaming higher-resolution data to fill in the details.
One approach [13] uses the HTTP/1.0 protocol to request data from
a standard web server, such as Apache, that would send requested
bits of data to the client. We believe this approach is not optimal
because it does not make use of the properties of the data structure.
Since node size will not always be the same as packet size, it is
important to consider how to pack the models into packets. The
problem is essentially one of linearization. The tree in which the three-dimensional model is stored is not linear; it is partially ordered, but transmission over the network requires linearization. If the data is not intelligently linearized, we can end up transmitting data that cannot be used.
3. APPLICATION
Today, the transport of data across the Internet is achieved primarily
through the use of two transport protocols, TCP and UDP.
TCP provides a fully reliable means of communication that guarantees
in-order delivery of all packets sent without duplication. UDP,
on the other hand, provides no reliability guarantees at all, packets
may come out of order or even not at all. With the widespread
use of these two transport protocols, it is tempting to take a very
binary, one-dimensional view of transport service. The tendency
is to either see the data transmission as reliable and fully ordered
or not reliable and unordered. The main problem is that this cuts
off a large portion of the possible space. Really, transport service
should be seen as a two dimensional continuous function involving
guarantee of delivery and ordering as in Figure 1 [2, 3, 7, 8].
[Figure 1 depicts the transport service space with ordering on one axis (none to full) and reliability on the other (0% to 100%); UDP sits at the unordered, unreliable extreme, with partial ordering and partial reliability in between.]
Figure 1: Transport Service Space
3.1 Exploring the Service Space
Once the transport service space is expanded, the impact of using
the incorrect transport service level can be analyzed. There are
three possible relationships between the transport service level used
in the transmission of data and the actual need of the application
requiring the transmission.
1. The level of transport service is optimal for the application.
2. The level of transport service is too high for the application.
3. The level of transport service is too low for the application.
For the second two relationships, there are two different aspects in
which the transport service level could be seen as too high or low
for an application.
1. The ordering requirements can be too strict or lax.
2. The loss tolerance can be too strict or lax.
The various combinations of possibilities are derived directly from
the expanded transport service space. The effects of the situations,
however, must be discussed systematically.
3.2 Mapping Channel Service to Application
Requirements
If an optimal transport service level is chosen for the transport
of an application's data, it is clear that there will be no wasted resources
providing services that the application does not require. It
is also clear that the application will be able to function since all of
its service requirements would be met by the optimal matching.
The interesting cases to study are the sub-optimal cases. If there
is no significant impact on performance when a sub-optimal choice
in transport service level is chosen, there would be no reason to
consider alternative transport mechanisms to TCP.
If the reliability level required by the application is not met, it
will be impossible for the application to produce correct results.
The actual effects on the application are scenario specific, but can
range from total failure to incorrect functionality [9]. Similar effects
can be seen from an inappropriate ordering constraint. If the
application's ordering requirements are more strict than the transport
layer can provide, then again the application will fail to function
correctly. For three-dimensional modeling applications, it is
not possible for the rendering engines to deal with loss or complete
mis-ordering, therefore it is clear that UDP will not provide a sufficient
level of transport service.
[Figure 2 shows a tree in which dependencies run down the branches.]
Figure 2: Partially Ordered Tree
If the transport service level required by the application is surpassed
by the network, unnecessary delays and a loss of goodput are experienced. A similar result occurs when the ordering constraints
of the network are too strict. If this happens, it is possible that some
data will be ready to be sent but will have to wait due to false ordering
constraints. In this way, both throughput and goodput will
suffer [9].
3.3 Maintaining Application Data Properties
While there has been research into designing protocols that can
handle partial orderings [2, 3, 8], the protocol to handle streaming
bounding-sphere based three-dimensional models must be specially
tailored. Current partial order protocols address the ordering
of packets, but not how data will be organized into packets.
Given that a node in a three-dimensional model may not fill an
MTU, it may be necessary to pack multiple nodes in one packet.
Such packing can greatly improve the performance and efficiency
of the transmission of the model, but must be accomplished within
the constraints of the original ordering. Also, due to the very specific
nature of the tree and its partial ordering, a specialized transport
scheme can enable on-demand improvement of specific, user-chosen
parts of the model.
We now analyze the tree used to store the models in bounding
sphere methods of rendering. The only ordering constraint is that
for any child received, its parent must have been received first. Es-
sentially, there must be complete ordering only down the legs of
the tree, as in Figure 2.
The nodes of the tree represent the smallest data units that can
be drawn, with each node relying on its parent. The leaves of the
tree represent the individual pixels fully rendered. The nodes of
the tree can vary between a few bytes and a few hundred bytes.
Because of this small size, more than one node can be put in each
packet. A decision must be made on the order in which to pack the
nodes together. The main constraint is that transmission must not
continue too far down any path in which one of the parent nodes
has been lost or not yet been acknowledged. This problem can be
avoided by only allowing new nodes to be sent after an acknowledgment
has been received for the node's parent. Such information
can be integrated into the flow control mechanism as discussed in
Section 4.1.
Node packing is very important in order to maintain the partial
ordering throughout transmission of the model. We note that any
subtree of the tree will have the same data structure properties as
the whole tree. Therefore, in order to be able to leverage the properties
that have been discussed throughout transport, nodes must be
packed in such a way as to not break the subtree relationships across
packets. Packing the nodes of the tree in a subtree-by-subtree order
will effectively maintain the partial ordering of the tree throughout
transmission. The details of node packing are discussed in Section
4.1.
4. AN ON-DEMAND GRAPHIC TRANSPORT PROTOCOL
There are fundamental design decisions that must be made in
order to optimize a protocol for the transport of three-dimensional
models. The inherent loss tolerance and partial ordering of the tree
should be leveraged in order to efficiently stream the models. The
models can sustain no loss along any branch. However, due to the
fact that there are no dependencies between siblings in the tree,
if a loss occurs, streaming of other branches in the tree may continue
without interruption. The inherent loss tolerance of the data
structure can be leveraged to allow the transfer of data to continue
in the face of loss. The partial ordering of the data structure can
be leveraged to allow the protocol to continue to transfer the data
smoothly in the face of lost or reordered packets. The partial ordering
can also be used to allow the protocol to take into account user
focus information and to transfer only the part of the model that is
currently in focus. The protocol must also contain some method of
deciding when it is appropriate to retransmit lost packets and which
packets to retransmit. Finally the protocol must package the nodes
in such a way as to maintain the properties of the data structure
throughout the transfer. In the following subsections, we present
our implementation of a three-dimensional model streaming pro-
tocol, OGP. We use OGP to illustrate how these issues in protocol
design can be resolved.
OGP is a self-clocked protocol that makes use of TCP congestion
control algorithms [1]. Self clocking is achieved by allowing
the receipt of an acknowledgment to trigger the transmission of
new packets. In the current version of OGP, every packet is ac-
knowledged, therefore, the loss of an acknowledgment implies the
loss of a packet. This problem could be fixed by using selective-
acknowledgments instead of single-packet acknowledgments. Congestion
control is managed through the use of a window mechanism
that monitors the number of outstanding packets, as discussed in
Section 4.2.
Because it is possible for multiple nodes to be sent in one packet,
the sender maintains a mapping between nodes and packets. In
this way, the receiver can acknowledge packets and the sender can
then map these acknowledgments onto acknowledgments of nodes,
which are used by the packing algorithm as described in Section 4.1.
4.1 Node Packing
In order to take advantage of the properties of the tree, the partially
ordered data structure must be linearized and sent across the
network while maintaining the inherent properties. In order to do
this, the partial ordering relations of the tree, as well as the spatial
relations of the tree nodes, should be used. OGP begins by packing
the largest possible subtree at the root. After this first packet has
been acknowledged each node sent in that packet can now be used
as a root of a new subtree to send. OGP continues by packing the
children of the acknowledged nodes as roots of largest subtrees in
breadth-first order [Figure 3]. Essentially, the largest possible next
subtree is being packed in each successive packet. In the event of a
loss, OGP avoids sending any of the children dependent on the lost
nodes until the lost nodes have been successfully re-transmitted.
The goal is to maintain a subtree-by-subtree transmission pattern
in order to keep the partial ordering and reliability properties of the
entire model constant throughout transfer.
In order to keep track of what nodes have been sent, markers are
added into the data structure that can be set as sent and ackd. As
the tree is traversed breadth-first while packing nodes, the nodes
are marked as sent. When a packet is acknowledged, each node
contained in the packet is marked as ackd. Pointers are kept to the
first node in the tree that is not acknowledged and the current node
needing to be sent. In this way nodes needing to be marked and
nodes needing to be re-transmitted can be found efficiently while
still walking forward in the tree.
The packing algorithm works as follows. Get_Next_Node takes current_sub_tree as the root of a tree and increments node in breadth-first order using a standard tree walking algorithm. Get_Next_Subtree increments current_sub_tree to the next unmarked node in the tree using a standard breadth-first tree walking algorithm. (The helpers mark_sent and packet_is_full below stand in for the marking and packet-size checks described in the text.)

while (node != end_of_tree) {
    mark_sent(node);                     /* pack node into the current packet */
    node = Get_Next_Node(current_sub_tree);
    if (packet_is_full())                /* next packet starts a new subtree */
        current_sub_tree = Get_Next_Subtree();
}
To illustrate the algorithm, consider an example of a three-dimensional model with node sizes of 150 bytes. The tree has a branching factor of 4. Consider a link over the Internet between a client and server with a delay of 20ms and a bandwidth of 256 KBit/s. The Ethernet frame size is 1500 bytes. 10 nodes fit per packet and there can be at most 5 packets in flight between the last acknowledged and the last sent packets. Given this information, if the first nodes are packed in breadth-first order in the first packet, there will be 15 nodes that can be rendered but have not yet been sent. If nodes continue to be packed breadth-first, there are possible cases where, after a loss, data will still be sent that cannot be rendered because the loss will not yet be noticed on the sender side. This problem will be exacerbated by smaller node sizes, such as Qsplat's 4 byte node sizes [14]. If, however, the node packing is done intelligently, the branching factor of the tree can be leveraged to prevent the possibility of transmitting data that cannot be immediately rendered; see Figure 3.
4.2 Congestion Control
The loss-tolerant nature of the data structure allows OGP to continue transmitting data in the face of losses. The only invariant that must be maintained is that no node whose parent has not yet been received should be sent. The packing algorithm in Section 4.1 ensures that the ordering properties of the tree will be maintained throughout transmission. A flow control algorithm is needed, however, to ensure that nodes are always available whose parents have already been acknowledged when it is time to send. OGP should also react to congestion appropriately so that it is TCP-friendly.
[Figure 3 illustrates the packing order: the first packet carries the subtree at the root, and further packets group subtrees moving across the tree.]
Figure 3: Intelligent Packing Order
For congestion control, OGP reacts much the same way as TCP
New-Reno, making use of the slow-start, congestion avoidance,
fast recovery, and fast retransmit algorithms common to TCP [1].
Changes have been made to account for the fact that it does not need
to actually retransmit a lost packet. OGP does require losses to be
identified by the sender, however, so that nodes can be marked cor-
rectly. OGP uses a TCP-like sequence numbering scheme. The receiver
always acknowledges the sequence number received and not
the sequence number of the next packet expected. This is necessary
because packets are not retransmitted so a lost sequence number
will never be received.
OGP has a congestion window, cwnd. The cwnd is started at
1 and is increased according to the TCP New-Reno algorithms. According
to these algorithms, cwnd is never increased by more than
one at any received acknowledgment. The cwnd is reduced due to
loss according to the TCP New-Reno algorithms.
A loss is detected when the acknowledgment received at the
sender is not the next acknowledgment expected. In order to relate
a packet loss to a node loss, the sender maintains a mapping
between packets and nodes that are in flight. When a loss occurs,
the sender refers to this mapping to decide which nodes were lost
in the lost packet. The nodes in the lost packet are then marked
unsent. It is possible that packets that are reordered will be considered
lost. This is not a real problem, however, because OGP does
not immediately retransmit lost packets, therefore when the packet
is eventually received and acknowledged, the nodes will be marked
ackd. OGP also has a time-out mechanism to allow it to recover in
the face of losses of acknowledgments like TCP New-Reno.
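A sketch of this sender-side bookkeeping in C; the node type, the data structure, and the helper functions are illustrative, not taken from the OGP implementation:

typedef struct node node_t;
void mark_unsent(node_t *n);
void mark_ackd(node_t *n);
int  was_acked(int seq);

#define MAX_INFLIGHT 64
struct packet_map {
    int      seq;       /* sequence number of the packet */
    node_t **nodes;     /* nodes packed into this packet */
    int      n_nodes;
} inflight[MAX_INFLIGHT];

/* Called for every acknowledgment. A gap below the acknowledged
 * sequence number indicates loss: the nodes of the lost packet are
 * marked unsent so that their subtrees are not extended. */
void handle_ack(int acked)
{
    for (int i = 0; i < MAX_INFLIGHT; i++) {
        struct packet_map *pm = &inflight[i];
        if (pm->seq == acked)
            for (int j = 0; j < pm->n_nodes; j++)
                mark_ackd(pm->nodes[j]);
        else if (pm->seq < acked && !was_acked(pm->seq))
            for (int j = 0; j < pm->n_nodes; j++)
                mark_unsent(pm->nodes[j]);
    }
}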
Because packets may be lost and never retransmitted, a standard
sliding window, such as the one TCP uses, cannot be used. Instead
only the number of packets still in flight need be considered, which
can be calculated based on the last packet acknowledged and the
last packet sent. This number is compared with the value of cwnd.
When the number of packets in flight is less than the congestion
window, OGP can send. Each time an acknowledgment is received,
OGP will send as many packets as it can.
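In C, the resulting send rule is compact (helper names are again illustrative):

int  nodes_available(void);
void send_next_packet(void);   /* packs the next largest subtree, Section 4.1 */

/* Self-clocked sending, invoked whenever an acknowledgment arrives.
 * Since lost packets are never retransmitted, only the number of
 * packets still in flight is compared against cwnd. */
void try_send(int last_acked, int last_sent, int cwnd)
{
    int in_flight = last_sent - last_acked;
    while (in_flight < cwnd && nodes_available()) {
        send_next_packet();
        in_flight++;
    }
}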
An important issue is to make sure that there are always nodes
whose parents have been acknowledged with which to build the
next subtree to send in a packet. Consider that a packet is only
sent after an acknowledgment is received. Also consider that the
congestion window is never opened by more than 1. Then, in the
face of no losses, there will never be more than 2 packets sent per
acknowledged packet. Therefore, as long as the data structure has
a branching factor of 2 and there is at least 1 node per packet, there
will always be nodes to send. In the face of losses, it is possible that
more than 2 nodes may be sent after an acknowledgment. How-
ever, the congestion window is halved when a loss is encountered, at most once per round-trip time. Therefore, there will not be more packets
to send until a number of new acknowledgments have arrived.
Therefore, either the branching factor of the structure must be at
least four, or there should be at least three nodes per packet. The
largest node size that we have observed in the three-dimensional
modeling schemes using bounding volumes is 300 bytes and the
branching factors are typically 4. It is not unreasonable to assume
that 4 nodes fit per packet, giving the needed 4 nodes to send per
acknowledgment even assuming only a branching factor of 2.
4.3 Exploiting the Partial Ordering
The partial ordering of the data structure combined with the node
packing algorithm allows OGP to ensure that the packets in flight
are not dependent on one another. Because of this, OGP does not
need a standard congestion-window and can focus on the number
of nodes in flight. Therefore, OGP can continue to send in the face
of packet reordering and loss. The only reason OGP should slow
transmission is in response to congestion. Therefore, OGP should
see more throughput than TCP. This result is achieved as shown in
Section 5.
The partial ordering of the data is also used to allow the protocol
to send data that is relevant to the user's focus. Each node in the
tree represents a particular bounding volume that covers a particular
section of the model. The user's focus can be given to the protocol
as the identity of the node that covers the section of interest. It is
then possible, due to the partial ordering of the data, to only send
the line of nodes down the branch that leads to the node covering
the area of the model on which the user is focused. The subtree of
the model rooted at this user focus defined node can then be sent
using the normal transport algorithm.
4.4 Deciding What to Transmit
Aside from sending without the retransmission of lost packets as
described so far in the paper, the protocol also has the ability to
use some of its throughput to retransmit some or all of the lost data.
We implemented full-reliability by marking lost nodes as lost in
the data structure and first packing those nodes into packets to be
sent. While these nodes will not normally reveal new nodes that
can be sent, this is not a problem for the protocol as there were
nodes that would have been sent that were not, had the protocol
been functioning with no reliability.
Another parameter to consider in transmission is user demand.
The reliability level of a certain part of the tree can be changed
depending on user focus. If there is a particular part of the model
that the user is interested in, then OGP can immediately re-transmit
any lost nodes from the corresponding section of the data structure.
This mechanism works the same way that full-reliability does, except
that only nodes in the part of the model that are of interest to
the user are marked lost and are therefore retransmitted.
User focus could also be used to partition out one part of the
model to be filled in faster than the other parts, using all available
bandwidth to concentrate on the refinement of relevant parts of the
model if this is desired. Again, the part of the model that is of
interest to the user can be viewed as a subtree of the data structure.
Therefore, OGP begins by packing nodes in depth-first order to
the point where the relevant data begins, and then uses the standard
packing algorithm. We implemented this functionality by assuming
that the user focus is represented by the node that surrounds the
volume of the model that is of interest to the user. This node is then
considered as the root of the subtree that should be sent.
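A sketch of how the focus node could seed transmission (helpers hypothetical):

typedef struct node node_t;
node_t *parent(node_t *n);
void    pack_node(node_t *n);

/* Pack the chain of ancestors leading to the focus node depth-first,
 * so the partial ordering is preserved; the subtree rooted at the
 * focus is then streamed with the normal packing algorithm. */
void send_focus_path(node_t *root, node_t *focus)
{
    if (focus != root)
        send_focus_path(root, parent(focus));
    pack_node(focus);
}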
5. EVALUATION
In evaluating OGP, we found it useful to consider data-specific
properties of the protocol, namely, node size and branching fac-
tor. We also evaluated the performance of the protocol in terms of
throughput and fairness to TCP.
[Figure 4 shows the simulation setup: one OGP flow and three TCP New-Reno flows sharing a bottleneck queue.]
Figure 4: Simulation Set-Up
         Packets Received   Packets Dropped
  OGP          3898               135

Table 1: Simulation Results (queue size 10)
5.1 Simulation Set-Up
In the simulations we used the ns2 [5] network simulator. The
simulation layout we used is depicted in Figure 4. For the simulation
runs, we used a node size of 200 bytes and a branching factor of
4. The bottleneck queue size was varied between 5 and 40 packets.
For the simulation runs, we had 3 TCP New-Reno flows and one
OGP flow. The link speeds are given in Figure 4.
5.2 Data-Specific Evaluation
We first evaluated the effect of node size on OGP. As mentioned
in Section 4.1, as node sizes decrease and more nodes are included
in each packet, a breadth-first packing algorithm will encounter
more occurrences of packets being sent that cannot be rendered.
TCP, however, does not have this problem as it guarantees in-order,
fully-reliable transport. We found, however, that smaller node sizes
merely increase the number of nodes that may be used as next sub-tree
roots for each packet that is acknowledged. Therefore small
node sizes have no real effect on OGP. We notice similar results
from larger branching factors. We do not show these results here as
they were all roughly the same.
In our simulations, we found that OGP always achieved 100%
goodput, meaning that all packets received can be rendered. TCP
also achieves 100% goodput due to its full reliability. One interesting
point to notice, however, is that the packets sent by OGP can be
immediately rendered; there is never a need to wait for retransmissions
or re-ordering.
5.3 Performance Evaluation
The two main results that we wanted to evaluate were TCP fairness
and good throughput. Our simulations show that these goals
have been met. As the length of the simulations as well as the
queue size on the bottleneck link were varied, we found that OGP
achieved very similar results across runs. The results for simulations
with a queue size of 10 are displayed in Table 1 and Figure 5.

[Figure 5: Simulation Results: queue size 10 — plot of the "OGP" flow]
OGP reacts quicker to congestion, and therefore has fewer packets
in flight during a congestion period. This is due to the fact that
OGP detects losses not by the number of duplicate acknowledgments
but instead by the difference between the current acknowledgment
and the expected acknowledgment, i.e. gap detection.
This rapid detection of losses allows OGP to cut its congestion window
more rapidly than TCP. OGP, however, gets more throughput
due to the fact that it does not worry about re-ordered packets and it
does not wait to re-send lost packets before opening its flow control
window. TCP New-Reno and OGP did reach an equilibrium point
where all flows were getting bandwidth. OGP achieves approximately
26% better throughput however, due to the mechanisms
already mentioned.
6. CONCLUSION
Given the greater processing power and better graphics cards
in desktop computers, it is now possible for very complex three-dimensional
models to be used in everyday applications. This leads
to the desire to send such models across the network. Due to the
potentially large size of the models, it is necessary to find a way
to stream them efficiently over the network. In order to do this,
the properties of the data structures used to store three-dimensional
models must be exploited. The tree that is used in three-dimensional
models has the property of being partially ordered, i.e. not completely
linear. This led us to analyze the problems of having too
narrow a view of the transport service space. Once we see the space
as expanded, we can understand the problems caused by having too
much reliability provided by the network. We therefore show which
parameters must be considered in designing an efficient protocol
for the transmission of on-demand three-dimensional models. We
discussed the approach to linearize the non-linear partially ordered
tree. We then exploited these approaches to design the On-Demand
Graphic Transport Protocol (OGP). OGP is shown to gain performance
benefits over TCP, making it clear that the inherent properties
of the three-dimensional modeling data structures should be
used to develop efficient transport mechanisms.
Acknowledgments
We would like to thank Mike Garland for his help in understanding
three-dimensional modeling. We would also like to thank Prashant
for his help in developing the packing methods used
in the protocol. We would also like to thank the entire Mobius
Group for their continued help and support.
7.
--R
TCP congestion control.
Architectural considerations for a new generation of protocols.
Ns notes and documentation.
Adaptive variation of reliability.
Optimizing partially ordered transport services for multimedia applications.
Transmission control protocol.
User datagram protocol.
Streaming QSplat: a viewer for networked visualization of large, dense models.
QSplat: a multiresolution point rendering system for large meshes.
A transport protocol for real-time applications.
Image transfer: an end-to-end design.
| streaming;3-D models;partial order;partial reliability;transport protocol |
507680 | Impact of link failures on VoIP performance. | We use active and passive traffic measurements to identify the issues involved in the deployment of a voice service over a tier-1 IP backbone network. Our findings indicate that no specific handling of voice packets (i.e. QoS differentiation) is needed in the current backbone but new protocols and mechanisms need to be introduced to provide a better protection against link failures. We discover that link failures may be followed by long periods of routing instability, during which packets can be dropped because forwarded along invalid paths. We also identify the need for a new family of quality of service mechanisms based on fast protection of traffic and high availability of the service rather than performance in terms of delay and loss. | INTRODUCTION
Recently, tier-1 Internet Service Providers (ISPs) have shown an
ever increasing interest in providing voice and telephone services
over their current Internet infrastructures. Voice-over-IP (VoIP) appears
to be a very cost effective solution to provide alternative services
to the traditional telephone networks.
However, ISPs need to provide a comparable quality both in
terms of voice quality and availability of the service. We can identify
three major causes of potential degradation of performance for
telephone services over the Internet: network congestion, link failures
and routing instabilities. Our goal is to study the frequency of
these events and to assess their impact on VoIP performance.
NOSSDAV'02, May 12-14, 2002, Miami, Florida, USA.
We use passive monitoring of backbone links to evaluate the occurrence
and impact of network congestion on data traffic. Passive
measurements carried over different locations in the U.S. Sprint IP
backbone allow us to study the transmission delay of voice packets
and to evaluate the degree of congestion. However, this kind of
measurement cannot provide any information related to link failures
or routing instabilities.
For this purpose, we have deployed an active measurement infrastructure
in two locations well connected to the backbone. We
capture and timestamp the probe packets at both ends to quantify
losses and observe the impact of route changes on the voice traffic.
We performed many week-long experiments in order to observe
different link failure scenarios.
Given that all our measurements take place in the same Autonomous
System (AS) we also complement our data with IS-IS
routing information [9] collected in one of the backbone Points of
Presence (POPs). This additional information gives us a fairly complete
view of the events that occur during our experiments. Indeed,
active probes and routing information give us the capability of identifying
precisely the links, the routers and even the interfaces that
are responsible for failures or instabilities in the network.
Our findings indicate that the Sprint IP backbone network is
ready to provide a toll-quality voice service. The level of congestion
in the backbone is always negligible and has no impact on the
voice quality.
On the other hand, link failures can impact the availability of
VoIP services. We discovered that link failures may be followed
by long periods of routing instability, during which packets can be
dropped because they are forwarded along invalid paths. Such instabilities
can last for tens of minutes resulting in the loss of reachability of a
large set of end-hosts.
The paper is structured as follows. Section 2 briefly presents
some related work, while Section 3 provides detailed information
on the measurement approaches followed in this study. Section 4
describes the model used to assess the subjective quality of voice
calls from transport level measurable quantities. In Section 5 we
finally discuss our findings, while Section 6 presents some concluding
remarks.
2. RELATED WORK
Past literature on end-to-end Internet measurements has often
focused on the study of network loss patterns and delay characteristics
[6, 8, 16, 26, 24]. For example, Kostas [18] studied the
feasibility of real-time voice over the Internet and discussed measured
delay and loss characteristics. In order to evaluate the quality
of Internet Telephony, [14] provided network performance data (in
terms of delay and losses) collected from a wide range of geographically
distributed sites. All these studies were based on round-trip
delay measurements.
While information about delay and losses can give valuable insights
about the quality of VoIP, they do not characterize the actual
subjective quality experienced by VoIP users. In [11], Cole et al.
propose a method for monitoring the quality of VoIP applications
based upon a reduction of the E-model [3] to measurable transport
level quantities (such as delay and losses).
Markopoulou et al. [19] use subjective quality measures (also
based on the E-model) to assess the ability of Internet backbones
to support voice communications. That work uses a collection of
GPS synchronized packet traces. Their results indicate that some
backbones are able to provide toll quality VoIP, today. In addition,
they report that even good paths exhibit occasional long loss periods
that could be attributed to routing changes. However, they
do not investigate the causes of network failures nor the impact
they have on the voice traffic.
3. MEASUREMENTS
In this section we describe the two measurement approaches used
in our study, i.e. the passive measurement system deployed in the
Sprint IP backbone network and the active measurement system
that uses probe packets to study routing protocols stability and link
failures.
3.1 Passive measurements
The infrastructure developed to monitor the Sprint IP backbone
consists of passive monitoring systems that collect packet traces on
several links located in three POPs of the network. Details
on the passive monitoring infrastructure can be found in [13].
In this study, we use traces collected from various OC-12 intra-
POP links on July 24th, 2001, September 5th, 2001 and November
8th, 2001. A packet trace contains the first 44 bytes of every IP
packet that traverses the monitored link. Every packet record is also
timestamped using a GPS reference signal to synchronize timing
information on different systems [20].
We use the technique described in [23] to compute one-way delays
across the Sprint backbone. The basic idea behind that technique
is to identify those packets that enter the Sprint backbone in
one of the monitored POPs and leave the network in another one.
Once such packets are identified, computing the delays simply requires
taking the difference between the recorded timestamps.
3.2 Active measurements
Passive measurements provide valuable information about net-work
characteristics, but the data collected depend on the traffic
generated by other parties, which is completely out of our control.
Moreover, given that we do not monitor all the links of the back-bone
network, we are not able to measure jitter or loss rates through
simple passive monitoring (packets may leave the network through
not monitored links) [23]. Therefore, our passive measurements
alone cannot provide results on the quality of the voice calls. These
are the motivations behind the use of active measurements to complement
the passive ones. In an active measurement environment
we can perfectly control the amount and the characteristics of the
traffic that we inject in the network and thus draw precise conclusions
about the impact of the network on the monitored traffic.
3.2.1 Measurement infrastructure
We deployed active measurement systems in two locations of
the U.S. (Reston, VA and San Francisco, CA) well connected to
the Sprint backbone, i.e. just one router away from the backbone
network. Figure 1 shows the architecture of the testbed and the way
the sites are connected through the Sprint network (the thick lines
indicate the path followed by our traffic). Note that each access
router in a POP is connected to two backbone routers for reliability
and, usually, per-destination prefix load balancing is implemented.
The access links to the backbone were chosen to be unloaded in
order not to introduce additional delay. At the end of each experiment
we verified that no packet losses were induced on the last
hops of the paths.
In each site, four systems running FreeBSD generate traffic
made of 200-byte UDP packets at a constant rate of 50 packets per
second. We choose this rate so that the probes could be easily used
to emulate a voice call compliant to the G.711 standard [2].
An additional system captures and timestamps the probe packets
using a DAG3.2e card [10]. The DAG cards provide very accurate
timestamping of packets synchronized using a GPS (or CDMA)
receiver [20]. The probe packets are recorded and timestamped
right before the access links of the two locations in both directions.
In the experiment we discuss here, probes are sent from Reston
(VA) to San Francisco (CA) for a duration of 2.5 days starting at
04.00 UTC on November 27th, 2001. We have run active measurements
for several weeks but we have chosen that specific trace
because it exhibits an interesting network failure event. In terms
of delay, loss and voice call quality we have not measured, instead,
any significant difference among the many different experiments.
3.2.2 Routing data
We integrate our measurement data with IS-IS routing information
collected in POP#2 (see Figure 1). We use an IS-IS listener
[21] to record all routing messages exchanged during the ex-
periment. IS-IS messages allow us to correlate loss and delay events
to changes in the routing information. In order to illustrate the kind
of data that are collected by the listener, we give a brief description
of the IS-IS protocol.
IS-IS [22] is a link state routing protocol used for intra-domain
routing. With IS-IS, each link in the network is assigned a metric
value (weight). Every router 1 broadcasts information about its direct
connectivity to other routers. This information is conveyed in
messages called Link State PDUs (LSP). Each LSP contains information
about the identity and the metric value of the adjacencies of
the router that originated the LSP. In general, a router generates and
transmits its LSPs periodically, but LSPs are also generated whenever
the network topology changes (e.g. when a link or a router
goes up or down). Thus, LSPs provide valuable information about
the occurrence of events such as loss of connectivity, route changes,
etc.
Once a router has received path information from all other routers,
it constructs its forwarding database using Dijkstra's Shortest Path
First (SPF) algorithm to determine the best route to each destina-
tion. This operation is called the decision process. In some transitory
conditions (e.g. after rebooting), the decision process can take
a considerable amount of time (several minutes) since it requires
all the LSPs to be received in order to complete. During that transitory
period, a router is responsible to make sure that other routers
in the network do not forward packets towards itself. In order to do
so, a router will generate and flood its own LSPs with the "Infinite
Hippity Cost" bit set 2 . This way, other routers will not consider it
as a valid node in the forwarding paths.
1 IS-IS has been designed within the ISO-OSI standardization effort
using the OSI terminology. In this paper, we have instead decided
to avoid the use of OSI terms.
2 This bit is also referred to as the OverLoad (OL) bit.
[Figure 1: Topology of the active measurement systems (the thick lines indicate the primary path)]
4. VOICE CALL RATING
Even though active measurements may provide accurate information
on network delay and losses, such statistics are not always
appropriate to infer the quality of voice calls. In addition to mea-
surements, we use a methodology to emulate voice calls from our
packet traces and assess their quality using the E-model standard [3,
4, 5].
4.1 A voice quality measure: the E-model
The E-model predicts the subjective quality that will be experienced
by an average listener combining the impairment caused by
transmission parameters (such as loss and delay) into a single rat-
ing. The rating can then be used to predict subjective user reactions,
such as the Mean Opinion Score (MOS). According to ITU-T Recommendation
G.107, every rating value corresponds to a speech
transmission category, as shown in Table 1. A rating below 50 indicates
unacceptable quality, while values above 70 correspond to
PSTN quality (values above 90 corresponding to very good quality).

R-value range   MOS          Speech transmission quality
100-90          4.50-4.34    best
90-80           4.34-4.03    high
80-70           4.03-3.60    medium
70-60           3.60-3.10    low
60-50           3.10-2.58    poor
below 50        below 2.58   very poor

Table 1: Speech transmission quality classes and corresponding rating value ranges.
The E-model rating R is given by:

R = R0 - Is - Id - Ie + A                                        (1)

where R0 groups the effects of noise, Is represents impairments
that occur simultaneously with the voice signal (quantization), Id
is the impairment associated with the mouth-to-ear delay, and Ie is
the impairment associated with signal distortion (caused by low bit
rate codecs and packet losses). The advantage factor A is the deterioration
that callers are willing to tolerate because of the 'access
advantage' that certain systems have over traditional wire-bound
telephony, e.g. the advantage factor for mobile telephony is assumed
to be 10. Since no agreement has been reached for the case
of VoIP services, we will drop the advantage factor in this study.
4.2 Reduction of the E-model to transport level
quantities
Although an analytical expression for Id is given in [4] and values
for Ie are provided in Appendix A of [5] for different loss con-
ditions, those standards do not give a fully analytical expression for
the R-factor. In this work, we use a simplified analytic expression
for the R-factor that was proposed in [11] and that describes the
R-factor as a function of observable transport level quantities.
In this section, we briefly describe the reduction of equation (1)
to transport level quantities as proposed in [11] and we introduce
the assumptions made about the VoIP connections under study.
4.2.1 Signal-to-noise impairment factors R0 and Is
Both R0 (effect of background and circuit noise) and Is (effect
of quantization) describe impairments that have to do with the signal
itself. Since none of them depend on the underlying transport net-
work, we rely upon the set of default values that are recommended
in [4] for these parameters. Choosing these default values, the
rating R can be reformulated as:

R = 94.2 - Id - Ie                                               (2)

4.2.2 Delay impairment Id
ITU-T Recommendation G.107 [4] gives a fully analytical expression
for Id in terms of various delay measures (such as mouth-
to-ear delay, delay from the receive side to the point where signal
coupling occurs and delay in the four wire loop) and other parameters
describing various circuit-switched and packet-switched inter-working
scenarios.
Since we focus, in this work, on pure VoIP scenarios, we make
the following simplifications: i) the various delay measures collapse
into a single one, the mouth-to-ear delay, and, ii) the default
values proposed in [4] are used for all parameters in the expression
of Id other than the delay itself. In particular, the influence of
echo is supposed negligible. The curve obtained describing Id as a
function of the mouth-to-ear delay can then be approximated by a
piece-wise linear function [11]:

Id = 0.024 d + 0.11 (d - 177.3) H(d - 177.3)                     (3)
where d is the mouth-to-ear delay and H is the Heaviside func-
tion. d is composed of the encoding delay (algorithmic and packetization
delay), the network delay (transmission, propagation and
queuing delay) and the playout delay introduced by the playout
buffer in order to cope with delay variations. The Heaviside function
is defined as follows:

H(x) = 0 for x < 0,  H(x) = 1 for x >= 0                         (4)
4.2.3 Equipment impairment Ie
The impairments introduced by distortion are brought together in
Ie . Currently, no analytical expression allows one to compute Ie as
a function of parameters such as the encoding rate or the packet
loss rate. Estimates for Ie must be obtained through subjective
measurements. A few values for Ie are given in Appendix A of [5]
for several codecs (i.e. G.711, G.729, ...) and several packet loss
conditions.
In this work, we focus on the G.711 coder which does not introduce
any distortion due to compression (and hence leads to the
smallest equipment impairment value in absence of losses). In ad-
dition, we assume that the G.711 coder in use implements a packet
loss concealment algorithm. In these conditions, the evolution of
the equipment impairment factor Ie as a function of the average
packet loss rate can be well approximated by a logarithmic func-
tion. In particular, if we assume that we are in presence of random
losses, the equipment impairment can be expressed as follows [11]:

Ie = 30 ln(1 + 15 e)                                             (5)
where e is the total loss probability (i.e., it encompasses the losses
in the network and the losses due to the arrival of a packet after its
playout time).
In summary, the following expression will be used to compute
the R-factor as a function of observable transport quantities:

R = 94.2 - 0.024 d - 0.11 (d - 177.3) H(d - 177.3) - 30 ln(1 + 15 e)   (6)

where d is the mouth-to-ear delay, e is the total loss probability and
H is the Heaviside function defined in equation (4).
4.3 Call generation and rating
In order to assess the quality of voice calls placed at random
times during the measurement period, we emulate the arrival of
short business calls. We pick call arrival times according to a Poisson
process with a mean inter-arrival time of 60 seconds. We draw
the call durations according to an exponential distribution with a
mean of 3.5 minutes [17]. The randomly generated calls are then
applied to the packet traces for quality assessment.
Since IP telephony applications often use silence suppression to
reduce their sending rate, we simulate talkspurt and silence periods
within each voice call using for both periods an exponential distribution
with an average of 1.5s [15]. Packets belonging to a silence
period are simply ignored.
At the receiver end, we assume that a playout buffer is used to
absorb the delay variations in the network. The playout delay is
defined as the difference between the arrival and the playout time
of the first packet of a talkspurt. Within a talkspurt, the playout
times of the subsequent packets are scheduled at regular intervals
following the playout time of the first one. Packets arriving after
their playout time are considered lost.

[Figure 2: Passive measurements: distribution of the one-way transmission delay between East and West Coast of the U.S. (x-axis: delay, 28.35-28.7 msec; y-axis: frequency)]

A playout buffer can operate
in a fixed or an adaptive mode. In a fixed mode, the playout delay
is always constant while in an adaptive mode, it can be adjusted
between talkspurts.
In this work, we opt for the fixed playout strategy because the
measured delays and jitters are very small and a fixed playout strategy
would represent a worst case scenario. Thus, we implement a
fixed playout delay of 75ms (which is quite high, but still leads to
excellent results, as described in Section 5).
The quality of the calls described above is then computed as fol-
lows. For each talkspurt within a call, we compute the number of
packet losses in the network and in the playback buffer. From these
statistics, we deduce the total packet loss rate e for each talkspurt.
In addition, we measure the mouth-to-ear delay d, which is the sum
of the packetization delay (20ms, in our case), the network delay of
the first packet of the talkspurt and the playout delay.
In order to assess the quality of a call we apply equation (6) to
each talkspurt and then we define the rating of a call as the average
of the ratings of all its talkspurts.
5. RESULTS
In this section we discuss our findings derived from the experiments
and measurements. We first compare the results obtained via
the passive and active measurements and then focus on the impact
of link failures on VoIP traffic. We conclude with a discussion of
the call rating using the methodology proposed in Section 4.
5.1 Delay measurements
In Figure 2 we show the one-way delay between two Sprint POPs
located on the East and West Coast of the United States. The data
shown refers to a trace collected from the passive measurement system
on July 24th 2001. However, we have systematically observed
similar delay distributions on all the traces collected in the Sprint
monitoring project [13]. The delay between the two POPs is around
28.50ms with a maximum delay variation of less than 200μs. Such
delay figures show that packets experience almost no queueing delay
and that the element that dominates the transmission delay is
the propagation over the optical fiber [23].
We performed the same delay measurements on the UDP packets
sent every 20ms from Reston (VA) to San Francisco (CA) for a
period of 2.5 days.

[Figure 3: Active measurements: distribution of the one-way transmission delay from Reston (VA) to San Francisco (CA). x-axis: delay (ms); y-axis: empirical density function.]

Figure 3 shows the distribution of the one-way
transmission delay. The minimum delay is 30.95ms, the average
delay is 31.38ms while the 99.9% of the probes experience a delay
below 32.85ms.
As we can see from the figures, the results obtained by the active
measurements are consistent with the ones derived from passive
measurements. Low delays are a direct result of the over-provisioning
design strategies followed by most tier-1 ISPs. Most
tier-1 backbones are designed in such a way that link utilization remains
below 50% in the absence of link failures. Such strategy is
dictated by the need for commercial ISPs to be highly resilient to
network failures and to be always capable of handling short-term
variations in the traffic demands.
The delay distribution in Figure 3 shows also another interesting
feature: a re-routing event has occurred during the experiment. The
distribution shows two spikes that do not overlap and for which we
can thus identify two minima (30.96ms and 31.46ms), that represent
the propagation delays of the two routes 3 .
While the difference between the two minima is relatively high
(around 500μs), the difference in router hops is just one (derived
from the TTL values found in the IP packets). One additional router
along the path cannot justify a 500μs delay increase [23]. On the
other hand, the Sprint backbone is engineered so that between each
pair of POPs there are two IP routes that use two disjoint fiber paths.
In our experiment, the 500μs increase in the delay is introduced by
a 100km difference in the fiber path between the POPs where the
re-routing occurred.
5.2 Impact of failures on data traffic
In this section we investigate further the re-routing event. To the
best of our knowledge there is no experimental study on failures
and their impact on traffic on an operational IP backbone network.
It can be explained by the difficulties involved in collecting data on
the traffic at the time of a failure. Within several weeks of experi-
ments, our VoIP traffic has suffered a single failure. Nevertheless,
we believe it is fundamental for researchers and practitioners to
study such failure events in order to validate the behaviors and per-
3 The delay distribution derived from passive measurements also
shows some spikes. In that case, however, we cannot distinguish
between delays due to packet sizes [23] or due to routing, given that
we lack the routing information that would let us unambiguously
identify the cause of the peaks.
formance of routing protocols, routing equipment and to identify
appropriate traffic engineering practices to deal with failures.
The failure perturbed the traffic during a 50-minute period between
06:30 and 07:20 UTC on November 28th, 2001. During that
failure event, the traffic experienced various periods of 100% losses
before being re-routed for the rest of the experiment (33 hours).
We now provide an in-depth analysis of the series of events related
to the failure and we identify the causes of loss periods. We
complement our active measurements with the routing data collected
by our IS-IS listener.
Figure 4 shows the delay that voice probe packets experienced at
the time of the failure. Each dot in the plot represents the average
delay over a five-second interval. Figure 5 provides the average
packet loss rate over the same five-second intervals.
At time 06:34, a link failure is detected and packets are re-routed
along an alternative path that results in a longer delay. It takes about
100ms to complete the re-routing during which all the packets sent
are lost. Although the quality of a voice call would certainly be
affected by the loss of 100ms worth of traffic, the total impact on
the voice traffic is minimal given the short time needed for the re-routing
(100ms) and the small jitter induced (about 500μs).
After about a minute, the original route is restored. A series of
100% loss periods follows, each of which lasts several seconds.
Figure 6 shows the one-way delay experienced by all the packets
during one of these 100% loss periods (the same behavior can be
observed in all the other periods). As we can see from the figure,
packets are not buffered by the routers during the short outages
(packets do not experience long delays) but they are just dropped
because they are forwarded along an invalid path. Figure 7 shows the sequence
numbers of the packets as received by the end host on the
West Coast. Again, no losses nor re-orderings occur during those
periods. This is a clear indication that packet drops are not due to
congestion events but due to some kind of interface or router failure.
At time 06:48, the traffic experiences 100% losses for a period
of about 12 minutes. Surprisingly, during that period no alternative
path is identified for the voice traffic. At time 07:04 a secondary
path is found but there are still successive 100% loss periods. Fi-
nally, at 07:19, the original path is operational again and at time
07:36, an alternative path is chosen and used for the remaining part
of the experiment.
The above analysis corresponds to what can be observed from
the active measurements. The routing data can provide us with more
information on the cause of these events. Figure 8 illustrates the
portion of the network topology with the routers involved in the
failure. The routers (R1 to R5 ) are located in 2 different POPs.
The solid arrows show the primary path used by the traffic. The
dashed arrows show the alternative path used after the failure.
Table 2 summarizes all the messages that we have collected from
the IS-IS listener during the experiment. The "Time" column indicates
the time at which the LSPs are received by our listener, the
central column ("IS-IS LSPs") describes the LSPs in the format
while the third column describes the impact
on the traffic of the event reported by IS-IS.
At the time of the first re-routing, the neighboring routers signal
via IS-IS the loss of adjacency with R4 . The fact that all the links
from R4 are signaled down is a strong indication that the failure is
a router failure as opposed to link failure. As we said earlier, the
network reacts to this first event as expected. In about 100ms, R5
routes the traffic along the alternative path through R3 .
[Figure 4: Average delay during the failure (East Coast to West Coast). Each dot corresponds to a five-second interval.]

[Figure 5: Average packet loss rate during the failure (East Coast to West Coast), computed over five-second intervals.]

In the period between 06:35 and 06:59, the IS-IS listener receives
several (periodic) LSPs from all the five routers reporting that all
the links are fully operational. During that time, though, the traffic
suffers successive 100% loss periods. For about 13 minutes, R4
oscillates between a normal operational state (i.e. it forwards the
packets without loss or additional delay) and a "faulty" state during
which all the traffic is dropped. However, such "faulty" state never
lasts long enough to give a chance to the other routers to detect the
failure.
At time 06:48, R4 finally reboots. It then starts collecting LSP
messages from all the routers in the network in order to build its
own routing table. This operation is usually very CPU intensive
for a network of the size of the Sprint backbone. It may require
several minutes to complete as the router has to collect the LSP messages
that all the other routers periodically send.
While collecting the routing information, R4 does not have a
routing table and is therefore not capable of handling any packet.
As we described in Section 3, a router is expected to send LSP
messages with the "Infinity Hippity Cost" bit set. In our case R4
does not set that bit. R5 , having no other means to know that R4 is
not ready to route packets, forwards the voice traffic to R4 , where
it is dropped.
[Figure 6: One-way delay of voice packets during the first 100% loss period (East Coast to West Coast).]

[Figure 7: Sequence numbers of received voice packets during the first 100% loss period (East Coast to West Coast).]
At time 06:59, R4 builds its first routing table and the traffic
is partially restored, but the links to R4 keep flapping,
resulting again in a succession of 100% loss periods. Note that the
traffic is only restored along the alternative path (hence, the longer
delays) because the link between R1 and R4 is reported to be down.
We conjecture that the 100% loss periods are due to R5 forwarding
traffic to R4 every time the link R4-R5 is up, although R4 does
not have a route to R1 .
Most likely the links are not flapping because of a hardware
problem but because R4 is starting to receive the first BGP updates 4 ,
which force frequent re-computations of the routing table to add new
destination prefixes.
Finally, at time 07:17 all routers report that the links with R4 are
up and the routing remains stable for the rest of the experiment.
4 R4 can setup the I-BGP sessions, that run over TCP, with its peers
only once it has a valid routing table, i.e. it has received all LSP
updates.

Time             IS-IS LSPs                      Impact on traffic
06:34            link to R4 is down              Re-routed through R3 in 100ms
[...]            adjacency with R4 recovered     Re-routed through R4
[...] to 07:06   link to R4 "flaps" 7 times      Re-routed through R3; 100% loss periods
07:00 to 07:17   link to R4 "flaps" 5 times      Re-routed through R3; 100% loss periods
07:04 to 07:17   R5: link to R4 "flaps" 4 times  Re-routed through R3; 100% loss periods
[...]            link to R4 is down              Re-routed through R3
07:17            link to R4 is definitely up     Restored on the original path

Table 2: Summary of the events that occurred during the failure event

Traffic is however re-routed again along the alternative path (at
time 07:36) even if the original path is operational. This is
due to the fact that R5 modifies its load balancing policy over the
two equal cost paths (solid and dashed arrows in Figure 8). Routers
that perform per-destination prefix load balancing (as R5 , in our
case) can periodically modify their criteria (i.e., which flow follows
which path) in order to prevent specific traffic patterns from defeating
the load balancing (e.g., if most of the packets belong to few destination
prefixes, one path may end up more utilized than the other).
In order to summarize our findings, we divide the failure we observed
into two phases:
The first phase from time 06:34 to 06:59 is characterized
by instabilities in the packet forwarding on router R4 : only
few LSPs are generated but the traffic experiences periods of
100% packet loss. Such "flapping" phase is due to the particular
type of failure that involved an entire router and most
likely the operating system of the router. The effect on packet
forwarding and routing is thus unpredictable and difficult to
control protocol-wise.
The second phase goes from time 06:48 to 07:19 and is instead
characterized by a very long outage followed by some
routing instabilities and periods of 100% loss. This phase
was caused by router R4 that did not set the "Infinite Hip-
pity Cost" bit. We cannot explain how this problem arose
as resetting the hippity bit after the collection of all the BGP
updates is a common engineering practice within the Sprint
backbone network.
[Figure 8: Routers involved in the failure. The solid arrows indicate the primary path for our traffic. The dashed arrows indicate the alternative path through R3.]

[Figure 9: Voice call ratings over time (East Coast to West Coast), excluding the failure event.]

It is important to observe that both the first and the second phases of
the failure event are not due to the IS-IS routing protocol. There-
fore, we do not expect that the use of a different routing protocol
(e.g. "circuit-based" routing mechanisms such as MPLS [25])
would mitigate the impact of failures on traffic.
Instead, it is our opinion that router vendors and ISPs should
focus their efforts on the improvement of the reliability of routing
equipment, intended both in terms of better hardware architectures
and more stable software implementations. Another important
direction of improvement is certainly the introduction of automatic
validation tools for router configurations. However, such
tools would require first to simplify the router configuration proce-
dures. As a side note, introducing circuits or label-switched paths
on top of the IP routing will not help in such simplification effort.
5.3 Voice quality
This section is devoted to the study of the quality experienced by
a VoIP user. Figure 9 shows the rating of the voice calls during the
2.5 days of the experiment. We did not place any call during the
failure event (50 minutes out of the 2.5 days) because the E-model
only applies to completed calls and does not capture the events of
loss of connectivity.
Figure 10 shows the distribution of call quality for the 2.5 days of
experiment. All these results were derived assuming a fixed playout
buffer. One can notice that the quality of calls does not deviate
much from its mean value of 90.27. Among the
3,364 calls that were placed, only one experiences a quality rating
below 70, the lower threshold for toll-quality. We are currently in
the process of investigating what caused the low quality of some
calls. Moreover, with 99% of calls experiencing a quality above
84.68, our results confirm that the Sprint IP backbone can support
a voice service with PSTN quality standards.
The very good quality of voice traffic is a direct consequence of
the low delays, jitter and loss rates that probes experience. Without
taking into account the 50 minutes of failure, the average loss rate
is 0.19%.
We also studied the probability of having long bursts of losses.
The goal was to verify that the assumptions on the distribution of
packet losses (in Section 4 we assumed that the losses were not
bursty) and on the performance of packet loss concealment techniques
are well suited to our experiment.
For this purpose, we define the loss burst length as the number
of packets dropped between two packets correctly received by our
end hosts.

[Figure 10: Distribution of voice call ratings (excluding the failure event). x-axis: call rating R; y-axis: frequency.]

Loss burst length   Frequency of occurrence
1                   [...]
2                   [...]
3                   [...]
4 and above         0.16%

Table 3: Repartition of loss burst lengths (excluding the failure event)

Table 3 shows the repartition of burst length among the
losses observed during the period of experiment. The vast majority
of loss events have a burst length of 1, while 99.84% of
the events have a burst length less than 4. This tends to indicate
that the packet loss process is not bursty. Moreover, with a large
majority of isolated losses, we can conjecture that packet loss concealment
techniques would be efficient in attenuating the impact of
packet losses. The results shown in Table 3 are in line with previous
work of Bolot et al. [7] and they suggest that the distribution
of burst length is approximately geometric (at least for small loss
burst lengths). Future work will include an in-depth study of the
packet loss process.
6. CONCLUSION
We have studied the feasibility of VoIP over a backbone network
through active and passive measurements. We have run several
weeks of experiments and we can derive the following conclusions.
A PSTN quality voice service can be delivered on the Sprint IP
backbone network. Delay and loss figures indicate that the quality
of the voice calls would be comparable to that of traditional telephone
networks.
We have pointed out that voice quality is not the only metric of
interest for evaluating the feasibility of a VoIP service. The availability
of the service also covers a fundamental role.
The major cause of quality degradation is currently link and router
failures, even though failures do not occur very often inside the
backbone. We have observed that despite careful IP route protec-
tion, link failures can significantly, although infrequently, impact
a VoIP service. That impact is not due to the routing protocols
(i.e. IS-IS or OSPF), but instead to the reliability of routing equip-
ment. Therefore, as the network size increases in number of nodes
and links, more reliable hardware architectures and software implementations
are required as well as automatic validation tools for
router configurations. Further investigation is needed to identify all
the interactions between the various protocols (e.g. IS-IS, I-BGP
and E-BGP) and define proper semantics for the validation tools.
The introduction of circuit or label switching networks will not
help in mitigating the impact of failures. The failure event we have
described in Section 5 is a clear example of this. As long as the failure
is reported in a consistent way by the routers in the network, the
IS-IS protocol can efficiently identify alternative routes (the first
re-routing event completed in 100ms). The MPLS Fast-ReRoute
would provide the same recovery time.
On the other hand, a failing and unstable router that sends invalid
messages would cause MPLS to fail, in addition to any other routing
protocol.
Future work will involve more experiments. Through long-term
measurements, we aim to evaluate the likelihood of link and node
failures in a tier-1 IP backbone. We also intend to address the problem
of VoIP traffic traversing multiple autonomous systems.
Another important area will be the study of metrics to compare
the telephone network availability with the Internet availability. On
telephone networks, the notion of availability is based on the down-time
of individual switches or access lines. The objective of such
metric is to measure the impact of network outages on customers.
The Federal Communications Commission requires telephone operators
to report any outage that affects more than 90,000 lines for
at least 30 minutes. Such a rule is however difficult to apply to the
Internet for a few reasons: i) there is no definition of "line" that can
be applied; ii) it is very difficult to count how many customers have
been affected by a failure; iii) from a customer standpoint there
is no difference between outages due to the network or due to the
servers (e.g. DNS servers, web servers, etc.).
7.
--R
Advanced topics in MPLS-TE deployment.
Transmission and quality aspects (STQ).
Provisional planning values for the equipment impairment factor Ie.
The case for FEC-based error control for packet audio in the Internet.
Measurement and interpretation of Internet packet loss.
Use of OSI IS-IS for routing in TCP/IP and dual environments.
Design principles for accurate passive measurements.
Voice over IP performance monitoring.
Measuring Internet telephony quality: where are we today?
QoS measurement of Internet real-time multimedia services.
Modeling of packet loss and delay and their effect on real-time multimedia service quality.
An Engineering Approach to Computer Networking.
Assessment of VoIP quality over Internet backbones.
Precision timestamping of network packets.
Python routeing toolkit.
Analysis of measured single-hop delay from an operational backbone network.
Multiprotocol Label Switching Architecture.
Measurement and modelling of temporal dependence in packet loss.
| traffic measurements;routing protocols
507688 | Topology-aware overlay networks for group communication. | We propose an application level multicast approach, Topology Aware Grouping (TAG), which exploits underlying network topology information to build efficient overlay networks among multicast group members. TAG uses information about path overlap among members to construct a tree that reduces the overlay relative delay penalty, and reduces the number of duplicate copies of a packet on the same link. We study the properties of TAG, and model and experiment with its economies of scale factor to quantify its benefits compared to unicast and IP multicast. We also compare the TAG approach with the ESM approach in a variety of simulation configurations including a number of real Internet topologies and generated topologies. Our results indicate the effectiveness of the algorithm in reducing delays and duplicate packets, with reasonable algorithm time and space complexities. | INTRODUCTION
A variety of issues, both technical and commercial, have hampered
the widespread deployment of IP multicast in the global Internet
[14, 15]. Application-level multicast approaches using over-
NOSSDAV'02, May 12-14, 2002, Miami, Florida, USA.
lay networks [9, 11, 12, 18, 24, 32, 42] have been recently proposed
as a viable alternative to IP multicast. In particular, End
System Multicast (ESM) [11, 12] has gained considerable attention
due to its success in conferencing applications. The main idea of
ESM (and its Narada protocol) is that end systems exclusively handle
group management, routing information exchange, and overlay
forwarding tree construction. The efficiency of large-scale overlay
multicast trees, in terms of both performance and scalability, is the
primary subject of this paper.
We investigate a simple heuristic, which we call Topology Aware
Grouping (TAG), to exploit underlying network topology data in
constructing efficient overlays for application-level multicast. Our
heuristic works well when underlying routes are of good quality,
e.g., in intra-domain environments, and when final hop delays are
small. Each new member of a multicast session first determines the
path (route) from the root (primary sender) of the session to itself.
The overlap among this path and other paths from the root is used
to partially traverse the overlay data delivery tree, and determine
the best parent and children for the new member. The constructed
overlay network has a low delay penalty and limited duplicate packets
sent on the same link. TAG nodes maintain a small amount of
state information- IP addresses and paths of only their parent and
children nodes.
Unlike ESM, the TAG heuristic caters to applications with a large
number of members, which join the session at different times. TAG
works best with applications which regard delay as a primary performance
metric and bandwidth as a secondary metric. For exam-
ple, in a limited bandwidth streaming application or multi-player
on-line game, latency is an important performance measure. TAG
constructs its overlay tree based on delay (as used by current Internet
routing protocols), but uses bandwidth as a loose constraint.
Bandwidth is also used to break ties among paths with similar de-
lays. We investigate the properties of TAG and model its economies
of scale factor, compared to unicast and IP multicast. We also
demonstrate via simulations the effectiveness of TAG in terms of
delay, number of identical packets, and available bandwidth in a
number of large-scale configurations.
The remainder of this paper is organized as follows. Section 2
describes the basic algorithm, its extensions, and some design con-
siderations. Section 3 analyzes the properties of TAG and its economies
of scale factor. Section 4 simulates the proposed algorithm and
compares it to ESM using real and generated Internet topologies.
Section 5 discusses related work. Finally, section 6 summarizes our
conclusions and discusses future work.
2. TOPOLOGY AWARE OVERLAYS
[Figure 1: Example of topology aware overlay networks]

Although overlay multicast has emerged as a practical alternative
to IP multicast, overlay network performance in terms of delay
penalty and number of identical packets (referred to as "link
stress") in large groups have been important concerns. Moreover,
exchange of overlay end-to-end routing and group management information
limits the scalability of the overlay multicast approach.
Most current overlay multicast proposals employ two basic mecha-
nisms: (1) a protocol for collecting end-to-end measurements among
members; and (2) a protocol for building an overlay graph or tree
using these measurements.
We propose to exploit the underlying network topology information
for building efficient overlay networks, assuming the underlying
routes are of good quality. By "underlying network topology,"
we mean the shortest path information that IP routers maintain. The
definition of "shortest" depends on the particular routing protocol
employed, but usually denotes shortest in terms of delay or number
of hops, or according to administrative policies. Using topology information
is illustrated in figure 1. In the figure, source S (the root
node) and destinations D1 to D4 are end systems that belong to the
multicast group, and R1 to R5 are routers. Thick solid lines denote
the current data delivery tree from S to D1-D4. The dashed lines
denote the shortest paths to a new node D5 from S and D1. If D5
wishes to join the delivery tree, which member is the best parent
node to D5? If D1 becomes the parent of D5, a relay path from
D1 to D5 is consistent with the shortest path from S to D5. More-
over, no duplicate packet copies are necessary on the shared sub-path
from S (packets in one direction are counted separately from packets
in the reverse direction). This heuristic is similar to determining if
a friend is, more or less, on your way to work, so giving him/her
a ride will not excessively delay you, and you can reduce overall
traffic by car pooling. If he/she is out of your way, however, you
decide to drive separately. In addition, this heuristic is subject to
both capacity (e.g., space in your car) and latency (car pooling will
not make the journey excessively long) constraints.
Of course, it is difficult to determine the shortest path and the
number of identical packets, in the absence of any knowledge of
the underlying network topology. If network topology information
can be obtained by the multicast participant (as discussed in
section 2.6), nodes need not exchange complete end-to-end mea-
surements, and topology information can be exploited to construct
high quality overlay trees. Therefore, our heuristic is: a TAG destination
selects as a parent the destination whose shortest path from
the source has maximal overlap with its own path from the source.
This heuristic minimizes the increase in number of hops (and hence
delay if we assume low delay of the last hop(s)) over the shortest
unicast path. We also use loose bandwidth constraints.
As with all overlay multicast approaches, TAG does not require
class D addressing, or multicast router support. A TAG session can
be identified by (root IP addr, root port), where "root" denotes the
primary sender in a session. The primary sender serves as the root
of the multicast delivery tree. The case of multiple senders will be
discussed in section 2.8.
2.1 Assumptions
TAG makes a number of assumptions:
1. TAG is used for single-source multicast or core-based mul-
ticast: The source node or a selected core node is the root of
the multicast forwarding tree (similar to single-source multi-cast
[21] or core-based multicast [6] for IP multicast).
2. Route discovery methods exist: TAG can obtain the shortest
path for a sender-receiver pair on the underlying network.
A number of route discovery tools are discussed in section 2.6.
3. All end systems are reachable: Any pair of end systems on
the overlay network can communicate using the underlying
network. Recent studies, however, indicate that some Internet
routes are unavailable for certain durations of time [31,
26, 7].
4. Underlying routes are of good quality (in terms of delay):
Intra-domain routing protocols typically compute the shortest
path in terms of delay for a sender-receiver pair. Recent
studies, however, indicate that the current Internet demonstrates
a significant percentage of routing pathologies [31,
36]. Many of these arise because of policy routing techniques
employed for inter-domain routing. TAG is best suited for
well-optimized routing domains.
5. The last hop(s) to end systems exhibit low delay: A long
latency last hop to an end system, e.g., a satellite link, adversely
affects TAG performance. TAG works best with low-delay
a last hop to an end system (or last few hops for the partial
path matching flavor of TAG, as discussed in section 2.5).
2.2 Definitions
We define the following terms, which will be used throughout
the paper.
DEFINITION 1. A path from node A to node B in TAG, denoted
by P (A; B), is a sequence of routers comprising the shortest
path from node A to node B according to the underlying routing
protocol. P(S, A) will be referred to as the spath of A, where S
is the root of the tree. The length of a path P or len(P ) is the
number of routers in the path.
DEFINITION 2. A ≺ B if P(S, A) is a prefix of P(S, B),
where S is the root of the tree.
For example, the path from S to D5 (or spath of D5) in figure 1
is
A TAG node maintains a family table (FT) defining parent-child
relationships for this node. One FT entry is designated for the parent
node, and the remaining entries are for the children. As seen
in figure 2, an FT entry consists of a tuple (address, spath). The
address is the IP address of the node, and the spath is the shortest
path from the root to this node.
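To make the prefix relation and the family table concrete, the following is a minimal Python sketch (the names are ours, not from the paper); an spath is simply a tuple of router identifiers, and the relation A ⊑ B reduces to a tuple-prefix test.

    # Hypothetical sketch of TAG's spath prefix relation and an FT entry.
    def is_prefix(spath_a, spath_b):
        """True if spath_a is a prefix of spath_b, i.e., A [= B in the paper's notation."""
        return spath_b[:len(spath_a)] == spath_a

    # An FT entry is simply an (address, spath) pair; an spath is a tuple of routers.
    ft = {"parent": ("10.0.0.1", ("R1",)),
          "children": [("10.0.0.5", ("R1", "R2", "R3"))]}

    # Example: spath(D4) = (R1, R2) is a prefix of spath(D5) = (R1, R2, R3),
    # so D4 can be an ancestor of D5 in the TAG tree.
    assert is_prefix(("R1", "R2"), ("R1", "R2", "R3"))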
2.3 Complete Path Matching Algorithm
The path matching algorithm traverses the overlay data delivery
tree to determine the best parent (and possibly children) for a new
node. The best parent for a new node N in a tree rooted at S is:
a node C (C ≠ N) in the tree, such that C ⊑ N and
len(P(S, C)) ≥ len(P(S, R)) for all nodes R ⊑ N
in the tree.

Figure 2: Family table (each FT entry is a tuple <IPaddr(node), P(S, node)>)

Figure 3: The three conditions for complete path matching
The algorithm considers three mutually exclusive conditions, as
depicted in figure 3. Let N be a new member wishing to join a current
session. Let C be the node being examined, and S be the root
of the tree. If possible, we select a node A such that A is a child of
C and A ⊑ N, and we continue traversing the sub-tree rooted at A
(figure 3(a)). Otherwise, if there are children A_i of C such that
N ⊑ A_i for some i, then N becomes a child of C with these A_i as
its children (figure 3(b)). In case no child of C satisfying the first
or second condition exists, N becomes a child of C (figure 3(c)).
Note that no more than one child satisfying the first condition can
exist. The complete path matching algorithm is presented in figure 4.
In the algorithm, N denotes a new member; C is the node currently
being examined by N; and target is the next node which N will
probe, if necessary.

proc PathMatch(C, N)
  ch := first child of C;
  flag := condition(3);
  while (ch is NOT NULL) do
    if (ch ⊑ N)
      then target := ch;
           flag := condition(1); fi;
    if (N ⊑ ch)
      then add ch to children(N);
           flag := condition(2); fi;
    if (flag is NOT condition(1))
      then ch := next child of C;
      else ch := NULL; fi;
  od;
  if (flag is condition(1))
    then PathMatch(target, N);
    else add N to children(C); fi;

C : node currently being examined
N : new node joining the group
ch : a child of C
target : next node N will examine

Figure 4: Complete path matching algorithm
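For concreteness, the figure's logic can also be rendered in Python. This is our own sketch rather than code from the paper; it assumes tree nodes carry a .spath tuple and a .children list, and it reuses the is_prefix test sketched in section 2.2.

    # Hypothetical Python rendering of complete path matching (figure 4).
    def path_match(C, N):
        """Attach new node N below C, descending the tree one level at a time."""
        target = None
        for ch in list(C.children):
            if is_prefix(ch.spath, N.spath):   # condition 1: ch is an ancestor candidate
                target = ch
                break                          # at most one such child can exist
            if is_prefix(N.spath, ch.spath):   # condition 2: ch moves under N
                C.children.remove(ch)
                N.children.append(ch)
        if target is not None:
            return path_match(target, N)       # N probes the next node down the tree
        C.children.append(N)                   # conditions 2 and 3: N becomes a child of C
        return C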
There are two reasons for selecting a node (in the first and second
conditions) whose spath is the longest prefix of the spath of the new
member. First, the path from the source to the new member will be
consistent with the shortest path determined by routing algorithms.
This reduces the additional delay introduced by overlays. Second,
sharing the longest prefix curtails the number of identical packet
copies in the overlay network, since a single packet is generated
over the shared part.
2.4 Tree Management
In this section, we discuss the multicast tree management protocol,
including member join and member leave operations, and fault
resilience issues.
2.4.1 Member Join
A new member joining a session sends a JOIN message to the
primary sender S of the session (the root of the tree). Upon the receipt
of a JOIN, S computes the spath to the new member, and executes
the path matching algorithm. If the new member becomes a
child of S, the FT of S is updated accordingly. Otherwise, S propagates
a FIND message to its child that shares the longest spath
prefix with the new member spath. The FIND message carries the
IP address and the spath of the new member. The FIND is processed
by executing path matching and either updating the FT, or
propagating the FIND. The propagation of FIND messages continues
until the new member finds a parent. The process is depicted in
figure 5.
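The join procedure can be summarized in a few lines (our own sketch; discover_spath, Node, and notify are assumed helpers, and path_match is the routine sketched in section 2.3). In the real protocol, each descent step is carried by a FIND message forwarded between members rather than a local recursive call.

    # Hypothetical sketch of JOIN handling at the root of a TAG session.
    def on_join(root, new_addr):
        spath = discover_spath(root, new_addr)  # e.g., traceroute or a topology server
        new_node = Node(new_addr, spath)
        parent = path_match(root, new_node)     # in TAG, realized as a chain of FINDs
        notify(parent, new_node)                # parent records the new child in its FT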
An example is illustrated in figure 6. Source S is the root of
the multicast tree, R1 through R4 are routers, and D1 through D5
are destinations. The thick arrows denote the multicast forwarding
tree computed by TAG. The FT of each member (showing only the
children) is given next to it. The destinations join the session in
the order D1 to D5. Upon the receipt of a JOIN message from
D1, S creates an entry for D1 in its FT (figure 6(a)). S computes
the shortest path for a destination upon receiving the JOIN message
of that destination. When D2 joins the session (figure 6(b)),
S executes the path matching algorithm with the spath of D2. S
determines that D1 is a better parent for D2 than itself, and sends
a FIND message to D1 which takes D2 as its child. D3 similarly
determines D2 to be its parent (figure 6(c)). When D4 joins
the session, D4 determines D1 to be its parent and takes D2 and
D3 as its children. The FTs of D1 to D4 are updated accordingly
(figure 6(d)). Finally, D5 joins the session as a child of D4
(figure 6(e)). Figure 6(e) depicts the final state of the multicast
forwarding tree and the FT at each node.
Figure 5: A member join process in TAG (the JOIN is path-matched at the root, and FIND messages propagate, with request/reply exchanges, until the new member finds its parent)

Figure 6: Member join in TAG (final spaths: D3 = (R1, R2, R4), D4 = (R1, R2), D5 = (R1, R2, R3))

2.4.2 Member Leave
A member can leave the session by sending a LEAVE message
to its parent. For example, if D4 wishes to leave the session
(figure 7(a)), D4 sends a LEAVE message to its parent D1. A LEAVE
message includes the FT of the leaving member. Upon receiving
LEAVE from D4, D1 removes D4 from its FT and adds FT entries
for the children of D4 (D2 and D5 in this case). The updated
multicast forwarding tree is illustrated in figure 7(b).
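A minimal sketch of this LEAVE handling (our own illustration, reusing the node representation from the earlier sketches) is:

    # Hypothetical sketch: the parent absorbs the children of a leaving member.
    def on_leave(parent, leaving):
        orphans = leaving.children         # carried in the LEAVE message's FT
        parent.children.remove(leaving)
        parent.children.extend(orphans)    # orphans are re-parented directly

With minus-k path matching (section 2.5), the orphans instead re-run path matching at the parent of the leaving member.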
2.4.3 Fault Resilience
Failures in end systems participating in a multicast session (not
an uncommon occurrence) affect all members of the subtree rooted
at the failing node. To detect failures, a parent and its children periodically
exchange reachability messages in the absence of data.
When a child failure is detected, the parent simply discards the
child from its FT, but when a parent failure is detected, the child
must rejoin the session.
2.5 Bandwidth Considerations
Since TAG targets delay-sensitive applications, delay (as defined
by the underlying IP routing protocol) is the primary metric used in
path matching. The complete path matching algorithm presented in
figure 4 can reduce the delay from source to destination, and reduce
the total number of identical packets. However, high link stress [12]
(and limited bandwidth to each child) may be experienced near a
few high degree or limited bandwidth nodes in the constructed tree.
To alleviate this problem, we loosen the path matching rule when
a node is searching for a parent. The new rule allows a node B to
attach to a node A as a child if A has a common spath prefix of
length len(P(S, A)) - k with B (S is the root of the tree), even if
the remaining k elements of the spath of A do not match the spath
of B. We call this method partial path matching or minus-k path
matching. We use the symbol ⊑partial(k) to denote minus-k path
matching.

Figure 7: Member leave in TAG
Minus-k path matching allows children of a bandwidth-constrained
node to take on new nodes as their children, mitigating the high
stress and limited bandwidth near the constrained node. When
the available bandwidth at a given node falls below a threshold
bwthresh, minus-k path matching is activated. The threshold bwthresh
does not give a strict guarantee, but it gives an indication that alternate
paths should be explored. The k parameter controls the deviation
allowed in the path matching. A large value of k may increase
the delay to a node, but it reduces the maximum stress a node may
experience, therefore increasing available bandwidth to the node.
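In code, the relaxed test can be written as below (again our own sketch; the available_bandwidth hook and the activation check are assumptions that mirror the description above):

    # Hypothetical sketch of the minus-k (partial) prefix test.
    def is_partial_prefix(spath_a, spath_b, k):
        """True if all but the last k routers of spath_a form a prefix of spath_b."""
        keep = max(len(spath_a) - k, 0)
        return spath_b[:keep] == spath_a[:keep]

    def prefix_test(node, k, bwthresh):
        """Relax matching only when the node's available bandwidth is scarce."""
        if available_bandwidth(node) < bwthresh:   # assumed measurement hook
            return lambda a, b: is_partial_prefix(a, b, k)
        return lambda a, b: is_prefix(a, b)        # otherwise, exact prefix matching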
With minus-k matching, a new member can examine several
delay-based paths while traversing the tree, and select the path
which maximizes bandwidth. When a node C is being probed by
the new member, all children of C which are eligible to be potential
ancestors of the new member (by the minus-k path matching algo-
rithm) constitute a set of nodes to examine next. The node which
gives the maximum bandwidth among these nodes is the one se-
lected. The partial path matching algorithm is presented in figure 8.
The member leave operation also employs minus-k path matching.
When the parent of a leaving member receives a LEAVE message,
the parent first removes the leaving member from its FT. Children
of the leaving member (included in the LEAVE message) then execute
minus-k path matching at the parent of the leaving member
to find their new parents. Figure 9 illustrates an example of member
join and leave with k = 1: new member D3 takes D1 as
its parent, since D1 provides more bandwidth than D2. When D1
leaves, D4 becomes a child of D3. D3 maximizes bandwidth to
D4 among the children of D0 (D2 and D3).
Table
1 shows the tradeoff between the delay (mean RDP as defined
in section 4), total link stress on the tree, and maximum link
stress in the tree, for a variety of bwthresh values (in kbps). The
simulation setup used will be discussed in section 4. The configuration
used here is TAG-TS2 with 1000 members. We use a fixed
value of k in these simulations, though we are currently investigating
dynamic adaptation of k. As seen in the table, as bwthresh
increases, minus-k path matching is activated more often. Conse-
quently, a larger bwthresh value reduces the total number of identical
packets and maximum stress, but increases the RDP value.
If TAG does not use minus-k path matching (bwthresh=0), TAG
trees suffer from a large number of identical packets and high maximum
stress, yielding little bandwidth for many connections.
Figure 9: Member join and leave with partial path matching (k = 1; available bandwidths: D0-D4: 100 kbps, D1-D4: 800 kbps, D2-D4: 300 kbps, D3-D4: 600 kbps)
Table 1: Tradeoffs with different bwthresh values (in kbps)

bwthresh   RDP        Total stress   Max. stress
300        1.559278   1928           104
200        1.526061   2142           105
100        1.384193   2680           146
2.6 Obtaining Topology and Bandwidth Data
When a new member joins a multicast session in TAG, the root
must obtain the path from itself to the new member. We propose
two possible approaches for performing this. The first approach
is to use a network path-finding tool such as traceroute. Traceroute
has been extensively used for network topology discovery [19,
31]. Some routers, however, do not send ICMP Time-Exceeded
messages when Time-To-Live (TTL) reaches zero for several rea-
sons, the most important of which is security. Recently, we conducted
simple experiments where we used traceroute for 50 sites
in different continents at different times and on different days. Approximately
90% of the routers responded. The average time
taken to obtain and print the entire path information was 5.2 sec-
onds, with a maximum of 42.6 seconds and a minimum of 0.2 sec-
onds. 5-8% traceroute failures were reported in [31]. A recent
study [20] indicates that router ICMP generation delays are generally
in the sub-millisecond range (< 500 microseconds). This shows that
only a few routers in today's Internet are slow in generating ICMP
Time-Exceeded messages.
The second option is to exploit topology servers. For example,
an OSPF topology server [38] can track intra-domain topology, either
by listening to OSPF link state advertisements, or by pushing
and pulling information from routers via SNMP. Network topology
can also be obtained from periodic dumps of router configuration
files [17], from MPLS traffic engineering frameworks [4], and from
policy-based OSPF monitoring frameworks [5]. Internet topology
discovery projects [16, 8, 10] can also supply topology information
to TAG when a new member joins or changes occur. Topology
servers may, however, only include partial or coarse-grained (e.g.,
AS-level) information [29, 16, 8, 10]. Partial information can still
be exploited by TAG for partial path matching of longest common
subsequences.
Bandwidth estimation tools are important for TAG, in conjunction
with in-band measurements, to estimate the available band-width
between nodes under dynamic network conditions. Tools
similar to pathchar [22] estimate available bandwidth, delay, average
queue, and loss rate of every hop between any source and destination
on the Internet. Nettimer [27] is useful for low-overhead
measurement of per-link available bandwidth. Other bandwidth or
throughput measurement tools are linked through [1].
2.7 Adaptivity and Scalability
If network conditions change and the overlay tree becomes inefficient
(e.g., when a mobile host moves or paths fail), TAG must
adapt the overlay tree to the new network conditions. An accurate
adaptation would entail that the root probe every destination
periodically to determine if the paths have changed. When path
changes are detected, the root initiates a rejoin process for the destinations
affected. This mechanism, however, introduces scalability
problems in that the root is over-burdened and many potential probe
packets are generated.
We propose three mechanisms to mitigate these scalability prob-
lems. First, intermediate nodes (non-leaf nodes) participate in periodic
probing, alleviating the burden on the root. The intermediate
nodes only probe the paths to their children. Second, path-based
aggregation of destinations can substantially reduce the number of
hops and destinations probed. Destinations are aggregated if they
have the same spath. Only one destination in a destination group
is examined every round. During the next round, another member
of the group is inspected. When changes are detected for a certain
group, all members of that group are updated. Third, when changes
in part of the spath are detected for a destination, not only the destination
being probed, but also all the destinations in the same group
and all the destinations in groups with overlapping spaths, are updated.
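The spath-based aggregation can be sketched as follows (our own illustration; the round-robin index is an assumed implementation detail):

    # Hypothetical sketch of spath-based probe aggregation.
    from collections import defaultdict

    def build_probe_groups(destinations):
        """Group destinations that share the same spath; one probe per group per round."""
        groups = defaultdict(list)
        for d in destinations:
            groups[d.spath].append(d)
        return groups

    def pick_probe_targets(groups, round_no):
        """Round-robin within each group, so every member is eventually inspected."""
        return [members[round_no % len(members)] for members in groups.values()]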
Although these TAG reorganizing mechanisms help reduce over-
head, the root is likely to experience a high load when a large number
of members join or rejoin simultaneously. The root is also a single
failure point. To address these limitations, mechanisms similar
to those used in Overcast [24] can be used. For example, requests
from members to the root are redirected to less-burdened replicated
roots.
Another important consideration is the quality of the TAG tree in
terms of both delay and bandwidth. TAG aligns overlay routes with
underlying routes, assuming underlying routes are of good quality
(fourth assumption in section 2.1). Unfortunately, today's Internet
routing, particularly inter-domain routing, exhibits pathologies,
such as slow convergence and node failures. Savage et al. [36,
35] found that there is an alternate path with significantly superior
quality to the IP path in 30-80% of the cases (30-55% for
latency, 75-85% for loss rate, and 70-80% for bandwidth). Intra-domain
network-level routes, however, are generally of good qual-
ity, and, in the future, research on inter-domain routing and traffic
engineering may improve route quality in general. Another important
consideration is that a long latency last hop(s) to a TAG parent
node may yield high delay to its children (section 2.1). A delay-constrained
overlay tree construction mechanism can be combined
with the TAG heuristic to prevent such high delay paths.

proc PathMatch(C, N)
  ch := first child of C;
  flag := condition(3);
  maxbw := 0;
  if (bandwidth(C) < bwthresh)
    then minus-k path matching in condition(1) is activated; fi;
  while (ch is NOT NULL) do
    if (ch ⊑partial(k) N && bandwidth(ch) > maxbw)
      then target := ch; maxbw := bandwidth(ch);
           flag := condition(1); fi;
    if (N ⊑ ch && flag is NOT condition(1))
      then add ch to children(N);
           flag := condition(2); fi;
    ch := next child of C;
  od;
  if (flag is condition(1))
    then PathMatch(target, N);
    else add N to children(C); fi;

C : node currently being examined
N : new node joining the group
ch : a child of C
target : next node N will examine
maxbw : maximum bandwidth among potential parents of N

Figure 8: Partial path matching algorithm, with bandwidth as
a secondary metric. The bandwidth() function gives the available
bandwidth between a node and the new member
As for bandwidth, it is only considered as a secondary metric (as
a tie breaker among equal delay paths) in tree construction (sec-
tion 2.5). Minus-k path matching does not guarantee bandwidth-
it simply explores more potential paths when bandwidth is scarce.
A bandwidth-constrained tree construction mechanism can be incorporated
into TAG if bandwidth guarantees are required.
2.8 Multiple Sender Groups
In the current version of TAG, a sender other than the root of the
tree must first relay its data to the root. The root then multicasts
the data to all members of the session. This approach is suitable
for mostly single-sender applications, where the primary sender is
selected as the root, and other group members may occasionally
transmit. In applications where all members transmit with approximately
equal probabilities, the root of the tree should be carefully
selected. This is similar to the core selection problem in core-based
tree approaches for multicast routing [6]. Multiple (backup) roots
are also important for fault tolerance.
3. ANALYSIS OF TAG
In this section, we investigate the properties of TAG, and study
its bandwidth penalty compared to IP multicast. For simplicity, we
use TAG with complete path matching (figure 4) in our analysis,
except for the complexity analysis where we analyze both complete
and minus-k path matching.
3.1 Properties of TAG
In this section, we study the conditions used in the path matching
algorithm, and the properties of the trees constructed by TAG.
LEMMA 1. Node A is an ancestor of node B in the TAG tree iff
A ⊑ B.
Proof: (⇒): We first show that if node X is the parent of node Y,
then X ⊑ Y. In the path matching
algorithm, X can become the parent of Y by the second or by the
third path matching conditions. Both cases guarantee X ⊑ Y.
Then, we generalize to the case when node A is an ancestor (not
necessarily the parent) of node B. In this case, there must be n
(n > 0) nodes M1, ..., Mn such that A is the parent of Mn and M1
is the parent of B. A ⊑ Mn
holds, according to the previous case. Similarly, Mn ⊑ ... ⊑ M2 ⊑ M1
⊑ B. Transitively, A ⊑ B.
(⇐): This follows from conditions 1 and 2 in the path matching
algorithm. 2
In figure 1, node D1 is an ancestor of node D5 because P(S, D1)
is a prefix of P(S, D5). In
contrast, the fact that P(S, D2) is not a prefix of
P(S, D5) implies that node D2 is not an ancestor
of D5. We now investigate the conditions of the path matching
algorithm.
LEMMA 2. The three conditions in the TAG path matching algorithm
(given in figure 3) are mutually exclusive (no two of the
three conditions can occur simultaneously) and complete (no other
case exists).
Proof: We first prove mutual exclusion. To show mutual exclusion
is equivalent to proving no two conditions can hold simultaneously.
The first and the third conditions, and the second and the third conditions
cannot co-exist by definition. Therefore, we need to show
that the first and second conditions cannot both hold at the same
time. Suppose the first and the second conditions occur simultaneously
for a node C that is being examined. A new member N
selects B, a child of C, such that B ⊑ N, for further probing by
the first condition. By the second condition, there must exist a node
B', another child of C, such that N ⊑ B', where B and B' are siblings.
However, in this case, the path matching algorithm would
have previously ensured that B' is a descendant of B, not a child
of C, by lemma 1, since B ⊑ N ⊑ B'. This is a contradiction.
Since the third condition includes the complement set of the first
and the second conditions, the conditions are complete. 2
Now we study the number of trees TAG can construct.
LEMMA 3. TAG constructs a unique tree if all members have
distinct spaths, regardless of the order of joins. If there are at least
two members with the same spaths, the order of joins alters the
constructed tree.
Proof: By lemma 1, a unique relationship (i.e., parent, ancestor,
child, descendant, or none) is established
among every two nodes which have different spaths, independent
of the order of joins. If two members have the same spath, one
must be an ancestor of the other. Therefore, n! distinct trees can
be constructed by TAG if n group members have the same spaths
(according to the order of their joins). 2
We now study the properties of a parent node.
LEMMA 4. For all i, the spath of the parent of node A_i is
the longest prefix of the spath P(S, A_i), where "longest" denotes
longest in comparison to the spaths of all members in a session and
S is the root of the tree.
Proof: Consider two nodes B and C where B is the parent of
C. By lemma 1, B ⊑ C, i.e., P(S, B) is a prefix of P(S, C).
Suppose there exists a node A such that P(S, A) is a prefix of
P(S, C) and len(P(S, A)) > len(P(S, B)). Since both P(S, A) and
P(S, B) are prefixes of the same path P(S, C) and len(P(S, A))
> len(P(S, B)), then P(S, B) is a prefix of P(S, A). By definition
2, B ⊑ A. Therefore, A must be a descendant of B according
to lemma 1. The path matching algorithm, however, would make
C a child of A, instead of B, since B ⊑ A and A ⊑ C. This
is a contradiction. Hence, P(S, B) must be the longest prefix of
P(S, C) if B is the parent of C. 2
Finally, we give a bound for the number of hops on the path from
the root to each member.
LEMMA 5. For every destination i in a TAG tree, SPD(i) ≤
E(i) ≤ 3 SPD(i) - 2, where SPD(i) is the number of hops on
the shortest path from root S to i, and E(i) is the actual number of
hops from S to i in the TAG tree.
Proof: Consider P(S, i), the path from root S to i. By the definition
of len, len(P(S, i)) = SPD(i), since SPD(i) is the
number of hops on the path P(S, i). The fact that SPD(i) is the
number of hops on the shortest path P(S, i) ensures SPD(i) ≤
E(i). The maximum E(i) occurs when i has as many ancestors
as len(P(S, i)). This situation is depicted in figure 10. In the figure,
R denotes a router; i is a destination, and the nodes M are all
ancestors of i. For every M, 2 hops are added to the path. Thus,
2(SPD(i) - 1) hops are added to SPD(i). Therefore, the maximum
E(i) is 3 SPD(i) - 2. 2

Figure 10: Bound on the number of hops in TAG
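As a quick numeric illustration of this bound (our own example, not from the paper): a destination whose shortest path has SPD(i) = 4 hops can see at most 3·4 - 2 = 10 hops in the TAG tree.

    # Hypothetical check of the lemma 5 bound for a few shortest-path lengths.
    for spd in (1, 2, 4, 8):
        print("SPD =", spd, "=> SPD <= E(i) <=", 3 * spd - 2)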
3.2 Time Complexity
In this section, we analyze the time complexities of both complete
path matching and minus-k path matching. For simplicity, we
assume that an overlay multicast tree has n end systems, and that
each end system has an average of m children. We also assume
that an average of v routers exist over the link between a parent and
its child. After the source discovers the path to a new member, the
member join process requires: (1) operations for each node, and (2)
tree traversal. For (1), suppose that the new member has matched
its path to a node at level i - 1 and is searching for a matching
node at level i in the overlay multicast tree, where 0 < i < log_m n.
(The height of the tree is log_m n, which is assumed to be an integer.)
In complete path matching, step (1) requires mvi
operations for the new member to search for the next matching
node at level i, in the worst case. The new member can find
the next matching node at level i by examining at most vi routers
per child, for each of the m children of the matched node. Operation (2) is of order
log_m n from the root to a leaf node. Therefore, the time complexity
of member join is:

Σ_{i=1}^{log_m n} mvi = O(mv (log_m n)^2)    (1)
Member leave requires one deletion and m additions of FT entries.
Each entry requires v log_m n operations for the worst case
path length. Thus, the time complexity of member leave is O(mv log_m n).
In minus-k path matching, step (1) requires m(k + vi) operations if
k + vi ≤ v log_m n. Otherwise, step (1) requires mv log_m n operations.
A new member matched with a node examines k more routers than
in the complete path matching case. Hence, k + vi routers are examined
by the new member per child to find the next matching node
at level i; otherwise, v log_m n routers are examined
(the maximum path length). This is performed for each of the m
children of the matched node. Since operation (2) requires log_m n operations and
assuming that k is small, the time complexity of member join is:

Σ_{i=1}^{log_m n} m(k + vi) = O(mv (log_m n)^2)    (2)

As discussed in section 2.5, children of a leaving member re-join
the session starting from the parent of the leaving member in
minus-k path matching. In this process, each of the m children of the
leaving member requires O(mv (log_m n)^2) operations for the rejoin.
Hence, the time complexity of member leave is O(m^2 v (log_m n)^2) in
this case.
3.3 Modeling the Economies of Scale Factor
Two important questions to answer about an overlay multicast
tree are: (1) how much bandwidth it saves compared to naive uni-
cast; and (2) how much additional bandwidth it consumes compared
to IP multicast. IP multicast savings over naive unicast have
been studied in [2, 13, 33]. Chuang and Sirbu [13] investigated
the cost of IP multicast for a variety of real and generated network
topologies. Their work was motivated by multicast pricing. They
found that L(m) ∝ m^0.8, where L(m) is the ratio between the total
number of multicast links and the average unicast path length, and m is
the number of distinct routers to which end systems are connected.
They also found that the cost of a multicast tree saturates when the
number of subscribing end systems exceeds a certain value. Based
on these results, they suggested membership-based pricing until m
reaches the saturation point, and flat-rate pricing beyond that point.
In this section, we quantify the network resources consumed by
TAG. We derive a bound for the function LTAG (n), which denotes
the sum of link stress values on all router-to-router links,
for a multicast group of size n. Although the number of distinct
routers to which end systems are connected is used in [13], we use
Figure 11: A k-ary tree model

Figure 12: TAG trees
n, the number of end systems in a multicast group. As discussed
in [2], using the number of end systems is intuitively appealing and
makes the analysis simpler. Note that m can be approximated by
M(1 - (1 - 1/M)^n), where M is the total number of possible
routers to which end systems can be connected; m ≈ n when
M ≫ 1.
For simplicity, we assume a k-ary data dissemination tree in
which tree nodes denote routers (as in [2, 33]), as depicted in figure
11. The height of the tree is H and all nodes except the leaves
have degree k. We assume that no unary nodes (nodes at which
no branching occurs) exist. Therefore, our results are approximate.
An end system can be connected to any router (node in the tree).
Suppose that n end systems join the multicast session. The probability
that at least one end system is connected to a given node
is:

p = 1 - (1 - 1/T)^n    (3)

where T = (k^(H+1) - 1)/(k - 1) is the number of
possible locations for the subnet of an end system, which is equal
to the number of nodes in the tree.
We now evaluate the cost of transmission at each level of the
tree. In figure 12, B_l indicates the cost over the link between node
σ at level l and its parent at level l - 1, and B_{l+1}(a) denotes the
cost over the links between node σ at level l and its children at level
l + 1. We compute E[B_l] by considering two different
cases: when at least one end system is connected to node σ, and
when no end system is connected to σ. Let B1_l be the cost in the
first case and B2_l be the cost in the second case. TAG enforces
that the first case costs one, for transmission between node σ and
its parent. This is because node σ sends packets from the parent
on to the children (B1_l = 1). In the second case, however, since no
end system relays the packets at σ, the total cost over the outgoing links of σ
towards the leaves is equal to the cost over the link between σ and its
parent, i.e., B2_l is the sum of the costs B_{l+1}(a) over the children a of σ.
We assume the end systems are uniformly distributed over the tree nodes.
This assumption implies that E[B_{l+1}(a)] = E[B_{l+1}].
Hence, E[B_l] is defined by the following recurrence:

E[B_l] = p + (1 - p) k E[B_{l+1}],  1 ≤ l < H    (4)
E[B_H] = p    (5)

Solving the recurrence in (4) and (5), we obtain:

E[B_l] = p (((1 - p)k)^(H-l+1) - 1) / ((1 - p)k - 1)

The cost L_TAG(n) is given by (6):

L_TAG(n) = Σ_{l=1}^{H} k^l E[B_l]    (6)

Figure 13: TAG tree cost versus group size
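To make the model concrete, here is a small Python sketch that evaluates p, the E[B_l] recurrence, and L_TAG(n) numerically. Note that equations (4)-(6) above are reconstructed from the surrounding definitions, so this code illustrates our reading of the model rather than a formulation taken verbatim from the paper.

    # Hypothetical numeric evaluation of the TAG tree cost model.
    def tag_tree_cost(n, k, H):
        T = (k ** (H + 1) - 1) // (k - 1)    # number of nodes in the k-ary tree
        p = 1.0 - (1.0 - 1.0 / T) ** n       # Pr[>= 1 end system at a given node], eq. (3)
        EB = p                               # E[B_H] at the leaves, eq. (5)
        cost = (k ** H) * EB
        for l in range(H - 1, 0, -1):        # fold the recurrence up the tree, eq. (4)
            EB = p + (1.0 - p) * k * EB
            cost += (k ** l) * EB
        return cost

    # Example: cost grows with n and saturates near the total number of links.
    for n in (10, 100, 1000, 10000):
        print(n, round(tag_tree_cost(n, k=4, H=6), 1))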
Figure
13 plots the normalized overlay tree cost of TAG for a variety
of k and H values on a log-log scale. The normalized overlay
tree cost L_TAG(n)/û is defined as L_TAG(n), the cost of an overlay
with n members, divided by the average number of hops (only
counting router-to-router links) for a unicast path from the source
to receivers, û. Since we assume end systems are uniformly distributed
at nodes, û is the average number of hops from the root to
a node on the overlay tree: û = (Σ_{l=1}^{H} l k^l) / T.
All curves stabilize for group sizes exceeding 1000-5000 members.
The slope decreases because, as the group size grows, more end
systems can share the links, yielding more bandwidth savings. This
is an important advantage of TAG over unicast. The figure shows
that, approximately, L_TAG(n)/û ∝ n^0.95 before the curves stabilize.
The factor 0.95 is smaller than that of unicast, but larger than the
factor for IP multicast (L_IPmulticast(n)/û ∝ n^0.8), where replication
at the routers, together with good multicast routing algorithms,
yields additional savings. We will verify these results via
simulations in section 4.3.
4. PERFORMANCE EVALUATION
We first discuss the simulation setup and metrics, and then analyze
the results.
4.1 Simulation Setup and Metrics
We have implemented session-level (not packet-level) simulators
for both TAG and ESM [12] to evaluate and compare their
performance. The simulators model propagation delays, but not
queuing delays and packet losses. Two sets of simulations were
performed on different topologies. The first set uses Transit-Stub
topologies generated by GT-ITM [41]. The Transit-Stub model
generates router-level Internet topologies. We also simulate AS-level
topologies, specifically the actual Internet AS topologies from
NLANR [29] and topologies generated by Inet [25].
The Transit-Stub model generates a two-level hierarchy: inter-connected
higher level (transit) domains and lower level (stub) do-
mains. We use three different topologies with different numbers of
nodes: 492, 984, and 1640. When the total is 492 nodes, there are
12 transit nodes overall, 5 stub domains per
transit node, and 8 nodes per stub domain. Similar distributions are
used when the total number of nodes is 984 and 1640. We label
the 3 transit-stub topologies TS1, TS2, and TS3 respectively, e.g.,
label "TAG-TS1" denotes the results of TAG on the Transit-Stub
topology with 492 nodes. Multicast group members are assigned
to stub nodes randomly. The multicast group size ranges from 60 to
5000 members. GT-ITM generates symmetric link delays ranging
from 1 to 55 ms for transit-transit or transit-stub links. We use
smaller delays, starting at 1 ms, within a stub. We randomly assign bandwidth ranging
between 100 Mbps and 1 Gbps to backbone links. We use 500 kbps
to 10 Mbps for the links from edge routers to end systems.
The AS topologies from NLANR and Inet give AS-level connec-
tivity. AS-level maps have been shown to exhibit a power-law [16].
This means that a few nodes have high-degree connectivities, while
most other nodes exhibit low-degree connectivities. We use the
1997 and 1998 NLANR data sets, named AS97 and AS98, respec-
tively. We also use the 1997 Inet data set (named Inet97) and the
1998 Inet data set (Inet98), which have the same number of ASes
as the NLANR data sets: 3015 and 3878. We have 4000 (for 1997)
and 5000 (for 1998) members in a multicast session, and assign
members to ASes randomly. Link delays and bandwidths in the
same ranges as the Transit-Stub configuration are used for the AS
configurations. The link delays are asymmetric.
We assume that the IP layer routing algorithm uses delay as a
metric for finding shortest paths. The routing algorithm for the
mesh in ESM uses discretized levels of available bandwidth (in
200 kbps increments) as the primary metric, and delay as a tie
breaker. The minus-k path matching algorithm is used in TAG
with a fixed k, unless otherwise
specified. We use the same parameters for ESM used in the simulations
in [11, 12] (lower degree bound = 3, a fixed upper degree bound, and
high delay penalty = 3), except for delay-related parameters (close
neighbor delay = 85 ms), since we assign a wider range of delays to
the links.
We use the following performance metrics [11, 12] for evaluating
TAG and ESM:
1. Mean Relative Delay Penalty (RDP): RDP is the relative
increase in delay between the source and a receiver in TAG
against unicast delay between the source and the same re-
ceiver. The RDP from source s to receiver d_r is the ratio
latency(s, d_r) / delay(s, d_r). The latency latency(s, d_r) from s to d_r is defined
to be delay(s, d_0) + delay(d_0, d_1) + ... + delay(d_{r-1}, d_r),
assuming s delivers data to d_r via the sequence of end systems
d_0, d_1, ..., d_{r-1}, where delay(d_i, d_{i+1}) denotes the end-to-end
delay from d_i to d_{i+1}. We compute the mean RDP of
all receivers (a sketch of this computation follows the metric list below).
2. Link Stress: Link stress is the total number of identical
copies of a packet over a physical link. We compute the total
stress for all tree links. We also compute the maximum value
of link stress among all links. This is clearly a network-level
metric and is not of importance to the application user.
3. Mean Available Bandwidth (in kbps): This is the mean of
the available bottleneck bandwidth between the source and
all receivers.
The TAG tree cost is also computed in section 4.3, and compared
to IP multicast and unicast cost.

Figure 14: Mean RDP: TAG versus ESM

Figure 15: Total stress: TAG versus ESM
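As referenced under metric 1, the RDP computation can be sketched in a few lines (our own illustration; delay() stands in for the measured unicast delay between two hosts):

    # Hypothetical sketch of the RDP metric for one receiver.
    def rdp(source, receiver, overlay_path, delay):
        """overlay_path = [d0, d1, ..., receiver]; delay(a, b) = unicast delay a -> b."""
        hops = [source] + overlay_path
        latency = sum(delay(a, b) for a, b in zip(hops, hops[1:]))
        return latency / delay(source, receiver)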
4.2 Performance Results
4.2.1 Transit-Stub Topologies
The mean RDP values for TAG and ESM on the three different
Transit-Stub topologies (TS1, TS2, TS3) are plotted in figure 14.
From the figure, TAG exhibits lower mean RDP values than ESM
for different group sizes in all 3 topologies. The mean RDP values
for TAG-TS1, TS2, and TS3 are all in the range of 1 to 2, while
the mean RDP values for ESM range from 2 to 6. This is because
TAG considers delay as the primary metric while ESM uses delay
only as a tie breaker. Although TS3 is a larger scale topology than
TS2, and TS2 is larger than TS1, the mean RDP values for TAG
are similar for the different topologies. Mean RDP values for TAG
increase with the increase in group size. As more end systems join
in TAG, the mean RDP values increase due to the bandwidth constraint
in partial path matching (even though lower latency paths
may become available). We observe that, unlike TAG, the mean
RDP values for ESM do not always increase with the increase in
group size.
Figure
15 illustrates the total stress of TAG and ESM for the
three different topologies. For all group sizes and topologies, TAG
total stress is below 8000. In contrast, ESM exhibits higher stress.
The total stress for all 6 configurations increases in proportion to
the group size, since more identical packets traverse physical links
when more end systems join the session. TAG-TS1 and TAG-TS2
exhibit the lowest total stress. TAG can avoid some duplicate
packets since TAG aligns overlay and underlying routes, subject to
bandwidth availability. Packets from the source to a receiver are not
duplicated along the path from the source to the parent of that receiver.
Figure 16 depicts the cumulative distribution of total stress.
The figure shows that the three TAG configurations have slightly
more low-stress links than the three ESM configurations.

Figure 16: Cumulative distribution of stress

Figure 17: Maximum stress: TAG versus ESM
Figure
17 illustrates that TAG, with the correct parameters, can
reduce the maximum stress value as well. With the complete path
matching algorithm, a strategically located end system attached to
a high-degree router can be the parent of numerous nodes, which
severely constrains the bandwidth available to each of these nodes,
and increases the stress at this end system. The minus-k path matching
algorithm remedies this weakness, as shown in the figure.
The mean bandwidth, depicted in figure 18, denotes the average
of the available bottleneck bandwidths from the source to all mem-
bers. The available bottleneck bandwidth with ESM is high (from
600 to 1600 kbps) for up to 500 members, and then stabilizes at
approximately 600 kbps for groups exceeding 500 members. TAG
gives 200-800 kbps bandwidth for up to 500 members. For larger
groups, the bandwidth rapidly drops to under 200 kbps. The bottle-neck
bandwidth given by TAG continues to decrease as the group
size increases. TAG bottleneck bandwidth is very sensitive to the
number of members, and to the bandwidth threshold bwthresh
(set to 200 kbps here). This, and the fact that ESM optimizes bandwidth,
explain why ESM performs better than TAG.

Figure 18: Mean bandwidth: TAG versus ESM

Figure 19: Mean RDP for different UDB values in ESM
An important point to note is that in the ESM algorithm, the
lower and upper degree bounds (LDB and UDB, respectively) for
each group member play a key role. The two parameters control
the number of neighbors that each member communicates with.
In particular, the upper degree bound is significant, as it impacts
both protocol scalability and performance. In our simulations, we
observe that increasing the upper degree bound for ESM reduces
the delay penalty in some cases, but not always. Figure 19 plots
the mean RDP values of ESM versus different UDB values on the
3 topologies TS1, TS2, and TS3 for 1000 members. In the figure,
the mean RDP values decrease as UDB increases for ESM-TS1 and
ESM-TS3 (except for an increase between UDB=10 and UDB=15).
The mean RDP stabilizes beyond a certain UDB value. Increasing
UDB generally helps a member find the best paths in terms of delay
penalty and bandwidth. However, due to the discretized levels
of available bandwidth used as a primary metric in ESM, changes
using different UDB values are not substantial. A higher UDB,
of course, increases the volume of routing information exchanged,
which is detrimental to scalability. We have observed no significant
change in the mean bandwidth, total stress, or maximum stress,
with higher UDB values.
The parameter choices for TAG, most significantly bwthresh
(which should be tuned according to application bandwidth require-
ments), significantly affect the results. For example, setting bwthresh
to zero and using complete path matching dramatically improves
the RDP values for TAG, at the expense of the maximum stress and
bandwidth results, as discussed in section 2.5.
4.2.2 AS and Inet Topologies
Table 2: Performance of TAG and ESM in AS and Inet topologies

Configuration | Algorithm | Mean RDP | Total link stress | Max. stress | Mean parent-child bandwidth (kbps)
Table
2 shows the performance of TAG and ESM on AS97, AS98,
Inet97, and Inet98. We run three different versions of TAG with respect
to bwthresh: TAG-50, TAG-100, and TAG-200 denote TAG
with bwthresh set to 50, 100, and 200 kbps, respectively.
TAG-50 gives lower mean RDP than ESM over all configura-
tions. In contrast, the mean RDPs of TAG-100 and TAG-200 are
similar to, or even worse than, the mean RDP of ESM. All the TAG
configurations exhibit lower mean parent-child bandwidth than ESM
(in these simulations, we measure parent-child, not sender to re-
ceiver, bottleneck bandwidth). Among the TAG configurations,
TAG-200 achieves higher mean bandwidth than TAG-100, which
gives higher mean bandwidth than TAG-50. Since TAG only considers
bandwidth as a secondary metric, it does not consider band-width
in its primary tree construction choices. This result shows
that bwthresh in TAG must be chosen carefully. A tradeoff between
RDP and bandwidth is clearly observed.
In addition, note that fanout of nodes in AS-level topologies
is higher than that in router-level topologies. The minus-k path
matching algorithm in TAG increases the RDP of the nodes which
can no longer take a high-degree node as a parent. The available
bandwidth at the high-degree node is reduced by the possibly large
number of children. TAG with a bwthresh of zero dramatically reduces
the RDP (to 1.5 for AS97 and 1.6 for Inet97) at the expense
of significant decrease in mean bandwidth.
Total link stress values do not widely vary for TAG-50, TAG-
100, TAG-200, and ESM. However, the maximum stress of TAG-
50 is higher than the maximum stress of ESM on AS97 and Inet97.
The maximum stress of TAG-50 and ESM on AS98 and Inet98 are
similar. With a small bwthresh for TAG-50, TAG allows a node to
have a large number of children, which results in a high maximum
stress. The maximum stress decreases from TAG-50 to TAG-100 to
TAG-200. As previously discussed, running the same simulations
for TAG with larger bwthresh values reduces the maximum stress.
4.3 Economies of Scale Factor
We compute overlay tree cost via simulations, in order to validate
our analytical results from section 3.3. In order to compare
results, we assume that one hop used by one point-to-point transfer
represents a unit of bandwidth. We therefore add the total stress
values for all router-to-router links, and use this quantity to denote
tree cost. We run three sets of simulations for unicast, TAG, and
IP multicast on the TS2 configuration with 1280 end systems. The
complete (not minus-k) path matching TAG is used. This is done
to give a fair comparison of simulation results with the analytical
results, which modeled complete path matching. The simulation
results show that unicast, TAG, and IP multicast cost 16627, 4574,
and 1265 respectively. We also plot the normalized overlay tree
cost of TAG for a variety of group sizes (using the same methodology
as in [13]) in figure 20. The normalized overlay tree cost
L_TAG(n)/û is defined as in section 3.3. The figure shows that
the normalized overlay tree cost follows a power law in n before
the curves stabilize. The overlay tree cost stabilizes with tree
saturation, as with IP multicast. This is consistent with our modeling
results.

Figure 20: Overlay tree cost versus group size (log-log scale)
5. RELATED WORK
End System Multicast (or Narada) [11, 12] is a clever overlay
multicast protocol targeting sparse groups, such as audio and video
conferencing groups. End systems in End System Multicast (ESM)
exchange group membership information and routing information,
build a mesh, and finally run a DVMRP-like protocol to construct a
multicast forwarding tree. The authors show that it is important to
consider both bandwidth (primarily) and latency, when constructing
conferencing overlays. Other application-level multicast architectures
include ScatterCast [9], Yoid [18], and ALMI [32]. These architectures
either optimize delay or optimize bandwidth. In par-
ticular, Overcast [24] provides scalable and reliable single-source
overlay multicast using bandwidth as a primary metric.
More recently, Content-Addressable Network (CAN)-based multicast
[34] was proposed to partition member nodes into bins using
proximity information obtained from DNS and delay measure-
ments. Node degree constraints and diameter bounds in the constructed
overlay multicast network are employed in [39]. Liebeherr
et al. investigate Delaunay triangulations for routing in overlay networks
in [28]. A prefix-based overlay routing protocol is used in
Bayeux [42]. Hierarchical approaches to improve scalability are
also currently being investigated by several researchers. A protocol
that was theoretically proven to build low diameter and low degree
peer-to-peer networks was recently described in [30].
In addition to overlay multicast proposals, several recent studies
are related to the TAG approach. A unicast-based protocol for
multicast with limited router support (that includes some ideas that
inspired TAG) is the REUNITE protocol [40]. Overlay networks
that detect performance degradation of current routing paths and
re-route through other end systems include Detour and RON [3].
Jagannathan and Almeroth [23] propose an algorithm which uses
multicast tree topology information (similar to the manner in which
we exploit path information in TAG) and loss reports from receivers
for multicast congestion control.
6. CONCLUSIONS AND FUTURE WORK
We have designed and studied a heuristic topology-aware application-level
multicast protocol called TAG. TAG is a single-source or core-based
multicast protocol that uses network topology information to
construct an overlay network with low delay penalty and a limited
number of identical packets. Bandwidth is also considered in tree
construction as a secondary metric. TAG, however, works best with
high quality underlying routes, and assumes low delay on the last
hop(s) to end systems. We have studied the properties of TAG, and
analyzed its economies of scale factor, compared to both unicast
and IP multicast. Simulation results on the Transit-Stub model (GT-
ITM), Inet, and NLANR data indicate the effectiveness of TAG in
building efficient trees for a large number of group members.
We are currently extending TAG to incorporate a tight bandwidth
constraint and delay constraints. With dynamically varying values
of the path deviation parameter k and the bandwidth threshold
bwthresh, a new member can find a better parent, in terms of both
latency and bandwidth. We are also considering a hierarchical approach
for increasing adaptivity and scalability. This includes using
partial topology in a subsequence matching algorithm. We will
extend TAG to include other QoS parameters such as power availability
in wireless nodes. In addition, we will incorporate TAG into
two different applications (a multi-player online game and a video
streaming application), and conduct experiments for evaluating the
practical aspects and performance of a TAG implementation in the
Internet.
7. ACKNOWLEDGMENTS
The authors would like to thank the NOSSDAV 2002 reviewers
for their valuable comments that helped improve the paper. This
research is sponsored in part by the Purdue Research Foundation,
and the Schlumberger Foundation technical merit award.
8. REFERENCES
--R
Performance Measurement Tools Taxonomy.
Multicast Tree Structure and the Power Law.
Resilient Overlay Networks.
RATES: A server for MPLS traffic engineering.
Monitoring OSPF routing.
Core based trees (CBT): An architecture for scalable multicast routing.
Towards Capturing Representative AS-Level Internet Topologies
An Architecture for Internet Content Distribution as an Infrastructure Service.
The Origin of Power Laws in Internet Topologies Revisited.
Enabling Conferencing Applications on the Internet using an Overlay Multicast Architecture.
A Case for End System Multicast.
Pricing Multicast Communications: A Cost-Based Approach
Multicast routing in datagram inter-networks and extended LANs
Deployment Issues for the IP Multicast Service and Architecture.
On Power-Law Relationships of the Internet Topology
IP network configuration for traffic engineering.
Yoid: Your Own Internet Distribution
A Global Internet Host Distance Estimation Service.
Estimating Router ICMP Generation Delays.
Using Tree Topology for Multicast Congestion Control.
Reliable multicasting with an overlay network.
Inet: Internet Topology Generator.
Internet Routing Instability.
A Tool for Measuring Bottleneck Link Bandwidth.
Building Low-Diameter P2P Networks
ALMI: an Application Level Multicast Infrastructure.
Scaling of multicast trees: Comments on the Chuang-Sirbu scaling law
Detour: a Case for Informed Internet Routing and Transport.
The End-to-End Effects of Internet Path Selection
Network Address Translator (NAT)-Friendly Application Design Guidelines
An OSPF Topology Server: Design and Evaluation.
Routing in Overlay Multicast Networks.
REUNITE: A Recursive Unicast Approach to Multicast.
How to model an internetwork.
An Architecture for Scalable and Fault-tolerant Wide-area Data Dissemination
--TR
Multicast routing in datagram internetworks and extended LANs
Core based trees (CBT)
End-to-end routing behavior in the Internet
Scaling of multicast trees
On power-law relationships of the Internet topology
A case for end system multicast (keynote address)
Bayeux
Enabling conferencing applications on the internet using an overlay muilticast architecture
Resilient overlay networks
IDMaps
Using Tree Topology for Multicast Congestion Control
Building Low-Diameter P2P Networks
--CTR
Sonia Fahmy , Minseok Kwon, Characterizing overlay multicast networks and their costs, IEEE/ACM Transactions on Networking (TON), v.15 n.2, p.373-386, April 2007
K. K. To , Jack Y. B. Lee, Parallel overlays for high data-rate multicast data transfer, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.1, p.31-42, 17 January 2007
Xing Jin , Kan-Leung Cheng , S.-H. Gary Chan, Scalable island multicast for peer-to-peer streaming, Advances in Multimedia, v.2007 n.1, p.10-10, January 2007
Yongjun Li , James Z. Wang, Cost analysis and optimization for IP multicast group management, Computer Communications, v.30 n.8, p.1721-1730, June, 2007
Yi Cui , Klara Nahrstedt, High-bandwidth routing in dynamic peer-to-peer streaming, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Chao Gui , Prasant Mohapatra, Overlay multicast for MANETs using dynamic virtual mesh, Wireless Networks, v.13 n.1, p.77-91, January 2007
Chiping Tang , Philip K. McKinley, Topology-aware overlay path probing, Computer Communications, v.30 n.9, p.1994-2009, June, 2007
Rongmei Zhang , Y. Charlie Hu, Borg: a hybrid protocol for scalable application-level multicast in peer-to-peer networks, Proceedings of the 13th international workshop on Network and operating systems support for digital audio and video, June 01-03, 2003, Monterey, CA, USA
Mojtaba Hosseini , Nicolas D. Georganas, Design of a multi-sender 3D videoconferencing application over an end system multicast protocol, Proceedings of the eleventh ACM international conference on Multimedia, November 02-08, 2003, Berkeley, CA, USA
Amit Sehgal , Kenneth L. Calvert , James Griffioen, A flexible concast-based grouping service, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.14, p.2532-2547, 5 October 2006
Tackling group-to-tree matching in large scale group communications, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.11, p.3069-3089, August, 2007
Mojtaba Hosseini , Nicolas D. Georganas, End system multicast protocol for collaborative virtual environments, Presence: Teleoperators and Virtual Environments, v.13 n.3, p.263-278, June 2004
Minseok Kwon , Sonia Fahmy, Path-aware overlay multicast, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.47 n.1, p.23-45, 14 January 2005 | routing;overlay networks;application level multicast;topology;network |
507695 | Distributing streaming media content using cooperative networking. | In this paper, we discuss the problem of distributing streaming media content, both live and on-demand, to a large number of hosts in a scalable way. Our work is set in the context of the traditional client-server framework. Specifically, we consider the problem that arises when the server is overwhelmed by the volume of requests from its clients. As a solution, we propose Cooperative Networking (CoopNet), where clients cooperate to distribute content, thereby alleviating the load on the server. We discuss the proposed solution in some detail, pointing out the interesting research issues that arise, and present a preliminary evaluation using traces gathered at a busy news site during the flash crowd that occurred on September 11, 2001. | INTRODUCTION
There has been much work in recent years on the topic of
content distribution. This work has largely fallen into two cat-
egories: (a) infrastructure-based content distribution, and (b)
peer-to-peer content distribution. An infrastructure-based
content distribution network (CDN) (e.g., Akamai) complements
the server in the traditional client-server framework. It
employs a dedicated set of machines to store and distribute
content to clients on behalf of the server. The dedicated in-
frastructure, including machines and networks links, is engineered
to provide a high level of performance guarantees.
On the other hand, peer-to-peer content distribution relies
on clients to host content and distribute it to other clients.
The P2P model replaces rather than complements the client-server
framework. Typically, there is no central server that
holds content. Examples of P2P content distribution systems
include Napster and Gnutella.
In this paper, we discuss Cooperative Networking (Coop-
Net), an approach to content distribution that combines aspects
of infrastructure-based and peer-to-peer content distri-
bution. Our focus is on distributing streaming media content,
both live and on-demand. Like infrastructure-based content
distribution, we seek to complement rather than replace the
traditional client-server framework. Specifically, we consider
the problem that arises when the server is overwhelmed by the
volume of requests from its clients. For instance, a news site
may be overwhelmed because of a large "flash crowd" caused
by an event of widespread interest, such as a sports event or an
earthquake. A home computer that is webcasting a birthday
For more information, please visit the CoopNet project Web
page at http://www.research.microsoft.com/~padmanab/projects/CoopNet/.
party live to friends and family might be overwhelmed even
by a small number of clients because of its limited network
bandwidth. In fact, the large volume of data and the relatively
high bandwidth requirement associated with streaming
media content increases the likelihood of the server being
overwhelmed in general. Server overload can cause significant
degradation in the quality of the streaming media content
received by clients.
CoopNet addresses this problem by having clients cooperate
with each other to distribute content, thereby alleviating the
load on the server. In the case of on-demand content, clients
cache audio/video clips that they viewed in the recent past.
During a period of overload, the server redirects new clients to
other clients that had downloaded the content previously. In
the case of live streaming, the clients form a distribution tree
rooted at the server. Clients that receive streaming content
from the server in turn stream it out to one or more of their
peers.
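A minimal sketch of the server-side redirection logic for the on-demand case might look as follows (our own illustration; names such as recent_downloaders and is_overloaded are assumptions, not part of the paper):

    # Hypothetical sketch of CoopNet-style redirection during server overload.
    def handle_request(server, client, clip_id):
        if not server.is_overloaded():
            return server.serve(client, clip_id)      # normal client-server path
        peers = server.recent_downloaders.get(clip_id, [])
        if peers:
            return redirect(client, peers)            # peers serve the cached clip
        return server.serve(client, clip_id)          # no peer has it; serve directly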
The key distinction between CoopNet and pure P2P systems
like Gnutella is that CoopNet complements rather than
replaces the client-server framework of the Web. There is still
a server that hosts content and (directly) serves it to clients.
CoopNet is only invoked when the server is unable to handle
the load imposed by clients. The presence of a central server
simplifies the task of locating content. In contrast, searching
for content in a pure P2P system entails an often more
expensive distributed search [20, 21, 24].
Individual clients may only participate in CoopNet for a
short period of time, say just a few minutes, which is in contrast
to the much longer participation times reported for systems
such as Napster and Gnutella [23]. For instance, in the
case of live streaming, a client may tune in for a few minutes
during which time it may be willing to help distribute the con-
tent. Once the client tunes out, it may no longer be willing to
participate in CoopNet. This calls for a content distribution
mechanism that is robust against interruptions caused by the
frequent joining and leaving of individual peers.
To address this problem, CoopNet employs multiple description
coding (MDC). The streaming media content, whether
live or on-demand, is divided into multiple sub-streams using
MDC and each sub-stream is delivered to the requesting client
via a different peer. This improves robustness and also helps
balance load amongst peers.
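The delivery pattern can be illustrated with a toy striping scheme (our own sketch; real MDC is a coding technique rather than simple packet striping, so this only shows how sub-streams map to peers):

    # Hypothetical sketch: assign M descriptions of a stream to M distinct peers.
    def assign_descriptions(packets, peers):
        """Round-robin packets into one sub-stream per peer; losing a peer then
        degrades quality gracefully instead of stalling the whole stream."""
        substreams = {p: [] for p in peers}
        for i, pkt in enumerate(packets):
            substreams[peers[i % len(peers)]].append(pkt)
        return substreams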
The rest of this paper is organized as follows. In Section 2,
we discuss related work. In Section 3, we discuss the operation
of CoopNet for live and on-demand content, and present an
outline of multiple description coding. In Section 4, we use
traces from the flash crowd that occurred on September 11,
2001 to evaluate how well CoopNet would have performed for
live and on-demand content. We present our conclusions in
Section 5.
2. RELATED WORK
As noted in Section 1, two areas of related work are infrastructure-based
CDNs and peer-to-peer systems. Infrastructure-based
CDNs such as Akamai employ a dedicated network of
thousands of machines in distributed locations, often with
leased links inter-connecting them, to serve content on behalf
of servers. When a client request arrives (be it for streaming
media or other content), the CDN redirects the client to a
nearby replica server. The main limitation of infrastructure-based
CDNs is that their cost and scale is only appropriate for
large commercial sites such as CNN and MSNBC. A second
issue is that it is unclear how such a CDN would fare in the
face of a large flash crowd that causes a simultaneous spike
in traffic at many or all of the sites hosted by the CDN.
Peer-to-peer systems such as Napster and Gnutella depend
on little or no dedicated infrastructure. 1 There is, however,
the implicit assumption that the individual peers participate
for a significant length of time (for instance, [23] reports a
median session duration of about an hour both for Napster
and for Gnutella). In contrast, CoopNet seeks to operate in
a highly dynamic situation such as a flash crowd where an
individual client may only participate for a few minutes. The
disruption that this might cause is especially challenging for
streaming media compared to static file downloads, which is
the primary focus of Napster and Gnutella. The short life-time
of the individual nodes poses a challenge to distributed
search schemes such as CAN [20], Chord [24], Pastry [21], and
Tapestry [29].
Work on application-level multicast (e.g., ALMI [17], End
System Multicast [3], Scattercast [2]) is directly relevant to
the live streaming aspect of CoopNet. CoopNet could benefit from the efficient tree construction algorithms developed in previous work. Our focus here, however, is on using real traces to evaluate the efficacy of CoopNet. Thus we view our work as
complementing existing work on application-level multicast.
We also consider the on-demand streaming case, which does
not quite fit in the application-level multicast framework.
Existing work on distributed streaming (e.g., [13]) is also
directly relevant to CoopNet. A key distinction of our work
is that we focus on the disruption and packet loss caused by node arrivals and departures, which is likely to be significant in a highly dynamic environment. Using traces from the September 11 flash crowd, we are able to evaluate this issue in a realistic setting.
Systems such as SpreadIt [5], Allcast [31] and vTrails [33]
are perhaps closest in spirit to our work. Like CoopNet, they
attempt to deliver streaming content using a peer-to-peer ap-
proach. SpreadIt differs from CoopNet in a couple of ways.
First, it uses only a single distribution tree and hence is vulnerable
to disruptions due to node departures. Second, the
tree management algorithm is such that the nodes orphaned
by the departure of their parent might be bounced around
between multiple potential parents before settling on a new
parent. In contrast, CoopNet uses a centralized protocol (Sec-
tion 3.3), which enables much quicker repairs.
It is hard for us to do a specific comparison with Allcast and vTrails, in the absence of published information.

1 Napster has central servers, but these only hold indices, not content.
3. COOPERATIVE NETWORKING (COOPNET)
In this section, we present the details of CoopNet as it
applies to the distribution of streaming media content. We
first consider the live streaming case, where we discuss and
analyze multiple description coding (MDC) and distribution
tree management. We then turn to the on-demand streaming
case.
3.1 Live Streaming
Live streaming refers to the synchronized distribution of
streaming media content to one or more clients. (The content
itself may either be truly live or pre-recorded.) Therefore
multicast is a natural paradigm for distributing such content.
Since IP multicast is not widely deployed, especially at the
inter-domain level, CoopNet uses application-level multicast
instead.
A distribution tree rooted at the server is formed, with
clients as its members. Each node in the tree transmits the
received stream to each of its children using unicast. The out-degree
of each node is constrained by the available outgoing
bandwidth at the node. In general, the degree of the root
node (i.e., the server) is likely to be much larger than that of
the other nodes because the server is likely to have a much
higher bandwidth than the individual client nodes.
One issue is that the peers in CoopNet are far from being
dedicated servers. Their ability and willingness to participate
in CoopNet may fluctuate with time. For instance, a client's
participation may terminate when the user tunes out of the
live stream. In fact, even while the user is tuned in to the live
stream, CoopNet-related activity on his/her machine may be
scaled down or stopped immediately when the user initiates
other, unrelated network communication. Machines can also
crash or become disconnected from the network.
With a single distribution tree, the departure or reduced
availability of a node has a severe impact on its descendants.
The descendants may receive no stream at all until the tree
has been repaired. This is especially problematic because
node arrivals and departures may be quite frequent in flash crowd situations. To reduce the disruption caused by node departures, we advocate having multiple distribution trees spanning a given set of nodes and transmitting a different
MDC description down each tree. This would diminish the
chances of a node losing the entire stream (even temporarily)
because of the departure of another node. We discuss this
further in Section 3.2.
The distribution trees need to be constantly maintained as
new clients join and existing ones leave. In Section 3.3, we
advocate a centralized approach to tree management, which
exploits the availability of a resourceful server node, coupled
with client cooperation, to greatly simplify the problem.
3.2 Multiple Description Coding (MDC)
Multiple description coding is a method of encoding the
audio and/or video signal into M > 1 separate streams, or
descriptions, such that any subset of these descriptions can
be received and decoded into a signal with distortion (with
respect to the original signal) commensurate with the number
of descriptions received; that is, the more descriptions re-
ceived, the lower the distortion (i.e., the higher the quality) of
the reconstructed signal.

[Figure 1: (a) Multiple description coding. (b) Layered coding.]

[Figure 2: Priority encoded packetization of a group of frames (GOF). Any m out of M packets can recover the initial $R_m$ bits of the bit stream for the GOF.]

This differs from layered coding 2 in
that in MDC every subset of descriptions must be decodable,
whereas in layered coding only a nested sequence of subsets
must be decodable, as illustrated in Figure 1. For this extra flexibility, MDC incurs a modest performance penalty relative
to layered coding, which in turn incurs a slight performance
penalty relative to single description coding.
A simple MDC system for video might be the following.
The original video picture sequence is demultiplexed into M subsequences by putting every Mth picture, beginning with picture m, into the mth subsequence, for m = 1, ..., M. The subsequences are independently encoded to form the M descriptions. Any subset of these M descriptions can be decoded and the pictures can be remultiplexed to reconstruct a video sequence whose frame rate is essentially proportional to the number of descriptions received.
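To make this concrete, here is a minimal Python sketch of the temporal demultiplexing scheme just described (the frames list is a hypothetical stand-in for the encoded picture sequence; this sketches the stream structure only, not an actual codec):

def mdc_demux(frames, M):
    # Subsequence m gets pictures m, m+M, m+2M, ...
    return [frames[m::M] for m in range(M)]

def mdc_remux(received, M, total_frames):
    # received maps description index -> its subsequence; pictures from
    # missing descriptions stay None, so the reconstructed frame rate is
    # proportional to the number of descriptions received.
    out = [None] * total_frames
    for m, sub in received.items():
        out[m::M] = sub
    return out

# Example: 8 pictures, 4 descriptions, description 2 lost in transit.
subs = mdc_demux(list(range(8)), 4)
print(mdc_remux({m: subs[m] for m in (0, 1, 3)}, 4, 8))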
More sophisticated forms of multiple description coding have been investigated over the years; some highlights are [25, 26, 27, 6]. For an overview see [7]. A particularly efficient and practical system is based on layered audio or video coding [18], Reed-Solomon coding [28], priority encoded transmission [1], and optimized bit allocation [4, 19, 11, 12]. In such a system the audio and/or video signal is partitioned into groups of frames (GOFs), each group having a duration of a second or so. Each GOF is then independently encoded, error protected, and packetized into M packets, as shown in Figure 2. If any $m \le M$ packets are received, then the initial $R_m$ bits of the bit stream for the GOF can be recovered, resulting in distortion $D(R_m)$, where $0 = R_0 \le R_1 \le \cdots \le R_M$ and consequently $D(R_0) \ge D(R_1) \ge \cdots \ge D(R_M)$. Thus all M packets are equally important; only the number of received packets determines the reconstruction quality of the GOF. Further, the expected distortion is $\sum_{m=0}^{M} p(m) D(R_m)$, where p(m) is the probability that m out of M packets are received. Given p(m) and the operational distortion-rate function D(R), this expected distortion can be minimized using a simple procedure that adjusts the rate points $R_0, R_1, \ldots, R_M$ subject to a constraint on the packet length [4, 19, 11, 12].

2 Layered coding is also known as embedded, progressive, or scalable coding.

[Figure 3: Construction of MDC streams from packetized GOFs (the packets for GOF i-1, GOF i, GOF i+1, ... form each description).]
By sending the mth packet in each GOF to the mth description, the entire audio and/or video signal is represented by M descriptions, where each description is a sequence of packets transmitted at a rate of 1 packet per GOF, as illustrated in Figure 3. It is a very simple matter to generate these optimized M descriptions on the fly, assuming that the signal is already coded with a layered codec.
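As an illustration of the packet construction, the following Python sketch lays out the priority-encoded packetization of a single GOF. The rate points are taken in bytes for simplicity, and the Reed-Solomon parity symbols that actually make any m packets sufficient are stubbed out with placeholder bytes; gof_bits and rates are hypothetical inputs, not values from the paper:

def packetize_gof(gof_bits, rates, M):
    # rates = [R_0, R_1, ..., R_M] with R_0 = 0 and R_M = len(gof_bits):
    # any m of the M packets should recover the first rates[m] bytes.
    assert len(rates) == M + 1 and rates[0] == 0
    packets = [bytearray() for _ in range(M)]
    for m in range(1, M + 1):
        # Segment m (bytes rates[m-1]..rates[m]) must survive the loss of
        # any M - m packets, so it is striped over m data fragments plus
        # M - m parity fragments, one fragment per packet.
        segment = gof_bits[rates[m - 1]:rates[m]]
        frag_len = -(-len(segment) // m)            # ceiling division
        for k in range(M):
            if k < m:
                frag = segment[k * frag_len:(k + 1) * frag_len]
                packets[k].extend(frag.ljust(frag_len, b"\0"))
            else:
                packets[k].extend(b"P" * frag_len)  # placeholder for parity
    return packets

# Example: a 1000-byte GOF split into 4 equally important packets.
pkts = packetize_gof(bytes(1000), [0, 100, 300, 600, 1000], 4)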
3.2.1 CoopNet Analysis: Quality During Multiple Failures
Let us consider how multiple description coding achieves
robustness in CoopNet. Suppose that the server encodes its
AV signal into M descriptions as described above, and transmits
the descriptions down M different distribution trees,
each rooted at the server. Each of the distribution trees conveys
its description to all N destination hosts. Ordinarily, all
destination hosts receive all M descriptions. However, if any of the destination hosts fail (or leave the session), then all of the hosts that are descendants of the failed hosts in the mth distribution tree will not receive the mth description. The number of descriptions that a particular host will
receive depends on its location in each tree relative to the
failed hosts. Specifically, a host n will receive the mth description if none of its ancestors in the mth tree fail. This happens with probability $(1-\epsilon)^{A_n}$, where $A_n$ is the number of the host's ancestors and $\epsilon$ is the probability that a host fails (assuming independent failures). If hosts are placed at random sites in each tree, then the unconditional probability that any given host will receive its mth description is the average $q_N = \frac{1}{N}\sum_{n=1}^{N}(1-\epsilon)^{A_n}$ over all hosts in the tree. Thus the number of descriptions that a particular host will receive is randomly distributed according to a Binomial$(M, q_N)$ distribution, i.e., $p(m) = \binom{M}{m} q_N^m (1-q_N)^{M-m}$. Hence for large M, the fraction of descriptions received is approximately Gaussian with mean $q_N$ and variance $q_N(1-q_N)/M$. This can be seen in Figure 4, which shows (in bars) the distribution p(m) for various values of N. In the figure, to compute $q_N$ we assumed balanced binary trees with N nodes and a fixed probability of host failure $\epsilon$. Note that as N grows large, performance slowly degrades, because the depth of the tree (and hence $1 - q_N$) grows like $\log_2 N$.

The distribution p(m) can be used to optimize the multiple description code by choosing the rate points $R_0, R_1, \ldots, R_M$ to minimize the expected distortion $\sum_{m=0}^{M} p(m) D(R_m)$ subject to a packet length constraint. Figure 4 shows (in lines) the quality associated with each p(m), measured as SNR in dB, as a function of the number of received descriptions, m. In the figure, to compute the rate points we assumed an operational distortion-rate function $D(R) = \sigma^2 2^{-2R}$, which is asymptotically typical for any source with variance $\sigma^2$, where R is expressed in bits per symbol, and we assumed a fixed packet length constraint.

[Figure 4: SNR in dB (line) and probability distribution (bars) as a function of the number of descriptions received.]
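These quantities are straightforward to compute numerically. The sketch below (our code, not the authors') evaluates $q_N$ for a balanced binary tree, treating a host at depth d as having d failure-prone ancestors (a simplification, since the root is the server), along with the binomial p(m) and the expected distortion under the $D(R) = \sigma^2 2^{-2R}$ model; the rate points are taken as given rather than optimized:

import math

def q_balanced_binary(N, eps):
    # Average of (1 - eps)^(number of ancestors) over all hosts, filling
    # a balanced binary tree level by level (2^d hosts at depth d).
    total, acc, d = 0, 0.0, 0
    while total < N:
        at_level = min(2 ** d, N - total)
        acc += at_level * (1 - eps) ** d
        total += at_level
        d += 1
    return acc / N

def p_of_m(M, q):
    # Binomial(M, q): probability of receiving exactly m descriptions.
    return [math.comb(M, m) * q ** m * (1 - q) ** (M - m)
            for m in range(M + 1)]

def expected_distortion(M, q, rates, sigma2=1.0):
    # E[D] = sum_m p(m) D(R_m) with D(R) = sigma^2 * 2^(-2R),
    # R in bits per symbol.
    p = p_of_m(M, q)
    return sum(p[m] * sigma2 * 2.0 ** (-2 * rates[m]) for m in range(M + 1))

# Example: 10,000 hosts, 1% failure probability, 16 descriptions.
q = q_balanced_binary(10_000, 0.01)
print(q, expected_distortion(16, q, [m / 4 for m in range(17)]))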
3.2.2 CoopNet Analysis: Quality During Single Failure
The time it takes to repair the trees is called the repair time. If a fraction $\epsilon$ of the hosts fails during each repair time, then the average length of time that a host participates in the session is $1/\epsilon$ repair times. When the number of hosts is small compared to $1/\epsilon$, then many repair times may pass between single failures. In this case, most of the time all hosts receive all descriptions, and quality is excellent. Degradation occurs only when a single host fails. Thus, it may be preferable to optimize the MDC system by minimizing the distortion expected during the repair interval in which the single host fails, rather than minimizing the expected distortion over all time. To analyze this case, suppose that a single host fails randomly. A remaining host n will not receive the mth description if the failed host is an ancestor of host n in the mth tree. This happens with probability $A_n/(N-1)$, where $A_n$ is the number of ancestors of host n. Since hosts are placed at random sites in each tree, the unconditional probability that any given host will receive its mth description is the average $q_N = \frac{1}{N}\sum_{n=1}^{N}\left(1 - A_n/(N-1)\right)$. Thus the number of descriptions that a particular host will receive is randomly distributed according to a Binomial$(M, q_N)$ distribution. Equivalently, the expected number of hosts that receive m descriptions during the failure is $(N-1)p(m)$, where $p(m) = \binom{M}{m} q_N^m (1-q_N)^{M-m}$. This distribution can be used to optimize the multiple description code for the failure of a single host. Figure 5 illustrates this distribution and the corresponding optimized quality as a function of the number of descriptions received. Note that as M increases, for fixed N, the distribution again becomes Gaussian. One implication of this is that the expected number of hosts that receive 100% of the descriptions decreases. However, it is also the case that the expected number of hosts that receive fewer than 50% of the descriptions decreases, resulting in an increase in quality on average. Further, as N increases, for fixed M, performance becomes nearly perfect, since $q_N \approx 1 - (\log_2 N)/N$, which goes to 1. However, for large N, it becomes increasingly difficult to repair the trees before a second failure occurs.

[Figure 5: SNR in dB (line) and probability distribution (bars) as a function of the number of descriptions received during the failure of a single host.]
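The single-failure counterpart is even simpler to compute; a short sketch (again ours, with ancestor_counts coming from the actual trees in practice):

def q_single_failure(ancestor_counts):
    # Host n keeps a given description unless the one failed host
    # (uniform over the other N - 1 hosts) is one of its ancestors,
    # which happens with probability A_n / (N - 1).
    N = len(ancestor_counts)
    return sum(1 - a / (N - 1) for a in ancestor_counts) / N

# For a balanced binary tree the average ancestor count is about
# log2(N), so q_N is about 1 - log2(N)/N and approaches 1 for large N.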
3.2.3 Further Analyses
These same analyses can be extended to d-ary trees. It is not difficult to see that for $d \ge 2$, a d-ary tree with $N^{\log_2 d}$ nodes has the same height, and hence the same performance, as a binary tree with only N nodes. Thus when each node has a large out-degree, i.e., when each host has a large uplink bandwidth, much larger populations can be handled. Interestingly, the analysis also applies when $d = 1$: if each host can devote only as much uplink bandwidth as its downlink video bandwidth (which is typically the case for modem users), then the descriptions can still be distributed peer-to-peer by arranging the hosts in a chain, like a bucket brigade. It can be shown that when the order of the hosts in the chain is random and independent for each description, then for a single failure the number of hosts receiving m out of M descriptions is binomially distributed with parameters M and $q_N$, where $q_N$ is computed as in the single-failure analysis above. Although this holds for any N, it is most suitable for smaller N. For larger N, it may not be possible to repair the chains before other failures occur. In fact, as N goes to infinity, the probability that any host receives any descriptions goes to zero.
In this section we have proposed optimizing the MDC system
to the unconditional distribution p(m) derived by averaging
over trees and hosts. Given any set of trees, however,
the distribution of the number of received descriptions varies
widely across the set of hosts as a function of their upstream
connectivity. By optimizing the MDC system to the unconditional
distribution p(m), we are not minimizing the expected
distortion for any given host, but rather minimizing the sum
of the expected distortions across all hosts, or equivalently,
minimizing the expected sum of the distortions over all hosts.
3.3 Tree Management
We now discuss the problem of constructing and maintaining
the distribution trees in the face of frequent node arrivals
and departures. There are many (sometimes conflicting) requirements for the tree management algorithm:
1. Short and wide tree: The trees should be as short
as possible so as to minimize the latency of the path
from the root to the deepest leaf node and to minimize
the probability of disruption due to the departure of an
ancestor node. For it to be short, the tree should be
balanced and as wide as possible, i.e., the out-degree
of each node should be as much as its bandwidth will
allow. However, making the out-degree large may leave
little bandwidth for non-CoopNet (and higher priority)
traffic emanating from the node. Interference due to such traffic could cause a high packet loss rate for the
CoopNet streams.
2. Efficiency versus tree diversity: The distribution trees should be efficient in that their structure should closely reflect the underlying network topology. So, for instance, if we wish to connect three nodes, one each located in New York (NY), San Francisco (SF), and Los Angeles (LA), the structure NY→SF→LA would likely be far more efficient than SF→NY→LA (→ denotes a parent-child relationship). However, striving for efficiency may interfere with the equally important goal of having diverse distribution trees. The effectiveness of the MDC-based distribution scheme described in Section 3.2 depends critically on the diversity of the distribution trees.
3. Quick join and leave: The processing of node joins
and leaves should be quick. This would ensure that the
interested nodes would receive the streaming content
as quickly as possible (in the case of a join) and with
minimal interruption (in the case of a leave). However,
the quick processing of joins and leaves may interfere
with the e-ciency and balanced tree goals listed above.
4. Scalability: The tree management algorithm should
scale to a large number of nodes, with a correspondingly
high rate of node arrivals and departures. For instance, in the extreme case of the flash crowd at MSNBC on September 11, the average rate of node arrivals and departures was 180 per second while the peak rate was
about 1000 per second.
With these requirements in mind, we now describe our approach
to tree construction and management. We first describe
the basic protocol and then discuss optimizations.
3.3.1 Basic Protocol
We exploit the presence of a resourceful server node to
build a simple and efficient protocol to process node joins and leaves. While it is centralized, we argue that this protocol can scale to work well in the face of extreme flash crowd situations such as the one that occurred on September 11. Despite the flash crowd, the server is not overloaded since the burden of
distributing content is shared by all peers. Centralization also
simplifies the protocol greatly, and consequently makes joins
and leaves quick. In general, a criticism of centralization is
that it introduces a single point of failure. However, in the
context of CoopNet, the point of centralization is the server,
which is also the source of data. If the source (server) fails, it
may not really matter that the tree management also breaks
down. Also, recall from Section 1 that the goal of CoopNet is
to complement, not replace, the client-server system.
The server has full knowledge of the topology of all of the
distribution trees. When a new node wishes to join the sys-
tem, it first contacts the server. The new node also informs the server of its available network bandwidth to serve future
downstream nodes. The server responds with a list of designated
parent nodes, one per distribution tree. The designated
parent node in each tree is chosen as follows. Starting at the
server, we work our way down the tree until we get to a level
where there are one or more nodes that have the necessary
spare capacity (primarily network bandwidth) to serve as the
parent of the new node. (The server could itself be the new
parent if it has sufficient spare capacity, which it is likely to
have during the early stages of tree construction.) The server
then picks one such node at random to be the designated parent
of the new node. This top-down procedure ensures a short
and largely balanced tree. The randomization helps make the
trees diverse. Upon receiving the server's message, the new
node sends (concurrent) messages to the designated parent
nodes to get linked up as a child in each distribution tree. In
terms of messaging costs, the server receives one message and
sends one. Each designated parent receives one message and
sends one (an acknowledgement). The new node sends and receives M messages, where M is the number of MDC descriptions (and hence distribution trees) used.
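A minimal sketch of this join procedure for one tree follows; the server would keep M such trees and run join once per tree. The data structures and names are our own simplifications, not the actual implementation:

import random

class Tree:
    def __init__(self, server_degree):
        self.children = {"server": []}
        self.capacity = {"server": server_degree}
        self.levels = [["server"]]                 # nodes grouped by depth

    def join(self, node, degree):
        # Work down the tree until a level has a node with spare
        # capacity, then pick one such node at random as the parent.
        for depth, level in enumerate(self.levels):
            candidates = [n for n in level
                          if len(self.children[n]) < self.capacity[n]]
            if candidates:
                parent = random.choice(candidates)
                self.children[parent].append(node)
                self.children[node] = []
                self.capacity[node] = degree
                if depth + 1 == len(self.levels):
                    self.levels.append([])
                self.levels[depth + 1].append(node)
                return parent
        raise RuntimeError("no spare capacity in the tree")

# Example: 16 trees, a server of degree 100, clients of degree 4.
trees = [Tree(server_degree=100) for _ in range(16)]
parents = [t.join("client-1", degree=4) for t in trees]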
Node departures are of two kinds: graceful departures and
node failures. In the former case, the departing node informs
the server of its intention to leave. For each distribution tree,
the server identifies the children of the departing node and
executes a join operation on each child (and implicitly the
subtree rooted at the child) using the top-down procedure
described above. The messaging cost for the server would at
most be P
receives, where d i is the number
of children of the departing node in the ith distribution
tree. (Note that the cost would be somewhat lower in general
because a few of the children may be in common across multiple
trees.) Each child sends and receives M messages.
To reduce its messaging load, the server could make the determination
of the designated parent for each child in each
tree and then leave it to another node (such as the departing
node, if it is still available) to convey the information to each
child. In this case, the server would have to send and receive
just one message.
A node failure corresponds to the case where the departing
node leaves suddenly and is unable to notify either the server
or any other node of its departure. This may happen because
of a computer crashing, being turned off, or becoming disconnected
from the network. We present a general approach for
dealing with quality degradation due to packet loss; node failure
is a special case where the packet loss rate experienced by
the descendants of the failed node is 100%. Each node monitors
the packet loss rate it is experiencing in each distribution
tree. When the packet loss rate reaches an unacceptable level
(a threshold that needs to be fine-tuned based on further re-
search), a node contacts its parent to check if the parent is
experiencing the same problem. If so, the source of the problem
(network congestion, node failure, etc.) is upstream of the
parent and the node leaves it to the parent to deal with it.
(The node also sets a sufficiently long timer to take action on
its own in case its parent has not resolved the problem within
a reasonable period of time.) If the parent is not experiencing
a problem or it does not respond, the affected node will contact
the server and execute a fresh join operation for it (and
its subtree) to be moved to a new location in the distribution
tree.
3.3.2 Optimizations
We now discuss a few optimizations of the basic protocol.
The first optimization seeks to make the distribution trees efficient, as discussed above. The basic idea here is to preferentially attach a new node as the child of an existing node that is "nearby" in terms of network distance (i.e., latency). The definition of "nearby" needs to be broad enough to accommodate significant tree diversity. When trying to insert a new node, the server first identifies a (sufficiently large) subset of nodes that are close to the new node. Then, using the randomized top-down procedure discussed in Section 3.3.1, it tries to find a parent for the new node (in each tree) among the set of nearby nodes. Using this procedure, it is quite likely that many of the parents of the new node (in the various distribution trees) will be in the same vicinity, which is beneficial from an efficiency viewpoint. We argue that this also provides sufficient diversity since the primary failure modes we are concerned with are node departures and node failures. So it does not matter much that all of the parents may be located in the same vicinity (e.g., same metropolitan area).
To determine the network distance between two nodes, we use a procedure based on previous work on network distance estimation [14], geographic location estimation [16], overlay construction [20], and finding nearby hosts [8]. Each node determines its network "coordinates" by measuring the network latency (say, using ping) to a set of landmark hosts (about 10 well-distributed landmark hosts should suffice in practice). The coordinate of a node is the n-tuple $(d_1, d_2, \ldots, d_n)$, where n is the number of landmarks. The server keeps track of the coordinates of all nodes currently in the system (this information may need to be updated from time to time). When the new node contacts it, the server finds nearby nodes by comparing the coordinates of the new node with those of existing nodes. This comparison could involve computing the Euclidean distance between the coordinates of two nodes (as in [16]), computing a different distance metric such as the Manhattan distance, or simply comparing the relative ordering of the various landmarks based on the measured latency (as in [20]).
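A small sketch of the coordinate comparison (latency measurement itself is stubbed out; the Manhattan metric and the choice of k are illustrative, not prescribed by the text):

def manhattan(c1, c2):
    return sum(abs(a - b) for a, b in zip(c1, c2))

def nearby_nodes(new_coord, coords, k=10):
    # coords: {node_id: tuple of latencies to the landmark hosts}.
    # Return the k existing nodes closest to the newcomer.
    return sorted(coords, key=lambda n: manhattan(new_coord, coords[n]))[:k]

# Example with made-up 3-landmark coordinates (in milliseconds):
coords = {"a": (10, 50, 90), "b": (12, 48, 95), "c": (200, 30, 40)}
print(nearby_nodes((11, 52, 88), coords, k=2))     # -> ['a', 'b']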
The second optimization is motivated by the observation that it would be beneficial to have more "stable" nodes close to the root of the tree. In this context, "stable" nodes are ones that are likely to participate in CoopNet for a long duration and have good network connectivity (e.g., few disruptions due to competing traffic from other applications). Having such nodes close to the root of the tree would benefit their many descendants. As a background process, the server could identify stable nodes by monitoring their past behavior and migrate them up the tree. Further research is needed to determine the feasibility of identifying stable nodes, the benefits of migrating such nodes up the tree, and the impact this might have on tree diversity.
3.3.3 Feasibility of the Centralized Protocol
The main question regarding the feasibility of the centralized tree management protocol is whether the server can keep up. To answer this question, we consider the September 11 flash crowd at MSNBC, arguably an extreme flash crowd situation. At its peak, there were 18,000 nodes in the system
and the rate of node arrivals and departures was 1000 per
second. 3 (The average numbers were 10000 nodes and 180
arrivals and departures per second.) In our calculations here,
we assume that the number of distribution trees (i.e., the
number of MDC descriptions) is 16 and that on average a
node has 4 children in a tree. We consider various resources
that could become a bottleneck at the server (we only focus
on the impact of tree management on the server):
Memory: To store the entire topology of one tree in
memory, the server would need to store as many pointers
as nodes in the system. Assuming a pointer size of
8 bytes (i.e., a 64-bit machine) and auxiliary data of 24
bytes per node, the memory requirement would be about
576 KB. Since there are 16 trees, the memory requirement
for all trees would be 9.2 MB. In addition, for each
node the server needs to store its network coordinates.
Assuming this is a 10-dimensional vector of delay values
(2 bytes each), the additional memory requirement
would be 360 KB. So the total memory requirement at
the server would be under 10 MB, which is a trivial
amount for any modern machine.
Network bandwidth: Node departures are more expensive than node arrivals, so we focus on departures.
The server needs to designate a new parent in each distribution
tree for each child of the departing node. Assuming
that nodes are identified by their IP addresses (16 bytes assuming IPv6) and that there are 4 children
per tree on average, the total amount of data that the
server would need to send out is 1 KB. If there are
1000 departures per second, the bandwidth requirement
would be 8 Mbps. This is likely to be a small fraction
of the network bandwidth at a large server site such as
MSNBC.
CPU: Node departure involves finding a new set of parents
for each child of the departing node. So the CPU
cost is roughly equal to the number of children of the departing
node times the cost of node insertion. To insert
a node, the server has to scan the tree levels starting
with the root until it reaches a level containing one or
more nodes with the spare capacity to support a new
child. The server picks one such node at random to be
the new parent. Using a simple array data structure to
keep track of the nodes in each level of the tree that have
capacity, the cost of picking a parent at random can
be made (a small) constant. Since the number of levels
in the tree is about log(N ), where N is the number of
nodes in the system, the node insertion cost (per tree)
is O(log(N)). (With the peak node population above and an average of 4 children per node, the depth of the tree will be about 9.)
A departure rate of 1000 per second would result in
64,000 insertions per second (1000 departures times 4
children per departing node times 16 trees). Given that
memory speed by far lags CPU speed, we only focus
on how many memory lookups we can do per insertion.
Assuming a 40 ns memory cycle, we are allowed about 390 memory accesses per insertion, which is likely to be more than sufficient.
3 One reason for the high rate of churn may be that users were discouraged by the degradation in audio/video quality caused by the flash crowd, and so did not stay for long. However, we are not in a position to confirm that this was the case.
In general, the centralized approach can be scaled up (at
least in terms of CPU and memory resources) by having a
cluster of servers and partitioning the set of clients across the
set of server nodes.
We are in the process of benchmarking our implementation
to confirm the rough calculations made above.
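For reference, the back-of-the-envelope numbers above can be reproduced directly; all inputs below are the assumptions stated in the text:

nodes, trees, children_per_tree = 18_000, 16, 4
# Memory: an 8-byte pointer plus 24 bytes of auxiliary data per node.
per_tree_bytes = nodes * (8 + 24)                  # 576,000 B ~ 576 KB
all_trees_mb = per_tree_bytes * trees / 1e6        # ~9.2 MB
coords_kb = nodes * 10 * 2 / 1e3                   # 10 delays x 2 B: 360 KB
# Network: a 16-byte IPv6 address per child per tree, per departure.
bytes_per_departure = children_per_tree * trees * 16   # 1,024 B ~ 1 KB
bandwidth_mbps = bytes_per_departure * 1000 * 8 / 1e6  # ~8.2 Mbps
# CPU: insertions per second and memory accesses allowed per insertion.
insertions_per_sec = 1000 * children_per_tree * trees  # 64,000
accesses_per_insertion = 1 / (insertions_per_sec * 40e-9)  # ~390
print(all_trees_mb, coords_kb, bandwidth_mbps, round(accesses_per_insertion))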
3.3.4 Distributed Protocol
While the centralized tree management protocol appears to be adequate for large flash crowd situations such as that experienced by MSNBC on September 11, it is clear that there are limits to its scalability. For instance, in the future it is conceivable that flash crowds for streaming media content on
the Web will in some cases be as large as television audiences
during highly popular events: hundreds of millions or even
billions of clients. A centralized solution may break down in
such a situation, necessitating an alternative approach to tree
management.
We could leverage recent work on distributed hash tables
(DHTs), such as CAN [20], Chord [24], Pastry [21], and Tapestry
[29], to construct and maintain the trees in a distributed fashion. Briefly, DHTs provide a scalable unicast routing
framework for peer-to-peer systems. A multicast distribution
tree can be constructed using reverse-path forwarding (as in
systems such as Bayeux [30] and Scribe [22]). To construct
multiple (and diverse) distribution trees, each node could be
assigned multiple IDs, one per tree.
There are a number of open research issues. First, while
there exist algorithms to support node joins and leaves, the
dynamic behavior of DHTs is poorly understood. Second,
it is unclear how to incorporate constraints, such as limited
node bandwidth, into the DHT framework. Some systems
such as Pastry maintain multiple alternate routes at each hop.
This should make it easier to construct multicast trees while
accommodating node capacity constraints.
3.4 On-demand Streaming
We now turn to on-demand streaming, which refers to the
distribution of pre-recorded streaming media content on demand
(e.g., when a user clicks on the corresponding link). As
such, the streams corresponding to different users are not syn-
chronized. When the server receives such a request, it starts
streaming data in response if its current load condition per-
mits. However, if the server is overloaded, say because of a flash crowd, it instead sends back a response including a short
list of IP addresses of clients (peers) who have downloaded
(part or all of) the requested stream and have expressed a
willingness to participate in CoopNet. The requesting client
then turns to one or more of these peers to download the
desired content. Given the large volume of streaming media
content, the burden on the server (in terms of CPU, disk, and
network bandwidth) of doing this redirection is quite minimal
compared to that of actually serving the content. So we
believe that this redirection procedure will help reduce server
load by several orders of magnitude.
While the procedure described above is similar to one that
might apply to static file content, there are a couple of important differences arising from the streaming nature of the
content. First, a peer may only have a part of the requested
content because, for instance, the user may have stopped the
stream halfway or skipped over portions. So in its initial
handshake with a peer, a client finds out which part of the requested
content is available at the peer and accordingly plans
to make requests to other peers for the missing content, if any.
A second issue is that, as with the live streaming case, peers
may fail, depart, or scale back their participation in CoopNet
at any time. In contrast with le download, the time-sensitive
nature of streaming media content makes it especially susceptible
to such disruptions. As a solution, we propose the use of
distributed streaming where a stream is divided into a number
of substreams, each of which may be served by a different
peer. Each substream corresponds to a description created
using MDC (Section 3.2). Distributed streaming improves
robustness to disruptions caused by the untimely departure
of peer nodes and/or network connectivity problems with respect
to one or more peers. It also helps distribute load more
evenly among peers.
4. PERFORMANCE EVALUATION
We now present a performance evaluation of CoopNet based
on simulations driven by traces of live and on-demand content
served by MSNBC on September 11, 2001.
4.1 Live Streaming
We evaluate the MDC-based live streaming design using
traces of a 100kbps live stream. The trace started at 18:25
GMT (14:25 EST) and lasted for more than one hour (4000
seconds).
4.1.1 Trace Characteristics
Figure 6 shows the time series of the number of clients simultaneously
tuned in to the live stream. The peak number of
simultaneous clients exceeds 17,000. On average, there are 84 clients departing every second. (We are unable to definitely explain the dip around the 1000-second mark, but it is possibly due to a glitch in the logging process.) Figure 7 shows
the distribution of client lifetimes. Over 70% of the clients
remain tuned in to the live stream for less than a minute. We
suspect that the short lifetimes could be because users were frustrated by the poor quality of the video stream during the flash crowd. If the quality were improved (say, using CoopNet to relieve the server), client lifetimes may well become longer. This, in turn, would increase the effectiveness of CoopNet.
4.1.2 Effectiveness of MDC
We evaluate the impact of MDC-based distribution (Section 3.2) on the quality of the stream received by clients in
the face of client departures. When there are no departures,
all clients receive all of the MDC descriptions and hence perceive
the full quality of the live stream.
We have conducted two simulation experiments. In the
first experiment, we construct completely random distribution
trees at the end of the repair interval following a client
departure. We then analyze the stream quality received by
the remaining clients. The random trees are likely to be diverse
(i.e., uncorrelated), which improves the effectiveness of
MDC-based distribution. In the second experiment, we simulate
the tree management algorithm described in Section 3.3.
Thus the distribution trees are evolved based on the node arrivals
and departures recorded in the trace. We compare the
results of these two experiments at the end of the section.
In more detail, we conducted the random tree experiment
as follows. For each repair interval, we construct M distribution
trees (corresponding to the M descriptions of the MDC
coder) spanning the N nodes in the system at the beginning
of the interval.

[Figure 6: Number of clients and departures over time.]

[Figure 7: Duration distribution.]

[Table 1: Random Tree Experiment: probability distribution of descriptions received vs. number of distribution trees.]

Based on the number of departing clients, d, recorded through the end of the repair interval, we randomly remove d nodes from the tree, and compute the number of
descriptions received by the remaining nodes. The perceived
quality of the stream at a client is determined by the fraction
of descriptions received by that client. The set of distribution
trees is characterized by three parameters: the number
of trees (or, equivalently, descriptions), the maximum out-degree
of nodes in each tree, and the out-degree of the root
(i.e., the live streaming server). The out-degree of a node is
typically a function of its bandwidth capacity. So the root
(i.e., the server) tends to have a much larger out-degree than
bandwidth-constrained clients. In our random tree construc-
tion, each client is assigned a random degree subject to a
maximum. We varied the degree of the root and the number
of descriptions to study their impact on received stream qual-
ity. We set the repair time to 1 second; we investigate the
impact of repair time in Section 4.1.3.
Table 1 shows how the number of distribution trees, M, affects the fraction of descriptions received (expressed as a percentage, P). We compute the distribution of P by averaging across all client departures.

[Figure 8: Random Tree Experiment: SNR in dB (line) and probability distribution (bars) as a function of the number of descriptions received.]

[Figure 9: Random Tree Experiment: The SNR over time for the MDC (Multiple Descriptions, M=16) and SDC (Single Description, M=1) cases. At each time instant, we compute the average SNR over all clients.]

We set the maximum out-degree
of a client to 4 and the root degree to 100. We vary the
number of descriptions among 1, 2, 4, 8, or 16. Each column
represents a range of values of P . For each pair of the range
and number of descriptions, we list the average percentage
of clients that receive at that level of quality. For example,
the first table entry indicates that when using 2 descriptions, 94.80% of clients receive 100% of the descriptions (i.e., both descriptions).

As the number of descriptions increases, the percentage of clients that receive all of the descriptions (i.e., P = 100%) decreases. Nonetheless, the percentage of clients corresponding to small values of P decreases dramatically. With 8 descriptions, 96% (82.07% + 14.02%) of clients receive more than 87.5% of the descriptions. For both 8 and 16 descriptions, all clients receive at least one description. Figure 8
shows the corresponding SNR. Figure 9 compares the SNR
over time for the 16-description case and the single description
case. MDC demonstrates a clear advantage over
SDC.
Table 2 shows how the root degree affects the distribution of descriptions received. We set the number of descriptions to 8 and the maximum out-degree of a client to 4. As the root degree increases, the distribution shows an improvement. Figure 10 shows the SNR and probability distribution. Compared to the case where all nodes (including the root) have the same degree d, a root degree of R shortens the tree by $\log_d R$ levels. This means fewer ancestors for nodes in the tree, which, as discussed in Section 3.2, increases the probability that a node will receive a particular description.

[Table 2: Random Tree Experiment: probability distribution of the descriptions received vs. root degree R, over the ranges 100%, [87.5,100), [75,87.5), [50,75), [25,50), and 0.]

[Figure 10: Random Tree Experiment: SNR in dB (line) and probability distribution (bars) as a function of the number of descriptions received and the root degree.]

[Table 3: Evolving Tree Experiment: probability distribution of descriptions received vs. number of distribution trees.]
In our second experiment, we evolved the distribution trees
by simulating the tree management algorithm from Section
3.3. We set the root (i.e., server) out-degree to 100. The
maximum out-degree of a client is set to 4. Table 3 shows
the probability distribution of the descriptions received upon
client departures. Figure 11 shows the corresponding SNR.
The results are comparable to those of the random tree ex-
periment. This suggests that our tree management algorithm
is able to preserve significant tree diversity even over a long
period of time (more than an hour in this case).
4.1.3 Impact of Repair Time
Finally, we evaluate the impact of the time it takes to repair
the tree following a node departure. Clearly, the longer the
repair time, the greater is the impact on the affected nodes. Also, a longer repair time increases the chances of other nodes
departing before the repair is completed, thereby causing further
disruption.
We divide time into non-overlapping repair intervals and assume
that all leaves happen at the beginning of each interval.
We then compute the fraction of descriptions received averaged over all nodes (this is the quantity $q_N$ discussed in Section 3.2). As in Section 3.2, we assume a balanced binary tree at all
times.
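In code, this model amounts to the following sketch, which reuses the q_balanced_binary helper from the sketch in Section 3.2.1; treating the per-interval departure fraction as (departure rate x repair time) / N is our assumption:

def avg_fraction_received(N, departures_per_sec, repair_time_sec):
    # Fraction of hosts departing within one repair interval.
    eps = min(1.0, departures_per_sec * repair_time_sec / N)
    # Average fraction of descriptions received, i.e. q_N for a
    # balanced binary tree (see the earlier sketch).
    return q_balanced_binary(N, eps)

# Example: roughly the trace scale (17,000 hosts, 84 departures/s).
for t in (1, 5, 10):
    print(t, avg_fraction_received(17_000, 84, t))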
Figure 12 shows the average fraction of descriptions received as a function of time for four different settings of the repair time. With a repair time of 1 second, clients would receive 90% of the descriptions on average. With a 10-second repair time, the fraction drops to 30%. We believe that these results are encouraging since in practice tree repair can be done very quickly, especially given that our tree management algorithm is centralized (Section 3.3). Even a 1-second repair interval would permit multiple round-trips between the server and the nodes affected by the repair (e.g., the children of the departed node).

[Figure 11: Evolving Tree Experiment: SNR in dB (line) and probability distribution (bars) as a function of the number of descriptions received.]

[Figure 12: The average fraction of descriptions received over time, for various repair times.]
4.2 On-Demand Streaming
We now evaluate the potential of CoopNet in the case of on-demand
streaming. The goals of our evaluation are to study
the effects of client cooperation on:
- the reduction in load at the server,
- the additional load incurred by cooperating peers,
- the amount of storage provided by cooperating peers, and
- the likelihood of cooperating with proximate peers to improve performance.
The cooperation protocol used in our simulations is based
on server redirection as in [15]. The server maintains a xed-
size list of IP addresses (per URL) of CoopNet clients that
have recently contacted it. To get content, a client initially
sends a request to the server. If the client is willing to co-
operate, the server redirects the request by returning a short
list of IP addresses of other CoopNet clients who have recently
requested the same file. In turn, the client contacts
these other CoopNet peers and arranges to retrieve the content
directly from them. Each peer may have a different portion of the file, so it may be necessary to contact multiple
peers for content. In order to select a peer (or a set of peers
when using distributed streaming) to download content from,
peers run a greedy algorithm that picks out the peer(s) with
the longest portion of the file from the list returned by the
server. If a client cannot retrieve content through any peer,
it retrieves the entire content from the server. Note that the
server only provides the means for discovering other CoopNet
peers. Peers independently decide who they cooperate with.
The server maintains a list of 100 IP addresses per URL, and
returns a list of 10 IP addresses in the redirection messages
in our simulations.
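The redirection and peer-selection logic can be sketched as follows; both the recency-based reply and the load-aware variant discussed later in this section are included. The class and function names are ours, not those of the actual simulator:

from collections import deque

class RedirectServer:
    def __init__(self, list_size=100, reply_size=10):
        self.recent = {}    # URL -> deque of recent CoopNet client ids
        self.load = {}      # client id -> last reported load (bps)
        self.list_size, self.reply_size = list_size, reply_size

    def request(self, url, client, overloaded, load_aware=False):
        peers = self.recent.setdefault(url, deque(maxlen=self.list_size))
        candidates = [p for p in peers if p != client]
        if load_aware:
            # Prefer the least-loaded peers (requires status reports).
            candidates.sort(key=lambda p: self.load.get(p, 0))
        reply = candidates[:self.reply_size] if overloaded else None
        peers.append(client)   # the client will now cache the stream
        return reply           # None: the server serves the content itself

def pick_peers(candidates, available_bytes, k=1):
    # Greedy client-side choice: peers holding the longest portion of
    # the requested file first; k > 1 models distributed streaming.
    return sorted(candidates, key=lambda p: -available_bytes.get(p, 0))[:k]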
We use traces collected at MSNBC during the flash crowd of Sep 11, 2001 for our evaluation. The flash crowd started at around 1:00 pm GMT (9:00 am EDT) and persisted for the rest of the day. The peak request rate was three orders of magnitude more than the average. We report simulation results for the beginning of the flash crowd, between 1:00 pm
to 3:00 pm GMT. There were over 300,000 requests during
the 2-hour period. However, only 6%, or 18,000, of the requests were successfully served, at an average rate of 20 Mbps with a mean session duration of 20 minutes. Unsuccessful requests were not used in the analysis because of the lack of content byte-range and session duration information.

[Figure 13: Performance of CoopNet for on-demand streaming: (a) average bandwidth at server and cooperating peers; (b) average bandwidth at peers when using distributed streaming; (c) cumulative distribution of bandwidth at active peers.]
4.2.1 Bandwidth Load
In our evaluation, load is measured as bandwidth usage.
We do not model available bandwidth between peers. We
assume that peers can support the full bit rate (56 kbps, 100
kbps) of each encoded stream. We also do not place a bound
on the number of concurrent connections at each peer. In
practice, finding peers with sufficient available bandwidth and
not overloading any one peer are important considerations,
and we are investigating these issues in ongoing work.
Figure 13(a) depicts the bandwidth usage during the 2-hour period for two systems: the traditional client-server system, and the CoopNet system. The vertical axis is average bandwidth and the horizontal axis is time. There are two peaks
at around 1:40 pm and 2:10 pm, when two new streams were
added to the server. In the client-server system, the server
was distributing content at an average of 20 Mbps. However,
client cooperation can reduce that bandwidth by orders of
magnitude to an average of 300 kbps. As a result, the server
is available to serve more client requests. The average bandwidth
contribution that CoopNet clients need to make to the
system is 45 kbps. Although the average bandwidth contribution
is reasonably small, peers are not actively serving content
all the time. We find that typically less than 10% of peers are
active at any second. The average bandwidth contribution
that active CoopNet peers need to make to the system is as
high as 465 kbps, where average bandwidth of active peers is
computed as the total number of bits served over the total
length of peers' active periods.
To further reduce load at individual CoopNet clients, disjoint
portions of the content can be retrieved in parallel from
multiple peers using distributed streaming (Section 3.4). (The
bandwidth requirement placed on each peer is correspondingly
reduced.) Figure 13(b) depicts the average bandwidth
contributed versus the degree of parallelism. The degree of
parallelism is an upper-bound on the number of peers that can
be used in parallel. For example, clients can retrieve content
from up to 5 peers in parallel in a simulation with a degree of
parallelism of 5. The actual number of peers used in parallel
may be less than 5 depending on how many peers can provide
content in the byte-range needed by the client. The load
at each active peer is reduced as the degree of parallelism in-
creases. When the degree of parallelism is 5, peers are serving
content at only 35 kbps. However, the bandwidth of active
peers (not depicted in this figure) is only slightly reduced to 400 kbps. This is because the large amount of bandwidth required to serve content during the two surges at 1:40 pm and 2:10 pm influences the average bandwidth.
The cumulative distribution of bandwidth contributed by
active CoopNet peers, depicted in Figure 13(c), illustrates
the impact of distributed streaming on bandwidth utiliza-
tion. Each solid line represents the amount of bandwidth
peers contribute when using 1, 5, and 10 degrees of paral-
lelism. The median bandwidth requirement is 390 kbps when content is streamed from one peer, and only 66 bps for 10 degrees of parallelism. The bandwidth requirement imposed
on each peer is reduced as the degree of parallelism increases.
Although this reduction is significant, a small portion of peers
still contribute more than 1 Mbps even when using 10 degrees
of parallelism. We believe that the combination of the following
two factors contributes to the wide range in bandwidth
usage: the greedy algorithm a client uses to select peers and
the algorithm the server uses to select a set of IP addresses
to give to clients.
For better load distribution, the server can run a load-aware
algorithm that redirects clients to recently seen peers that are
the least loaded (in terms of network bandwidth usage). In
order to implement this algorithm, the server needs to know
the load at individual peers. Therefore, peers constantly report
their current load status to the server. We use a report
interval of once every second in our simulations. Because
the server caches a fixed-size list of IP addresses, only those
peers currently in the server's list need to send status up-
dates. Given this information, the server then selects the 10
least loaded peers that have recently accessed the same URL
as the requesting client to return in its redirection message.
[Figure 14: Storage requirement at CoopNet peers (cumulative distribution of storage allocated at each peer).]

This algorithm replaces the one described earlier in this section, where the server redirects clients to peers that were recently seen. Clients, however, use the same greedy algorithm
to select peers. We find that using this new algorithm, active
clients serve content at 385 kbps. The dashed line in Figure
13(c) depicts the cumulative distribution of bandwidth
contributed by CoopNet clients when the load-aware algorithm
is used at the server. In this simulation, clients stream
content from at most one other peer (degree of parallelism of
1). For the most part, the distribution is similar to the one
observed when the server redirects the request to recently seen
peers. The difference lies in the tail end of the distribution. About 6% of peers contributed more than 500 kbps of bandwidth
when the server runs the original algorithm, compared
to only 2% when the server runs the load-aware algorithm.
In addition, the total number of active peers in the system
doubles when the load-aware algorithm is used.
We find that client cooperation significantly reduces server
load, freeing up bandwidth to support more client connec-
tions. In addition, the combination of distributed streaming
and a load-aware algorithm used by the server further reduces
the load on individual peers.
4.2.2 Storage Requirement
In order to facilitate cooperation, clients also contribute
storage for caching content. In our simulations, peers cache
streams that they have downloaded for the entire duration of
the simulation. Figure 14 depicts the cumulative distribution
of the amount of storage each peer needs to provide. Storage
sizes range from 200 B to 100 MB. Over half of the peers store
less than 1 MB of content, and only 5% of peers store over
6 MB of content. The storage requirement is reasonable for
modern computers.
4.2.3 Nearby Peers
Next, we look at the likelihood of cooperating with nearby
peers. Finding nearby peers can greatly increase the efficiency of peer-to-peer communications. In our evaluation, peers are close if they belong in the same BGP prefix domain [9]. We
cluster over 9,000 IP addresses of clients who successfully received
content in the 2-hour trace based on BGP tables obtained
from a BBNPlanet router [32] on Jan 24, 2001. The
trace is sampled by randomly drawing ten 5-minute windows. We look at the probability of finding at least n peers in the same AS domain, where n is the degree of parallelism, ranging from 1 to 10. The sampling is repeated for window sizes of 10 and 15 minutes.
For a window of 5 minutes, the probability of finding at least one peer who has requested the same content and belongs to the same BGP prefix cluster is 12%. As the window size increases to 10 and 15 minutes, the probability increases slightly to 16% and 17%, respectively. For distributed streaming, as the degree of parallelism increases, the probability of finding nearby peers decreases. Using a 10-minute window, the probabilities of finding at least 5 peers and at least 10 peers in the same BGP prefix cluster are as low as 5% and 2%.
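The clustering criterion can be sketched with Python's standard ipaddress module; the prefix list would come from the BGP table dump, and longest-prefix matching is our assumption:

import ipaddress

def cluster_by_prefix(peer_ips, prefixes):
    # Assign each peer to the longest BGP prefix containing its address.
    nets = [ipaddress.ip_network(p) for p in prefixes]
    clusters = {}
    for ip in peer_ips:
        addr = ipaddress.ip_address(ip)
        matches = [n for n in nets if addr in n]
        best = max(matches, key=lambda n: n.prefixlen) if matches else None
        clusters.setdefault(best, []).append(ip)
    return clusters

# Example with made-up addresses and prefixes:
print(cluster_by_prefix(["10.1.2.3", "10.1.9.9", "192.0.2.7"],
                        ["10.1.0.0/16", "10.1.2.0/24", "192.0.2.0/24"]))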
To better understand whether the small number of IP addresses affects the probabilities of finding proximate peers, we also clustered over 90,000 IP addresses from the entire 2-hour trace, including unsuccessful requests. For the most part, the probabilities are the same or 1-2% higher than those reported above for successful requests. Finding a proximate peer with sufficient available bandwidth is part of ongoing work.
In summary, our initial results suggest that client cooperation
can improve overall system performance. Distributed
streaming and a load-aware server are promising solutions for reducing the load at individual peers while improving robustness.
5. CONCLUSIONS
In this paper, we have presented CoopNet, a peer-to-peer content distribution scheme that helps servers tide over crisis situations such as flash crowds. We have focused on the application of CoopNet to the distribution of streaming media content, both live and on-demand. One challenge is that clients may not participate in CoopNet for an extended length of time. CoopNet employs distributed streaming and multiple description coding to improve the robustness of the distributed streaming content in the face of client departures.
We have evaluated the feasibility and potential performance of CoopNet using traces gathered at MSNBC during the flash crowd that occurred on September 11, 2001. This was an extreme event even by flash crowd standards, so using these traces helps us stress-test the CoopNet design. Our results suggest that CoopNet is able to reduce server load significantly without placing an unreasonable burden on clients. For live streams, using multiple independent distribution trees coupled with MDC improves robustness significantly.
We are currently building a prototype implementation of
CoopNet for streaming media distribution.
Acknowledgements
We are grateful to Steven Lautenschlager, Ted McConville,
and Dave Roth for providing us the MSNBC streaming media
logs from September 11. We would also like to thank
the anonymous NOSSDAV reviewers for their insightful comments.
6. REFERENCES
- Joint source and channel coding for image transmission over lossy packet networks.
- Multiple description coding: Compression meets the network.
- Unequal loss protection: Graceful degradation of image quality over packet erasure channels through forward error correction.
- Approximately optimal assignment for unequal loss protection.
- Embedded video subband coding with 3D SPIHT.
- Multiple description source coding through forward error correction codes.
- Design of multiple description scalar quantizers.
- Design of entropy-constrained multiple description scalar quantizers.
- Optimal pairwise correlating transforms for multiple description coding.
- Control Systems for Digital Communication and Storage.
- BBNPlanet publicly available route server.
- Enabling conferencing applications on the Internet using an overlay multicast architecture.
- Chord: A scalable peer-to-peer lookup service for Internet applications.
- A scalable content-addressable network.
- An investigation of geographic mapping techniques for Internet hosts.
- Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility.
- Towards global network positioning.
- The Case for Cooperative Networking.
- Tapestry: An infrastructure for fault-tolerant wide-area location and routing.
- Scattercast.
--CTR
Xinyan Zhang , Jiangchuan Liu, Gossip based streaming, Proceedings of the 13th international World Wide Web conference on Alternate track papers & posters, May 19-21, 2004, New York, NY, USA
Duc A. Tran , Kien A. Hua , Tai T. Do, Scalable media streaming in large peer-to-peer networks, Proceedings of the tenth ACM international conference on Multimedia, December 01-06, 2002, Juan-les-Pins, France
Mubashar Mushtaq , Toufik Ahmed , Djamal-Eddine Meddour, Adaptive packet video streaming over P2P networks, Proceedings of the 1st international conference on Scalable information systems, May 30-June 01, 2006, Hong Kong
Meng Zhang , Li Zhao , Yun Tang , Jian-Guang Luo , Shi-Qiang Yang, Large-scale live media streaming over peer-to-peer networks through global internet, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Guang Tan , Stephen A. Jarvis , Xinuo Chen , Daniel P. Spooner, Performance Analysis and Improvement of Overlay Construction for Peer-to-Peer Live Streaming, Simulation, v.82 n.2, p.93-106, February 2006
Reza Rejaie , Antonio Ortega, PALS: peer-to-peer adaptive layered streaming, Proceedings of the 13th international workshop on Network and operating systems support for digital audio and video, June 01-03, 2003, Monterey, CA, USA
Karthik Lakshminarayanan , Venkata N. Padmanabhan, Some findings on the network performance of broadband hosts, Proceedings of the 3rd ACM SIGCOMM conference on Internet measurement, October 27-29, 2003, Miami Beach, FL, USA
Chuan Wu , Baochun Li, rStream: resilient peer-to-peer streaming with rateless codes, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Zongpeng Li , Baochun Li , Lap Chi Lau, On achieving maximum multicast throughput in undirected networks, IEEE/ACM Transactions on Networking (TON), v.14 n.SI, p.2467-2485, June 2006
Sachin Agarwal , Jatinder Pal Singh , Shruti Dube, Analysis and implementation of Gossip-based P2P streaming with distributed incentive mechanisms for peer cooperation, Advances in Multimedia, v.2007 n.2, p.1-12, April 2007
Song Ye , Fillia Makedon, Collaboration-aware peer-to-peer media streaming, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Thorsten Strufe , Jens Wildhagen , Günter Schäfer, Towards the Construction of Attack Resistant and Efficient Overlay Streaming Topologies, Electronic Notes in Theoretical Computer Science (ENTCS), 179, p.111-121, July, 2007
Dan Rubenstein , Sambit Sahu, Can unstructured P2P protocols survive flash crowds?, IEEE/ACM Transactions on Networking (TON), v.13 n.3, p.501-512, June 2005
Chow-Sing Lin , Yi-Chi Cheng, P2MCMD: A scalable approach to VoD service over peer-to-peer networks, Journal of Parallel and Distributed Computing, v.67 n.8, p.903-921, August, 2007
Yi Cui , Klara Nahrstedt, Layered peer-to-peer streaming, Proceedings of the 13th international workshop on Network and operating systems support for digital audio and video, June 01-03, 2003, Monterey, CA, USA
Raj Kumar Rajendran , Dan Rubenstein, Optimizing the quality of scalable video streams on P2P networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.15, p.2641-2658, October 2006
Chuan Wu , Baochun Li, Optimal peer selection for minimum-delay peer-to-peer streaming with rateless codes, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Chun-Chao Yeh , Lin Siong Pui, On the frame forwarding in peer-to-peer multimedia streaming, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Yu-Wei Sung , Michael Bishop , Sanjay Rao, Enabling contribution awareness in an overlay broadcasting system, ACM SIGCOMM Computer Communication Review, v.36 n.4, October 2006
Yohei Okada , Masato Oguro , Jiro Katto , Sakae Okubo, A new approach for the construction of ALM trees using layered video coding, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Yang Guo , Kyoungwon Suh , Jim Kurose , Don Towsley, P2Cast: peer-to-peer patching for video on demand service, Multimedia Tools and Applications, v.33 n.2, p.109-129, May 2007
Yi-Cheng Tu , Jianzhong Sun , Mohamed Hefeeda , Sunil Prabhakar, An analytical study of peer-to-peer media streaming systems, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), v.1 n.4, p.354-376, November 2005
Shiwen Mao , Xiaolin Cheng , Y. Thomas Hou , Hanif D. Sherali, Multiple description video multicast in wireless ad hoc networks, Mobile Networks and Applications, v.11 n.1, p.63-73, February 2006
Kunwadee Sripanidkulchai , Aditya Ganjam , Bruce Maggs , Hui Zhang, The feasibility of supporting large-scale live streaming applications with dynamic application end-points, ACM SIGCOMM Computer Communication Review, v.34 n.4, October 2004
Zhichen Xu , Chunqiang Tang , Sujata Banerjee , Sung-Ju Lee, RITA: receiver initiated just-in-time tree adaptation for rich media distribution, Proceedings of the 13th international workshop on Network and operating systems support for digital audio and video, June 01-03, 2003, Monterey, CA, USA
Leonardo Bidese de Pinho , Claudio Luis de Amorim, Assessing the efficiency of stream reuse techniques in P2P video-on-demand systems, Journal of Network and Computer Applications, v.29 n.1, p.25-45, January 2006
Yi Cui , Baochun Li , Klara Nahrstedt, On achieving optimized capacity utilization in application overlay networks with multiple competing sessions, Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures, June 27-30, 2004, Barcelona, Spain
Kunwadee Sripanidkulchai , Bruce Maggs , Hui Zhang, An analysis of live streaming workloads on the internet, Proceedings of the 4th ACM SIGCOMM conference on Internet measurement, October 25-27, 2004, Taormina, Sicily, Italy
Yang Guo , Kyoungwon Suh , Jim Kurose , Don Towsley, P2Cast: peer-to-peer patching scheme for VoD service, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
Padmavathi Mundur , Poorva Arankalle, Optimal server allocations for streaming multimedia applications on the internet, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.18, p.3608-3621, 21 December 2006
Daria Antonova , Arvind Krishnamurthy , Zheng Ma , Ravi Sundaram, Managing a portfolio of overlay paths, Proceedings of the 14th international workshop on Network and operating systems support for digital audio and video, June 16-18, 2004, Cork, Ireland
Mohamed Hefeeda , Ahsan Habib , Boyan Botev , Dongyan Xu , Bharat Bhargava, PROMISE: peer-to-peer media streaming using CollectCast, Proceedings of the eleventh ACM international conference on Multimedia, November 02-08, 2003, Berkeley, CA, USA
Mohamed M. Hefeeda , Bharat K. Bhargava , David K. Y. Yau, A hybrid architecture for cost-effective on-demand media streaming, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.44 n.3, p.353-382, 20 February 2004
Zongpeng Li , Anirban Mahanti, A progressive flow auction approach for low-cost on-demand P2P media streaming, Proceedings of the 3rd international conference on Quality of service in heterogeneous wired/wireless networks, August 07-09, 2006, Waterloo, Ontario, Canada
Ji Li , Karen Sollins , Dah-Yoh Lim, Implementing aggregation and broadcast over Distributed Hash Tables, ACM SIGCOMM Computer Communication Review, v.35 n.1, p.81-92, January 2005
Alan Kin Wah Yim , Rajkumar Buyya, Decentralized media streaming infrastructure (DeMSI): An adaptive and high-performance peer-to-peer content delivery network, Journal of Systems Architecture: the EUROMICRO Journal, v.52 n.12, p.737-772, December, 2006
Dejan Kostić , Adolfo Rodriguez , Jeannie Albrecht , Amin Vahdat, Bullet: high bandwidth data dissemination using an overlay mesh, Proceedings of the nineteenth ACM symposium on Operating systems principles, October 19-22, 2003, Bolton Landing, NY, USA
Miguel Castro , Peter Druschel , Anne-Marie Kermarrec , Animesh Nandi , Antony Rowstron , Atul Singh, SplitStream: high-bandwidth multicast in cooperative environments, Proceedings of the nineteenth ACM symposium on Operating systems principles, October 19-22, 2003, Bolton Landing, NY, USA
Konstantin Andreev , Bruce M. Maggs , Adam Meyerson , Ramesh K. Sitaraman, Designing overlay multicast networks for streaming, Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures, June 07-09, 2003, San Diego, California, USA
Toufik Ahmed , Mubashar Mushtaq, P2P Object-based adaptivE Multimedia Streaming (POEMS), Journal of Network and Systems Management, v.15 n.3, p.289-310, September 2007
Karthik Lakshminarayanan , Ananth Rao , Ion Stoica , Scott Shenker, End-host controlled multicast routing, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.6, p.807-825, 13 April 2006
Zongming Fei , Mengkun Yang, A proactive tree recovery mechanism for resilient overlay multicast, IEEE/ACM Transactions on Networking (TON), v.15 n.1, p.173-186, February 2007
Mojtaba Hosseini , Nicolas D. Georganas, End system multicast protocol for collaborative virtual environments, Presence: Teleoperators and Virtual Environments, v.13 n.3, p.263-278, June 2004
Ying Cai , Zhan Chen , Wallapak Tavanapong, Caching collaboration and cache allocation in peer-to-peer video systems, Multimedia Tools and Applications, v.37 n.2, p.117-134, April 2008 | content distribution networks;multiple description coding;peer-to-peer networks;streaming media |
507723 | A cryptographic solution to implement access control in a hierarchy and more. | The need for access control in a hierarchy arises in several different contexts. One such context is managing the information of an organization where the users are divided into different security classes depending on who has access to what. Several cryptographic solutions have been proposed to address this problem --- the solutions are based on generating cryptographic keys for each security class such that the key for a lower level security class depends on the key for the security class that is higher up in the hierarchy. Most solutions use complex cryptographic techniques: integrating these into existing systems may not be trivial. Others have an impractical requirement: if a user at a security level wants to access data at lower levels, then all intermediate nodes must be traversed. Moreover, if there is an access control policy that does not conform to the hierarchical structure, such a policy cannot be handled by existing solutions. We propose a new solution that overcomes the above mentioned shortcomings. Our solution not only addresses the problem of access control in a hierarchy but also can be used for general cases. It is a scheme similar to the RSA cryptosystem and can be easily incorporated in existing systems. | Figure
1: Enterprise Wide Personnel Hierarchy
2. RELATED WORK
A number of works [1, 2, 6, 9, 12, 14, 18, 23] relating to access
control in a hierarchy have been proposed. In almost all these
works, there is a relationship between the key assigned to a node
and those assigned to its children. The difference between the related
works lies mostly in the different cryptographic techniques
employed for key generation. Some of these techniques [1, 6, 9,
12, 14] are extremely complex. Below we outline a few important
works in this area.
One of the early solutions to the hierarchical access control problem
was proposed by Akl and Taylor [1, 2]. Their solution was
based on the RSA cryptosystem [17]. In this work the authors
choose the exponents in such a way that the key of a child node
can be easily derived from the key of its parent. Mackinnon et
al. [12] gave an optimal algorithm for selecting suitable exponents.
One potential drawback of these schemes is that if a user at a node
wishes to access information stored at a descendant node, he must
traverse all the intermediate nodes between his node and the descendant
node. This may not be very desirable for cases where the
length of the hierarchy is large. Another drawback is that addition
of a new node Ni as a leaf of the hierarchy results in the regeneration
of keys for all ancestors of Ni.
Sandhu [18] proposed a key generation scheme for a tree hierar-
chy. The solution was based on using different one-way functions
to generate the key for each child node in the hierarchy. The one-way
function was selected based on the name or identity of the
child. When a new child is added, the keys for the ancestors do not
have to be recomputed. This work, however, does not deal for the
case of a general poset. Zheng et al. [23] proposed solutions for
the general poset. The authors present two solutions: one for indirect
access to nodes (in which to access data at a lower node, the
user has to traverse the intermediate nodes) and the other for direct
access to nodes.
3. OVERVIEW OF OUR APPROACH
Our approach is simple. We formulate a new encryption protocol
that is used to encrypt the data stored in a database. The encryption
ensures data integrity as well as data confidentiality. The data is encrypted
with appropriate keys at the same time it is generated. Different
portions of the database are encrypted with different keys. A
user who has to retrieve information from the database will attempt
to decrypt the entire database with a decrypting key that is assigned
to him. However, the user is able to decrypt successfully only that
portion(s) of the database for which the user is authorized. The
remaining portion of the database is not decrypted successfully.
Since the access control technology is based on encrypting with
the appropriate key, we first present the theory on which the key
generation is based.
4. THEORY BEHIND THE CRYPTOGRAPHIC TECHNIQUES
For the following exposition we use the term message to denote
any piece of information that needs to be stored in the database.
Definition 1. The set of messages M is the set of non negative integers m that are less than an upper bound N, i.e. M = {m | 0 ≤ m < N}.
Definition 2. Given an integer a and a positive integer N, the following relationship holds:
a = qN + r where 0 ≤ r < N and q = ⌊a/N⌋   (2)
where ⌊x⌋ denotes the largest integer not exceeding x. The value q is referred to as the quotient and r is referred to as the remainder. The remainder r, denoted a mod N, is referred to as the least positive residue of a mod N.
Definition 3. For positive integers a, b and N, we say a is equivalent to b, modulo N, denoted by a ≡ b mod N, if a mod N = b mod N.
Definition 4. Two integers a, b are said to be relatively prime if their only common divisor is 1, that is, gcd(a, b) = 1.
Definition 5. The integers n1, n2, ..., nk are said to be pairwise relatively prime, if gcd(ni, nj) = 1 for i ≠ j.
Definition 6. Euler's totient function φ(N) is defined as the number of integers that are less than N and relatively prime to N.
Below we give some properties of the totient function that are of importance:
1. φ(N) = N - 1 if N is prime.
2. φ(N) = φ(N1)φ(N2)...φ(Nk) if N = N1N2...Nk and N1, N2, ..., Nk are pairwise relatively prime.
THEOREM 1. Euler's theorem states that for every a and N that are relatively prime,
a^φ(N) ≡ 1 mod N
PROOF. We omit the proof of Euler's theorem and refer the interested reader to any book on number theory [13] or cryptography [20].
COROLLARY 1. If 0 ≤ m < N and N = N1N2...Nk and N1, N2, ..., Nk are distinct primes, then m^(xφ(N)+1) ≡ m mod N for any non-negative integer x.
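To see the corollary in action, here is a minimal Python sketch, with toy primes of our own choosing (far too small for real use), that exhaustively checks m^(xφ(N)+1) ≡ m mod N:

from math import prod

# Toy check of Corollary 1: N is a product of distinct primes, so
# phi(N) is the product of (p - 1), and m^(x*phi(N)+1) = m (mod N).
primes = [3, 5, 11]                      # distinct primes (illustrative)
N = prod(primes)                         # N = 165
phi = prod(p - 1 for p in primes)        # phi(N) = 80

for m in range(N):                       # every message 0 <= m < N
    for x in range(4):                   # a few non-negative x values
        assert pow(m, x * phi + 1, N) == m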
Definition 7. A key K is defined to be the ordered pair ⟨e, N⟩, where N is a product of distinct primes that is large enough to bound every message in M, and e is relatively prime to φ(N); e is the exponent and N is the base of the key K.
Definition 8. The encryption of a message m with the key K = ⟨e, N⟩, denoted as [m, K], is defined as
[m, K] = m^e mod N
Definition 9. The inverse of a key K = ⟨e, N⟩, denoted by K⁻¹, is an ordered pair ⟨d, N⟩, satisfying ed ≡ 1 mod φ(N).
THEOREM 2. For any message m,
[[m, K], K⁻¹] = [[m, K⁻¹], K] = m
where K = ⟨e, N⟩ and K⁻¹ = ⟨d, N⟩.
PROOF. We first show that [[m, K], K⁻¹] = m:
[[m, K], K⁻¹] = [m^e mod N, K⁻¹] (Def. 8)
= (m^e mod N)^d mod N (Defs. 8, 9)
= m^(ed) mod N (mod arith)
= m (Defs. 2, 9 and Cor. 1)
By symmetry [[m, K⁻¹], K] = m.
COROLLARY 2. An encryption, [m, K], is one-to-one if it satisfies the relation 0 ≤ m < N.
Definition 10. Two keys K1 = ⟨e1, N1⟩ and K2 = ⟨e2, N2⟩ are said to be compatible if e1 = e2 and N1 and N2 are relatively prime.
Definition 11. If two keys K1 = ⟨e, N1⟩ and K2 = ⟨e, N2⟩ are compatible, then the product key, K1·K2, is defined as ⟨e, N1N2⟩.
LEMMA 1. For positive integers a, N1 and N2,
(a mod N1N2) mod N1 = a mod N1
PROOF. Let a = N1N2x + N1y + z, where 0 ≤ z < N1, 0 ≤ N1y + z < N1N2 and x, y, z are integers.
LHS = (a mod N1N2) mod N1
= ((N1N2x + N1y + z) mod N1N2) mod N1
= (N1y + z) mod N1
= z
RHS = a mod N1
= (N1N2x + N1y + z) mod N1
= z
Hence the proof.
THEOREM 3. For any two messages m and m̂, such that m, m̂ < N1, N2:
[m, K1·K2] ≡ [m̂, K1] mod N1 iff m = m̂   (5)
[m, K1·K2] ≡ [m̂, K2] mod N2 iff m = m̂   (6)
where K1 is the key ⟨e, N1⟩, K2 is the key ⟨e, N2⟩ and K1·K2 is the product key ⟨e, N1N2⟩.
PROOF. The proof for (6) is the same as that for (5). We just consider the proof for (5).
[If part]
Given m = m̂ we have to prove that [m, K1·K2] ≡ [m̂, K1] mod N1.
LHS = (m^e mod N1N2) mod N1 (Def. 8 and Def. 11)
= m^e mod N1 (substituting m^e for a in Lemma 1)
RHS = (m̂^e mod N1) mod N1
= m̂^e mod N1 (idempotency of mod op.)
= m^e mod N1 (since m = m̂, given)
[Only If part]
Given [m, K1·K2] ≡ [m̂, K1] mod N1, we have to prove m = m̂:
(m^e mod N1N2) mod N1 = m̂^e mod N1 (Defs. 8, 11)
m^e mod N1 = m̂^e mod N1 (Lemma 1)
m = m̂ (encryption is one-to-one)
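The following Python sketch (again with toy parameters chosen here purely for illustration; nothing about these sizes is secure) exercises Theorems 2 and 3: a message encrypted under the product key ⟨e, N1N2⟩ is recovered, modulo N1 or N2, by the inverse of the corresponding component key alone:

# Toy demonstration of Defs. 7-11 and Theorems 2-3 (not secure).
p1, q1, p2, q2 = 11, 13, 17, 19          # distinct primes
N1, N2 = p1 * q1, p2 * q2                # relatively prime bases
e = 7                                    # shared exponent, gcd(e, phi) = 1
d1 = pow(e, -1, (p1 - 1) * (q1 - 1))     # inverse exponent for N1
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))     # inverse exponent for N2

m = 42                                   # message with m < N1 and m < N2
assert pow(pow(m, e, N1), d1, N1) == m   # Theorem 2: [[m,K],K^-1] = m
c = pow(m, e, N1 * N2)                   # encrypt with the product key
assert pow(c, d1, N1) == m               # Theorem 3, recovered via K1^-1
assert pow(c, d2, N2) == m               # Theorem 3, recovered via K2^-1

This is the property the scheme exploits: one ciphertext, many independent decryption keys.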
5. KEY GENERATION
Most often in an organization the access requirements match its
hierarchical structure. That is, a person higher up in the hierarchy
will usually have access to the documents that can be accessed by
persons lower in the hierarchy. So we present the key generation
technique for this case first. Later on, we describe how to accommodate
the cases where the access requirement does not follow this
hierarchical pattern.
An organization is usually structured in the form of a hierarchy.
Thus, we define a hierarchy of levels as follows.
Definition 12. We consider an organization structure that can be represented as a partially ordered set (poset), (L, ≼). L is the set of levels of the organization and ≼ is the dominance relation between the levels.
1. If Li and Lj are two levels such that Li ≺ Lj, then we say that Lj strictly dominates Li and Li is strictly dominated by Lj.
2. If Li and Lj are two levels such that Li ≼ Lj and Lj ≼ Li, then we say that the two levels Li and Lj are equal and denote this by Li = Lj.
3. If levels Li and Lj are such that either Li ≺ Lj or Li = Lj, then we say that Li is dominated by Lj or Lj dominates Li and denote this by Li ≼ Lj.
4. Two levels Li and Lj are said to be incomparable if neither Li ≼ Lj nor Lj ≼ Li.
5. We say Ly is a parent of Lx if Lx ≺ Ly and there is no element Lz such that Lx ≺ Lz ≺ Ly. If Lx ≺ Ly, then Lx is said to be a descendant of Ly and Ly is said to be an ancestor of Lx.
Since the organizational structure is a poset, it can be depicted in
the form of a Hasse diagram [19].
Definition 13. Each level Li in the organizational hierarchy is associated with a unique pair of keys, (Ki, Ki⁻¹), which we term the default keys for level Li. Ki is the default encryption key of level Li and Ki⁻¹ is the default decryption key of level Li. If the access requirements match the hierarchical structure of the organization, then a person at level Li uses the default encryption key Ki for encrypting documents and the key Ki⁻¹ for decrypting the encrypted documents.
A document encrypted with key Ki possesses the following properties:
1. it can be decrypted by key Ki⁻¹ where Ki⁻¹ is the default decryption key of level Li,
2. it can be decrypted by key Kj⁻¹ where Kj⁻¹ is the default decryption key of level Lj such that Lj ≻ Li,
3. it cannot be decrypted by key Kk⁻¹ where Kk⁻¹ is the default decryption key of level Lk and Lk ≺ Li or Lk is incomparable with Li.
5.1 Determining the Default Encryption Key
and Default Decryption Key
ALGORITHM 1. Default keys generation
Input: (i) L - the set of levels and (ii) ≼ - the dominance relation
Output: (i) K - the set containing the default encryption key for each level and (ii) K⁻¹ - the set containing the default decryption key for each level.
Procedure GenerateDefaultKeys(L, ≼)
begin
choose exponent e obeying the requirements in Def. 7
F := ∅
while F ≠ L do
begin
choose a maximal element Li in L - F;
factori := 1
for each parent Lj of Li do
factori := factori × Nj
choose random Ni where gcd(Ni, factori) = 1
Ni := Ni × factori
F := F ∪ {Li}
end
// Remove factors that have been included multiple times.
D := ∅
while D ≠ L do
begin
choose a maximal element Li in L - D;
for each ancestor Lk of Li do
begin
c := no. of distinct paths from Li to Lk
Ni := Ni / Nk^(c - 1)
end
D := D ∪ {Li}
end
for each Li ∈ L do
begin
Ki := ⟨e, Ni⟩
K := K ∪ {Ki}
end
return K
end
The default encryption and decryption keys can adequately provide access control for cases where the access requirements match the hierarchical structure of the organization.
5.2 Example 1: Student Transcript Information
To illustrate our approach we consider an academic organization: the College of Engineering at some hypothetical university. The College of Engineering is headed by a Dean. Below the Dean are the Department Chairpersons who are responsible for managing the individual departments. Each Faculty (except for cases of joint appointment) is overseen by the respective Department Chair. The Students are at the very bottom of this hierarchy. Each student is advised by one or more faculty members. The organizational hierarchy is shown in figure 2.
Figure 2: Access Requirements matching Enterprise Wide Personnel Hierarchy (nodes: Dean; CS Chair, ECE Chair; CS Faculty 1, CS Faculty 2, ECE Faculty 1, ECE Faculty 2; Student 1, Student 2, Student 3)
We need to maintain the information about the transcripts of individual students. Since this is sensitive information, we need to protect it. The following are the access requirements.
1. Each student is able to view his or her transcript.
2. A faculty advising a student is able to see his transcript.
3. The chair of the department in which a student is majoring is able to view the student's transcript.
4. The dean is able to view the student's transcript.
For this example, we consider the case for three students, namely, Student1, Student2, Student3. Each of these students has a faculty advisor who monitors and advises the student. Student1's advisor is CS Faculty1. Student2 is co-advised by CS Faculty2 and ECE Faculty1. Student3's advisor is ECE Faculty2. This is illustrated in figure 2. These are the access requirements for this example:
1. Student1, CS Faculty1, CS Chair, Dean can view Student1's transcript.
2. Student2, CS Faculty2, ECE Faculty1, CS Chair, ECE Chair, Dean can view Student2's transcript.
3. Student3, ECE Faculty2, ECE Chair, Dean can view Student3's transcript.
Note that, in this case the access requirements match the organizational hierarchical structure. That is, if a person X has access to some information, then a person Y at a higher level in the hierarchy will also have access to that information.
5.2.1 Access Control for Example 1
The access control requirement for Example 1 follows the hierarchical structure of the organization. Thus, using the default encryption keys the access can be appropriately restricted. Consider the hierarchy shown in figure 2. The keys for the various people are as follows.
1. The default encryption and decryption keys for Studenti are denoted respectively by KSi and KSi⁻¹, where 1 ≤ i ≤ 3.
2. The default encryption and decryption keys for CS Facultyi are denoted respectively by KCFi and KCFi⁻¹, where i = 1 or 2.
3. The default encryption and decryption keys for ECE Facultyi are denoted respectively by KEFi and KEFi⁻¹, where i = 1 or 2.
4. The default encryption and decryption keys for CS Chair are denoted respectively by KCChair and KCChair⁻¹.
5. The default encryption and decryption keys for the ECE Chair are denoted respectively by KEChair and KEChair⁻¹.
6. The default encryption and decryption keys for the Dean are denoted respectively by KDean and KDean⁻¹.
The key KDean is chosen as per Definition 7. Once the Dean's key has been fixed, the other keys as generated by Algorithm 1 are as follows:
1. Encryption Key of Dean: KDean = ⟨e, NDean⟩
Decryption Key of Dean: KDean⁻¹ = ⟨dDean, NDean⟩, where e·dDean ≡ 1 mod φ(NDean)
2. Default Encryption Key of CS Chair: KCChair = KDean · K′CChair, where K′CChair is compatible with KDean; that is, KCChair = ⟨e, NDean·NCChair⟩
Decryption Key of CS Chair: KCChair⁻¹ = ⟨dCChair, NCChair⟩, where e·dCChair ≡ 1 mod φ(NCChair)
3. Encryption Key of ECE Chair: KEChair = KDean · K′EChair, where K′EChair is compatible with KDean; that is, KEChair = ⟨e, NDean·NEChair⟩
Decryption Key of ECE Chair: KEChair⁻¹ = ⟨dEChair, NEChair⟩, where e·dEChair ≡ 1 mod φ(NEChair)
4. Encryption Key of CS Faculty1: KCF1 = KCChair · K′CF1, where K′CF1 is compatible with KCChair; that is, KCF1 = ⟨e, NDean·NCChair·NCF1⟩
Decryption Key of CS Faculty1: KCF1⁻¹ = ⟨dCF1, NCF1⟩, where e·dCF1 ≡ 1 mod φ(NCF1)
5. Encryption Key of CS Faculty2: KCF2 = KCChair · K′CF2, where K′CF2 is compatible with KCChair; that is, KCF2 = ⟨e, NDean·NCChair·NCF2⟩
Decryption Key of CS Faculty2: KCF2⁻¹ = ⟨dCF2, NCF2⟩, where e·dCF2 ≡ 1 mod φ(NCF2)
6. Encryption Key of ECE Faculty1: KEF1 = KEChair · K′EF1, where K′EF1 is compatible with KEChair; that is, KEF1 = ⟨e, NDean·NEChair·NEF1⟩
Decryption Key of ECE Faculty1: KEF1⁻¹ = ⟨dEF1, NEF1⟩, where e·dEF1 ≡ 1 mod φ(NEF1)
7. Encryption Key of ECE Faculty2: KEF2 = KEChair · K′EF2, where K′EF2 is compatible with KEChair; that is, KEF2 = ⟨e, NDean·NEChair·NEF2⟩
Decryption Key of ECE Faculty2: KEF2⁻¹ = ⟨dEF2, NEF2⟩, where e·dEF2 ≡ 1 mod φ(NEF2)
8. Encryption Key of Student1: KS1 = KCF1 · K′S1, where K′S1 is compatible with KCF1; that is, KS1 = ⟨e, NDean·NCChair·NCF1·NS1⟩
Decryption Key of Student1: KS1⁻¹ = ⟨dS1, NS1⟩, where e·dS1 ≡ 1 mod φ(NS1)
9. Encryption Key of Student2: KS2 = KCF2 · KEF1 · K′S2, where K′S2 is compatible with KCF2 and KEF1; that is, KS2 = ⟨e, NDean·NCChair·NEChair·NCF2·NEF1·NS2⟩
Decryption Key of Student2: KS2⁻¹ = ⟨dS2, NS2⟩, where e·dS2 ≡ 1 mod φ(NS2)
10. Encryption Key of Student3: KS3 = KEF2 · K′S3, where K′S3 is compatible with KEF2; that is, KS3 = ⟨e, NDean·NEChair·NEF2·NS3⟩
Decryption Key of Student3: KS3⁻¹ = ⟨dS3, NS3⟩, where e·dS3 ≡ 1 mod φ(NS3)
Consider Student1's transcript. If this transcript is encrypted
with key KS1, then any of the keys KDean⁻¹, KCChair⁻¹, KCF1⁻¹, KS1⁻¹ can
be used to decrypt it. Thus all the personnel higher up in the hierarchy
can decrypt this transcript using their own default decryption
key.
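As a concrete reading of Algorithm 1, the sketch below (our own simplified rendering, with the Figure 2 hierarchy hard-coded and tiny toy moduli standing in for large random prime products) builds each default base as the product of the level's own modulus and the moduli of all its distinct ancestors, which yields exactly the decryption behaviour listed above:

from math import prod

# Simplified sketch of Algorithm 1 for the Figure 2 hierarchy (toy moduli).
parents = {"Dean": [], "CSChair": ["Dean"], "ECEChair": ["Dean"],
           "CF1": ["CSChair"], "CF2": ["CSChair"],
           "EF1": ["ECEChair"], "EF2": ["ECEChair"],
           "S1": ["CF1"], "S2": ["CF2", "EF1"], "S3": ["EF2"]}
modulus = dict(zip(parents, [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]))

def ancestors(lvl):
    found = set()
    for p in parents[lvl]:
        found |= {p} | ancestors(p)
    return found

def default_base(lvl):
    # multiple paths contribute an ancestor's modulus only once,
    # mirroring the duplicate-removal pass of Algorithm 1
    return modulus[lvl] * prod(modulus[a] for a in ancestors(lvl))

assert default_base("S2") % modulus["Dean"] == 0   # Dean can decrypt
assert default_base("S2") % modulus["EF2"] != 0    # ECE Faculty2 cannot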
5.3 Determining a Customized Encryption Key
As illustrated by Examples 2 and 3, sometimes the access requirement
does not match the organizational structure and then a
different encryption key, which we term customized encryption key,
must be used to protect this information. The decryption key, how-
ever, remains the same.
ALGORITHM 2. Customized encryption key generation
Input: (i) L - the set of levels, (ii) ≼ - the dominance relation, (iii) Li - the level for which the customized encryption key is being generated, (iv) A - the set of levels that do not dominate Li but who are to be given access, (v) D - the set of levels that dominate Li and who are to be denied access, (vi) K - the set of default encryption keys.
Output: Kci - the customized key generated for level Li.
Procedure GenerateCustomEncKey(L, ≼, Li, D, A, K)
begin
Nci := Ni; allow := ∅; deny := ∅
while A ≠ allow do
begin
for each Lj ∈ A - allow that is minimal do
begin
Nci := Nci × Nj
// Deny access to ancestors of Lj
for each parent Lk of Lj do
D := D ∪ {Lk}
allow := allow ∪ {Lj}
end
end
while D ≠ deny do
begin
for each Lj ∈ D - deny that is minimal do
begin
Nci := Nci / Nj
// Give access to ancestors of Lj
for each parent Lk of Lj do
Nci := Nci × Nk
// If Nk has been included multiple times:
for each parent Lk of Lj do
begin
c := no. of paths from Lk to Lj
Nci := Nci / Nk^(c - 1)
end
deny := deny ∪ {Lj}
end
end
Kci := ⟨e, Nci⟩
return Kci
end
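Continuing the previous sketch, a hypothetical rendering of the effect Algorithm 2 has on the key base: extra grantees multiply their moduli in, and denied superiors have theirs divided out (the names default_base and modulus come from our earlier sketch, not from the paper):

def customized_base(lvl, grant=(), deny=()):
    base = default_base(lvl)
    for g in grant:                  # non-dominating levels given access
        base *= modulus[g]
    for d in deny:                   # dominating levels denied access
        base //= modulus[d]
    return base

# Example 3's file F: only Student2 and the two advisors may decrypt.
base_F = customized_base("S2", deny=("Dean", "CSChair", "ECEChair"))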
Figure 3: Access Requirements for Individual Courses (nodes: Dean; CS Chair, ECE Chair; Student 1 taking CS 350 and ECE 373)
5.4 Example 2: Individual Course Grade Information
The organizational hierarchy in this case is the same as before
(please refer to figure 2). We need to maintain information about
the grades each student receives in each of the courses he/she has
taken. The access requirements are complicated in this case:
1. Each student is able to view his or her grades for the courses
he or she has taken.
2. The faculty offering the course is able to see the grades for
all students who have taken the course under the faculty.
3. The chair of the department which has offered the course can
view the grades of the students who have taken that course.
4. A student's advisor is able to see his grades in all the courses.
5. The student's department chair can view the student's grades
for all the courses.
6. The dean is able to view the grades of all students that have
taken a course offered by a department in the college.
Suppose Student1 takes two courses: (i) CS 350 offered by the
Computer Science Department and taught by CS Faculty2 and (ii)
ECE 373 offered by the ECE Department and taught by ECE Faculty1.
This is shown in figure 3. Note that figure 3 shows only the relevant
portion of the hierarchy given in figure 2.
The access requirements are as follows:
1. Student1's CS 350 grade can be viewed by Student1,
CS Faculty1, CS Faculty2, CS Chair, Dean.
2. Student1's ECE 373 grade can be viewed by Student1,
CS Faculty1, ECE Faculty1, ECE Chair, CS Chair, Dean.
Note that, in this case the access requirements do not match
the organizational hierarchy given in figure 2. More access is required
than permitted by the organizational hierarchy; for exam-
ple, ECE Faculty1 must be given access to the ECE 373 grades of
Student1.
5.4.1 Access Control for Example 2
In this case the access patterns do not match the organizational
structure. For example, CS Faculty2 must have access to the CS
350 grade of Student1 even though CS Faculty2 is not his advisor.
Thus default keys cannot be used and custom keys are required to
encrypt the grades obtained in the individual courses.
1. Key for encrypting Student1's CS 350 grade:
Kc(S1,CS350) = ⟨e, NDean·NCChair·NCF1·NCF2·NS1⟩
2. Key for encrypting Student1's ECE 373 grade:
Kc(S1,ECE373) = ⟨e, NDean·NCChair·NEChair·NCF1·NEF1·NS1⟩
If key Kc(S1,CS350) is used for encrypting the CS 350 grade of Student1, then the encrypted grade can be decrypted by any of the default decryption keys of Student1, CS Faculty1, CS Faculty2, CS Chair and the Dean. If key Kc(S1,ECE373) is used for encrypting the ECE 373 grade of Student1, then the encrypted grade can be decrypted by any of the default decryption keys of Student1, CS Faculty1, ECE Faculty1, CS Chair, ECE Chair and the Dean.
Figure 4: Access Requirements for a Sensitive Project (nodes: Dean; CS Chair, ECE Chair; Student 2 with sensitive file F)
5.5 Example 3: Sensitive Project Information
As a part of the curriculum, the students are required to do a
Software Design Project. Some of these projects involve proprietary
data whose disclosure should be kept to a minimum level.
Thus, there is a need for encrypting the ?les associated with the
project. The access requirements are as follows.
1. The faculty members who advise the student on this project
have access to the ?les.
2. The student has access to the ?les.
3. No other person is given access to the ?les.
Student2 is working on a project with faculty membersCS Faculty2
and ECE Faculty1. File F contains sensitive information which
only the student and the project advisors can view. This case is illustrated
in ?gure 4. Note that in ?gure 4 the entire organizational
hierarchy is not shown ? only the part pertinent to the example is
given.
The access requirements are as follows:
1. Sensitive file F can be viewed by Student2, CS Faculty2 and ECE Faculty1.
This is an example where the access requirement does not follow the organizational hierarchy. People higher up in the hierarchy (the Dean, the Department Chairperson) are not given access to these files.
5.5.1 Access Control for Example 3
In this case the organizational structure is that given in figure 2. The sensitive file F must be protected such that only the faculty advisors (CS Faculty2 and ECE Faculty1) and the student (Student2) have access to this file. No other person can have access. Thus, for protecting the project work the default encryption key is not adequate. A customized encryption key must be used.
The student encrypts file F using the customized encryption key KcS2, which is generated using Algorithm 2. For this example, KcS2 = ⟨e, NCF2·NEF1·NS2⟩.
The encrypted file F can be decrypted by the default decryption keys of CS Faculty2, ECE Faculty1 or Student2.
6. SECURITY OF THE PROPOSED MECHANISM
Our scheme is based on the RSA cryptosystem. Its security is based on the difficulty of factoring large numbers into their prime factors. We do need to mention, however, that the low exponent attack on the RSA cryptosystem [10] does not arise in our case. The low exponent attack occurs in the following way: suppose the same message m is encrypted with different keys sharing the same exponent. Let the exponent be e = 3 and the different keys be K1 = ⟨e, N1⟩, K2 = ⟨e, N2⟩, K3 = ⟨e, N3⟩, etc. By using the Chinese Remainder Theorem [13] an attacker can get m^e. Now if he can guess e correctly, then by extracting the eth root of m^e, he can obtain m. To avoid this problem, we choose a large exponent e (e is substantially larger than the number of levels). Since the complexity of raising a number to the eth power grows as log e, choosing a large exponent does not significantly increase the computational complexity. Also in our mechanism, the data is appropriately encrypted and stored only in one place. Having multiple copies of the same data encrypted with different keys does not arise in our case.
7. ORGANIZATIONAL CHANGES AFFECTING
KEY MANAGEMENT
As outlined in Section 5, the default encryption keys generated are dependent on the hierarchical structure of the organization. If restructuring takes place in the organization, the Hasse diagram representing the personnel hierarchy will be modified, resulting in a change of the default encryption keys. In this section we give algorithms that result in the modification of default encryption keys when the organization structure is changed.
Any restructuring can be expressed as modifications to the Hasse diagram. A Hasse diagram can be modified by using combinations of the four primitive operations:
1. adding an edge between existing nodes: this corresponds to the scenario when a new hierarchical relationship is established between two existing persons in the organization.
2. deleting an edge from an existing node: this corresponds to the scenario when an existing hierarchical relationship is broken.
3. adding a node: this corresponds to the case when a new person joins the organization.
4. deleting a node: this corresponds to the case when a person leaves the organization.
For each of these operations we give an algorithm stating how the default encryption key must be changed because of the operation. The default decryption key, however, remains the same.
ALGORITHM 3. Default enc. key change with edge insertion
Input: (i) L - the set of levels, (ii) ≼ - the dominance relation, (iii) (i, j) - the new directed edge that is to be inserted from level i to level j, (iv) K - the set of default encryption keys.
Output: K - the set containing the default encryption key for each level.
Procedure ChangeDefEncKeysEdgeIns(L, ≼, (i, j), K)
begin
for each descendant k of j do
if insertion results in a new path from i to k then
Nk := Nk × Ni  // grant access to i (and those of its ancestors whose factors are not yet in Nk)
for each Li ∈ L do
begin
Ki := ⟨e, Ni⟩
end
return K
end
ALGORITHM 4. Default enc. key change with edge deletion
Input: (i) L - the set of levels, (ii) ≼ - the dominance relation, (iii) (i, j) - the directed edge from level i to level j that will be removed, (iv) K - the set of default encryption keys.
Output: K - the set containing the default encryption key for each level.
Procedure ChangeDefaultEncKeysEdgeDel(L, ≼, (i, j), K)
begin
for each descendant k of j do
if deletion results in no path from i to k then
begin
Nk := Nk / Ni
// eliminate access to parents of i, unless another path remains
for each parent l of i do
if there is no path from l to k after deleting (i, j) then
Nk := Nk / Nl
end
for each Li ∈ L do
begin
Ki := ⟨e, Ni⟩
end
return K
end
ALGORITHM 5. Default key generation with node insertion
Input: (i) L - the set of levels, (ii) ≼ - the dominance relation, (iii) i - the node that will be added, (iv) K - the set of default encryption keys, (v) K⁻¹ - the set of default decryption keys.
Output: (i) K - the set containing the default encryption key for each level, (ii) K⁻¹ - the set containing the default decryption key for each level.
Procedure AddDefaultEncKeyNodeIns(L, ≼, i, K, K⁻¹)
begin
Choose random Ni that is relatively prime to every existing Nk
Get the exponent e of any key Kk in K
Ki := ⟨e, Ni⟩; choose di such that e·di ≡ 1 mod φ(Ni)
K := K ∪ {Ki}; K⁻¹ := K⁻¹ ∪ {⟨di, Ni⟩}
end
ALGORITHM 6. Default keys removal with node deletion
Input: (i) L - the set of levels, (ii) ≼ - the dominance relation, (iii) i - the node that will be deleted, (iv) K - the set of default encryption keys, (v) K⁻¹ - the set of default decryption keys.
Output: (i) K - the set containing the default encryption key for each level, (ii) K⁻¹ - the set containing the default decryption key for each level.
Procedure RemoveDefaultKeysNodeDel(L, ≼, i, K, K⁻¹)
begin
if there are no edges incident on node i then
begin
K := K - {Ki}; K⁻¹ := K⁻¹ - {Ki⁻¹}
end
end
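Algorithm 5 is the easiest to make concrete; a sketch under the assumption that the existing moduli and the shared exponent are at hand (toy numbers only):

from math import gcd

def add_level(existing_moduli, p, q, e):
    # Algorithm 5: fresh base relatively prime to all existing bases,
    # shared exponent e reused, private exponent from e*d = 1 mod phi.
    Ni = p * q
    assert all(gcd(Ni, Nk) == 1 for Nk in existing_moduli)
    di = pow(e, -1, (p - 1) * (q - 1))
    return (e, Ni), (di, Ni)             # (K_i, K_i^-1)

enc_key, dec_key = add_level([15, 77], 37, 41, e=7)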
8. AN ALTERNATE SOLUTION
The problem of access control in a hierarchy can be solved using
an alternative solution1 that can potentially have lesser computational
requirements. Everybody in the organization has a public-private key pair. Suppose an employee wants to share a message m with his n superiors, denoted by S1, S2, ..., Sn. For each superior Si, the employee encrypts the message m with the public key of superior Si. Thus, he performs n + 1 encryptions (n for his superiors and one for himself). He stores these n + 1 encrypted messages.
Each superior Si can use his private key to retrieve the message.
This alternate solution has a problem: multiple encrypted copies
of data must be stored. Storing multiple copies of encrypted data
can be a source of inconsistency. For example, suppose the employee decides to change m to m̂. After making this change, the employee is supposed to encrypt m̂ with the public keys of his n superiors. However, the employee forgets to encrypt m̂ for superiors
Sj and Sk. In such a case superiors Sj and Sk will be accessing
the previous version of the data which is m. This source of inconsistency
associated with redundant data does not arise in our case
because there is only one copy of the encrypted data. Moreover,
keeping multiple encrypted copies of the same data leads to more
exposure for the data which may not be desirable. This problem
does not arise in our case.
Our solution has another advantage, namely, mutual access awareness: each person having the encrypted data has the knowledge of who else can view this data. For any data object we can have the need-to-know list which specifies the persons who can access the
document. Anyone having this list can verify that only those persons
in the list and no one else can decrypt the corresponding data
object. This is not possible for the alternate solution.
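The verification is mechanical; a sketch (reusing the toy modulus table and base_F from our earlier sketches) that checks a key base against a need-to-know list:

from math import prod

def matches_need_to_know(N, need_to_know):
    # N must be exactly the product of the listed levels' moduli:
    # then those levels, and no others, can decrypt.
    return N == prod(modulus[lvl] for lvl in need_to_know)

assert matches_need_to_know(base_F, ["S2", "CF2", "EF1"])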
One might, however, argue that our scheme is more computation
intensive for large hierarchies. This is because the base N of the
default encryption key increases with the number of levels in the hierarchy. However there are techniques (for example using Fast Fourier Transforms) by which the encryption can be done in an efficient manner [5]. These techniques are especially useful if the bases of the keys are large. Details of computational complexity will be treated in a future work.
1 The authors would like to thank the anonymous reviewer for this alternate solution.
9. CONCLUSION AND FUTURE WORK
In this paper we have presented a new encryption technique for securing information in a database. The implementation that we propose is completely general; we have shown how the different access control policies in an organization can be implemented by our technique.
A lot of work is yet to be done. The first step is to implement the algorithms that have been proposed; this experience will help us in detecting subtle flaws that we may have overlooked. Performance analysis and scalability studies need to be done before our method can be used in real world scenarios. Finally, we wish to show how the discretionary and the mandatory access control policies [22] of an organization can be implemented using the technology that we proposed.
The need for hierarchical access control arises in other contexts as well. For example, different kinds of hierarchies, such as class composition hierarchies and class inheritance hierarchies, arise in object-oriented database systems [16, 21]. We need to investigate whether our mechanism can be applied to implement the access control policies [3, 7, 11] (such as the visibility from above and visibility from below policies) desirable in such hierarchies.
--R
The Secure Use of RSA.
In Database Security III
Optimal Algorithm for Assigning Cryptographic Keys to
Access Control in a Hierarchy.
An Introduction to the Theory
of Numbers.
Authentication for Hierarchical Multigroup using the
Advances in Cryptology:
Information Security Policy.
Information Security: An Integrated Collection of Essays
pages 160?
A Model of
Authorization for Next-Generation Database Systems
Transactions on Database Systems
A Method for
Communications of the ACM
Cryptographic Implementation of a Tree
Hierarchy for Access Control.
Mathematics: A Discrete Introduction.
Brooks/Cole
Principles and Practice.
Mandatory Security in
Trusted Computer System Evaluation Criteria (The Orange
New Solutions to the
Problem of Access Control in a Hierarchy.
Preprint 93-2
of Wollongong
http://citeseer.
--TR
An optimal algorithm for assigning cryptographic keys to control access in a hierarchy
Cryptographic implementation of a tree hierarchy for access control
Mandatory security in object-oriented database systems
A cryptographic key generation scheme for multilevel data security
A model of authorization for next-generation database systems
Membership authentication for hierarchical multigroups using the extended Fiat-Shamir scheme
Flexible access control with master keys
Database security
Cryptography and network security (2nd ed.)
Cryptographic solution to a problem of access control in a hierarchy
A method for obtaining digital signatures and public-key cryptosystems
Mathematics
Fundamentals of Database Systems
Access Control in Object-Oriented Database Systems - Some Approaches and Issues
--CTR
H. Ragab Hassen , A. Bouabdallah , H. Bettahar , Y. Challal, Key management for content access control in a hierarchy, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.11, p.3197-3219, August, 2007
Jason Crampton, Applying hierarchical and role-based access control to XML documents, Proceedings of the 2004 workshop on Secure web service, p.37-46, October 29-29, 2004, Fairfax, Virginia
Alfredo De Santis , Anna Lisa Ferrara , Barbara Masucci, Unconditionally secure key assignment schemes, Discrete Applied Mathematics, v.154 n.2, p.234-252, 1 February 2006
Mikhail J. Atallah , Marina Blanton , Keith B. Frikken, Key management for non-tree access hierarchies, Proceedings of the eleventh ACM symposium on Access control models and technologies, June 07-09, 2006, Lake Tahoe, California, USA
Béatrice Finance , Sada Medjdoub , Philippe Pucheral, The case for access control on XML relationships, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Luc Bouganim , François Dang Ngoc , Philippe Pucheral, Client-based access control management for XML documents, Proceedings of the Thirtieth international conference on Very large data bases, p.84-95, August 31-September 03, 2004, Toronto, Canada
Mikhail J. Atallah , Keith B. Frikken , Marina Blanton, Dynamic and efficient key management for access hierarchies, Proceedings of the 12th ACM conference on Computer and communications security, November 07-11, 2005, Alexandria, VA, USA | access control;cryptography;hierarchy |
508173 | Computational paradigms and protection. | We investigate how protection requirements may be specified and implemented using the imperative, availability and coercion paradigms. Conventional protection mechanisms generally follow the imperative paradigm, requiring explicit and often centralized control over the sequencing and the mediation of security critical operations. This paper illustrates how casting protection in the availability and/or coercion styles provides the basis for more flexible and potentially distributed control over the sequencing and mediation of these operations. | INTRODUCTION
The sequencing of operations in a computation may be
classified in terms of three fundamental paradigms. In the
traditional imperative paradigm, the programmer explicitly
determines the sequencing constraints of operations; in the
availability paradigm, the sequencing of operations depends
only on the availability of operand data; and, in the coercion
paradigm, operations are executed when, and only when,
their results are needed.
These paradigms can be interpreted in the context of pro-
tection. Conventional protection mechanisms generally follow
the imperative paradigm by enforcing explicit mediation
and sequencing on operations. For example, when mediating
a purchase order transaction [order; validate; invoice;
payment], an imperative protection mechanism might ensure
that the operations are done in the correct sequence, and
that suitable separation of duties are applied at each stage.
A weakness of the imperative approach is that protection
is based on the explicit control of sequencing and mediation.
Explicit control can become difficult when flexibility over the
sequencing of operations is required. For example, it is often
desirable to allow an un-validated order to progress through
the system with the assumption that validation will be done
at some stage, but before payment is made. Sometimes it
may even be expedient to make payment without any validation.
While such requirements can be constructed in terms of
explicit sequencing control, it may be more natural to consider
the requirements in terms of the data dependencies
between the operations. For example, operation pay depends
on operations validate and invoice, operation validate
depends on operation order, and so forth. These relationships
can be expressed in terms of a graph of operations
(nodes) linked together by the data that is passed between
them. The implicit parallelism of operations in the graph
gives rise to two different paradigms for specifying and enforcing
protection requirements.
In the availability paradigm, sequencing and mediation of
operations depend only on the availability of operands. This
data-flow like sequencing of operations gives rise to eager
execution. For example, once an order is proposed then
validation and invoice processing can be done at any stage
(before payment). In the coercion paradigm, sequencing and
mediation of operations is determined on the basis of when
results are needed. This is the functional style of operation
sequencing and gives rise to lazy evaluation. For example,
only if and when payment is finally required should order
validation be sought.
In this paper we investigate how protection requirements
can be specified and implemented using the imperative, availability
and coercion paradigms. This is done using the Condensed
Graphs model of computation [8, 11] which provides
a single framework that unifies these three paradigms. In addition
to providing flexibility in the sequencing and control
over security critical operations, these paradigms have facilitated
the development of novel distributed protection mechanisms
that are also described in this paper. By following
the inherently parallel availability and coercion paradigms,
the proposed protection mechanisms need not necessarily
rely on centralized security state.
Section 2 provides a brief outline of the Condensed Graphs
model. Using the purchase order transaction as an exam-
ple, Section 3 considers various sequencing constraints that
one may wish to enforce. This section also serves to illustrate
the notation and semantics of the Condensed Graph
model. A protection model based on permissions and protection
domains is described in Section 4. Specific protection
mechanisms for this model are considered in Section 5.
2. CONDENSED GRAPHS
Like classical dataflow [1], the Condensed Graphs (CG)
model [8, 11] is graph-based and uses the flow of entities on
arcs to trigger execution. In contrast, Condensed Graphs
are directed acyclic graphs in which every node contains
not only operand ports, but also an operator and a destination
port. Arcs incident on these respective ports carry
other Condensed Graphs representing operands, operators
and destinations. Condensed Graphs are so called because
their nodes may be condensations, or abstractions, of other
Condensed Graphs. (Condensation is a concept used by
graph theoreticians for exposing meta-level information from
a graph by partitioning its vertex set, defining each subset
of the partition to be a node in the condensation, and by
connecting those nodes according to a well-defined rule [6].)
Condensed Graphs can thus be represented by a single node
(called a condensed node) in a graph at a higher level of
abstraction.
The basis of the CG firing rule is the presence of a Condensed
Graph in every port of a node. That is, a Condensed
Graph representing an operand is associated with
every operand port, an operator Condensed Graph with the
operator port and a destination Condensed Graph with the
destination port. This way, the three essential ingredients
of an instruction are brought together (these ingredients are
also present in the dataflow model; only there, the operator
and destination are statically part of the graph).
Any Condensed Graph may represent an operator. It may
be a condensed node, a node whose operator port is associated
with a machine primitive (or a sequence of machine
primitives) or it may be a multi-node Condensed Graph.
Figure 1: Condensed Graphs congregating at a node to form an instruction
The present representation of a destination in the CG
model is as a node whose own destination port is associated
with one or more port identifications. Figure 1 illustrates
the congregation of instruction elements at a node and the
resultant rewriting that takes place. We decorate connections
to distinguish between different kinds of ports, and
use numbers to distinguish input ports.
Executing a Condensed Graph corresponds to scheduling
its fireable nodes to run on ancillary processors, based on the
constraints of the graph. The nodes in a graph are represented
as triples (operation, operands, destination) and are
constructed by the Triple Manager (TM) as the graph exe-
cutes. Once a node is ready to fire, the triple manager can
schedule it for execution on an ancillary processor. The CG
operators can be divided into two categories: those that are
'value-transforming' and those that only move Condensed
Graphs from one node to another in a well-defined manner.
Value-transforming operators are intimately connected with
the ancillary processors and can range from simple arithmetic
operations to the invocation of software components
that form part of an application system. In contrast, Condensed
Graph moving instructions are few in number and
are architecture independent. These TM primitives include
the condensed node evaporation operator and the ifel node.
A number of working prototypes that use Condensed Graphs
have been developed, demonstrating its usefulness as a general
model of computation. Prototypes include a sequential
TM interpreter [8] and a web-based distributed computing
engine [10]. WebCom [10] schedules the execution of coarse-grain
computations described as Condensed Graphs. Web
clients (ancillary processors) connect to a WebCom server
(Triple Manager) whereupon they are served appropriate
computations (CG operations).
By statically constructing a Condensed Graph to contain
operators and destinations, the flow of operand Condensed
Graphs sequences the computation in a dataflow manner.
Similarly, constructing a Condensed Graph to statically contain
operands and operators, the flow of destination Condensed
Graphs will drive the computation in a demand-driven
manner. Finally, by constructing Condensed Graphs
to statically contain operands and destinations, the flow of
operators will result in a control-driven evaluation. This latter
evaluation order, in conjunction with side-effects, is used
to implement imperative semantics. The power of the CG
model results from being able to exploit all of these evaluation
strategies in the same computation, and dynamically
move between them, using a single, uniform, formalism.
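As a rough illustration of the firing rule (a toy analogy of our own, not the actual Triple Manager or WebCom code), the scheduler reduces to a loop that fires any node whose operator, operands and destination are all present:

from dataclasses import dataclass, field

@dataclass(eq=False)
class Node:
    op: callable = None
    arity: int = 0
    dest: tuple = None                       # (target node, port)
    operands: dict = field(default_factory=dict)

def run(nodes):
    fired = set()
    while True:
        ready = [n for n in nodes if n not in fired and n.op
                 and n.dest and len(n.operands) == n.arity]
        if not ready:
            return
        for n in ready:                      # all ingredients present: fire
            target, port = n.dest
            args = [n.operands[p] for p in sorted(n.operands)]
            target.operands[port] = n.op(*args)
            fired.add(n)

# availability-style sequencing of [order; validate and invoice; pay]:
sink = Node(arity=1)
pay = Node(op=lambda ok, inv: f"check({ok},{inv})", arity=2, dest=(sink, 0))
val = Node(op=lambda po: f"ok({po})", arity=1, dest=(pay, 0))
inv = Node(op=lambda po: f"inv({po})", arity=1, dest=(pay, 1))
val.operands[0] = inv.operands[0] = "po"     # O's output fans out to V and I
run([val, inv, pay, sink])                   # sink.operands[0] holds the check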
3. OPERATION SEQUENCING
In this section we consider a variety of controls that one
may wish to place on the purchase order transaction example
discussed in Section 1. The examples also serve to illustrate
the notation and semantics of the CG model.
Example 1. Condensed node POI specifies the allowable
behaviour for order processing (Figure 2). By definition [8],
Figure 2: Imperative Style Definition of POI
the behaviour of a Condensed Node such as POI is constructed
as a Condensed Graph with a single entry node (E) and a
single exit node (X). The other nodes in the graph
represent the operations available: propose an order (O),
validate the order (V), process the associated invoice (I) and
make a payment (P).
Arcs represent data paths between the operations which,
in this case, may fire (execute) when data arrives at their
input ports. For example, when the order has been proposed
a value (the details) is output from O and passed to
V, which may, in turn, fire, and so forth. Firing a condensed
node evaporates it into the graph that defines it, with input
available from the E node and final output emanating from
its X node.
Figure 2 may be regarded as specifying sequential ordering constraints [O;V;I;P] in an imperative style.
Example 2. Figure 3 specifies the purchase order transaction
in a simple dataflow or availability manner. Once the
order has been proposed, details are available to both V and
I, which may fire in either order. Only when both inputs
(outputs from V and I) are available, can payment proceed.
Figure 3: Availability Style Definition of POA
Example 3. In Figure 4, orders are validated only when
needed, that is, V is executed in a demand- or coercion-driven manner.
Figure 4: Demand-Driven Validation of Orders POC
The V node acts as an input value to node P, which results in V becoming fireable only when needed,
as illustrated in Figure 5. If id represents a transaction identifier
then POC
id evaporates on firing to its defining
graph with id passed on from entry node E to operation O,
which fires (denoted as *) generating, as output, a purchase
order po (Figure 5(a)). This acts as input to V and I. Operation
I fires (Figure 5(b)) since its input value is present and
its output port is bound to a destination. However, while
an input value is present for V, its output port is not bound
to any destination, and therefore, it may not yet fire.
Once operation I has fired, operation P has values at both
ports (a simple value inv, and a graph connected to node V),
has an output destination, and therefore, is fireable. How-
ever, P expects the values on its input ports to be atomic
values (such as po), and not an executable graph object. A
'preliminary' firing of P does not execute P, but `grafts' node
V to the input port to which it acts as a value (Figure 5(c)).
As a result, P is no longer fireable; the output port of V
becomes bound and fires (Figure 5(d)). As a result, operation
P has atomic input values, fires, and generates a check (Figure 5(e)).
In this example, V is executed in an availability or demand-driven
manner: only when a result (validation) is required,
is it scheduled for execution. Execution of I may be regarded
as eager, while execution of V as lazy.
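The eager/lazy distinction can be mimicked outside the CG machinery too; a toy Python analogy in which passing a thunk defers validation until payment actually forces it:

def validate(po):                    # the V operation
    return f"ok({po})"

def pay_eager(ok, inv):              # operand already evaluated (availability)
    return f"check({ok},{inv})"

def pay_lazy(ok_thunk, inv):         # operand arrives as a graph/thunk
    return f"check({ok_thunk()},{inv})"   # forced only here ('grafting')

eager = pay_eager(validate("po"), "inv")        # V runs up front
lazy = pay_lazy(lambda: validate("po"), "inv")  # V runs only on demand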
Example 4. Figure 6 specifies a variation of the purchase
order graph whereby the validation requirement may be bypassed
for invoices up to a certain limit. The operation lim?
Figure 6: POL1 (nodes: O, I, lim?, ifel, true, X)
inspects the invoice, returning true if its value is below a certain
limit; otherwise it returns false. Payment operation P'
has the same behaviour as P, except that it takes only one
input (from I). The conditional operation ifel is provided by
the Triple Manager. It takes a boolean value on its B input
port, and if true then it passes the value at its T(hen) port
to output, otherwise it passes the value at its E(lse) port to
output.
Note that the decorations on ports T and E indicate that
they are non-strict, that is, their input values will not be
grafted if they are not atomic. This is unlike strict ports
where non-atomic values are always grafted. Thus, when ifel
fires the graph at its T or E port simply passes through it,
depending on the value at the B port. Figure 7 illustrates
part of the behaviour when lim? returns true. The input
port to X is strict and the output of P' will be grafted to
the input of X , making P' fireable. Note that in this case P
never fires, and consequently, V never fires. If P is selected
by the ifel then it becomes grafted to X and fireable (and
never fires).
If the graph in Figure 6 had, instead, specified a direct
grafted connection from outputs of P and P' to the ifel operation
then operation V may eventually fire, regardless
of whether it is needed. This is analogous to a kind of speculative
validation (may validate regardless), as opposed to the
conservative validation (validate only if required) specified
in the original graph.
Example 5. Non-strictness provides a degree of higher-
orderedness to graphs: an operation/graph may be treated
as data as it moves around the graph with execution deferred
until it arrives at a strict port whereupon it becomes grafted.
Figure 8: Lazy Validation II POL2
In Figure 8, a revised invoice-processing operation is non-
strict on its 'validation' port; it checks the invoice against
the order and outputs a suitable value (graph) that includes
the yet-to-be executed V operation (and its inputs).
The payment operation P" also has a non-strict input
port; it generates a print-check operation Ck with a dependency
on the V operation (see Figure 9(a)). This graph value
Figure 5: Firing Sequence for POC (panels (a)-(e))
true
I
ifel
ifel
lim?
true
(b)
(a)
Figure
7: Snapshot of Lazy Order Validation POL1
may be thought of as representing the behaviour "before issuing
the check, validation must be done".
When this graph arrives at the strict port of X, it is
grafted (Figure 9(b)), which in turn results in the grafting
of V (Figure 9(c)), which in turn becomes fireable. Only
when validation is done can the check be printed, and POL2
completed. △
Condensed Graphs provide an executable notation that allows
us to precisely specify how operations should be 'glued'
together. The next section proposes a protection framework
for this 'glue'.
4. PROTECTION FRAMEWORK
A Triple Manager schedules the nodes of a graph to be
fired on the ancillary processors that are participating in
the computation. These processors could be the components
of a parallel machine, a network of workstations or a variety
of heterogeneous systems, connected over local networks
and/or the Internet. From a security perspective, we assume
that when a node fires, it does so within some security
domain, which reflects the resources that can be accessed
by the node. Thus a domain could correspond to a specific
host on a network, a subnet, and so forth. However, we are
not limited to a network computing model: a domain could
represent a traditional protection domain [7]. For example,
a node that performs a secret operation could be scheduled
to domain secret. Alternatively, an authenticated domain
might be represented by the public-key that speaks for it.
An operation may be scheduled to a particular security
domain only if the security domain holds the correct permission
that provides authorization to execute the operation.
Each node has a permission attribute that reflects the necessary
authorization (required by a domain) to execute it.
The Triple Manager provides a primitive operation
Perm perm(NonStrict Node n);
where, perm(n) returns the permission associated with node
n. Non-strictness is required since examining the permission
attribute of a node should not result in its execution.
Permissions are treated as primitive value nodes within
a condensed graph and are assumed to be structured as a
lattice (P, ≤), where x ≤ y means that permission
y provides no less authorization than x. A simple example
is the powerset lattice of {read, write}, with ordering defined
by subset, and least upper bound (⊔) defined by union. Thus
the least upper bound operator may be used to compose
permissions.
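
As a minimal sketch of this construction, assuming the powerset example above and using frozenset to stand in for the Perm type:

    # Powerset permission lattice over {read, write}: the ordering is subset
    # inclusion and the least upper bound (join) is set union.

    def leq(x, y):
        """x <= y: permission y provides no less authorization than x."""
        return x <= y                         # frozenset subset test

    def join(x, y):
        """Least upper bound, used to compose permissions."""
        return x | y

    READ, WRITE = frozenset({'read'}), frozenset({'write'})
    assert leq(READ, join(READ, WRITE))       # read <= {read, write}
    assert not leq(READ, WRITE)               # read and write are incomparable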
A Triple Manager schedules the nodes of the graph it is
executing to fire in security domains that have appropriate
permissions. A primitive operation is provided.
Perm sdom(NonStrict Node n);
Given node A, then sdom(A) returns the permission assigned
to the domain that A is scheduled to. If A is not
yet ready to fire then sdom may either return the domain
planned for A or it may block until it is known and/or ready
to fire. Only a single permission need be associated with
each security domain since composite permissions may be
constructed using ⊔.
Figure 9: Strictness and Eventual Validation in POL2 (snapshots (a)-(c) omitted: X, P" and Ck).

If the node A is not a primitive operation, that is, it is a
condensed node, and if it is to be scheduled to the same domain
as that of the current Triple Manager, then the current
Triple Manager will manage the scheduling for the graph
that A defined. If A is scheduled to fire in a different domain
then another Triple Manager running in this domain
will schedule the graph that A defines. The primitive
TM operation
Perm cdom();
returns the permission assigned to the domain of the graph
currently executing, that is, the permission assigned to the
Triple Manager executing the graph. Figure 10 illustrates
the relationship between the security-related TM prim-
itives: a Triple Manager has scheduled the condensed node
A (security attribute a) to be executed by another Triple
Manager that is running in a domain with permission x.
The graph defined by A is said to run in a security context
(x,a).
The Triple Manager is regarded as a trusted component
in the sense that the triples that it manages may be accessed
only by the Triple Manager and that it constructs
and schedules triples faithfully and according to the graph
it is executing.¹
When a node with permission attribute a fires in a domain
with permission x then it is said to have a security
context (x; a). Security is defined in terms of whether a
graph in one security context may schedule a node to fire
in another security context (or possibly the same). A node
with permission attribute b that is part of a graph with a
security context (x, a) may be scheduled to a domain y if
implies((x, a), (y, b)) holds, where implies is a partial ordering
relation between security contexts. This relation, called
the scheduling constraint, controls how graphs evaporate
across security domains. We do not prescribe a specific definition for the implies
relation. However, one possible definition could be based on
the permission orderings.
Considering Figure 10, the Triple Manager must have sufficient
permissions to execute (the graph defined by) A (a ≤
x). This Triple Manager must also have sufficient permission
to schedule any node of this graph to another domain
(y ≤ x). Similarly, B must be authorized to run in this
domain (b ≤ y), and thus we have implies((x, a), (y, b)).
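
As one concrete (assumed) reading of this permission-based definition, the scheduling constraint might be checked as follows; the predicate mirrors the three conditions just listed and is only an illustration, not the paper's prescribed definition.

    # implies((x, a), (y, b)): the current domain x is authorized for its own
    # node (a <= x), dominates the target domain (y <= x), and the target
    # domain is authorized for the node it receives (b <= y). Permissions are
    # frozensets ordered by subset inclusion.

    def leq(p, q):
        return p <= q

    def implies(ctx_from, ctx_to):
        (x, a), (y, b) = ctx_from, ctx_to
        return leq(a, x) and leq(y, x) and leq(b, y)

    # The checks of Example 6 below, with a trusted server holding {clk, mgr}:
    CLK, MGR = frozenset({'clk'}), frozenset({'mgr'})
    SERVER = CLK | MGR
    assert implies((SERVER, frozenset()), (CLK, CLK))      # O, I to the clerk
    assert implies((SERVER, frozenset()), (MGR, MGR))      # V, P to the manager
    assert not implies((SERVER, frozenset()), (CLK, MGR))  # V, P not to the clerk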
Example 6. A Triple Manager schedules the nodes of a
graph to be fired on the ancillary processors participating in
the computation. Suppose that the purchase order system
is implemented across a network of personal workstations
connected to a trusted server.
Define the set of permissions as the powerset of {clk, mgr},
with subset as the ordering relation. Operations O and I
have permission attribute {clk}; operations V and P have
permission attribute {mgr}, and condensed node POI has
permission attribute {}. Alice is a manager and is bound to
permission {mgr}, while Bob, a clerk, is bound to permission
{clk}.
Suppose that Alice requests that an instance of POI is to
be executed on a trusted server (domain {clk, mgr}). This
provides a context ({clk, mgr}, {}) from which the operations
O, I, V and P will be scheduled. Operations O and I may
be scheduled to Bob's domain (context ({clk}, {}) on his
workstation). Similarly, V and P may be scheduled to Alice. △

[Footnote 1: We believe that assuring the correctness of the Triple Manager
should be straightforward; the core of its current implementation
stands at a few hundred lines of C code.]

5. PROTECTION MECHANISMS
The security of a graph (based on the scheduling con-
straint) can be defined in an operational or denotational
manner. Operationally, a graph is secure if the Triple Manager
schedules only those nodes that uphold the scheduling
constraint. The disadvantage of this approach is that the security
mechanism must be hard-coded as part of the Triple
Manager and is implementation dependent. The alternative
is to define security in a denotational way, that is, define the
enforcement of the scheduling constraint in terms of a Condensed
Graph. We take this approach, guaranteeing that
our proposed security mechanisms can be implemented, not
having to worry at this stage about low-level operational de-
tails. Another advantage of defining security in this way is
that we can program alternative protection mechanisms.
5.1 Fragile Protection
The fragile protection operation F takes a node
A as its input operand and if the scheduling constraint is
upheld then A may fire, that is, F(A) evaluates to A.
If the scheduling constraint fails then A may not fire and
the result of the evaluation is null. Figure 11 defines the
operation as a condensed node.

Figure 11: Definition of Fragile Protection Operator (graph omitted: built from cdom, perm, sdom, ifel and null).

For the purposes of this
paper we assume that the graph that is defined by F
executes (is scheduled) in the same protection domain as
its parent. This ensures that the value cdom referenced in
Figure 11 corresponds to the cdom of its parent, that is, the
domain that schedules the node input (A) to F.

Figure 10: Condensed Node B Scheduled to Fire (diagram omitted). A graph containing condensed node A runs in a domain that holds permission w, giving context (w, p); the graph defined by A (permission attribute a) is scheduled to a Triple Manager running in a domain with permission x, giving context (x, a); operation B is in turn scheduled to, and executes in, a domain that holds permission y, giving context (y, b).
Since the input port of the fragile protection operation
is non-strict, its operand A passes into its graph without
grafting/firing. Lazy evaluation within the graph ensures
that A passes to the X node only if it is to be scheduled
to an appropriate domain, whereupon it becomes grafted to
the strict port of X and fires.
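
Behaviourally, F can be sketched in a few lines of Python; this is an illustration of the semantics just described, not the condensed-graph definition of Figure 11, and sdom, perm and implies are assumed to behave as the primitives introduced in Section 4.

    # Fragile protection: the node if the scheduling constraint holds for the
    # domain it is scheduled to, and None (null) otherwise; in the latter
    # case the potential result is lost.

    def fragile(node, current_ctx, sdom, perm, implies):
        target_ctx = (sdom(node), perm(node))
        return node if implies(current_ctx, target_ctx) else None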
Example 7. Figure 12 protects the ordering process defined
in Figure 8. The protection nodes that protect operations
O, I' and P'' are immediately available to fire. Figure 13
illustrates the result of these nodes firing. An alternate firing
sequence might fire the protection node of O, followed by
O, and so forth. The validation operation is mediated on a
lazy basis: The protected V operation (sub-graph
passes through the I' and P'' nodes. On becoming an input
to the X node the protection operation becomes grafted,
and fires, mediating the scheduling of V. △
In this paper we consider only the security constraints
on the scheduling of nodes. How exactly a Triple Manager
decides when to schedule fireable nodes must be left to the
Triple Manager. It would be straightforward to implement a
Triple Manager that tried to ensure that the scheduling constraint
was always upheld when scheduling. In practice, we
expect that a fragile protection node would be implemented
as a TM primitive, rather than as a condensed node.
An implementation of the Triple Manager must also decide
whether the protection operation should fire as soon as
possible or whether it should wait until the node it mediates
has all of its input ports bound. Immediately firing a protection
node gives rise to the notion of speculative protection,
whereby the Triple Manager schedules, in advance, an (au-
thorized) domain for an operation before it is ready to fire.
Alternatively, deferring the firing of the protection node until
the operation it mediates has all of its input ports bound
gives rise to conservative protection. Like speculative and
conservative computation these can be controlled within the
Triple Manager.
5.2 Tenacious Protection
The disadvantage of the fragile protection operator is that
potential results are lost if the scheduling constraint is not
upheld. Rather than failing, it would be preferable to re-schedule
the node for later evaluation, or allow it to be
scheduled by another Triple Manager that has authority
to assign an appropriate domain. This is achieved by the
tenacious protection operation defined in Figure 14.

Figure 14: Definition of the Tenacious Protection Operator (graph omitted: built from cdom, perm, sdom and ifel).

Graph
T(A) is defined recursively. If the scheduling constraint is
upheld then node A becomes grafted to the input port of X
and may be fired. If the scheduling constraint is not upheld
then the result is A, lazily protected, that is, T(A). Unlike
fragile protection, the tenacious protection operator behaves
like a security wrapper that can be repeatedly probed,
but can only be unwrapped (scheduled) in an authorized domain.
The tenacious protection operator could be implemented
as a TM primitive. One interpretation is that the TM postpones
the scheduling of a node until an authorized domain
is available. However, more general interpretations are pos-
sible. For example, if the current Triple Manager cannot
assign an authorized domain then the protected operation
can be scheduled to another Triple Manager that can assign
an authorized domain.
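
A matching sketch of the tenacious operator makes the wrapper reading explicit: instead of discarding the node, a failed probe simply returns it still protected (again only an illustration, under the same assumptions as the fragile sketch above).

    # Tenacious protection: if the scheduling constraint fails, the result is
    # the node lazily re-protected, T(A), rather than null; the wrapper can be
    # probed repeatedly and unwraps only in an authorized domain.

    class Tenacious:
        def __init__(self, node):
            self.node = node                  # protected node, execution deferred

        def probe(self, current_ctx, sdom, perm, implies):
            target_ctx = (sdom(self.node), perm(self.node))
            if implies(current_ctx, target_ctx):
                return self.node              # grafted: the node may now fire
            return self                       # still wrapped: try elsewhere/later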
Example 8. Suppose that a network is partitioned in
terms of a clerk subnet and a management subnet and each
subnet has its own server which routes traffic to other sub-
nets.

Figure 12: Protecting the Ordering Process (graph omitted: fragile protection nodes F applied to the operations of Figure 8).

Figure 13: Protecting the Ordering Process (graph omitted: the result after the immediately fireable protection nodes have fired).

A Triple Manager on the clerk server starts a PO
transaction graph, and schedules the requested O operation
to a clerk's workstation. Since it cannot schedule a management
operation, it passes the 'wrapped' operation to the
Triple Manager on the management server for scheduling,
which in turn schedules it to an appropriate management
workstation. △
Domain scheduling heuristics, such as that discussed in
Example 8, should be considered part of the implementation
of the Triple Manager. The tenacious protection operator
could be thought of as a scout node that can be sent out
across the network looking for a suitable Triple Manager to
schedule the protected node. Once found, the underlying
Triple Manager transparently retrieves the protected node.
An alternative and speculative approach would be to multicast
the protection operator across the network; as soon
as one Triple Manager can schedule the protected node, the
node migrates and all other speculative protection nodes are
garbage collected. Low-level protocols to support what is,
in effect, remote node invocation have been investigated else-
where: a Triple Manager scheduling PVM processes [9] and
a traditional dataflow system [13]. Investigating suitable
domain scheduling heuristics is a topic for future research.
Example 9. Condensed Graphs are used to exploit parallelism
in a computation and the Triple Manager(s) can
schedule the computation across networks of workstations
[10].
Figure 15 gives an example of a graph that schedules a
distributed brute-force key search given known plain/cipher
text. The key space is split into a series of intervals indexed
as 1, . . . , maxindex. Primitive operation
cr(Int interval);
searches a specified interval for the key. If found the key is
returned, otherwise 0 is returned. Operation search is defined
recursively and has a high degree of parallelism that
can be exploited by the Triple Manager which schedules operation
cr to be executed on participating processors. Operation
search is passed the initial value maxindex.
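
Read as ordinary recursion, the graph computes the following function (a sketch: cr is stubbed, and in the condensed-graph version the recursive instances are independent nodes that the Triple Manager may evaluate in parallel).

    # Recursive key search over intervals 1..maxindex, mirroring Figure 15:
    # try interval i with cr, and recurse on i-1 if the key was not found.

    def cr(interval):
        """Stub for the primitive operation searching one interval."""
        raise NotImplementedError             # supplied by the application

    def search(i):
        if i == 0:
            return 0                          # key found in no interval
        key = cr(i)
        return key if key != 0 else search(i - 1)

    # The computation is started as search(maxindex), as in the text.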
An organization wishes to use this application to find a
particular key. For the purposes of security, search operations
may be scheduled only to systems within the organization
intranet, while the cr operation may be scheduled
to any recognized system. Figure 15 illustrates how these
requirements are selectively programmed within an appli-
cation. Operations search and cr are assigned permission
attributes in and out, respectively. A permission may be
associated with a node by introducing an additional permission
input port to the node, and is illustrated by using
a solid input arrow-head.

Figure 15: Programming Protection (graph omitted: recursive search graph built from ifel, search, cr and the constant 0, with permission attributes in and out).

Given the permission ordering
(out ≤ in), then systems (domains) within the intranet are
given permission in, and recognized external systems are
given permission out. Schedules implies((in,in),(in,in)), implies((in,in),(out,out))
and implies((in,in),(in,out)) hold, while implies((in,in),(out,in))
does not. △
5.3 Emergent Protection
Many protection policies base access decisions on previous
decisions and/or behaviour, for example Chinese Walls [3]
and Dynamic Separation of Duties [12]. Condensed Graphs
represent distributed computation and it is preferable not
to rely upon centralized-state approaches such as [4] to
provide mechanisms that enforce these requirements.
The wrapping protection mechanism, specified in Figure 16,
can be used to support, in a distributed fashion, a limited
form of dynamic separation of duties. The wrapping operator
W(A, x) takes as input a node A, and permits it to fire
in any domain y that is strictly more authorized than, or has an
incomparable authorization to, x. The result R from firing
A is then 'wrapped' as W(R, x ⊔ y). Thus, the permission parameter
of W is used to maintain a local state (for this node)
by acting as a high-water mark of the permissions of past
domains.
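
Under the reading of W(A, x) given above (with ⊔ as union, and y ≰ x meaning strictly more authorized or incomparable), the high-water-mark behaviour might be sketched as follows; the execute callback standing for actually firing the node is our own assumption.

    # Wrapping protection: the node may fire only in a domain whose permission
    # is not dominated by the mark; the result is re-wrapped with the mark
    # joined with that domain's permission.

    class Wrapped:
        def __init__(self, node, mark):
            self.node, self.mark = node, frozenset(mark)

        def fire(self, domain_perm, execute):
            y = frozenset(domain_perm)
            if y <= self.mark:                # y <= mark: same or lesser authority
                return self                   # separation of duty: refuse to fire
            result = execute(self.node)       # R, produced in domain y
            return Wrapped(result, self.mark | y)

    # Example 10 below: firing W(O, {}) in a {mgr} domain yields W(P, {mgr});
    # P may then fire under {clk} or {clk, mgr}, but not under {mgr} again.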
Figure 17 gives a snapshot of this mechanism in operation.
Given W(A, x), and if we have sdom(A) ≰ x, then A
is explicitly grafted to the second input port of a new W
operation. This makes A fireable, but the non-strictness of
this port of W will not graft the resulting output R. This
resulting output R of A is protected and may fire only in
a suitably different domain, and so forth. This wrapping
operation is tenacious and is easily extended to enforce the
scheduling constraint.

Figure 16: Wrapping Protection Mechanism (graph omitted: built from sdom and ifel).

Figure 17: Snapshot of a Wrapping and Unwrapping (diagram omitted): A fires in a domain y with y ≰ x and outputs R, wrapped with mark z = x ⊔ y; R then fires in a domain w with w ≰ z and outputs S, wrapped with mark v = w ⊔ z.
Example 10. Consider a simplified version of the purchase
order transaction (Figure 18). The order operation
takes as input an order-id, and (non-strict) a payment op-
eration, and outputs the payment operation P appropriately
transformed to include order value, and so forth. The
order operation is mediated as W(O, {}), where {} is the
empty permission. A manager (permission {mgr}) may execute
the O operation, and the result is the wrapped node
W(P, {mgr}). Payment P may now fire only in domains with
permissions {clk} or {clk, mgr}. △

Figure 18: Dynamic Separation of Duty (graph omitted: operations O, W and X with the empty permission {}).
While tailored to a specific requirement, the proposed
wrapping mechanism illustrates the flexibility in using Condensed
Graphs to specify (and implement) protection re-
quirements. Rather than maintaining a centralized security
state, the operator W can be thought of as providing emergent
protection: mediation results in the emergence of a
further protection mechanism to mediate a subsequent op-
eration. Investigating how this mechanism might be applied
in practice and developing emergent mechanisms for general
protection policies is a topic for future research.
6. DISCUSSION AND CONCLUSION
The Condensed Graphs model provides a single framework
in which protection requirements can be specified and
implemented within the imperative, availability and coercion
paradigms. Section 3 illustrated how using these paradigms
provides flexibility in the sequencing and control over security
critical operations. Sections 4 and 5 draw on these techniques
and develop novel protection mechanisms. A Tena-
ciously protected node (operation or data) can be repeatedly
probed, and passed around, but may only be unwrapped in
the appropriate domain. Referential transparency in the
Condensed Graphs model means that this tenacity may be
further applied to the results generated by an operation
which emerge protected by a mechanism created on the fly.
Triple Managers transparently schedule graph operations
to appropriate security domains. This allows protection requirements
to be coded as part of the graph program, independently
of the underlying architecture. Graph-based protection
operators such as tenacious protection can be viewed
as a protection wrapper that may be unwrapped only in an
authorized security domain. Scheduling a tenaciously protected
node to an authorized domain is completely trans-
parent, even though it may have been necessary to migrate
the protected node through a number of Triple Managers
before it could be successfully scheduled.
Secure WebCom [5] provides one possible implementation
of the protection model described in this paper. We-
bCom [10] Masters schedule Condensed Graph applications
over remote WebCom clients (ancillary processors). Web-
Com Masters use KeyNote credentials [2] to determine the
operations that the client is authorized to execute; Web-
Com master credentials are used by clients to determine if
the master had the authorization to schedule the (trusted)
mobile-computation that the client is about to execute. This
implementation can be interpreted in terms of the protection
mechanisms described in this paper. Client and Master
public keys provide security domains, while credentials define
their associated permissions. The authorization check
is similar to a fragile mediation on every node in the graph.
Much work remains to be done investigating how the protection
model described in this paper might be used in prac-
tice. The protection model might also be used as part of a
conventional secure system. A Condensed Graph can be regarded
as a sophisticated job-control language used to schedule
operations, such as multilevel transactions, to the protection
domains of a separation kernel [14].
Acknowledgements
Thanks to the anonymous referees and the Workshop audience
for their useful comments on this paper. This research
was supported in part by Enterprise Ireland National Software
Directorate.
--R
The keynote trust-management system version 2
The Chinese Wall security policy.
The specification and implementation of commercial security requirements including dynamic segregation of duties.
Exploiting KeyNote in Web- Com: Architecture neutral glue for trust management
Structural models: An introduction to the theory of directed graphs.
ACM Operating Systems Review 8
Condensed Graphs: Unifying Availability-Driven
Facilitating Parallel Programming in PVM using Condensed Graphs.
A Condensed Graphs Engine to Drive Metacomputing.
Managing and exploiting speculative computations in a flow driven
Some conundrums concerning separation of duty.
The design and verification of secure systems.
--TR
The specification and implementation of "commercial" security requirements including dynamic segregation of duties
Facilitating Parallel Programming in PVM Using Condensed Graphs
Protection
Design and verification of secure systems | protection mechanisms;condensed graphs;functional and dataflow programming;security models;imperative |
508810 | A fault-tolerant directory service for mobile agents based on forwarding pointers. | A reliable communication layer is an essential component of a mobile agent system. We present a new fault-tolerant directory service for mobile agents, which can be used to route messages to them. The directory service, based on a technique of forwarding pointers, introduces some redundancy in order to ensure resilience to stopping failures of nodes containing forwarding pointers; in addition, it avoids cyclic routing of messages, and it supports a technique to collapse chains of pointers that allows direct communications between agents. We have formalised the algorithm and derived a fully mechanical proof of its correctness using the proof assistant Coq; we report on our experience of designing the algorithm and deriving its proof of correctness. The complete source code of the proof is made available from the WWW. | INTRODUCTION
Mobile agents have emerged as a major programming paradigm
for structuring distributed applications [3, 5]. For in-
stance, the magnitude project [13] investigates the use of
mobile agents as intermediary entities capable of negotiating
access to information resources on behalf of mobile users.
Several important issues remain to be addressed before mobile
agents become a mainstream technology for such appli-
cations: among them, a communication system and a security
infrastructure are needed respectively for facilitating
communications between mobile agents and for protecting
agents and their hosts.
Here, we focus solely on the problem of communications, for
which we have adopted a peer-to-peer communication model
using a performative-based agent communication language
[11], as prescribed by KQML and FIPA. Various authors
have previously investigated a communication layer for mobile
agents based on forwarding pointers [16, 10]. In such
an approach, when mobile agents migrate, they leave forwarding
pointers that are used to route messages. A point
of concern is to avoid cyclic routing when agents migrate
to previously visited sites; additionally, lazy updates and
piggy-backing of information on messages can be used to
collapse chains of pointers [12]. For structuring and clarity
purposes, a communication layer is usually defined in terms
of a message router and a directory service; the latter tracks
mobile agents' locations, whereas the former forwards messages
using the information provided by the latter.
Directory services based on forwarding pointers are currently
not tolerant to failures: the failure of a node containing
a forwarding pointer may prevent finding agents'
positions. The purpose of this paper is to present a directory
service, fully distributed and resilient to failures exhibited
by intermediary nodes, possibly containing forwarding
pointers. This algorithm may be used by a fault-tolerant
message router (which itself will be the object of another
publication).
We consider stopping failures according to which processes
are allowed to stop during the course of their execution [7].
The essence of our fault-tolerant distributed directory service
is to introduce redundancy of forwarding pointers, typically
by making N copies of agents' location information.
This type of redundancy ensures the resilience of the algorithm
to a maximum of N - 1 failures of intermediary nodes.
We will show that the complexity of the algorithm remains
linear in N. Our specific contributions are:
1. A new directory service based on forwarding pointers,
fault-tolerant, preventing cyclic routing, and not involving
any static location;
2. A full mechanical proof of its correctness, using the
proof assistant Coq [1]; the complete source code of the
proof (involving some 25000 tactic invocations) may be
downloaded from the following URL [9].
We begin this paper with a survey of background work (Section
2) and follow with a summary of a routing algorithm
based on forwarding pointers (Section 3). We present our
new directory service and its formalisation as an abstract
machine (Section 4). The purpose of Section 5 is to summarise
the correctness properties of the algorithm: its safety
states that the distributed directory service correctly and
uniquely identies agents' positions, whereas the liveness
property shows that the algorithm reaches a stable state
after a nite number of transitions, once agents stop mi-
grating. Then, in Section 6, we report on our experience of
designing the algorithm and deriving its proof of correctness,
and we suggest possible variants or extensions.
2. BACKGROUND
The topic of mobile agent tracking and communication has
been researched extensively by the mobile agent commu-
nity. Very early on, location-aware communications were
proposed: they consist of sending messages to locations
where agents are believed to be, but typically result in failure
when the receiver agent has migrated [15, 19].
For a number of applications, such a service is not satisfactory
because the key property is to get messages reliably
delivered to a recipient, wherever its location and whatever
the route adopted (for instance, when two mobile agents
undertake a negotiation on how to solve a specific prob-
lem). Location-transparent communication services were
introduced as a means to route and deliver messages automatically
to mobile agents, independently of their migra-
tion. (Such services have been shown to be implementable
on top of a location-aware communication layer [19].)
In the category of location-transparent communication lay-
ers, there are essentially two approaches, respectively based
on home agents and forwarding pointers. In systems based
on home agents, such as Aglets [5], each mobile agent is
associated with a non-mobile home agent. In order to communicate
with a mobile agent, a message has to be sent to
its associated home agent, which forwards it to the mobile
one; when a mobile agent migrates, it informs its home agent
of its new position. Alternatively, in mobile agent systems
such as Voyager [16], agents that migrate leave trails of forwarding
pointers, which are used to route messages.
In situations such as the pervasive computing environment,
the mechanism of a home agent may defeat the purpose
of using mobile agents by re-introducing centralisation: the
home agent approach puts a burden on the infrastructure,
which may hamper its scalability, in particular, in massively
distributed systems. A typical illustration is two mobile
agents with respective home bases in the US and Europe
having to communicate at a host in Australia. In such a
scenario, routing via home agents is not desirable, and may
not be possible when the host is temporarily disconnected
from the network. If we introduce a mechanism by which
home agents change location dynamically according to the
task at hand, we face the problem of how to communicate
reliably with a home agent, which is itself mobile. Alter-
natively, we could only use the home agent to bootstrap
communication, and then shortcut the route, but this approach
becomes unreliable once agents migrate. Finally, the
home agent also appears as a single point of failure: when it
exhibits a failure, it becomes impossible to track the mobile
agent or to route messages to it.
A naive forwarding pointer implementation causes communications
to become more expensive as agents migrate, because
chains of pointers increase. Chains of pointers need
to be collapsed promptly so that mobile agents become independent
of the hosts they previously visited. Once the
chain has collapsed, direct communications become possible
and avoid the awkward scenario discussed above. As far as
tolerance to failures is concerned, the crash of an intermediary
node with a forwarding pointer prevents upstream nodes
to forward messages. Collapsing chains of pointers also has
the benet of reducing the system's exposure to failures.
Coordination models offer a more asynchronous form of com-
munication, typically involving a tuple space [4]. As coordination
spaces are non-mobile, they may suer from the same
problem as the home agent; solutions such as distributed
spaces may be introduced for that purpose but maintaining
consistency is a non-trivial problem. An inconvenient of the
coordination approach is that it requires coordinated processes
to poll tuple spaces, which may be ine-cient in terms
of both communication and computation. As a result, tuple
spaces generally provide a mechanism by which registered
clients can be notified of the arrival of a new tuple: when
clients are mobile, we are back to the problem of how to
deliver such notications reliably. If the tuple space itself is
mobile [17], the problem is then to deliver messages to the
tuple space.
This discussion shows that reliable delivery of messages to
mobile agents without using static locations to route messages
is essential, even if peer-to-peer communications are
not adopted as the high-level interaction paradigm between
agents. Previous work has focused on formalisation [10] and
implementation [16] of forwarding pointers, but solutions
were not fault-tolerant. We summarise such an approach
in Section 3 before extending it with support for failures in
Section 4.
3. PRINCIPLES OF DIRECTORY SERVICE
In this section, we summarise the principles of a communication
layer based on forwarding pointers [10] without any
fault-tolerance. The algorithm comprises two components:
a distributed directory service and a message router, which
we describe below.
Distributed Directory Service. Each mobile agent is
associated with a timestamp that is increased every time
the agent migrates. When an agent has autonomously decided
to migrate to a new location, it requests the communication
layer to transport it to its new destination. When
the agent arrives at a new location, an acknowledgement
message containing both its new position and its newly-
incremented timestamp is sent to its previous location. As a
result, for each site, one of the following three cases is valid
for each agent A: (i) the agent A is local, (ii) the agent A
is in transit but has not acknowledged its new position yet,
or (iii) the agent A is known to have been at a remote location
with a given timestamp. Timestamps are essential to
avoid race conditions between acknowledgement messages:
by using timestamps, a site can decide which position information
is the most recent, and therefore can avoid creating
cycles in the graph of forwarding pointers. In order to avoid
an increasing cost of communication when the agent mi-
grates, a mechanism was specified to propagate information
about the agent's position, which in turn reduces the length of
chains of pointers [10].
Message Router. Sites rely on the information about
agents' positions in order to route messages. For any in-coming
message aimed at an agent A, the message will be
delivered to A if A is known to be local. If A is in transit,
the message will be enqueued, until A's location becomes
known; otherwise, the message is forwarded to A's known
location.
Absence of Fault Tolerance. There is no redundancy
in the information concerning an agent's location. Indeed,
sites only remember the most recent location of an agent,
and only the agent's previous location is informed of its
new position after a migration. As a result, a site
(transitively) pointing at a site exhibiting a failure has lost
its route to the agent.
4. FAULT-TOLERANT ALGORITHM
The intuition of our solution to the problem of failures is to
introduce some redundancy in the information about agents'
positions. Two essential elements are used for this purpose.
First, agents remember N previous different sites that they
have visited; once an agent arrives at a new location, it informs
its N previous locations of its new position. Second,
sites remember up to N different positions for an agent,
and their associated timestamps. We shall establish that
the algorithm is able to determine the agent's position correctly,
provided that the number of stopping failures remains
smaller or equal to N - 1.
Remark We aim to design an algorithm which is resilient to
failures of intermediary nodes. We are not concerned with reliability
of agents themselves. Systems replicating agents and using
failure detectors such as [8] may be used for that purpose; they
are complementary to our approach.
We adopt an existing framework [10] to model the distributed
directory service as an abstract machine, whose state space
is summarised in Figure 1. For the sake of clarity, we consider
a single mobile agent; the formalisation can easily be
extended to multiple agents by introducing names by which
agents are being referred to. An abstract machine is composed
of a set of sites taking part in a computation. Agent
timestamps, which we call mobility counters, are defined as
natural numbers. A memory is defined as an association
list, associating locations with mobility counters; we represent
an empty memory by ∅. The value N is a parameter of
the algorithm. We will show that the agent's memory has
a size N and that the algorithm tolerates at most N - 1
failures.
The set of messages is inductively defined by two constructors.
These constructors are used to construct messages,
which respectively represent an agent in transit and an arrival
acknowledgement. The message representing an agent
in transit, typically of the form agent(s, l, M̃), contains the
site s that the agent is leaving, the value l of the mobility
counter it had on that site, and the agent's memory M̃:
the N previous sites it visited and associated mobility counters.
The message representing an arrival acknowledgement,
ack(s, l), contains the site s (and associated mobility counter
l) where the agent is.
We assume that the network is fully connected, that communications
are reliable, and that the order of messages in
between pairs of sites is preserved. These communication
hypotheses are formalised in the abstract machine
by point-to-point communication links, which we define as
queues using the following notations, where the expression
q1 · q2 denotes the concatenation of two queues q1, q2, and
first(q) the head of a queue q.
Each site maintains some information, which we abstract as
"tables" in the abstract machine. The location table maps
each site to a memory; for a site s, the location table indicates
the sites where s believes the agent has migrated to
(with their associated mobility counter). The present table is
meant to be empty for all sites, except for the site where the
agent is currently located, when the agent is not in transit;
there, the present table contains the sites previously visited
by the agent. The mobility counter table associates each site
with the mobility counter the agent had when it last visited
the site; the value is zero if the agent has never visited the
site.
After the agent has reached a new destination, acknowledgement
messages have to be sent to the N previous sites
it visited. We decouple the agent's arrival from acknowledgement
sending, so that transitions that deal with incoming
messages are different from those that generate new
messages. Consequently, we introduce a further table, the
acknowledgement table, indicating which acknowledgements
still have to be sent.
In our formalisation, we use a variable to indicate whether a
machine is up and running. A site's failure state is allowed
to change from false to true, which indicates that the site is
exhibiting a failure. We are modelling stopping failures [7]
since no transition allows a failure state to change from true
to false.
A complete configuration of the abstract machine is defined
as the Cartesian product of all tables and message queues.
Our formalisation can be regarded as an asynchronous distributed
system [7]. In a real implementation, tables are
not shared resources, but their contents can be distributed
at each site.
The behaviour of the algorithm is represented by transitions,
which specify how the state of the abstract machine evolves.
Figure 2 contains all the transitions of the distributed directory
service. Transitions are assumed to be executed atom-
ically. For convenience, we use some notations such as post,
receive or table updates, which give an imperative look to
the algorithm; their definitions are as follows. Given a configuration
⟨loc T, present T, mob T, ack T, fail T, k⟩, an update such as
mob T(s) := l denotes the configuration in which mob T is replaced
by the table that maps s to l and agrees with mob T elsewhere; a
similar notation is used for other
tables. Given a configuration, post(s1, s2, m) denotes the configuration
in which m is appended to the queue k(s1, s2), all tables being unchanged.
A similar notation is used for receive.
Figure 1: State Space (definitions omitted): S = {s1, . . . , sns} is the set of sites, mobility counters are natural numbers, a memory is a list(S × L), and the state comprises the location, present, mobility counter, acknowledgement and failure tables together with the communication queues.
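
Read concretely, a configuration might be modelled as follows; this is our sketch of Figure 1, with Python dictionaries standing in for the tables.

    # Sketch of the abstract machine state. A memory is a list of
    # (site, mobility counter) pairs; k maps ordered pairs of sites to the
    # queue of messages in transit between them.

    from dataclasses import dataclass, field

    @dataclass
    class Configuration:
        loc_T: dict = field(default_factory=dict)      # site -> memory (believed locations)
        present_T: dict = field(default_factory=dict)  # site -> memory (non-empty iff agent here)
        mob_T: dict = field(default_factory=dict)      # site -> mobility counter (0 if never visited)
        ack_T: dict = field(default_factory=dict)      # site -> acknowledgements still to send
        fail_T: dict = field(default_factory=dict)     # site -> True once the site has stopped
        k: dict = field(default_factory=dict)          # (site, site) -> list of messages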
In each rule of Figure 2, the conditions that appear to the
left-hand side of an arrow are guards that must be satisfied
in order to be able to fire a transition. For instance, the
first four rules contain a proposition of the form ¬fail T(s),
which indicates that the rule has to occur for a site s that is
up and running. The right-hand side of a rule denotes the
configuration that is reached after transition. We assume
that guard evaluation and new configuration construction
are performed atomically. In order to illustrate our rules,
we present graphical representations of configurations; the
first part of Figure 3 illustrates an agent that has successively
visited sites s0, . . . , s3, with respective timestamps up to
t+2. In this example, we assume that the value
of N is 3. (Note that s0 is not represented in the figure.)
The first transition of Figure 2 models the actions to be
performed, when an agent decides to migrate from s1 to s2 .
In the guard, we see that the present table at s1 must be
non-empty, which indicates that the agent is present at s1 .
After transition, the present table at s1 is cleared, and an
agent message is posted between s1 and s2 ; the message contains
the agent's origin s1 , its mobility counter mob T (s1 ),
and the previous content of the present table at s1 . Note
that s2 , the destination of the agent, is only used to specify
which communication channel the agent message must be
enqueued into. The site s1 does not need to be communicated
this information, nor does it have to remember that
site. In a real implementation, the agent message would also
contain the complete agent state to be restarted by the re-
ceiver. The second part of Figure 3 illustrates changes in
the system, when an agent has initiated its migration.
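
As a sketch, this transition reads as follows on the Configuration sketched above (agent messages are rendered as plain tuples; this mirrors the rule, not the Coq formalisation).

    # migrate agent(s1, s2). Guard: s1 is up and the agent is present at s1.
    # Effect: clear the present table at s1 and post an agent message to s2
    # carrying the origin s1, its mobility counter and the agent's memory.

    def migrate_agent(c, s1, s2):
        assert not c.fail_T.get(s1, False)            # site up and running
        memory = c.present_T.get(s1, [])
        assert memory                                  # agent present at s1
        c.present_T[s1] = []                           # agent is now in transit
        msg = ('agent', s1, c.mob_T.get(s1, 0), memory)
        c.k.setdefault((s1, s2), []).append(msg)       # post(s1, s2, msg)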
The second transition is concerned with s2 handling a
message agent(s3, l, M̃) coming from s1.¹ Tables are updated
to reflect that s2 is becoming the new agent's location, with
a new mobility counter. Our algorithm prescribes
the agent to remember N different sites it has visited. As
s2 may have been visited recently, we remove s2 from M̃
before adding the site s3 where it was located before migration.
The call add(N, s, l, M̃) adds an association (s, l)
to the memory M̃, keeping at most N different entries with
the highest timestamps. (Appendix A contains the complete
definition of add.) In addition, the acknowledgement
table of s2 is updated, since acknowledgements have to be
sent back to those previously visited sites. At this point, a
proper implementation would reinstate the agent state and
resume its execution. The third part of Figure 3 illustrates
the system as an agent arrives at a new location.

[Footnote 1: Note that s3 is not required to be equal to s1. Indeed, we
want the algorithm to be able to support sites that forward
incoming agents to other sites.]
According to the third transition, if the acknowledgement
table on s1 contains a pair (s2, l2), then an acknowledgement
message has to be sent from s1 to s2;
the acknowledgement message indicates that the agent is on
s1 with a mobility counter mob T (s1 ).
If a site s2 receives an acknowledgement message about site
s3 and mobility counter l, its location table has to be up-dated
accordingly. Let us note two properties of this rule.
First, we do not require the emitter s1 of the acknowledgement
message to be equal to s3 ; this property allows us
to use the same message for propagating more information
about the agent's location. Second, we make sure that up-dating
the location table (i) maintains information about
different locations, (ii) does not overwrite existing location
information with older information. This functionality is implemented
by the function add, whose specification may be
found in appendix A.
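
Appendix A is not reproduced here, so the following sketch only illustrates one implementation consistent with the stated specification: bounded size, distinct sites, highest timestamps kept, and no overwriting of newer information by older.

    # add(N, s, l, M): insert the pair (s, l) into memory M, keeping at most
    # N entries for distinct sites with the highest mobility counters.

    def add(N, s, l, M):
        best = {}
        for site, counter in list(M) + [(s, l)]:
            if counter > best.get(site, -1):  # keep most recent entry per site
                best[site] = counter
        entries = sorted(best.items(), key=lambda e: e[1], reverse=True)
        return entries[:N]                    # N entries with highest timestamps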
According to rule inform of Figure 2, any site s1 believing
that the agent is located at site s3 , with a mobility counter
l, may elect to communicate its belief to another site s2 .
Such a belief is also communicated by an ack message. It
is important to distinguish the roles of the send ack and
inform transitions. The former is mandatory to ensure the
correct behaviour of the algorithm, whereas the latter is
optional. The purpose of inform is to propagate information
about the agent's location in the system, so that the agent
may be found in fewer steps. As opposed to previous rules,
the inform rule is non-deterministic in the destination and
location information in an acknowledgement message. At
this level, our goal is to define a correct specification of an
algorithm: any implementation strategy will be an instance
of this specification; some of them are discussed in Section
6. The first part of Figure 4 illustrates the states of the
system after sending acknowledgement messages, whereas
the second one shows the effect of such messages.
Figure 2: Fault-Tolerant Directory Service (transition rules omitted): for a configuration ⟨loc T, present T, mob T, ack T, fail T, k⟩, the legal transitions are migrate agent, receive agent, send ack, receive ack, inform, stop failure and msg failure; each rule checks its guard and updates the tables (for instance, migration sets present T(s1) := ∅) and the message queues as described in the text.
Failure. The first five rules of Figure 2 require the site s
where the transition takes place to be up and running, i.e.
¬fail T(s). Our algorithm is designed to be tolerant to stopping
failure, according to which processes are allowed to stop
somewhere in the middle of their execution [7]. We model
a stopping failure by the transition stop failure, changing
the failure state of the site that exhibits the failure. Consequently,
a site that has stopped will be prevented from
performing any of the first five transitions of Figure 2.
Figure 3: Agent Migration (part 1) (diagrams omitted: snapshots of the pres and loc tables at the sites involved in the migration).

As far as distributed system modelling is concerned, it is
unrealistic to consider that messages that are in transit on
a communication link remain present if the destination of
the communication link exhibits a failure. Rule msg failure
shows how messages in transit to a stopped site may be
lost. A similar argument may also hold for messages that
were posted (but not sent yet) at a site that stops. We could
add an extra rule handling such a case, but we did not do so
in order to keep the number of rules limited. As a result, our
communication model can be seen as using buffered inputs
and unbuffered outputs.
Figure 4: Agent Migration (part 2) (diagrams omitted: snapshots of the pres and loc tables as acknowledgements are sent and received).

Initial and Legal Configurations. In the initial configuration,
noted c_i, we assume that the agent is at a given
site origin with a mobility counter set to N + 1. Obviously,
at creation time, an agent cannot have visited N sites previously.
Instead, the creation process elects a set S_i of different
sites that act as "backup routers" for the agent in the initial
configuration. Each site is associated with a different
mobility counter in the interval [1, N]. Such N sites could
be chosen non-deterministically by the system or could be
configured manually by the user. For each site in S_i, the
location table points to the origin and to sites of S_i with
a higher mobility counter; the location table at all other
sites contains the origin and the N - 1 first sites of S_i. The
present table at origin contains the sites in S_i. A detailed
formalisation of the initial configuration is available from [9].
A configuration c is said to be legal if there is a sequence
of transitions t1, . . . , tn by which c is reachable from the
initial configuration c_i. We define ↦* as the reflexive,
transitive closure of ↦.
5. CORRECTNESS
The correctness of the distributed directory service is based
on two properties: safety and liveness. The safety of the
distributed directory service ensures that it correctly tracks
the mobile agent's location, in particular in the presence of
failures. The liveness guarantees that agent location information
eventually gets propagated.
We intuitively explain the safety property proof as follows.
An acknowledgement message results in the creation of a
forwarding pointer that points towards the agent's loca-
tion. Forwarding pointers may be modelled by a relationship
parent that defines a directed acyclic graph leading to the
agent's location.
In the presence of failures, we show that the relationship
parent contains sufficient redundancy in order to guarantee
the existence of a path leading to the agent, without involving
any failed site: (i) sites that belong to the agent's
memory have the agent's location as a parent; (ii) sites
that do not belong to the agent's memory have at least N
parents. Consequently, if the number of failures is strictly
inferior to N, each site always has at least one parent that
is closer to the agent's location; by repeating this argument,
we can find the agent's location.
We summarise the liveness result similar to the one in [10].
A finite number of transitions can be performed from any
legal configuration (if we exclude migrate agent and inform).
Furthermore, we can prove that, if there is a message at the
head of a communication channel, there exists a transition
of the abstract machine that consumes that message. Consequently,
if we assume that message delivery and machine
transitions are fair, and if the mobile agent is stationary at
a location, then location tables will eventually be updated,
which proves the liveness of the algorithm.
All proofs were mechanically derived using the proof assistant
Coq [1]. Coq is a theorem prover whose logical foundation
is constructive logic. The crucial difference between
constructive logic and classical logic is that ¬¬p ⇒ p does
not hold in constructive logic. The consequence is that the
formulation of proofs and properties must make use of constructive
and decidable statements. Due to space restriction,
we do not include the proofs but they can be downloaded
from [9]. The notations adopted here are pretty-printed versions
of the mechanically established ones.
6. ALGORITHM AND PROOF DISCUSSION
The constructive proof of the initial algorithm without fault-tolerance
helped us understand the different invariants that
needed to be preserved. In particular, the algorithm maintains
a directed acyclic graph leading to the agent's position;
interestingly, short-cutting chains of pointers by propagating
acknowledgement messages ensures that the graph remains
connected and acyclic. Using the same mechanism
of timestamp in combination with replication preserves a
similar invariant in the presence of failures.
The resulting algorithm turned out to be simpler because it
uses fewer rules, and its correctness proof was easier to derive.
When N is equal to 1, the algorithm has the same observable
behaviour as [10]. From a practical point of view, generating
the mechanical proof still remained a tedious process,
though simpler, because it needed some 25000 tactic invo-
cations, of which 5000 for the formalisation of the abstract
machine were reused from our initial work.
The complexity of the algorithm is linear in N as far as the
number of messages (N acknowledgement messages per mi-
gration), message length (size of a memory is O(N)), space
per site (size of a memory is O(N)), and time per migration
are concerned. Our proof established the correctness in
the worst-case scenario. Indeed, the algorithm may tolerate
more than N failures provided that one parent, at least,
remains up and running for each site.
For a given application, the designer will have to choose the
value of N. If N is chosen to be equal to the number of
nodes in the network, the system will be fully reliable but
its complexity, even though linear, is too high on an Internet
scale. Instead, an engineering decision should be made: in
a practical network, from network statistics, one can derive
the probability of obtaining a given number of simultaneous failures.
For each application, and for the quality of service it
requires, the designer selects the appropriate failure proba-
bility, which determines the number of simultaneous failures
the system should be able to tolerate.
A remarkable property of the algorithm is that it does not
impose any delay upon agents when they initiate a migra-
tion. Forwarding pointers are created temporarily until a
stable situation is reached and they are removed. This has
to be contrasted with the home agent approach, which requires
the agent to notify its homebase, before and after
each migration. Interestingly, our algorithm does not preclude
us also from using other algorithms; we could envision
a system where such algorithms are selected at runtime according
to the network conditions and the quality of service
requirements of the application.
Propagating agent location information with rule inform is
critical in order to shorten chains of forwarding pointers,
because shorter chains reduce the cost of finding an agent's
location. The ideal strategy for sending these messages depends
on the type of distributed system, and on the applications
using the directory service. A range of solutions is
possible and two extremes of the spectrum are easily identifiable.
In an eager strategy, every time a mobile agent mi-
grates, its new location is broadcasted to all other sites; such
a solution is clearly not acceptable for networks such as the
Internet. Alternatively, a lazy strategy could be adopted
[12] but it requires cooperation with the message router.
The recipient of a message may inform its emitter, when
the recipient observes that the emitter has out-of-date
routing information. In such a strategy, tables are only up-dated
when user messages are sent.
In Section 4, communication channels in the abstract machine
are defined as queues. We have established that swapping
any two messages in a given channel does not change
the behaviour of the algorithm; in other words, messages do
not need to be delivered in order.
Message Router. This paper studied a distributed directory
service, and we can sketch two possible uses for message
routing.
Simple Routing. The initial message router [10] can be
adapted to the new distributed directory service. A site
receiving a message for an agent that is not local forwards
the message to the site appearing in its location table with
the highest mobility counter; if the location table is empty,
messages are accumulated until the table is updated. This
simple algorithm does not use the redundancy provided by
the directory service and is therefore not tolerant to failure.
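
A sketch of this simple router, over the Configuration sketched in Section 4 (deliver, forward and the pending queue are assumed hooks):

    # Simple (non-fault-tolerant) routing: deliver locally, else forward to
    # the location-table entry with the highest mobility counter, else enqueue.

    def route(c, here, message, deliver, forward, pending):
        if c.present_T.get(here):             # the agent is local
            deliver(message)
        elif c.loc_T.get(here):               # forward along the freshest pointer
            target, _ = max(c.loc_T[here], key=lambda e: e[1])
            forward(target, message)
        else:
            pending.append(message)           # wait until the table is updated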
Parallel Flooding. A site must endeavour to forward a
message to N sites. If required, it has to keep copies of messages
until N acknowledgements have been received. By
making use of redundancy, this algorithm would guarantee
the delivery of messages. We should note that the algorithm
needs a mechanism to clear messages that have been
delivered and are still held by intermediate nodes.
Further Related Work. Murphy and Picco [14] present
a reliable communication mechanism for mobile agents. Their
study is not concerned with nodes that exhibit failures, but
with the problem of guaranteeing delivery in the presence of
runaway agents. Whether their approach could be combined
with ours remains an open question.
Lazar et al. [6] migrate mobile agents along a logical hierarchy
of hosts, and also use that topology to propagate mes-
sages. As a result, they are able to give a logarithmic bound
on the number of hops involved in communication. Their
mechanism does not offer any redundancy: consequently,
stopping failures cannot be handled, though they allow reconnections
of temporarily disconnected nodes.
Baumann and Rothermel [2] introduce the concept of a shadow
as a handle on a mobile agent that allows applications to
terminate a mobile agent execution by notifying the termination
to its associated shadow. Shadows are also allowed to
be mobile. Forwarding pointers are used to route messages
to mobile agents and mobile shadows. Some fault-tolerance
is provided using a mechanism similar to Jini leases, requiring
messages to be propagated after some timeout. This differs
from our approach that relies on information replication
to allow messages to be routed through multiple routes.
Mobile computing devices share with mobile agents the problem
of location tracking. Prakash and Singhal [18] propose
a distributed location directory management scheme that
can adapt to changes in geographical distribution of mobile
hosts population in the network and to changes in mobile
host location query rate. Location information about mobile
hosts is replicated at O(√m) base stations, where m is the
total number of base stations in the system. Mobile hosts
that are queried more often than others have their location
information stored at a greater number of base stations. The
proposed algorithm uses replication to offer improved performance
during lookups and updates, but not to provide
any form of fault tolerance.
7. CONCLUSION
In this paper, we have presented a fault-tolerant distributed
directory service for mobile agents. Combined with a message
router, it provides a reliable communication layer for
mobile agents. The correctness of the algorithm is stated in
terms of its safety and liveness.
Our formalisation is encoded in the mechanical proof assistant
Coq, also used for carrying out the proof of correctness.
The constructive proof gives us a very good insight on the al-
gorithm, which we want to use to specify a reliable message
router. This work is part of an effort to define a mechanically
proven correct mobile agent system. Besides message
routing, we also intend to investigate and formalise security
and authentication methods for mobile agents.
8. ACKNOWLEDGEMENTS
Thanks to Nick Gibbins, Dan Michaelides, Victor Tan and
the anonymous referees for their comments. This research is
funded in part by QinetiQ and EPSRC Magnitude project
(reference GR/N35816).
9. REFERENCES
--R
The shadow detection protocol for mobile agents.
Migratory Applications.
Reactive Tuple Spaces for Mobile Agent Coordination
Program and Deploying Java Mobile Agents with Aglets.
A Scalable Location Tracking and Message Delivery Scheme for Mobile Agents.
Distributed Algorithms.
Providing Fault Tolerance to Mobile Intelligent Agents.
A Fault-Tolerant Distributed Directory Service for Mobile Agents: the Constructive Proof in Coq
Distributed Directory Service and Message Router for Mobile Agents.
SoFAR with DIM Agents: An Agent Framework for Distributed Information Management.
Mobile Objects in Java.
MAGNITUDE: Mobile AGents Negotiating for ITinerant Users in the Distributed Enterprise.
Reliable Communication for Highly Mobile Agents.
An RPC mechanism for transportable agents.
LIME: Linda meets mobility.
A Dynamic Approach to Location Management in Mobile Computing Systems.
Nomadic Pict: Language and Infrastructure Design for Mobile Agents.
--TR
Distributed directory service and message routing for mobile agents
Programming and Deploying Java Mobile Agents with Aglets
The Shadow Approach
Reactive Tuple Spaces for Mobile Agent Coordination
Migratory Applications
Reliable Communication for Highly Mobile Agents
Pict
An RPC Mechanism for Transportable Agents
--CTR
Denis Caromel , Fabrice Huet, An adaptative mechanism for communicating with mobile objects, Proceedings of the 1st French-speaking conference on Mobility and ubiquity computing, June 01-03, 2004, Nice, France
Luc Moreau , Peter Dickman , Richard Jones, Birrell's distributed reference listing revisited, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.6, p.1344-1395, November 2005 | mobile agents;distributed directory service;fault tolerance |
508845 | Proxy-based security protocols in networked mobile devices. | We describe a resource discovery and communication system designed for security and privacy. All objects in the system, e.g., appliances, wearable gadgets, software agents, and users have associated trusted software proxies that either run on the appliance hardware or on a trusted computer. We describe how security and privacy are enforced using two separate protocols: a protocol for secure device-to-proxy communication, and a protocol for secure proxy-to-proxy communication. Using two separate protocols allows us to run a computationally-inexpensive protocol on impoverished devices, and a sophisticated protocol for resource authentication and communication on more powerful devices.We detail the device-to-proxy protocol for lightweight wireless devices and the proxy-to-proxy protocol which is based on SPKI/SDSI (Simple Public Key Infrastructure / Simple Distributed Security Infrastructure). A prototype system has been constructed, which allows for secure, yet efficient, access to networked, mobile devices. We present a quantitative evaluation of this system using various metrics. | INTRODUCTION
Attaining the goals of ubiquitous and pervasive computing [6, 2] is becoming more and more feasible as the number of computing devices in the world increases rapidly. (This work was funded by Acer Inc., Delta Electronics Inc., Research Center, and Philips Research under the MIT Project Oxygen partnership, and by DARPA through the Office of Naval Research under contract number N66001-99-2-891702.) However, there are still significant hurdles to overcome when
integrating wearable and embedded devices into a ubiquitous
computing environment. These hurdles include designing
devices smart enough to collaborate with each other,
increasing ease-of-use, and enabling enhanced connectivity
between the different devices.
When connectivity is high, the security of the system is
a key factor. Devices must only allow access to authorized
users and must also keep the communication secure when
transmitting or receiving personal or private information.
Implementing typical forms of secure, private communication
using a public-key infrastructure on all devices is difficult
because the necessary cryptographic algorithms are
CPU-intensive. A common public-key cryptographic algorithm
such as RSA using 1024-bit keys takes 43ms to sign
and 0.6ms to verify on a 200MHz Intel Pentium Pro (a 32-
bit processor) [30]. Some devices may have 8-bit micro-controllers
running at 1-4 MHz, so public-key cryptography
on the device itself may not be an option. Nevertheless,
public-key based communication between devices over a network
is still desirable.
This paper presents our approach to addressing these issues. We describe the architecture of our resource discovery
and communication system in Section 2. The device-to-
proxy security protocol is described in Section 3. We review
SPKI/SDSI and present the proxy-to-proxy protocol that
uses SPKI/SDSI in Section 4. Related work is discussed in
Section 5. The system is evaluated in Section 6.
1.1 Our Approach
To allow the architecture to use a public-key security
model on the network while keeping the devices themselves
simple, we create a software proxy for each device. All
objects in the system, e.g., appliances, wearable gadgets,
software agents, and users have associated trusted software
proxies that either run on an embedded processor on the
appliance, or on a trusted computer. In the case of the
proxy running on an embedded processor on the appliance,
we assume that device-to-proxy communication is inherently secure. (For example, in a video camera, the software that controls various actuators runs on a powerful processor, and the proxy for the camera can also run on that embedded processor.) If the device has minimal computational power, as is typically the case for lightweight devices, e.g., remote controls, active badges, etc., and communicates with its proxy through a wired or wireless network, we force the communication to adhere to a device-to-proxy protocol (cf. Section 3). Proxies communicate with each other using a secure proxy-to-proxy protocol based on SPKI/SDSI (Simple Public Key Infrastructure / Simple Distributed Security Infrastructure). Having two different protocols
allows us to run a computationally-inexpensive security
protocol on impoverished devices, and a sophisticated
protocol for resource authentication and communication on
more powerful devices. We describe both protocols in this
paper.
1.2 Prototype Automation System
Using the ideas described above, we have constructed a
prototype automation system which allows for secure, yet
efficient, access to networked, mobile devices. In this system, each user wears a badge called a K21 which identifies the user and is location-aware: it "knows" the wearer's location
within a building. User identity and location information is
securely transmitted to the user's software proxy using the
device-to-proxy protocol.
Devices themselves may be mobile and may change locations. Attribute search over all controllable devices can be performed to find the nearest device, or the most appropriate device under some metric (for example, a user may wish to print to the nearest printer that he/she has access to).
By exploiting SPKI/SDSI, security is not compromised as
new users and devices enter the system, or when users and
devices leave the system. We believe that the use of two different
protocols, and the use of the SPKI/SDSI framework
in the proxy-to-proxy protocol has resulted in a secure, scalable, efficient, and easy-to-maintain automation system.
2. SYSTEM ARCHITECTURE
The system has three primary component types: devices,
proxies and servers. A device refers to any type of shared
network resource, either hardware or software. It could be
a printer, a wireless security camera, a lamp, or a software
agent. Since communication protocols and bandwidth between
devices can vary widely, each device has a unique
proxy to unify its interface with other devices. The servers
provide naming and discovery facilities to the various devices.
We assume a one-to-one correspondence between devices
and proxies. We also assume that all users are equipped
with K21s, whose proxies run on trusted computers. Thus
our system only needs to deal with devices, proxies and the
server network.
The system we describe is illustrated in Figure 1.
2.1 Devices
Each device, hardware or software, has an associated trusted
software proxy. In the case of a hardware device, the
proxy may run on an embedded processor within the device, or on a trusted computer networked with the device.
In the case of a software device, the device can incorporate
the proxy software itself.
Each device communicates with its own proxy over the
appropriate protocol for that particular device. A printer
wired into an Ethernet can communicate with its proxy
using TCP/IP. A wireless camera uses a wireless protocol
for the same purpose. The K21 (a simple device with a
lightweight processor) communicates with its proxy using
the particular device-to-proxy protocol described in Section 3.
Figure 1: System Overview. [Diagram: a K21 and a VCR each communicate with their proxy (K21 Proxy and VCR Proxy, running in a proxy farm) via the device-to-proxy protocol (Section 3); the proxies exchange events such as "Play Tape" via the proxy-to-proxy protocol (Section 4); a server network provides name resolution and routing.]
Thus, the device-side portion of the proxy must be customized
for each particular device.
2.2 Proxy
The proxy is software that runs on a network-visible computer. The proxy's primary function is to make access-control
decisions on behalf of the device it represents. It may
also perform secondary functions such as running scripted
actions on behalf of the device and interfacing with a directory
service.
The proxy provides a very simple API to the device. The
sendToProxy() method is called by the device to send messages
to the proxy. The sendToDevice() method is called
by the proxy to send messages to the device. When a proxy
receives a message from another proxy, depending on the
message, the proxy may translate it into a form that can
be understood by the proxy's particular device. It then forwards
the message to the device. When a proxy receives a
message from its device, it may translate the message into a
general form understood by all proxies, and then forward the
message to other proxies. Any time a proxy receives a message, before performing a translation and passing the message on to the device, it performs the access control checks
described in Section 4.
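As an illustration of this API, the following minimal Java sketch shows how a device/proxy pairing might be structured. Only the method names sendToProxy() and sendToDevice() are taken from the text; the Message type, the translation step, and the checkAccess() helper are hypothetical placeholders.

class Message { String type; byte[] payload; }

interface Device {
    // Called by the proxy to send messages to the device.
    void sendToDevice(Message m);
}

class DeviceProxy {
    private final Device device;
    DeviceProxy(Device device) { this.device = device; }

    // Called by the device to send messages to the proxy; the proxy
    // translates them into the general form understood by all proxies.
    void sendToProxy(Message m) {
        Message general = translate(m);
        // ... forward 'general' to listening proxies (Section 2.4)
    }

    // A message arriving from another proxy: check access first, then
    // translate it into the device's form and forward it.
    void onProxyMessage(Message m) {
        if (!checkAccess(m)) {
            return;  // denied by the SPKI/SDSI checks of Section 4
        }
        device.sendToDevice(translate(m));
    }

    private Message translate(Message m) { return m; }       // placeholder
    private boolean checkAccess(Message m) { return true; }  // placeholder
}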
For ease of administration, we group proxies by their administrators. An administrator's set of proxies is called a proxy farm. This set specifically includes the proxy for the administrator's K21, which is considered the root proxy of the proxy farm. When the administrator adds a new device to the system, the device's proxy is automatically given a default ACL, a duplicate of the ACL for the administrator's K21 proxy. The administrator can manually change
the ACL later, if he desires.
A noteworthy advantage of our proxy-based architecture
is that it addresses the problem of viruses in pervasive computing
environments. Sophisticated virus scanning software
can be installed in the proxy, so it can scan any code before
it is downloaded onto the device.
CommandEvent Used to instruct a device to turn on or off, for example.
ErrorEvent Generated and broadcast to all listeners when
an error condition occurs.
StatusChangeEvent Generated when, for example, a device
changes its location.
QueryEvent When a server receives a QueryEvent, it performs
a DNS (Domain Name Service) or INS lookup
on the query, and returns the results of the lookup in
a ResponseEvent.
ResponseEvent Generated in response to a QueryEvent.
Figure 2: Predefined Event Types
2.3 Servers and the Server Network
This network consists of a distributed collection of independent
name servers and routers. In fact, each server acts
as both a name server and a router. This is similar to the
name resolvers in the Intentional Naming System (INS) [1],
which resolve device names to IP addresses, but can also
route events. If the destination name for an event matches
multiple proxies, the server network will route the event to
all matching destinations.
When a proxy comes online, it registers the name of the
device it represents with one of these servers. When a proxy
uses a server to perform a lookup on a name, the server
searches its directory for all names that match the given
name, and returns their IP addresses.
2.4 Communication via Events
We use an event-based communication mechanism in our
system. That is, all messages passed between proxies are signals
indicating that some event has occurred. For example,
a light bulb might generate light-on and light-off events. To receive these messages, proxy x can add itself as an event listener
to proxy y. Thus, when y generates an event, x will
receive a copy.
In addition, the system has several pre-defined event categories which receive special treatment at either the proxy or server layer. They are summarized in Figure 2. A developer can define his own events as well. The server network simply passes developer-defined events through to their destination.
The primary advantage of the event-based mechanism is
that it eliminates the need to repeatedly poll a device to
determine changes in its status. Instead, when a change occurs, the device broadcasts an event to all listeners. Systems like Sun Microsystems' Jini [26] issue "device drivers" (RMI
stubs) to all who wish to control a given device. It is then
possible to make local calls on the device driver, which are
translated into RMI calls on the device itself.
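A minimal Java sketch of this listener mechanism follows; the class and method names (addListener, broadcast, etc.) are illustrative and not taken from the system's actual API.

import java.util.ArrayList;
import java.util.List;

class Event { final String type; Event(String type) { this.type = type; } }

interface EventListener {
    void onEvent(Event e);
}

class EventSource {
    private final List<EventListener> listeners = new ArrayList<>();

    // Proxy x registers itself to receive events generated by this source.
    void addListener(EventListener l) { listeners.add(l); }

    // Instead of being polled, the source pushes a copy of each event
    // (e.g., light-on or light-off) to every registered listener.
    void broadcast(Event e) {
        for (EventListener l : listeners) {
            l.onEvent(e);
        }
    }
}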
2.5 Resource discovery
The mechanism for resource discovery is similar to the
resource discovery protocol used by Jini. When a device
comes online, it instructs its proxy to repeatedly broadcast
a request for a server to the local subnetwork. The request
contains the device's name and the IP address and port of its
proxy. When a server receives one of these requests, it issues
a lease to the proxy. 4 That is, it adds the name/IP address
pair to its directory. The proxy must periodically renew
its lease by sending the same name/IP address pair to the server, otherwise the server removes it from the directory.
4 Handling the scenario where the device is making false claims about its attributes in the lease request packet is the subject of ongoing research.
Figure 3: Device-to-Proxy Communication overview. [Diagram: Devices 1-3 communicate over RF with gateways, which translate to UDP/IP and forward to the corresponding proxies (e.g., Proxy 3) running in a proxy farm; one device uses a different gateway from the other two.]
In this fashion, if a device silently goes offline, or the IP
address changes, the proxy's lease will no longer get renewed
and the server will quickly notice and either remove it from
the directory or create a new lease with the new IP address.
For example, imagine a device with the name [name=foo]
which has a proxy running on 10.1.2.3:4011. When the device
is turned on, it informs its proxy that it has come online,
using a protocol like the device-to-proxy protocol described
in Section 3. The proxy begins to broadcast lease-request
packets of the form ⟨[name=foo], 10.1.2.3:4011⟩ on the local subnetwork. When (or if) a server receives one of these packets, it checks its directory for [name=foo]. If [name=foo] is not there, the server creates a lease for it by adding the name/IP address pair to the directory. If [name=foo] is in the directory, the server renews the lease. Suppose at some later time the device is turned off. When the device goes down, it brings the proxy offline with it, so the lease request packets no longer get broadcast. That device's lease stops getting renewed. After some short, pre-defined period of time, the server expires the unrenewed lease and removes it from the directory.
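The leasing behavior can be sketched as follows in Java; the renewal period, packet contents, and method names are assumptions for illustration only.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Map;

class LeaseSketch {
    // Proxy side: broadcast a lease-request packet until the device
    // goes offline; each broadcast creates or renews the lease.
    static void renewLoop(DatagramSocket socket, byte[] request,
                          InetAddress subnetBroadcast, int port,
                          long periodMs) throws Exception {
        while (true) {  // in practice: while the device is online
            socket.send(new DatagramPacket(request, request.length,
                                           subnetBroadcast, port));
            Thread.sleep(periodMs);  // assumed renewal period
        }
    }

    // Server side: remove directory entries whose lease has expired.
    static void expire(Map<String, Long> lastRenewal, long timeoutMs) {
        long now = System.currentTimeMillis();
        lastRenewal.entrySet().removeIf(e -> now - e.getValue() > timeoutMs);
    }
}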
3. DEVICE-TO-PROXY PROTOCOL FOR
WIRELESS DEVICES
3.1 Overview
The device-to-proxy protocol varies for different types of
devices. In particular, we consider lightweight devices with
low-bandwidth wireless network connections and slow CPUs,
and heavyweight devices with higher-bandwidth connections
and faster CPUs. We assume that heavyweight devices are
capable of running proxy software locally (i.e., the proxy
for a printer could run on the printer's CPU). With a local
proxy, a sophisticated protocol for secure device-to-proxy
communication is unnecessary, assuming critical parts of the
device are tamper resistant. For lightweight devices, the
proxy must run elsewhere. This section gives an overview of
a protocol which is low-bandwidth and not CPU-intensive
that we use for lightweight device-to-proxy communication.
3.2 Communication
Our prototype system layers the security protocol described
below over a simple radio frequency (RF) protocol. The
RF communication between a device and its proxy is handled
by a gateway that translates packetized RF communication
into UDP/IP packets, which are then routed over
the network to the proxy. The gateway also works in the
opposite direction by converting UDP/IP packets from the
proxy into RF packets and transmitting them to the device.
An overview of the communication is shown in Figure 3.
This figure shows a computer running three proxies, one for each of three separate devices. The figure also shows how multiple gateways can be used; device A is using a different gateway from devices B and C.
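The gateway's RF-to-UDP role can be sketched as below; since the RF side is hardware-specific, readRfPacket() and proxyAddressFor() are purely hypothetical placeholders.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;

class GatewaySketch {
    // Translate packetized RF traffic into UDP/IP and route it to the
    // device's proxy; the reverse (UDP to RF) direction is symmetric.
    static void rfToUdp(DatagramSocket udp) throws Exception {
        while (true) {
            byte[] rf = readRfPacket();                    // blocking RF read
            InetSocketAddress proxy = proxyAddressFor(rf);
            udp.send(new DatagramPacket(rf, rf.length, proxy));
        }
    }

    static byte[] readRfPacket() { return new byte[0]; }                // stub
    static InetSocketAddress proxyAddressFor(byte[] p) { return null; } // stub
}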
3.3 Security
The proxy and device communicate through a secure channel
that encrypts and authenticates all the messages. The
HMAC-MD5 [13][20] algorithm is used for authentication
and the RC5 [21] algorithm is used for encryption. Both
of these algorithms use symmetric keys; the proxy and the
device share 128-bit keys.
3.3.1 Authentication
HMAC (Hashed Message Authentication Code) produces
a MAC (Message Authentication Code) that can validate
the authenticity and integrity of a message. HMAC uses
secret keys, and thus only someone who knows a particular key can create a particular MAC or verify that a particular MAC is correct.
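For reference, the standard HMAC construction, which HMAC-MD5 instantiates with MD5 as the hash function H, is:
HMAC(K, m) = H((K ⊕ opad) || H((K ⊕ ipad) || m))
where K is the shared secret key, m is the message, || denotes concatenation, and opad and ipad are the fixed padding constants 0x5c and 0x36 repeated to the hash block size.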
3.3.2 Encryption
The data is encrypted using the RC5 encryption algorithm. We chose RC5 because of its simplicity and performance. Our RC5 implementation is based on the OpenSSL
[16] code. RC5 is a block cipher; it usually works on eight-byte
blocks of data. However, by implementing it using
output feedback (OFB) mode, it can be used as a stream
cipher. This allows for encryption of an arbitrary number
of bytes without having to worry about blocks of data.
OFB mode works by generating an encryption pad from
an initial vector and a key. The encryption pad is then
XOR'ed with the data to produce the ciphertext. Since XOR is its own inverse, the ciphertext can be decrypted by producing the same encryption pad and XOR'ing it with the ciphertext. Since this only requires the RC5 encryption routines
to generate the encryption pad, separate encrypt and
decrypt routines are not required.
For our implementation, we use 16 rounds for RC5. We
use different 128-bit keys for encryption and authentication.
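A compact Java sketch of the OFB keystream idea described above (not the actual CHAIMS implementation; rc5EncryptBlock() stands in for an 8-byte RC5 block encryption under the shared key):

class OfbSketch {
    // Encrypt or decrypt in place: the same routine serves both,
    // because XOR'ing with the same pad twice restores the data.
    static void ofbCrypt(byte[] data, byte[] iv, byte[] key) {
        byte[] pad = iv.clone();  // feedback register starts as the IV
        for (int i = 0; i < data.length; i++) {
            if (i % 8 == 0) {
                pad = rc5EncryptBlock(pad, key);  // next keystream block
            }
            data[i] ^= pad[i % 8];
        }
    }

    // Placeholder for an RC5 block encryption (16 rounds, 128-bit key).
    static byte[] rc5EncryptBlock(byte[] block, byte[] key) { return block; }
}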
3.4 Location
Device location is determined using the Cricket location
system [18, 17]. Cricket has several useful features, including
user privacy, decentralized control, low cost, and easy
deployment. Each device determines its own location. It
is up to the device to decide if it wants to let others know
where it is.
In the Cricket system, beacons are placed on the ceilings
of rooms. These beacons periodically broadcast location
information (such as "Room 4011") that can be heard by
Cricket listeners. At the same time that this information is
broadcast in the RF spectrum, the beacon also broadcasts
an ultrasound pulse. When a listener receives the RF message, it measures the time until it receives the ultrasound pulse. The listener determines its distance to the beacon using the time difference.
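Since the RF message travels at the speed of light and thus arrives essentially instantaneously, the distance is approximately the speed of sound times the measured delay. A small illustrative computation in Java:

class CricketSketch {
    // Distance from the RF-to-ultrasound arrival delay (sketch).
    static double distanceMeters(double delaySeconds) {
        final double SPEED_OF_SOUND = 343.0;  // m/s in air, approximate
        return SPEED_OF_SOUND * delaySeconds;
    }
    // Example: a 10 ms delay puts the listener roughly 3.4 m from the beacon.
}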
4. PROXY TO PROXY PROTOCOL
SPKI/SDSI (Simple Public Key Infrastructure / Simple Distributed Security Infrastructure) [7, 22] is a security infrastructure that is designed to facilitate the development of scalable, secure, distributed computing systems. SPKI/SDSI provides fine-grained access control using a local name space architecture and a simple, flexible trust policy model.
SPKI/SDSI is a public key infrastructure with an egalitarian
design. The principals are the public keys and each
public key is a certificate authority. Each principal can issue certificates on the same basis as any other principal.
There is no hierarchical global infrastructure. SPKI/SDSI
communities are built from the bottom-up, in a distributed
manner, and do not require a trusted "root."
4.1 SPKI/SDSI Integration
We have adopted a client-server architecture for the proxies. When a particular principal, acting on behalf of a device or user, makes a request via one proxy to a device represented by another proxy, the first proxy acts like a client, and the second as a server. Resources on the server are either public or protected by SPKI/SDSI ACLs. A SPKI/SDSI ACL consists of a list of entries. Each entry has a subject (a key or group) and a tag which specifies the set of operations that that key or group is allowed to perform. To gain access to a resource protected by an ACL, a requester must include, in his request, a chain of certificates demonstrating that he is a member of a group in an entry on the ACL. 5
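The ACL structure just described can be rendered roughly as follows in Java; the types and the covers() methods are illustrative simplifications, not SPKI/SDSI's actual s-expression encoding.

import java.util.List;

interface Subject { boolean covers(Subject requester); }  // a key or group
interface Tag { boolean covers(Tag request); }            // set of operations

class AclEntry {
    Subject subject;  // who is allowed
    Tag tag;          // what they may do
}

class Acl {
    List<AclEntry> entries;

    // A request is allowed if some entry's subject covers the requester
    // (directly, or via a certificate chain) and its tag covers the request.
    boolean permits(Subject requester, Tag request) {
        return entries.stream().anyMatch(e ->
            e.subject.covers(requester) && e.tag.covers(request));
    }
}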
If a requested resource is protected by an ACL, the principal's request must be accompanied by a "proof of authenticity" that shows that it is authentic, and a "proof of authorization" that shows the principal is authorized to perform the particular request on the particular resource. The proof of authenticity is typically a signed request, and the proof of authorization is typically a chain of certificates. The principal that signed the request must be the same principal that the chain of certificates authorizes.
This system design, and the protocol between the proxies,
is very similar to that used in SPKI/SDSI's Project Geronimo, in which SPKI/SDSI was integrated into Apache and
Netscape, and used to provide client access control over the
web. Project Geronimo is described in two Master's theses
[3, 14].
4.2 Protocol
The protocol implemented by the client and server proxies
consists of four messages. This protocol is outlined in Figure 4, and the following is its description:
1. The client proxy sends a request, unauthenticated and
unauthorized, to the server proxy.
2. If the client requests access to a protected resource, the server responds with the ACL protecting the resource 6 and the tag formed from the client's request. A tag is a SPKI/SDSI data structure which represents a set of requests. There are examples of tags in the SPKI/SDSI IETF drafts [7]. If there is no ACL protecting the requested resource, the request is immediately honored.
5 For examples of SPKI/SDSI ACLs and certificates, see [7] or [3].
6 The ACL itself could be a protected resource, protected by another ACL. In this case, the server will return the latter ACL. The client will need to demonstrate that the user's key is on this ACL, either directly or via certificates, before gaining access to the ACL protecting the object to which access was originally requested.
3. (a) The client proxy generates a chain of certificates using the SPKI/SDSI certificate chain discovery algorithm [4, 3]. This certificate chain provides a proof of authorization that the user's key is authorized to perform its request.
The certificate chain discovery algorithm takes as input the ACL and tag from the server, the user's public key (principal), the user's set of certificates, and a timestamp. If it exists, the algorithm returns a chain of user certificates which provides proof that the user's public key is authorized to perform the operation(s) specified in the tag, at the time specified in the timestamp. If the algorithm is unable to generate a chain because the user does not have the necessary certificates, 7 or if the user's key is directly on the ACL, the algorithm returns an empty certificate chain. The client generates the timestamp using its local clock.
(b) The client creates a SPKI/SDSI sequence [7] consisting of the tag and the timestamp. It signs this sequence with the user's private key, and includes a copy of the user's public key in the SPKI/SDSI signature. The client then sends the tag-timestamp sequence, the signature, and the certificate chain generated in step 3a to the server.
4. The server verifies the request by:
(a) Checking the timestamp in the tag-timestamp sequence against the time on the server's local clock to ensure that the request was made recently. 8
(b) Recreating the tag from the client's request and checking that it is the same as the tag in the tag-timestamp sequence.
(c) Extracting the public key from the signature.
(d) Verifying the signature on the tag-timestamp sequence using this key.
(e) Validating the certificates in the certificate chain.
(f) Verifying that there is a chain of authorization from an entry on the ACL to the key from the signature, via the certificate chain presented. The authorization chain must authorize the client to perform the requested operation.
7 If the user does not have the necessary certificates, the client could immediately return an error. In our design, however, we choose not to return an error at this point; instead, we let the client send an empty certificate chain to the server. This way, when the request does not verify, the client can possibly be sent some error information by the server which lets the user know where he should go to get valid certificates.
8 In our prototype implementation, the server checks that the timestamp in the client's tag-timestamp sequence is within five minutes of the server's local time.
Figure 4: SPKI/SDSI Proxy to Proxy Access Control Protocol. [Message diagram between Client Proxy and Server Proxy: (1) initial unauthenticated, unauthorized request; (2) server verification fails; the ACL and tag are returned; (3) the client uses the ACL and tag to generate a certificate chain, signs the request with the user's key Ku, and sends ({tag, timestamp}_Ku, certificate chain); (4) the server verifies the request; if verified, it is honored (requested resource), otherwise it is denied and an error is returned.]
If the request verifies, it is honored. If it does not
verify, it is denied and the server proxy returns an error
to the client proxy. This error is returned whenever the
client presents an authenticated request that is denied.
The protocol can be viewed as a typical challenge-response
protocol. The server reply in step 2 of the protocol is a challenge the server issues the client, saying, "You are trying to access a protected file. Prove to me that you have the credentials to perform the operation you are requesting on the resource protected by this ACL." The client uses the ACL to help it produce a certificate chain, using the SPKI/SDSI certificate chain discovery algorithm. It then sends the certificate chain and signed request in a second request to the server proxy. The signed request provides proof of authenticity, and the certificate chain provides proof of authorization. The server attempts to verify the second request, and if it succeeds, it honors the request.
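The server-side checks 4(a)-4(f) can be summarized in the following Java-style sketch; every type and helper here is an illustrative placeholder rather than the actual SPKI/SDSI library interface.

interface Seq { long timestamp(); Tag tag(); }
interface Sig { PubKey key(); boolean verifies(Seq signedData, PubKey k); }
interface CertChain { boolean valid(); }
interface Tag {}
interface PubKey {}
interface Acl { boolean authorizes(PubKey k, Tag t, CertChain chain); }

class VerifySketch {
    static boolean verify(Tag requestTag, Seq seq, Sig sig,
                          CertChain chain, Acl acl, long nowMs) {
        if (Math.abs(nowMs - seq.timestamp()) > 5 * 60 * 1000)
            return false;                       // (a) request made recently?
        if (!seq.tag().equals(requestTag))
            return false;                       // (b) tags match?
        PubKey k = sig.key();                   // (c) extract the public key
        if (!sig.verifies(seq, k))
            return false;                       // (d) signature valid?
        if (!chain.valid())
            return false;                       // (e) certificates valid?
        return acl.authorizes(k, requestTag, chain);  // (f) authorization chain
    }
}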
The timestamp in the tag-timestamp sequence helps to
protect against certain types of replay attacks. For example,
suppose the server logs requests and suppose that this log is
not disposed of properly. If an adversary gains access to the
logs, the timestamp prevents him from replaying requests
found in the log and gaining access to protected resources. 9
4.2.1 Additional Security Considerations
The SPKI/SDSI protocol, as described, addresses the issue of providing client access control. The protocol does not ensure confidentiality, authenticate servers, or provide protection against replay attacks from the network.
The Secure Sockets Layer (SSL) protocol is the most widely used security protocol today. The Transport Layer Security (TLS) protocol is the successor to SSL. Principal goals of SSL/TLS [19] include providing confidentiality and data integrity of traffic between the client and server, and providing authentication of the server. There is support for client authentication, but client authentication is optional.
9 In order to use timestamps, the client's clock and server's clock need to be fairly synchronized; SPKI/SDSI already makes an assumption about fairly synchronized clocks when time periods are specified in certificates. An alternative approach to using timestamps is to use nonces in the protocol.
Figure 5: Example Layering of Protocols. [Protocol stack, top to bottom: SPKI/SDSI Access Control Protocol; Application Protocol; Key-Exchange Protocol with Server Authentication; TCP/IP.]
The SPKI/SDSI Access Control protocol can be layered over a
key-exchange protocol like TLS/SSL to provide additional
security. TLS/SSL currently uses the X.509 PKI to authenticate
servers, but it could just as well use SPKI/SDSI
in a similar manner. In addition to the features already
stated, SSL/TLS also provides protection against replay attacks
from the network, and protection against person-in-
the-middle attacks. With these considerations, the layering
of the protocols is shown in Figure 5. In the figure, 'Application Protocol' refers to the standard communication protocol
between the client and server proxies, without security.
SSL/TLS authenticates the server proxy. However, it does
not indicate whether the server proxy is authorized to accept
the client's request. For example, it may be the case that
the client proxy is requesting to print a 'top secret' document, say, and only certain printers should be used to print 'top secret' documents. With SSL/TLS and the SPKI/SDSI Client Access Control Protocol we have described so far, the client proxy will know that the public key of the proxy with which it is communicating is bound to a particular address, and the server proxy will know that the client proxy is authorized to print to it. However, the client proxy still will not know if the server proxy is authorized to print 'top secret' documents. If it sends the 'top secret' document to be printed, the server proxy will accept the document and print it, even though the document should not have been sent to it in the first place.
To approach this problem, we propose extending the SPKI/SDSI protocol so that the client requests authorization from the server and the server proves to the client that it is authorized to handle the client's request (before the client sends the document off to be printed). To extend the protocol, the SPKI/SDSI protocol described in Section 4.2 is run from the client proxy to the server proxy, and then run in the reverse direction, from the server proxy to the client proxy. Thus, the client proxy will present a SPKI/SDSI certificate chain proving that it is authorized to perform its request, and the server proxy will present a SPKI/SDSI certificate chain proving that it is authorized to accept and perform the client's request. Again, if additional security is needed, the extended protocol can be layered over SSL/TLS.
Note that the SPKI/SDSI Access Control Protocol is an
example of the end-to-end argument [23]. The access control
decisions are made in the uppermost layer, involving only
the client and the server.
5. RELATED WORK
5.1 Device to Proxy Communication
The Resurrecting Duckling is a security model for ad-hoc
wireless networks [25, 24]. In this model, when devices begin
their lives, they must be "imprinted" before they can be used. A master (the mother duck) imprints a device (the duckling) by being the first one to communicate with it. After imprinting, a device only listens to its master. During the process of imprinting, the master is placed in physical contact with the device and they share a secret key that is then used for symmetric-key authentication and encryption. The master can also delegate the control of a device to other devices so that control is not always limited to just the master. A device can be "killed" by its master and then resurrected by a new one in order for it to swap masters.
5.2 Proxy to Proxy Communication
Jini [26] network technology from Sun Microsystems centers
around the idea of federation building. Jini avoids
the use of proxies by assuming that all devices and services
in the system will run the Java Virtual Machine. The
project [8] at the Helsinki University of Technology
has succeeded in building a framework for integrating Jini
and SPKI/SDSI. Their implementation has some latency
concerns, however, when new authorizations are granted.
UC Berkeley's Ninja project [27] uses the Service Discovery
Service [5] to securely perform resource discovery in a
wide-area network. Other related projects include Hewlett-Packard's CoolTown [9], IBM's TSpaces [11], and the University of Washington's Portolano [29].
5.3 Other projects using SPKI/SDSI
Other projects using SPKI/SDSI include Hewlett-Packard's e-Speak product [10], Intel's CDSA release [12], and Berkeley's OceanStore project [28]. HP's e-Speak uses SPKI/SDSI certificates for specifying and delegating authorizations. Intel's CDSA release, which is open-source, includes a SPKI/SDSI service provider for building certificates, and a module (AuthCompute) for performing authorization computations. OceanStore uses SPKI/SDSI names in its naming architecture.
6. EVALUATION
6.1 Hardware Design
Details on the design of a board that can act as the core
of a lightweight device, or as a wearable communicator, are
given in [15].
6.2 Device-to-Proxy Protocol
In this section we evaluate the device-to-proxy protocol
described in Section 3 in terms of its memory and processing
requirements.
6.2.1 Memory Requirements
Table 1 breaks down the memory requirements for various
software components. The code size represents memory
used in Flash, and data size represents memory used
in RAM. The device functionality component includes the
packet and location processing routines. The RF code component
includes the RF transmit and receive routines as well
as the Cricket listener routines. The miscellaneous component
is code that is common to all of the other components.
The device code requires approximately 12KB of code
space and 1KB of data space. The security algorithms,
HMAC-MD5 and RC5, take up most of the code space.
Component Code Size (KB) Data Size (bytes)
Device Functionality 2.0 191
RF Code 1.1 153
HMAC-MD5 4.6 386
RC5 3.2 256
Miscellaneous 1.0 0
Total 11.9 986
Table 1: Code and data size on the Atmel processor
Function Time (ms) Clock Cycles
encrypt/decrypt (n bytes) 0.163n 652n
HMAC-MD5 (up to 56 bytes) 11.48 45,920
Table 2: Performance of encryption and authentication code
Both of these algorithms were optimized in assembly, which
reduced their code size by more than half. The code could
be better optimized, but this gives a general idea of how
much memory is required. The code size we have attained
is small enough that it can be incorporated into virtually
any device.
6.2.2 Processing Requirements
The security algorithms put the most demand on the device. Table 2 breaks down the approximate time for each
algorithm as detailed in [15]. The RC5 processing time
varies linearly with the number of bytes being encrypted
or decrypted. The HMAC-MD5 routine, on the other hand,
takes a constant amount of time up to 56 bytes. This is
because HMAC-MD5 is designed to work on blocks of data,
so anything less than 56 bytes is padded. Since we limit
the RF packet size to 50 bytes, we only analyze the HMAC-
MD5 running time for packets of size less than or equal to
50 bytes. (For a full 50-byte packet, the combined cost is thus roughly 0.163 × 50 ≈ 8.2 ms of RC5 processing plus 11.48 ms for HMAC-MD5, about 20 ms in total.)
6.3 SPKI/SDSI Evaluation
The protocol described in Section 4 is efficient. The first two steps of the protocol are a standard request/response; no cryptography is required. The significant steps in the protocol are step 3, in which a certificate chain is formed, and step 4, where the chain is verified. Table 3 shows analyses of these two steps. The paper on Certificate
Chain Discovery in SPKI/SDSI [4] should be referred to for
a discussion of the timing analyses. The CPU times are approximate
times measured on a Sun Microsystems Ultra-1
running SunOS 5.7.
7. CONCLUSIONS
We believe that the trends in pervasive computing are
increasing the diversity and heterogeneity of networks and
their constituent devices. Developing security protocols that
can handle diverse, mobile devices networked in various ways
represents a major challenge. In this paper, we have taken a first step toward meeting this challenge by observing the need for multiple security protocols, each with different characteristics and computational requirements. While we have described a prototype system with two different protocols, other types of protocols could be included if deemed necessary.
The two protocols we have described have vastly different characteristics, because they apply to different scenarios. The device-to-proxy protocol was designed to enable secure communication of data from a lightweight device. The SPKI/SDSI-based proxy-to-proxy protocol was designed to provide flexible, fine-grained access control between proxies. The proxy architecture and the use of two different protocols has resulted, we believe, in a secure, yet efficient, resource discovery and communication system.
8.
--R
The Design and Implementation of an Intentional Naming System.
An Application Model for Pervasive Computing.
SPKI/SDSI HTTP Server
An Architecture for a Secure Service Discovery Service.
The Future of Computing.
Simple Public Key Certificate.
Decentralized Jini Security.
See http://cooltown.
See http://www.
Intelligent Connectionware.
An Implementation of a Secure Web Client Using SPKI/SDSI Certificates.
An Architecture and Implementation of Secure Device Communication in Oxygen.
Providing Precise Indoor Location Information to Mobile Devices.
The Cricket Location-Support System
SSL and TLS: Designing and Building Secure Systems.
The MD5 Message-Digest Algorithm
The RC5 Encryption Algorithm.
The Resurrecting Duckling - What Next?
The Resurrecting Duckling: Security Issues for Ad-hoc Wireless Networks
Sun Microsystems Inc.
The Ninja Project: Enabling Internet-scale Services from Arbitrarily Small Devices
The OceanStore Project: Providing Global-Scale Persistent Data
University of Washington.
Performance Comparison of public-key Cryptosystems
--TR
An architecture for a secure service discovery service
The design and implementation of an intentional naming system
The Cricket location-support system
Challenges
SSL and TLS
Certificate chain discovery in SPKI?SDSI
The Resurrecting Duckling - What Next?
The Resurrecting Duckling
--CTR
Joerg Abendroth , Christian D. Jensen, A unified security framework for networked applications, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Sanjay Raman , Dwaine Clarke , Matt Burnside , Srinivas Devadas , Ronald Rivest, Access-controlled resource discovery for pervasive networks, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Taejoon Park , Kang G. Shin, LiSP: A lightweight security protocol for wireless sensor networks, ACM Transactions on Embedded Computing Systems (TECS), v.3 n.3, p.634-660, August 2004
Feng Zhu , Matt Mutka , Lionel Ni, Facilitating secure ad hoc service discovery in public environments, Journal of Systems and Software, v.76 n.1, p.45-54, April 2005
Kui Ren , Wenjing Lou, Privacy-enhanced, attack-resilient access control in pervasive computing environments with optional context authentication capability, Mobile Networks and Applications, v.12 n.1, p.79-92, January 2007
Georgios Kambourakis , Angelos Rouskas , Stefanos Gritzalis, Experimental Analysis of an SSL-Based AKA Mechanism in 3G-and-Beyond Wireless Networks, Wireless Personal Communications: An International Journal, v.29 n.3-4, p.303-321, June 2004
Domenico Cotroneo , Almerindo Graziano , Stefano Russo, Security requirements in service oriented architectures for ubiquitous computing, Proceedings of the 2nd workshop on Middleware for pervasive and ad-hoc computing, p.172-177, October 18-22, 2004, Toronto, Ontario, Canada
Arun Kejariwal , Sumit Gupta , Alexandru Nicolau , Nikil Dutt , Rajesh Gupta, Proxy-based task partitioning of watermarking algorithms for reducing energy consumption in mobile devices, Proceedings of the 41st annual conference on Design automation, June 07-11, 2004, San Diego, CA, USA | protocol;mobile device;ubiquitous;authorization;certificate;wireless;certificate chain discovery;certificate chain;proxy;security;pervasive |
508854 | A comprehensive model for arbitrary result extraction. | Within the realms of workflow management and grid computing, scheduling of distributed services is a central issue. Most schedulers balance time and cost to fit within a client's budget, while accepting explicit data dependencies between services as the best resolution for scheduling. Results are extracted from one service in total, and then simply forwarded to the next service. However, distributed objects and remote services adhere to various standards for data delivery and result extraction. There are multiple means of requesting results and multiple ways of delivering those results. By examining several popular and idiosyncratic methods, we have developed a comprehensive model that combines the functionality of all component models. This model for arbitrary result extraction from distributed objects provides increased flexibility for object users, and an increased audience for module providers. In turn, intelligent schedulers may leverage these result extraction features. | INTRODUCTION
1.1 Traditional RPCs and asynchronous extraction
We address the problem of obtaining results from any of
multiple computational servers in response to requests made by
a client program. The simplest form of result extraction is the
traditional synchronous remote procedure call (RPC).
Parameters are passed in, the client waits patiently for the
results, and finally all results are simultaneously available. Only
a single object is returned with certain function calls, but most
languages offer procedure calls where you can have more than
one OUT-parameter. It is even possible in C++ with the use of
pointers, and it is cleanly implemented in CORBA. This has
been the dominant form of result extraction in most
programming languages and for many distributed systems (e.g.
CORBA, where a typical procedure call is defined in the IDL
syntax like void mymethod(in TYPE1 param1, out TYPE2 param2, out TYPE3 param3)).
1.2 Alternative extraction models
There are other models of data extraction that have received
little attention in most programming languages and paradigms.
These alternative models generally are strictly more functional
than the traditional procedure call, and are being used
increasingly as a replacement for RPC-style result extraction.
They all, in at least some aspect, provide more functionality or
flexibility than the RPC. One of the most important
enhancements has been the addition of asynchrony. Clients that
do not have to stall while results are computed or delivered are
strictly more powerful than those that must. Asynchronous
result extraction has been used even in many sequential
computing languages like Java and Ada, and is enabled in
various ways, through message passing, threading, rendezvous,
etc. Asynchrony as a tool to delay or reorder result extraction is
well understood and widely used [12, 13].
Partial extraction of data results has become almost ubiquitous
with the advent of web browsing. There are examples of other
extraction models (such as the progressive extraction of results)
that have seen limited use in specialized arenas. We can also
envision other models encompassing some of the more esoteric
extraction models that are potentially more flexible than any in wide or limited use today. Perhaps the most important of these models is "partial extraction." Partial extraction is
taking only the desired portions of a result set, thus saving the
costs associated with extracting the entire set. For instance,
almost all modern web browsers have the ability to download
only text, without images, to speed browsing on slower
connections. Many browsers also allow users to filter unwanted
objects (i.e., embedded audio) out of html documents. The web
has brought "partial extraction" to the desktop as the default
browsing model.
Another model for result extraction, again strictly more
powerful than the traditional RPC, is the "progressive
extraction" model. In many scientific computations, such as
adaptive mesh refinements, answers become "better" (e.g., more
precise) over time. It can be very important to extract results as
progress is made toward an acceptable solution for steering,
early termination, or other reasons. A typical application of this
type is the design of an aircraft wing [2, 3, 5, 12]. Certain
scientific and mathematical processes, like Newton's method of
successive approximation for roots of an equation, can also
utilize this type of extraction [4]. The traditional RPC result
extraction model does not lend itself well to progressive
extractions.
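As a concrete illustration, each iterate of Newton's method is a progressively better result that a client could usefully extract before the computation finishes. A minimal Java sketch follows; publishPartialResult() is a hypothetical hook for exposing intermediate values.

class NewtonSketch {
    // Newton's method for sqrt(a): every iterate is a meaningful,
    // progressively more precise result.
    static double newtonSqrt(double a, int iterations) {
        double x = a;  // initial guess
        for (int i = 0; i < iterations; i++) {
            x = 0.5 * (x + a / x);      // Newton step for f(x) = x^2 - a
            publishPartialResult(x);    // hypothetical progressive publish
        }
        return x;
    }
    static void publishPartialResult(double x) { /* placeholder */ }
}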
Decomposition of the traditional call model has been discussed
for some time [10]; we advocate a further breakdown of the
extraction phase of this model. We propose an extraction
model, developed in the course of research and development of
the language CLAM (Composition Language for Autonomous
Megamodules) [8] within the CHAIMS (Composing High-level
Access Interfaces for Multisite Software) megaprogramming
project [1], that encompasses these three important
augmentations (asynchronous, partial, progressive) to RPC-style
result extraction models. The amalgamation of these three
extraction paradigms leads to a general model, more expressive
than any of the three taken alone.
1.3 Current system support for partial and progressive extraction
For many languages and systems, the generic asynchronous
remote procedure call model of result extraction provides
enough flexibility. In cases where it does not, custom solutions
abound. Occasionally, these custom solutions become
widespread, often circumstantially, e.g., because of runtime
support rather than language specification. For instance, partial
extraction within the available community of web-browsers does
not arise from html, per se. It is a consequence of parsing html
pages and making only selective http requests. In effect, the
web-browser implements the partial extraction of results while
the html provides a "schema" of the results available. For
example, when web users choose not to download certain types
of content, the web-browser implements the filtering and is not
affected by the html or the http protocol.
We accept this html/http/browsing model as appropriate for
partial extractions when a schema for the data is available. Of
course, without such a schema, partial extractions would be
meaningless. At some level, this constrains the domain for
which partial extraction is semantically meaningful or even
practical. However, since the model of partial extraction wholly
encompasses the traditional RPC method of result extraction,
this is not a problem.
Progressive extractions are frequently consequences of elaborate
development projects or again arise as a consequence of the
nature of the data involved. We can look again to web-browsing
to find (an extremely limited) form of progressive extraction,
one that arises from the data at hand rather than html or http.
Within certain result sets, like weather maps, the results change
over time. By simply re-requesting the same data, progressive
updates to the data may be seen by a client. On the other hand,
to see a broader picture of progressive result extraction, we turn
to (usually) hand-tooled codes specialized for single
applications.
As far as we know, there is currently no language with
primitives supporting progressive extractions. Additionally,
what marginal runtime and protocol support for progressive
extractions there is seems to only exist because of specialized
data streams (like browsing weather services), and not because
of intentional support. We have found examples of hand-coded
systems that allow partial result extractions, at predefined
process points, to allow early result inspection [3]. Such
working codes have been built to allow users to steer executions
and to test for convergence on iterative solution generators.
However, a language and runtime system to develop arbitrary
codes that allow progressive extractions is not to be found. This
is especially true of composition languages geared toward
distributed objects.
Herein we outline a set of language primitives expressive
enough to capture each of these result extraction models
simultaneously. We also review an implementation of this
general result extraction model, as encompassed within the
megaprogramming language CLAM and the supporting
components within the CHAIMS system [7].
2. RESULT EXTRACTION WITHIN CLAM
2.1 Definitions and motivation
We focus on result extraction. In our studies, interesting
"results" come from computational services rather than simple
data services like databases, web servers, etc. Computational
services are those that add value to input data. Results from
computational services are tailored to their inputs, unlike data
services where results are often already available before and
after the service is used (like static pages from a web server).
Various extraction methods have potentially more value in the
context of computational services than in the context of data
services. We define "partial extraction" as the extraction of a
subset of available results. We define "progressive extraction" as the extraction of the same result parameter with different
content/data at different points in a computation. The different
points of computation can deliver different accuracy of specific
results, as is typical for simulations or complex calculations for
images. Or they can signify dependencies of the results on the
actual time, as is the case for weather data that changes over
time.
We went through several versions of CLAM that led to our
current infrastructure. Originally limited to partial data
extractions, repeated iterations in the design process yielded the
general-purpose client-centric result examination and extraction
model that seems to be maximally flexible. We say this model
is "client-centric" because, unlike an exception or callback-type
model, clients initiate all data inspection and collection. In this
"data on demand" approach, clients expect that servers do not
"push" either data or status, but rather must make requests for
both. We specifically contrast this client-centric approach to the
CORBA event model seen in 3.4.
Finally, we present CLAM as one possible language and
implementation of our generic result extraction model. CLAM
is a composition language, rather than a computationally oriented
language [8, 14]. As such, it is aptly suited to the presentation
of this extraction model, though not necessarily a programmer's
language of choice for a given problem. We advocate this
extraction model here, and present it within a particular working
framework (CHAIMS).
2.2 Language specification
In CLAM, there are two basic primitives essential to our result
extraction model: EXAMINE and EXTRACT. We use
EXAMINE to inspect the progress of a calculation (or method
invocation, procedure, etc.) by requesting data status or, when
provided by the server, also information about the invocation
progress concerning computation time.
• Purpose
The EXAMINE primitive is used to determine the state and
progress of the invocation referred to by an
invocation_handle. EXAMINE has two status fields: state
and progress. The state can be any of {DONE, NOT_DONE,
PARTIAL, ERROR}. The progress field is an integer, used
to indicate progress of results as well as of the invocation.
The various pieces of status and progress information are only
returned when the client requests them, in line with the client-centric
approach.
• Syntax
(mystatus=status) invocation_handle.EXAMINE()
(mystatus=status, myprogress=progress) invocation_handle.EXAMINE()
Imagine a megamodule (a module providing a service) that has a
method "foo" which returns three distinct <results/data
B, and C. If foo is just invoked, and no work has
been done, A, B, and C will all be incomplete. A call to
will thus return NOT_DONE.
When complete, a call to foo_handle.EXAMINE() will
return DONE, because the work has been performed. If there
are some meaningful results available for extraction before all
results are ready, foo_handle.EXAMINE() returns
PARTIAL. In case of error with the invocation of foo, ERROR
is returned.
If the megamodule supports invocation progress information,
(mystatus=status, myprogress=progress) invocation_handle.EXAMINE() is used instead of (mystatus=status) invocation_handle.EXAMINE() in order to get progress information about the invocation. This progress
information indicates how much progress the computation has
made already in terms of time used and estimated time needed to
complete. Ideally this progress information is in agreement with
pre-invocation cost estimation which is also provided by CLAM
(see primitive ESTIMATE in [8]). Yet as conditions like server
load can change very rapidly, it is essential to be able to track
the progress of the computation from its invocation to its
completion.
Upon receipt of invocation status "PARTIAL," a client knows
that some subset of the results are extractable, though not that
any particular element of the result data is ready for extraction
(nor that the result contains progressive and thus temporary
values) or has been finalized. Yet PARTIAL indicates to the
client that it would be worthwhile to inspect individual result
parameters to get their particular status information.
Once again, imagine foo with return data elements A, B, and C.
In this case, A is done, B is partially done with extractable
results available, and no progress has been made on C. A call to foo_handle.EXAMINE() would return PARTIAL, because
some subset of the available data is ready. Subsequently, a
client call to foo_handle.EXAMINE(A) will return
DONE, foo_handle.EXAMINE(B) will return {PARTIAL,
50}, and foo_handle.EXAMINE(C) will return
NOT_DONE. Interpretations of the results from the
examination of A and C are obvious. In the case of result B,
assuming that the result value is 50% completed, and the server
makes this additional information available to the client, a return
tuple such as {PARTIAL, 50} would express this.
Remember, the second parameter returned by EXAMINE is an
integer. CLAM places no restriction on its general use. Servers
impart their own meaning on such parameters. However, the
recommended usage is for the value to indicate a "percentage (0-
100) value of completion." Such semantic meaning associated
with a particular server is available to client builders via a
special repository.
CLAM couples the EXAMINE primitive with an equally
powerful EXTRACT mechanism.
• Purpose
The EXTRACT call collects the results of an invocation. A
subset of all parameters returned by the invocation can be
extracted. In fact, parameters which are extracted are the ones
explicitly specified at the left hand side of the command.
• Syntax
(var1=result1, var2=result2, ...) invocation_handle.EXTRACT()
Returning to our previous example of the method foo, we look at
EXTRACT. The return fields are name/value pairs, and may
contain any subset of the available data. For the example
method foo, we might do the following: (tmpa=A, tmpb=B)
foo_handle.EXTRACT(). This call would return the
current values for A and B, leaving C on the server.
This extract primitive allows the client to extract just those results that are ready as well as needed. This is in contrast to a much simpler
extract where simply all results would be returned with each
extract command, independent of their state.
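To show how these primitives combine on the client side, the following Java-style sketch mirrors the EXAMINE/EXTRACT semantics described above; the Handle interface, the string status constants, and the polling period are illustrative stand-ins for CLAM's actual runtime, not its real API.

interface Handle {
    String examine();              // DONE, NOT_DONE, PARTIAL, or ERROR
    String examine(String param);  // per-parameter status
    Object extract(String param);  // partial extraction of one result
}

class PollingClient {
    // Poll until results are worth taking; extract A and B, leave C
    // on the server (partial extraction).
    static void run(Handle foo) throws InterruptedException {
        while (true) {
            String s = foo.examine();
            if (s.equals("ERROR")) return;
            if (s.equals("PARTIAL") || s.equals("DONE")) {
                if (foo.examine("A").equals("DONE")) {
                    Object a = foo.extract("A");   // take A once finalized
                }
                Object b = foo.extract("B");       // possibly progressive value
                if (s.equals("DONE")) return;      // C is never extracted
            }
            Thread.sleep(1000);  // client-centric: the client polls
        }
    }
}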
2.3 Analysis of the extraction model
Different flavors of extraction are available with different levels
of functionality in EXTRACT and EXAMINE. We present here
the various achievable extraction modes when only portions of
the complete set of CLAM data extraction primitives are
available. This analysis shows how EXAMINE and EXTRACT
relate and interact to provide each of the result extraction flavors
outlined in this paper.
Within EXAMINE, simple inspection returns a per invocation
status value from {DONE, NOT_DONE, PARTIAL, ERROR}.
There are two augmentations made to the EXAMINE primitive.
The first augmentation is the addition of a second parameter, the
progress indicator. The second augmentation is the ability to
inspect individual parameters. These two augmentations
provide four distinct possible examination schemes:
(1) without parameter inspection, and without progress
information
(2) without parameter inspection, but with progress
information which is invocation specific
(3) with parameter inspection, but without progress
information
(4) with parameter inspection, and with progress
information which is parameter specific
There are two types of EXTRACT in CLAM:
(a) EXTRACT returns all values from an invocation (like
an RPC)
(b) EXTRACT retrieves specific return values (like
requesting specific embedded html objects).
The two extraction options, when coupled with the four possible
types of EXAMINE, form eight possible models of extraction.
Table 1 shows the possible combinations and outlines the basic
functionality achievable with each potential scheme. Consider even
the simplest case, entry 1a: an EXAMINE that works on a
per-invocation basis only (it cannot examine particular parameters)
and an EXTRACT that only returns the entire set of results. Even this
extraction model remains more powerful than a typical
C++/Fortran-like procedure call because of the viable
asynchrony achieved. The model retains its client-centric
orientation. The client polls at chosen times (with EXAMINE)
and can extract the data whenever desired. The assumption in
CLAM is that the client may also extract the same data multiple
times. This places some burdens on the support system that we
discuss in the next section.
Table 1. EXAMINE/EXTRACT relationships.
Columns: (a) single EXTRACT; (b) per return element EXTRACT.
Rows: (1) per-invocation EXAMINE, one parameter; (2) per-invocation EXAMINE, two parameters; (3) per-result EXAMINE, one parameter; (4) per-result EXAMINE, two parameters.
1a. Like an asynchronous procedure call (e.g., Java RMI).
1b. Limited partial extraction. Like 1a, with the added ability to extract a subset of return data at a time indicated by PARTIAL (limited to one checkpoint) or after all results are completed.
2a. Very limited. Progressive extraction becomes possible, with the data completion level indirectly indicated by the second parameter. Must retrieve the entire data set.
2b. Very limited. Progressive extraction still possible; no legitimate potential for partial extractions other than a unique set as in 1b.
3a. Allows for data retrieval as particular return values are complete, but the entire set must be retrieved each time (semantic partial extraction).
3b. True partial extraction becomes possible here. No real progressive extraction (e.g., web browsing).
4a. Progressive extraction becomes possible, with the data completion level indicated by the second parameter. Must retrieve the entire data set. Can determine more detail than 2a.
4b. Partial and progressive extraction are both possible. Single results may be extracted at various stages of completion (e.g., CLAM).
The mechanism of table entry 1b is mainly used for partial result
extraction when all results are completed, yet not all results are
needed. The client has the possibility to extract only those
results needed right away, and can leave the other results on the
server until they are extracted at a later point in time, or until the
client indicates it is no longer interested in the results. The main
advantage of this level of partial extraction is the avoidance of
transmitting huge amounts of data when it is not needed. This is
particularly the case when one result parameter contains some
meta-information that tells the client if it needs the other result
parameters at all. This mechanism can be compared to the
partial download of web-pages by a web-browser. In a first
download the browser might exclude images. At a later point the
person using the web-browser might request the download of
some specific or all images based on the textual information
received in the first download. This usage of partial extraction
has become very important, as it allows a client to download a
small amount of information first, and to spend the cost (mainly
time when using slow connections) on the costly images only
when they are really needed.
There is a limited capability for process progress inspection
even with only one return parameter in table entry 1b. With the
set {DONE, NOT_DONE, PARTIAL, ERROR}, there is room
for limited process inspection. The return of "PARTIAL" from
a server may indicate a unique meaningful checkpoint to a
client. It could be used to indicate some arbitrary level of
completion or that some arbitrary subset of data was currently
meaningful. This single return value is really a special binary
case of the second return value from EXAMINE. Together with
a per return element EXTRACT, this model can only reasonably
be used to extract one specific, pre-defined subset of results
before final extraction of the entire return data set. Figure 1
shows this use of PARTIAL for creating a single binary partition
of results: PARTIAL indicates in this specific case that
the results A and B are ready, yet C is not. If, e.g., A
and C were ready but B was not, this could not be indicated and
the status NOT_DONE would be returned.
For clarity, we examine more closely the power associated with
each entry in the above table. Entry 1a, when there is only one
status parameter per invocation (i.e., for all of foo) and only the
ability to extract the entire return data set, is clearly the most
limited. It is very much like an asynchronous remote procedure
call, with the clear distinction that the client polls for result
readiness. Also, data are only delivered when the client requests
them, as noted previously.
In table entry 1b, we see that adding per element extraction
capability allows clients to reasonably extract one portion of the
data set if it is done earlier than the whole. This capability is
still very limited. This model can only reasonably be used to
extract one specific, pre-defined subset of results before final
extraction of the entire return data set. The return value
"PARTIAL" can be used to indicate that a partial set of results
are done, whereas the examination return value "DONE" may
indicate that all results are ready. This use of PARTIAL can be
seen in Figure 1. Again, it is only possible to create a single
binary partition of results this way.
Figure 1. Overloading PARTIAL with additional semantic
information (return values from foo: A and B ready, C not ready)
In table entry 2a, we see that very limited progressive extraction
of the data is made possible by the addition of the second
parameter to EXAMINE. The status of the results can be
indirectly derived from the progress of the invocation. This
indirect progress indication only applies to the entire result
return set, which is really only useful if there is only one result
parameter or all the result parameters belong tightly together.
This is actually a superset of the web-browser extraction method.
With the web-browser extraction method for weather data
to be computed, images, or simulations, no meta-information
about the data is returned to the client (e.g., status about the data
being 20% complete). To add such meta-data to web
browsing, the browser must be further extended to actively
process return values through another mechanism like Java or
JavaScript.
In table entry 2b, we see that very limited progressive extraction
of the data is again made possible by the addition of the second
parameter to EXAMINE. There is really no more power in this
examination and extraction model than table entry 2a. However,
creative server programmers could take advantage of a scheme
similar to that shown in Figure 1. Still, even under such
circumstance, programmers are again limited to one predefined
subset for extraction and are not allowed the flexibility seen in
entries 3b and 4b.
In table entry 3a, we see the addition of per return value
examination information. If there is only one return value from
a method, the functions of entries 3a, 3b, 4a, and 4b are identical
to entries 1a, 1b, 2a, and 2b, respectively. We refer to entry 3a
as semantic partial extraction because the per result examination
allows the user to know exactly when individual return results
are ready, but the entire data set must be extracted to get any
particular element. This can cause unnecessary delay and
overhead, especially with large return data sets.
In table entry 3b, we see the first fully functional partial
extraction. Whereas in entries 1b and 2b individual return
values could be selected from the return set, partial extraction
was not meaningful without a per return value EXAMINE
primitive as offered here.
In table entry 4a, progressive extraction is possible, with
progress indicators for each data return value. On the other
hand, it still suffers from the same overhead as table entry 3a: all
data must be extracted at each stage. This expense cannot be
avoided.
In table entry 4b, we see the manifestation of the complete
CLAM examine/extraction model, one that has all facets of the
general result extraction model. Both partial and progressive
extractions are possible, including partial-progressive
extractions (where arbitrary individual elements may be
extracted at various stages of completion). Without exception,
this model is strictly more powerful than any of the others.
2.4 Granularity in partial extraction
In our discussion of Table 1 we had the special case where it is
only possible to extract exactly one subset of parameters (case
1b) and the more general case wherein an arbitrary set of
parameters can be extracted partially (case 3b). In both
cases, the granularity of extraction is given by the individual
method parameter. One method parameter is either extracted as
a whole, or not at all. If a module supplier wants to provide a
very fine granularity for partial extraction, the results of a
method have to be split up into as many different parameters as
possible. This allows fine granularity for partial examination
and extraction, yet has two distinct disadvantages:
- if the result parameters are correlated and together form a
larger logical structure, this higher level structure gets lost in
the specification of the method as well as in the extraction
- the client is burdened with reconstructing any higher-level
data structure. This is an overhead in programming
as well as in information dissemination (the client
needs to get from somewhere the information on how to
reconstruct a higher-level structure, information that is not
necessarily part of the method specification).
Figure 2. Sample CLAM repository information (part of the
repository definition in XML, with the definition of the
parameter PersDat)
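As a minimal illustration of such a repository entry, a DTD for PersDat consistent with the XQL paths used below (the Firstname and Address elements are assumptions) might read:

    <!ELEMENT PersDat   (Name, Address?)>
    <!ELEMENT Name      (Firstname, Lastname)>
    <!ELEMENT Firstname (#PCDATA)>
    <!ELEMENT Lastname  (#PCDATA)>
    <!ELEMENT Address   (#PCDATA)>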
There is another way to provide a fine granularity for partial
extraction of result parameters without burdening the client with
the reconstruction of higher level data structures. The module
provider makes the substructure of the result parameter public,
and allows users to examine and request only part of a result
parameter. One possible way to do that is by using XML for
parameters [15]. The structure of parameters is defined by
DTDs (or XML-Schemas) and made public along with the
method interfaces [16]. For CLAM, the DTD for the XML-parameters
is defined in the CHAIMS repository, as shown in
Figure 2. If the client wants to extract parameters as a whole,
the client uses the CLAM syntax as discussed in section 2.2. If
the client wants to examine and extract just part of a parameter,
the client adds an XQL query string to each parameter name in
the examine or extract command. For the parameter "PersDat"
specified in Figure 2 a client could just examine and extract the
element "Lastname" with the following CLAM commands:
XQL is a very simple query language that allows clients to
search for and extract specific sub-elements of an XML
parameter. In the above example, the whole data structure
"PersDat" is searched for an element with the tag "Lastname,"
which is then returned inclusive of all its sub-elements. XQL
would also allow clients to specify the whole path (e.g.
"/Persdat/Name/Lastname"), or to search for an element
anywhere within another element (e.g., "/Persdat//Lastname") or
anywhere within the entire parameter (e.g., "//Lastname"). In
our specific example, all of these queries return the same data.
XQL also allows more complex queries including conditions
and subscripts (for more details see [17]).
Using XQL queries for extracting partial results of
computational methods should not be confused with using XQL
queries to extract data from an XML database, in spite of the
apparent similarities. There are several differences:
- here we query non-persistent data; the lifetime of the result
parameters is linked to the duration of the invocation
- there exists no overall schema for all the result parameters of
one module or even of several modules. The scope of the XML
specification (the DTD in the repository) is one single
parameter. The relationships between the parameters are not an
issue for partial extraction.
- due to the first two differences, there is also no need or use for
a join operation, and a simple query language like XQL fulfills
all the needs for partial extraction (whereas for XML databases
more complex query languages like XML-QL might be better
suited [18]).
2.5 Implementation issues
2.5.1 Wrapping
Within CHAIMS, all server modules have certain compatibility
requirements. Many server modules are actually wrapped
legacy code that does not have the necessary components to act as
remote servers. For minimal CHAIMS compliance, any legacy
module can trivially support an EXAMINE/EXTRACT
relationship like that in table entry 1a. This is a single
EXTRACT with a per invocation EXAMINE. Simply treat the
legacy module like a black box that returns only {DONE,
NOT_DONE, ERROR} (without PARTIAL). Also, because
return values are collected in the CHAIMS wrapper, the client
can freely choose when to request the data, though it must
request the data explicitly. The client may also perform multiple
requests for the same data without further augmentation of the
original code (table entry 1b).
Figure 3. Client-wrapper communication (the client's INVOKE
"foo" is passed over a connection to the wrapper, which forwards
it to the legacy code)
To use the more powerful models of data extraction, significant
modification would be required of naive modules. We
originally classified the two augmentation types (to legacy
modules) as either partial or progressive-type augmentations.
Partial extraction augmentations are those that make a particular
subset of the return data externally available before the
completion of the entire invocation. Progressive extraction
augmentations are those that post information in multiple stages,
i.e., at implementer-defined checkpoints.
Native modules designed for partial and progressive extraction
must have a way to post interim results that may be extracted by
clients. The interim results must be held in some structure so
that request for data may be serviced without interrupting the
working code.
The CHAIMS wrapper is a threaded component that handles
messages and provides a means of storing interim results and
delivering those results to clients. To implement partial and
progressive extractions, two pieces of information are required:
status and data. When a module posts an interim result (to be
delivered by the wrapper or a native server), both pieces of
information must be given about the result value.
This status does not need to be provided per method or per
invocation, however. Such information is extracted from
collected knowledge about all partial results. When no partial
results are ready, status is NOT_DONE. When all partial results
are ready, status is DONE. When some results are ready at any
level, per invocation status is simply PARTIAL. When any
partial result indicates ERROR, however, the per invocation
status should be set to ERROR. This prevents a blind per
invocation extraction of all data elements when some may be
corrupt.
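This aggregation rule can be stated compactly as code. A minimal sketch in Java (the wrapper's runtime routines are provided as a Java class; the names below are illustrative, not the actual CPAM API):

    // Per-result statuses as returned by EXAMINE.
    enum Status { DONE, NOT_DONE, PARTIAL, ERROR }

    class StatusAggregator {
        static Status perInvocationStatus(Status[] results) {
            boolean anyReady = false, allDone = true;
            for (Status s : results) {
                if (s == Status.ERROR)
                    return Status.ERROR;   // one corrupt result poisons the invocation
                if (s == Status.DONE || s == Status.PARTIAL)
                    anyReady = true;       // something is extractable
                if (s != Status.DONE)
                    allDone = false;
            }
            if (allDone) return Status.DONE;
            return anyReady ? Status.PARTIAL : Status.NOT_DONE;
        }
    }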
2.5.2 Result marshalling methods
There are two equally appropriate methods for marshalling
partial results, depending upon application: passive and active.
The marshalling concerns are basically the servers', not the
clients'. With a passive approach, whenever a client specifically
requests status or partial results, the message handler requests
that information from the working code. The appropriate
routines for doing so are provided in the runtime layer (CPAM -
Composition Protocol for Autonomous Megamodules) in the
form of a Java class, or they may be user developed. Of course,
native codes written with the intent of posting partial results are
easier to work with than wrapped codes and can use any suitable
language (Java, or otherwise). Figure 4 better shows the
marshalling and examination interaction between a wrapper and
legacy code. The status and progress information is held in the
wrapper. Also, temporary storage locations for the extractable
parameters are located in the wrapper.
The active approach to data marshalling is more appropriate for
certain problem types. In this method, when a server program
reaches a point where it is appropriate to post status and results,
it does so, directly to the CPAM objects or wrapper layer. The
trade-offs between the approaches should be clear. Active
posting is conceptually simpler and easier to code, but requires
the overhead of posting and storing interim results that may never
be examined. Figure 4 shows how an active approach to data
marshalling would proceed through time. After the wrapper
receives an EXAMINE request, the appropriate routines actively
inspect the legacy code to update the status/progress structure in
the wrapper. After the EXTRACT is received, the request is
passed to the legacy code, and the data structures are then
updated, before results are passed back to the client.
Figure 4. Wrapper result marshalling (the wrapper holds the
status/progress structure and temporary storage for extractable
results such as A and C of the legacy code)
2.5.3 Termination
The ability to delay extractions and make repeated extractions
implies that a server no longer knows exactly when a client is
finished with an invocation. With a traditional RPC, this was
not a problem. In that case, when the work was complete,
results were returned, and the server (or procedure) had no
further obligations to the client. With arbitrary extraction, the
server is obligated to hold data for the client.
Even without allowing repeated extractions, there are more
subtle reasons for which the server must hold data for clients. In
the case of a partial extraction from our example method foo, a
client may extract result fields A and B, but the server does not
know that the client is not also interested in result field C. Since
there is no a priori communication of intent from client to
server, their relationship must be changed somewhat.
The obligation for a server to cache and store results is balanced
by a client's obligation to explicitly terminate an invocation.
This explicit termination merely signals to a server that a client
is no longer interested in further extractions from a particular
invocation, but is an integral detail of this model of result
extraction.
2.5.4 Repository
There should be a repository of method interfaces and the
structure of return values available to programmers using this
arbitrary extraction model. Of course, when programming "in
the small," (i.e., stand-alone programs, in-house projects, etc.),
this is not really an issue at all. When making services available
for sale/use externally, service providers must provide the
appropriate information about the results which can be
extracted. For instance, if delivering foo over the net, a provider
should indicate to users that fields A, B, and C may be extracted
separately. This information in the context of CHAIMS (where
we assume "programming in the large") is provided via a
repository.
3. COMPARISONS
In the following we compare the extraction model as defined in
CLAM and mirrored primitive by primitive in the CHAIMS
access protocol CPAM to the extraction models found in the
following protocols:
. web browsing
. JointFlow
. SWAP
. CORBA-DII
3.1 Partial and progressive result extraction
in web browsing
Web browsing generally falls into the category of services that
we refer to as data services. Recall that data services primarily
deliver specific data requested by clients, in contrast to
computational services, which add value to client-supplied data.
Clients are usually represented by one of many available web
browsers or crawlers while web servers deliver data to those
clients. Clients request data using the http protocol. Data
extracted (documents delivered) from servers are often written
in html, and often have components of varying types, including
images, audio, and video.
Web browsing occurs in batch and interactive ways. Batch
browsing is performed by crawlers for many reasons, such as
indexing, archiving, etc. Interactive browsing is performed by
humans for numerous reasons, such as information gathering,
electronic commerce, etc. A browser of either sort makes a
request to a server for a specific document. That document is
returned to the client, and serves as a template for further
requests. If the document is html, the browser may parse the
document and determine that there are other elements that form
the complete document (i.e., images tags). The document serves
as a schema, describing the other elements that may be extracted
from the server.
After a main document has been fetched, we can consider the
possible partial and progressive extractions that can take place.
To extract a sub-element of a web page, an http request is sent to
a server, and the data or an error message is returned. In batch
browsing, the textual information contained in the page is
frequently enough to be meaningful. This is very different from
the generalized result extraction model we discuss, where the
schema of the results is not meaningful in itself. In web browsing,
by contrast, the page retrieved is often meaningful itself, not just
for the sub-elements it describes. This aside, we consider result
extraction in terms of gathering sub-elements from pages.
In interactive browsing, partial extraction is a simple process,
and is at least marginally exploitable in the most widely used
interactive browsers (Netscape and Microsoft's Internet
Explorer). Both feature an "auto-load" feature that can be
toggled (to varying degrees) to automatically load (or not load)
different content type such as images, audio, or video. For
instance, some users are concerned with images and text, but do
not wish to be disturbed by audio. Their browser makes the http
requests for all sub-elements, save audio. This is partial
extraction. In other cases, especially with slower internet
connections where images are expensive to download, users may
choose not to download images automatically until they determine
that a particular image or set of images is important enough to
invest time in.
Partial extraction in web-browsing is a special case of the
general partial extraction model in that the first result to be
extracted always contains information about the other results to
be extracted. Based on this first result, the client not only
determines its interest in the other elements of the page, but also
gets the information about what other results are available at all.
This is in contrast to our general model, where a result
parameter may but need not provide information about other
result parameters, and where all possible result parameters are
specified in a repository beforehand.
The most commonly found progressive extraction in web
browsing is quite different from progressive extraction in a
computational service, though progressive extraction of
computational services over the web (e.g., improving simulation
data) is also feasible.
extraction refers to extracting various transformations of input
data over the life of a computation. In web browsing,
progressive extraction is actually repeated extraction of a
changing data stream. Weather services on the web often
provide continuous updates and satellite images. Stock tickers
provide updated information so users can have current
information about their investments. Repeated extractions from
the same stream show the stream's progress through time.
Sometimes these repeated extractions may be done by manually
reloading the source, or they may be pulled from servers by HTML
update commands, JavaScript, embedded Java code, etc. Such
data is retrievable at any time and its progress status is always
DONE and 100% accurate, yet we expect the data to also contain
information about the point in time to which it refers.
3.2 Incremental result extraction and
progress monitoring in JointFlow
JointFlow is the Joint Workflow Management Facility of
CORBA [6]. It is an implementation of the I4 protocol of the
workflow reference model of WfMC [11] on top of CORBA.
JointFlow adopts an object oriented view of workflow
management: processes, activities, requesters, resources, process
managers, event audits etc. are distributed objects, collaborating
to get the overall job done. Each of these objects can be
accessed over an ORB, the JointFlow specification defines their
interfaces in IDL. We have chosen JointFlow for comparison as
it is a protocol that also supports requests to remote
computational units, which may but need not have some degree
of autonomy, and the protocol is also based on asynchronous
invocation of work and extraction of results, having special
primitives for invocation, monitoring, and extraction.
Work is started by a requester asking a process manager to
create a new process. The requester then communicates directly
with the new process, setting context attributes in the process
and invoking the start operation of the process. A process may be
a physical device, a wrapper of legacy code, or it may initiate
several activity objects which might in turn use resources (e.g.
humans) via assignments or itself act as requesters for other
processes. Our focus of interest here is the interaction between
the requester and the process concerning result extraction and
progress monitoring.
3.2.1 Monitoring the Progress of Work
Both processes and activities are in one of the following states:
running, not_running.not_started, not_running.suspended,
completed (successfully), terminated (unsuccessfully), aborted
(unsuccessfully). A requester can query the state of a process,
the states of the activities of the process (by querying and
navigating the links from processes to activities), and the states
of assignments (by querying and navigating the links from
activities to assignments). If the requester knows the workflow
model with all its different steps implemented by the process,
the requester might be able to interpret the state information of
all subactivities and assignments and figure out what the
progress of the process is. If the model is not known, e.g., due
to an autonomy boundary as assumed in CHAIMS, the
only status information provided by the JointFlow protocol itself
is essentially completed or not yet completed. In contrast,
CHAIMS supports the notion that certain services may support
progress information (e.g. 40% done) that can be monitored.
This information is more detailed than just running or complete,
and more aggregated and better suited for autonomous services
than detailed information about component activities.
In contrast to CHAIMS, which polls all progress information, in
JointFlow a process signals its completion to the requester by an
audit event. These audit events could also be used to
implement CHAIMS-like progress monitoring on top of
JointFlow: a process can have a special result attribute for
progress information and the process is free to update that
attribute regularly. It then can send an audit event with the old
and new value of the progress indicator result to its requester
after each update. Yet this result attribute cannot be polled by a
requester (in contrast to CPAM and SWAP), because get_result
only returns results if all results are available at least as
intermediate results.
3.2.2 Extracting Results Incrementally
Both processes and activities have an operation
get_result(): ProcessData (returning a list of name-value pairs).
Get_result does not take any input parameter and thus returns all
the results. The get_result operation may be used to request
intermediate result data, which may or may not be provided
depending upon the work being performed. If the results cannot
yet be obtained, the operation get_result raises an exception (any
data returned is meaningless). The results are not final until the whole unit of
work is completed, resulting in a state change to the state
complete and a notification of the container process or the
requester. This kind of extracting intermediate results
corresponds to the progressive extraction of all result attributes
in CHAIMS. The following features found in CHAIMS are not
available in JointFlow:
. Partial extraction with get_result: only all or none
of the result values can be extracted by get_result, and
there is no mechanism to return an exception only for
some of the values.
. Progressive extraction with get_result of just one
result attribute when not all other results are yet
ready for intermediate or final extraction
. There is no accuracy information for intermediate
results, unless it is in a separate result attribute. There
is no possibility to find out about the availability or the
accuracy of intermediate results unless requesting
these results.
Though partial and progressive result extraction are not part of
the design of JointFlow, they also can be achieved by using
audit events and by pushing progressive and partial results onto
the requester, instead of letting the requester poll for them. A
process or an activity can send out an audit event to its requester
or to the containing process whenever one of the result values
has been updated. This event would then contain the old as well
as the new result value. In case of large data and frequent
updates, this messaging mechanism could result in huge
amounts of traffic. The mechanism would have to be extended
by special context attributes that tell the process or activity in
advance which results should be reported in which intervals.
Yet this results in a very static and server-centric approach, in
contrast to the client-centric approach in CHAIMS that is based
on data on demand. Also, as partial and progressive result
extraction are not mandated by the JointFlow protocol itself, it is
questionable how many processes and activities would actually
offer it.
3.3 Incremental result extraction and
progress monitoring in SWAP
SWAP (Simple Workflow Access Protocol) is a proposal for a
workflow protocol based on extending http. It mainly
implements I4 (to some extent also I2 and I3) of the WfMC
reference model. SWAP defines several interfaces for the
different components (which are internet resources) of the
workflow system that interact via SWAP. The three main
components are of type ProcessInstance, ProcessDefinition and
Observer. The messages exchanged between these components
are extended http-messages with headers defined by SWAP.
The data to be exchanged is encoded as text/xml in the body of
the message.
A process instance (having the interface ProcessInstance) is
created and started by sending a
CREATEPROCESSINSTANCE message to the appropriate
ProcessDefinition resource. This message also contains the
context data to be set and the URI of an observer resource that
should be notified about completion and other events. The
response contains the URI of the newly created process instance.
The process is started either automatically by the
ProcessDefinition resource if the CREATEPROCESSINSTANCE
message contains the startImmediately
flag, or by sending a PROPPATCH message to the process
instance with the new state running. A process instance
resource can delegate work to other resources by itself acting as
an observer and asking some ProcessDefinition resources for the
creation of other process instances. As in JointFlow and in
CHAIMS, the process instance creation, setting of context
attributes, start of the process, and the extraction of results are
done asynchronously.
3.3.1 Result Extraction and Result Monitoring
Results are extracted from a process instance by sending it the
message PROPFIND at any time during the execution of a
process instance. This message either returns all available
results or, if it contains a list of requested result attributes,
only the selected ones. Only result attributes that are available
are returned. If requested attributes are not yet
available, presumably an exception should be returned for these
result attributes. SWAP does not specify if the results returned
by PROPFIND have to be final or not. The possibility to
ask for specific result attributes, and to get exceptions for
specific result attributes in case they are not available (made
possible by having exceptions encoded in XML instead of
just one possible exception per procedure call as in
the CORBA-based JointFlow protocol), allows some degree of
partial and maybe even progressive extraction.
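For illustration only (the URI, headers, and body encoding below are assumptions, not taken from the SWAP specification), pulling two specific result attributes might look like:

    PROPFIND /processes/instance-42 HTTP/1.1
    Host: workflow.example.com
    Content-Type: text/xml

    <propfind>
      <prop> <ResultA/> <ResultB/> </prop>
    </propfind>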
A process instance signals the completion of work to an
observer with the COMPLETE message. This message also
contains the result data: all the name value pairs that represent
the final set of data as of the time of completion. After sending
the COMPLETE message, the resource does not have to exist
any longer; this is in contrast to CHAIMS, where no result data
is lost until the client (observer) sends a TERMINATE.
A process instance can also send NOTIFY messages to an
observer resource. These messages transmit state change events,
data change events, and role change events, data change events
containing the names and values of data items that have
changed.
3.3.2 Incremental Result Extraction as Defined in
the CHAIMS Model
The mechanisms of SWAP allow the following kind of result
extraction and progress monitoring:
. Partial result extraction: Either pushing results via
NOTIFY messages or pulling results via PROPFIND
messages is possible. NOTIFY sends all new result
data, PROPFIND returns all available result data
whether or not they have already been returned by a
previous PROPFIND. Notification of result changes
without also sending the new values is not possible
unless additional result attributes are added. The same
is true for getting the status of individual results:
asking for the status of results without also getting the
results is not possible unless a state attribute is added
for each data attribute to the set of result attributes.
. Progressive result extraction: The SWAP
specification does not explicitly specify if progressive
result updates in a process instance are allowed or not.
If not, the result attributes would not be available until
their values are final. If yes, then progressive results
can be extracted either by pushing results via NOTIFY
messages or by pulling results via PROPFIND
messages. Accuracy indication is not provided; it
would have to be implemented via additional result
attributes.
3.3.3 Process Progress Monitoring
PROPFIND not only returns all result values available, it also
returns the state of the process instance and additional
descriptive information about the process. As possible states
can be specified by the process itself, PROPFIND also returns
the list of all possible state values, yet in most cases it would
probably just be not_yet_running, running, suspended,
completed, terminated, etc (the basic set of states defined by I4).
A process instance can be asked for the URIs of all the processes
it has delegated work to, and an observer can then directly ask
these subprocesses about their statuses. This is analogous to the
model found in JointFlow, and thus has the same drawbacks
concerning autonomy and concerning amalgamated progress
information.
Progress information is not specified by SWAP, but it
could be implemented by a special result attribute assuming that
result attributes can be changed over time. Such result attributes
could be extracted any time by PROPFIND, independent of the
availability of other result attributes.
Though SWAP does not support incremental result extraction as
defined in CHAIMS, it could quite easily either be added to the
SWAP protocol itself or done by using the SWAP protocol as
defined and applying the simple workarounds mentioned above.
As SWAP has very similar goals in accessing remote processes
as CHAIMS, and as it is a very open and flexible protocol, its
result extraction model is already very close to the one of
CHAIMS and could be easily extended to contain all aspects of
the CHAIMS extraction model. Yet as SWAP has not been
designed with incremental extraction in mind, it does not have
the strong duality between extraction and monitoring commands
found in CHAIMS between EXAMINE and EXTRACT.
3.4 Incremental result extraction and
progress monitoring in CORBA
3.4.1 CORBA- DII
CORBA offers two modes for interaction between a client and
remote servers: the static and the dynamic interface to an ORB.
For the static interface an IDL must exist that is compiled into
stub code that can be linked with the client. The client then
executes remote procedure calls as if the remote methods were
local.
The dynamic invocation interface (DII) offers dynamic access
where no stub code is necessary. The client has to know (or can
ask for) the IDL from the remote object, i.e., the names of the
methods and the parameters they take. The client then creates a
request for a method of that object. In this request the method
name appears as a string and the parameters appear as a list of
named values, with each named value containing the name of
the parameter, the value as type any (or a pointer to the value
and a CORBA type code), the length of the parameter, and some
flags. Once the request is created, the method can be invoked.
This is either done synchronously with invoke or
asynchronously with send (in fact, some flags allow more
elaborate settings). Invoke returns after the remote computation
has completed, and the client can read all OUT parameters in the
named value list. In case of a send, the client is not blocked. In
order to figure out when the invocation has finished, the client
can use get_response, either in a blocking (it waits until
invocation is done) or a non-blocking mode. As soon as the
return status of get_response indicates that the remote
computation is done, the client can read OUT parameters from
the named value list.
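In the C++ mapping, this deferred-invocation pattern looks roughly as follows (a simplified sketch; argument typing and error handling are abbreviated, in_value is a placeholder, and send_deferred is the C++ name for the asynchronous send described above):

    // Build a dynamic request for method "foo" on a remote object.
    CORBA::Request_var req = obj->_request("foo");
    req->add_in_arg() <<= in_value;          // parameters go into the named-value list
    req->set_return_type(CORBA::_tc_long);
    req->send_deferred();                    // asynchronous send; client is not blocked
    while (!req->poll_response()) {          // non-blocking completion check
        // ... do other work ...
    }
    req->get_response();                     // invocation done; OUT parameters readable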
In case of the asynchronous method invocation in CORBA-DII,
the progress of an invocation can be monitored and asked for by
the client as far as completion is concerned, but no further
progress information is available. Progressive extraction of
results is not supported by DII. Of course a client is free not to
read and use all results after the completion of an invocation, yet
while the computation is going on no partial extraction is
supported.
3.4.2 CORBA Notification Service
In order to mimic the incremental result extraction of CHAIMS,
one could use asynchronous method invocation with DII
coupled with the event service of CORBA. The client could be
implemented as a PullConsumer for a special event channel
CHAIMSresults, the servers could push results into that channel
as soon as they are available, together with accuracy
information. Though event channels could be used for that
purpose (we could require that every megamodule uses event
channels for this), an integration of incremental result extraction
and invocation progress monitoring into the access protocol
itself is definitely more adequate when we consider this to be an
integral part of the protocol. The same is true for the languages
used to program the client: while CLAM directly supports
incremental extraction and progress monitoring, this is not the
case for any of the languages in use for programming CORBA
clients.
4. CONCLUSIONS
In the CHAIMS project, we sought to build a composition-only
language for remote, autonomous services. To do this, we had
to consider many different extraction models used in different
domains. This examination led to the realization that a simple
asynchronous RPC-style approach was not enough.
To build a language and an access protocol to support arbitrary
result extractions took careful consideration of the myriad ways
extractions were currently being used in widespread as well as
hand-crafted systems. Building support and primitives for all of
these result extraction methods (autonomous, progressive,
and partial) and then binding them within one system has led
to the formulation of a comprehensive model for arbitrary result
extraction.
Our model captures the notions of traditional result extraction,
partial extraction and progressive extraction. By combining
two simple primitives in CLAM (EXAMINE and EXTRACT),
the full power of each of these extraction types can be achieved.
This extraction model is appropriate as a template for existing
systems, and future languages as well. It is generic to result
extraction, and only assumes that the necessary asynchrony can
be achieved among components through distributed
communication, threading, or other available means.
5. REFERENCES
--R
"A Language and System for Composing Autonomous, Heterogeneous and Distributed Megamodules,"
"Opus: A Coordination Language for Multidisciplinary Applications,"
"Exploiting Parallelism in Multidisciplinary Applications Using Opus,"
C Program Design for Engineers
ICASE Research Quarterly
"CPAM, A Protocol for Software Composition,"
"CLAM: Composition Language for Autonomous Megamodules,"
Simple Workflow Access Protocol (SWAP)
"Towards Megaprogramming: A Paradigm for Component-Based Programming"
The Workflow Reference Model
"Pipeline Expansion in Coordinated Applications,"
Design and Implementation
"Composition of Multi-site Services,"
"Extensible Markup Language (XML) 1.0,"
"Schema for Object-Oriented XML 2.0,"
"XQL Tutorial,"
"XML-QL: A Query Language for XML,"
partial;result extraction;scheduling;progressive;CHAIMS |
509092 | Making sparse Gaussian elimination scalable by static pivoting. | We propose several techniques as alternatives to partial pivoting to stabilize sparse Gaussian elimination. From numerical experiments we demonstrate that for a wide range of problems the new method is as stable as partial pivoting. The main advantage of the new method over partial pivoting is that it permits a priori determination of data structures and communication pattern for Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we design highly parallel algorithms for both sparse Gaussian elimination and triangular solve and we show that they are suitable for large-scale distributed memory machines. | Introduction
In our earlier work [8, 9, 22], we developed new algorithms to solve unsymmetric sparse linear
systems using Gaussian elimination with partial pivoting (GEPP). The new algorithms are highly
efficient on workstations with deep memory hierarchies and shared memory parallel machines with
a modest number of processors. The portable implementations of these algorithms appear in
the software packages SuperLU (serial) and SuperLU MT (multithreaded), which are publicly
available on Netlib [10]. These are among the fastest available codes for this problem.
Our shared memory GEPP algorithm relies on the fine-grained memory access and synchronization
that shared memory provides to manage the data structures needed as fill-in is created
dynamically, to discover which columns depend on which other columns symbolically, and to use a
centralized task queue for scheduling and load balancing. The reason we have to perform all these tasks
dynamically is that the computational graph does not unfold until runtime. (This is in contrast to
Cholesky, where any pivot order is numerically stable.) However, these techniques are too expensive
on distributed memory machines. Instead, for distributed memory machines, we propose to not
pivot dynamically, and so enable static data structure optimization, graph manipulation and load
balancing (as with Cholesky [20, 25]), and yet remain numerically stable. We will retain numerical
stability by a variety of techniques: pre-pivoting large elements to the diagonal, iterative refinement,
using extra precision when needed, and allowing low rank modifications with corrections at the end.
In Section 2 we show the promise of the proposed method from numerical experiments. We call
our algorithm GESP, for Gaussian elimination with static pivoting. In Section 3, we present
an MPI implementation of the distributed algorithms for LU factorization and triangular solve.
Both algorithms use an elaborate 2-D (nonuniform) block-cyclic data distribution. Initial results
demonstrated good scalability and a factorization rate exceeding 8 Gflops on a 512 node Cray T3E.

(This research used resources of the National Energy Research Scientific Computing Center, which is
supported by the Office of Energy Research of the U.S. Department of Energy under Contract No.
DE-AC03-76SF00098. It was also supported in part by NSF grant ASC-9313958, DOE grant
DE-FG03-94ER25219, UT Subcontract No. ORA4466 from ARPA Contract No. DAAL03-91-C0047, DOE grant
DE-FG03-94ER25206, NSF Infrastructure grants CDA-8722788 and CDA-9401156, and DOE grant
DE-FC03-98ER25351.)

(1) Row/column equilibration and row permutation: A ← P_r · D_r · A · D_c,
    where D_r and D_c are diagonal matrices and P_r is a row permutation
    chosen to make the diagonal large compared to the off-diagonal
(2) Find a column permutation P_c to preserve sparsity: A ← P_c · A · P_c^T
(3) Factorize A = L · U with control of diagonal magnitude:
        if ( |a_ii| < √ε · ||A|| ) then
            set a_ii to √ε · ||A||
        endif
(4) Solve A · x = b using the L and U factors, with the following iterative refinement:
        iterate:
            r = b - A · x                    (sparse matrix-vector multiply)
            solve A · dx = r                 (triangular solve)
            berr = max_i |r_i| / (|A| · |x| + |b|)_i
            if ( berr > ε and berr ≤ lastberr / 2 ) then
                x = x + dx; lastberr = berr
                goto iterate
            endif

Figure 1: The outline of the new GESP algorithm.
2 Numerical stability
Traditionally, partial pivoting is used to control the element growth during Gaussian elimination,
making the algorithm numerically stable in practice (examples exist where even GEPP is unstable,
but these are very rare [7, 19]). However, partial pivoting is not the only way
to control element growth; there are a variety of alternative techniques. In this section we present
these alternatives, and show by experiments that appropriate combinations of them can effectively
stabilize Gaussian elimination. Furthermore, these techniques are usually inexpensive compared to
the overall solution cost, especially for large problems.
2.1 The GESP algorithm
In Figure 1 we sketch our GESP algorithm, which incorporates some of the techniques we considered.
To motivate step (1), recall that a diagonally dominant matrix is one where each diagonal entry a_ii
is larger in magnitude than the sum of magnitudes of the off-diagonal entries in its row
(Σ_{j≠i} |a_ij| < |a_ii|) or column (Σ_{j≠i} |a_ji| < |a_ii|). It is known that choosing the
diagonal pivots ensures stability for such
matrices [7, 19]. So we expect that if each diagonal entry can somehow be made larger relative to
the off-diagonals in its row or column, then diagonal pivoting will be more stable. The purpose of
step (1) is to choose diagonal matrices D r and D c and permutation P r to make each a ii larger in
this sense.
We have experimented with a number of alternative heuristic algorithms for step (1) [13]. All
depend on the following graph representation of an n × n sparse matrix A: it is represented as an
undirected weighted bipartite graph with one vertex for each row, one vertex for each column, and
an edge with appropriate weight connecting row vertex i to column vertex j for each nonzero entry
a ij . Finding a permutation P r that puts large entries on the diagonal can thus be transformed
into a weighted bipartite matching problem on this graph. The diagonal scale matrices D_r and D_c
can be chosen independently, to make each row and each column of D r AD c have largest entries
equal to 1 in magnitude (using the algorithm in LAPACK subroutine DGEEQU [3]). Then there
are algorithms in [13] that choose P r to maximize different properties of the diagonal of P r D r AD c ,
such as the smallest magnitude of any diagonal entry, or the sum or product of magnitudes. But the
best algorithm in practice seems to be the one in [13] that picks P r , D r and D c simultaneously so
that each diagonal entry of P_r D_r A D_c is ±1, each off-diagonal entry is bounded by 1 in magnitude,
and the product of the diagonal entries is maximized. We will report results for this algorithm
only. The worst case serial complexity of this algorithm is O(n · nnz(A) · log n), where nnz(A) is
the number of nonzeros in A. In practice it is much faster; actual timings appear later.
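Restating this criterion as a formula: with B = D_r A D_c, and the row permutation viewed as a matching σ between rows and columns, the algorithm seeks

    \max_{\sigma} \prod_{i=1}^{n} |b_{\sigma(i),i}|
    \quad\Longleftrightarrow\quad
    \max_{\sigma} \sum_{i=1}^{n} \log |b_{\sigma(i),i}|,

the log-transformed form being the classical maximum-weight bipartite matching (assignment) problem on the graph above.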
Step (2) is not new and is needed in both SuperLU and SuperLU MT [10]. The column permutation
P c can be obtained from any fill-reducing heuristic. For now, we use the minimum degree
ordering algorithm [23] on the structure of A^T A. In the future, we will use the approximate minimum
degree column ordering algorithm by Davis et al. [6], which is faster and requires less memory
since it does not explicitly form A^T A. We can also use nested dissection on A + A^T or A^T A [17].
Note that we also apply P c to the rows of A to ensure that the large diagonal entries obtained from
Step (1) remain on the diagonal.
In step (3), we simply set any tiny pivots encountered during elimination to √ε · ||A||, where
ε is machine precision. This is equivalent to a small (half precision) perturbation to the original
problem, and trades off some numerical stability for the ability to keep pivots from getting too
small.
In step (4), we perform a few steps of iterative refinement if the solution is not accurate enough,
which also corrects for the √ε · ||A|| perturbations in step (3). The termination criterion is based
on the componentwise backward error berr [7]. The condition berr ≤ ε means that the computed
solution is the exact solution of a slightly different sparse linear system (A + δA) x = b, in which each
nonzero entry a_ij has been changed by at most one unit in its last place, and the zero entries are
left unchanged; thus one can say that the answer is as accurate as the data deserves. We terminate
the iteration when the backward error berr is smaller than machine epsilon, or when it does not
decrease by at least a factor of two compared with the previous iteration. The second test is to
avoid possible stagnation. (Figure 5 shows that berr is always small.)
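A serial sketch of this refinement loop in C may make the logic concrete (dense storage stands in for the sparse kernels used by the actual code; solve_lu, which applies the already-computed L and U factors, is an assumed helper):

    #include <math.h>
    #include <stdlib.h>

    /* Iterative refinement with componentwise backward error
       berr = max_i |r_i| / (|A| |x| + |b|)_i.  A is n-by-n, row-major. */
    void refine(int n, const double *A, const double *b, double *x,
                void (*solve_lu)(int n, const double *r, double *dx))
    {
        const double eps = 2.2e-16;              /* IEEE double machine epsilon */
        double *r  = malloc(n * sizeof *r);
        double *dx = malloc(n * sizeof *dx);
        double lastberr = HUGE_VAL;

        for (;;) {
            double berr = 0.0;
            for (int i = 0; i < n; i++) {        /* r = b - A x, and berr */
                double ri = b[i], denom = fabs(b[i]);
                for (int j = 0; j < n; j++) {
                    ri    -= A[i*n + j] * x[j];
                    denom += fabs(A[i*n + j]) * fabs(x[j]);
                }
                r[i] = ri;
                if (denom > 0.0 && fabs(ri) / denom > berr)
                    berr = fabs(ri) / denom;
            }
            /* Stop when accurate enough, or when berr stagnates. */
            if (berr < eps || berr > 0.5 * lastberr)
                break;
            lastberr = berr;
            solve_lu(n, r, dx);                  /* solve A dx = r via L and U */
            for (int i = 0; i < n; i++)
                x[i] += dx[i];
        }
        free(r); free(dx);
    }

In SuperLU the matrix-vector product and the triangular solves are of course sparse, and the loop typically terminates within a few iterations (Figure 3).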
2.2 Numerical results
In this subsection, we illustrate the numerical stability and runtime of our GESP algorithm on
53 unsymmetric matrices from a wide variety of applications. The application domains of the
matrices are given in Table 1. Most of them, except for two (ECL32, WU), can be obtained from
the Harwell-Boeing Collection [14] and the collection of Davis [5]. Matrix ECL32 was provided
by Jagesh Sanghavi from EECS Department of UC Berkeley. Matrix WU was provided by Yushu
Discipline Matrices
fluid flow, CFD af23560, bbmat, bramley1, bramley2, ex11, fidapm11, garon2,
graham1, lnsp3937, lns 3937, raefsky3, rma10, venkat01, wu
fluid mechanics goodwin, rim
circuit simulation add32, gre 1107, jpwh 991, memplus, onetone1, onetone2, twotone
device simulation wang3, wang4, ecl32
chemical engineering extr1, hydr1, lhr01, radfr1, rdist1, rdist2, rdist3a, west2021
petroleum engineering orsirr 1, orsreg 1, sherman3, sherman4, sherman5
finite element PDE av4408, av11924
stiff ODE fs 541 2
Olmstead flow model olm5000
aeroelasticity tols4000
reservoir modelling pores 2
crystal growth simulation cry10000
power flow modelling gemat11
dielectric waveguide dw8192 (eigenproblem)
astrophysics mcfe
plasma physics utm5940
economics mahindas, orani678
Table 1: Test matrices and their disciplines.
Wu from Earth Sciences Division of Lawrence Berkeley National Laboratory. Figure 2 plots the
dimension, nnz(A), and nnz(L+U), i.e., the number of nonzeros in the L and U factors (the
fill-in). The matrices are sorted in increasing order of the factorization time. The matrices of most
interest for parallelization are the ones that take the most time, i.e. the ones on the right of this
graph. From the figure it is clear that the matrices large in dimension and number of nonzeros
also require more time to factorize. The timing results reported in this subsection are obtained on
an SGI ONYX2 machine running IRIX 6.4. The system has eight 195 MHz MIPS R10000 processors
and 5120 Mbytes main memory. We only use a single processor, since we are mainly interested in
numerical accuracy. Parallel runtimes are reported in section 3.
Detailed performance results from this section in tabular format are available at
http://www.nersc.gov/~xiaoye/SC98/.
Among the 53 matrices, most would get wrong answers or fail completely (via division by a
zero pivot) without any pivoting or other precautions. 22 matrices contain zeros on the diagonal to
begin with which remain zero during elimination, and 5 more create zeros on the diagonal during
elimination. Therefore, not pivoting at all would fail completely on these 27 matrices. Most of the
other 26 matrices would get unacceptably large errors due to pivot growth. For our experiment,
the right-hand side vector is generated so that the true solution x_true is a vector of all ones. IEEE
double precision is used as the working precision, with machine epsilon ε ≈ 10^−16. Figure 3 shows the
number of iterations taken in the iterative refinement step. Most matrices terminate the iteration
with no more than 3 steps. 5 matrices require 1 step, 31 matrices require 2 steps, 9 matrices require
3 steps, and 8 matrices require more than 3 steps. For each matrix, we present two error metrics,
in Figures 4 and 5, to assess the accuracy and stability of GESP. Figure 4 plots the error
from GESP versus the error from GEPP (as implemented in SuperLU) for each matrix: A red dot
on the green diagonal means the two errors were the same, a red dot below the diagonal means
Figure 2: Characteristics of the matrices (dimension, nonzeros in A, and nonzeros in L+U,
plotted against LU factorization time in seconds).
Figure 3: Iterative refinement steps in GESP (number of iterative refinement steps versus
condition number; GESP in red, GEPP in blue).
GESP is more accurate, and a red dot above means GEPP is more accurate. Figure 4 shows that
the error of GESP is at most a little larger, and can be smaller (21 out of 53), than the error
from GEPP. Figure 5 shows that the componentwise backward error [7] is also small, usually near
machine epsilon, and never larger than 10^−12.
Although the combination of the techniques in steps (1) and (3) in Figure 1 works well for
most matrices, we found a few matrices for which other combinations are better. For example, for
FIDAPM11, JPWH 991 and ORSIRR 1, the errors are large unless we omit P r from step (1). For
EX11 and RADRF1, we cannot replace tiny pivots by √ε · ||A||. Therefore, in the
software, we provide a flexible interface so the user is able to turn on or off any of these options.
We now evaluate the cost of each step of GESP (Figure 1). This is done with respect to the serial
implementation, since we have only parallelized the numerical phases of the algorithm (steps (3)
and (4)), which are the most time-consuming. In particular, for large enough matrices, the LU
factorization in step (3) dominates all the other steps, so we will measure the times of each step
with respect to step (3).
Simple equilibration in step (1) (computing D r and D c using the algorithm in DGEEQU from
LAPACK) is usually negligible and is easy to parallelize. Both row and column permutation algorithms
in steps (1) and (2) (computing P r and P c ) are not easy to parallelize (their parallelization
is future work). Fortunately, their memory requirement is just O(nnz(A)) [6, 13], whereas the
memory requirement for L and U factors grows superlinearly in nnz(A), so in the meantime we
can run them on a single processor.
Figure 6 shows the fraction of time spent finding P_r in step (1) using the algorithm in [13], as
a fraction of the factorization time. The time is significant for small problems, but drops to 1%
to 10% for large matrices requiring a long time to factor, the problems of most interest on parallel
machines.
The time to find a sparsity-preserving ordering P c in step (2) is very much matrix dependent. It
is usually cheaper than factorization, although there exist matrices for which the ordering is more
expensive. Nevertheless, in applications where we repeatedly solve a system of equations with the
same nonzero pattern but different values, the ordering algorithm needs to be run only once, and
its cost can be amortized over all the factorizations. We plan to replace this part of the algorithm
Figure 4: The error ||x_true − x||_∞ from GESP versus the error from partial pivoting (GEPP)
with refinement.
Figure 5: The backward error berr from GESP, plotted against the condition number.
Figure 6: The times to permute the large diagonal, solve, compute the residual and estimate the error bound, shown as fractions of the LU factorization (GENP) time in seconds, on a 195 MHz MIPS R10000.
BBMAT 38744 1771722 .0224 .5398 49.1 4.3
Table 2: Characteristics of the test matrices, including the order and the number of nonzeros. NumSym is the fraction of nonzeros matched by
equal values in symmetric locations. StrSym is the fraction of nonzeros matched by nonzeros in
symmetric locations.
with something faster, as outlined in Section 2.1.
As can be seen in Figure 6, computing the residual (a sparse matrix-vector multiplication, which is
cheaper than a triangular solve) and the triangular solve itself both take a small fraction of the
factorization time. For large matrices the solve time is often less than 5% of the factorization time.
Both algorithms have been parallelized (see section 3 for parallel performance data).
Finally, our code has the ability to estimate a forward error bound for the true error ||x_true − x||_∞.
This is by far the most expensive step after factorization. (For small matrices, it can be more
expensive than factorization, since it requires multiple triangular solves.) Therefore, we will do this
only when the user asks for it.
3 An implementation with MPI
In this section, we describe our design, implementation and the performance of the distributed
algorithms for two main steps of the GESP method, sparse LU factorization (step (3)) and sparse
triangular solve (used in step (4)). Our implementation uses MPI [26] to communicate data, and so
is highly portable. We have tested the code on a number of platforms, such as Cray T3E, IBM SP2,
and Berkeley NOW. Here, we only report the results from a 512 node Cray T3E-900 at NERSC. To
illustrate scalability of the algorithms, we restrict our attention to eight relatively large matrices
selected from our testbed in Table 1. They are representative of different application domains. The
characteristics of these matrices are given in Table 2.
3.1 Matrix distribution and distributed data structure
We distribute the matrix in a two-dimensional block-cyclic fashion. In this distribution, the P
processes (not restricted to be a power of 2) are arranged as a 2-D process grid of shape P_r × P_c.
The matrix is decomposed into blocks of submatrices. Then, these blocks are cyclically mapped
onto the process grid, in both row and column dimensions. Although a 1-D decomposition is
more natural to sparse matrices and is much easier to implement, a 2-D layout strikes a good
balance among locality (by blocking), load balance (by cyclic mapping), and lower communication
volume (by 2-D mapping). 2-D layouts were used in scalable implementations of sparse Cholesky
factorization [20, 25].
We now describe how we partition a global matrix into blocks. Our partitioning is based on
the notion of unsymmetric supernode first introduced in [8]. Let L be the lower triangular matrix
in the LU factorization. A supernode is a range of columns of L with the triangular block
just below the diagonal being full, and with the same row structure below this block. Because of
the identical row structure of a supernode, it can be stored in a dense format in memory. This
supernode partition is used as our block partition in both row and column dimensions. If there are
N supernodes in an n-by-n matrix, the matrix will be partitioned into N^2 blocks of nonuniform
size. The size of each block is matrix dependent. It should be clear that all the diagonal blocks
are square and full (we store zeros from U in the upper triangle of the diagonal block), whereas
the off-diagonal blocks may be rectangular and may not be full. The matrix in Figure 7 illustrates
such a partitioning. By block-cyclic mapping we mean that block (I, J) is mapped
onto the process at coordinate (I mod P_r, J mod P_c) of the process grid. Using this mapping, a
block L(I, J) in the factorization is only needed by the row of processes that own blocks in row I.
Similarly, a block U(I, J) is only needed by the column of processes that own blocks in column J.
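As a sketch, the owner of a block under this mapping can be computed as follows (assuming processes are numbered row-major within the grid; the function name is illustrative, not from the SuperLU sources):

    /* Owner of block (I, J) under the 2-D block-cyclic mapping,
       for a Pr x Pc process grid numbered row-major. */
    int block_owner(int I, int J, int Pr, int Pc)
    {
        int prow = I % Pr;   /* process row that owns block row I       */
        int pcol = J % Pc;   /* process column that owns block column J */
        return prow * Pc + pcol;
    }

Thus a block L(I, K) is needed only by processes with row coordinate I mod P_r, and a block U(K, J) only by processes with column coordinate J mod P_c.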
In this 2-D mapping, each block column of L resides on more than one process, namely, a
column of processes. For example, in Figure 7, the k-th block column of L resides on the column
processes {0, 3}. Process 3 only owns two nonzero blocks, which are not contiguous in the global
matrix. The schema on the right of Figure 7 depicts the data structure to store the nonzero blocks
on a process. Besides the numerical values stored in a Fortran-style array nzval[] in column major
order, we need the information to interpret the location and row subscript of each nonzero. This is
stored in an integer array index[], which includes the information for the whole block column and
for each individual block in it. Note that many off-diagonal blocks are zero and hence not stored.
Neither do we store the zeros in a nonzero block. Both lower and upper triangles of the diagonal
block are stored in the L data structure. A process owns ⌈N/P_c⌉ block columns of L, so it needs
⌈N/P_c⌉ pairs of index/nzval arrays.
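A minimal C sketch of this storage scheme might look as follows; the field and type names are illustrative, not the actual SuperLU_DIST declarations:

    /* Per-process storage of L, following the description above. */
    typedef struct {
        int    *index;   /* metadata for one local block column: number of
                            blocks, then for each block its block row number,
                            number of full rows, and the row subscripts;
                            also records the leading dimension used in nzval */
        double *nzval;   /* numerical values of all nonzero blocks of the
                            column, contiguous, in column-major order */
    } LocalBlockColumn;

    typedef struct {
        int               ncols_loc;  /* = ceil(N / Pc) local block columns */
        LocalBlockColumn *Lcol;       /* one index/nzval pair per column    */
    } LocalLMatrix;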
For matrix U , we use a row oriented storage for the block rows owned by a process, although
for the numerical values within each block we still use column major order. Similarly to L, we also
use a pair of index/nzval arrays to store a block row of U . Due to asymmetry, each nonzero block
in U has the skyline structure as shown in Figure 7 (see [8] for details on the skyline structure).
Therefore, the organization of the index[] array is different from that for L, which we omit showing
in the figure.
Since we do no dynamic pivoting, the nonzero patterns of L and U can be determined during
symbolic factorization before numerical factorization begins. Therefore, the block partitioning and
the setup of the data structure can all be performed in the symbolic algorithm. This is much
cheaper to execute as opposed to partial pivoting where the size of the data structure cannot be
forecast and must be determined on the fly as factorization proceeds.
3.2 Sparse LU factorization
Figure 8 outlines the parallel sparse LU factorization algorithm. We use Matlab notation for integer
ranges and submatrices. There are three steps in the K-th iteration of the loop. In step (1), only
a column of processes participate in factoring the block column L(K:N, K). In step (2), only
a row of processes participate in the triangular solves to obtain the block row U(K, K+1:N).
The rank-b update by L(K+1:N, K) × U(K, K+1:N) in step (3) represents most of the
work and also exhibits more parallelism than the other two steps, where b is the block size of the
K-th block column/row. For ease of understanding, the algorithm presented here is simplified. The
actual implementation uses a pipelined organization so that the processes PROCC(K+1) can start
step (1) of iteration K+1 as soon as the rank-b update (step (3)) of iteration K to block column K+1 finishes,
Figure 7: The 2-D block-cyclic layout and the data structure to store a local block column of L. (The figure shows the global matrix, the process mesh, and the index/nzval storage of one block column of L: index records the number of blocks and, for each block, its block number, the number of full rows, and the row subscripts; nzval holds the numerical values, with its leading dimension recorded in index.)
Let mycol (myrow) be my process column (row) number in the process grid
Let PROCC(K) (PROCR(K)) be the column (row) processes that own block column (row) K
for block K = 1 to N do
    (1) if mycol is in PROCC(K)
            Obtain the block column factor L(K:N, K)
            Send L(K:N, K) to the processes in my row who need it
        else
            Receive L(K:N, K) if I need it
        endif
    (2) if myrow is in PROCR(K)
            Perform parallel triangular solves U(K, K+1:N) = L(K, K)^(-1) * A(K, K+1:N)
            Send U(K, K+1:N) to processes in my column who need it
        else
            Receive U(K, K+1:N) if I need it
        endif
    (3) for J = K+1 to N do
            for I = K+1 to N do
                if I own block A(I, J) and L(I, K) != 0 and U(K, J) != 0
                    A(I, J) = A(I, J) - L(I, K) * U(K, J)
                endif
            end for
        end for
end for
Figure 8: Distributed sparse LU factorization algorithm.
before completing the update to the rest of the trailing matrix A(K+1:N, K+1:N).
The pipelining alleviates the lack of parallelism in both steps (1) and (2).
On 64 processors of a Cray T3E, for instance, we observed speedups of 10% to 40% over the
non-pipelined implementation.
In each iteration, the major communication steps are send/receive L(K:N, K) across process
rows and send/receive U(K, K+1:N) across process columns. Our data structure (see Figure 7)
ensures that all the blocks of L(K:N, K) and U(K, K+1:N) on a process are contiguous in
memory, thereby eliminating the need for packing and unpacking in a send-receive operation, or
for sending many smaller messages. In each send-receive pair, two messages are exchanged, one
for index[] and another for nzval[]. To further reduce the amount of communication, we employ
the notion of elimination dags (EDAGs) [18]. That is, we send the K-th column of L rowwise to
the process owning the J-th column of L only if there exists a path between (super)nodes K and
J in the elimination dags. This is done similarly for the columnwise communication of rows of U .
Therefore, each block in L may be sent to fewer than P c processes and each block in U may be sent
to fewer than P r processes. In other words, our communication takes into account the sparsity of
the factors, as opposed to the "send-to-all" approach in a dense factorization. For example, for AF23560
on a 4 × 8 process grid, the total number of messages is reduced from 351052 to 302570, or about 14%
fewer messages. The reduction is even greater with more processes or sparser problems.
3.3 Sparse triangular solve
The sparse lower and upper triangular solves are also designed around the same distributed data
structure. The forward substitution proceeds from the bottom of the elimination tree to the root,
whereas the back substitution proceeds from the root to the bottom. Figure 9 outlines the algorithm
for sparse lower triangular solve. The algorithm is based on a sequential variant called "inner
product" formulation. In this formulation, before the K-th subvector x(K) is solved, the update
from the inner product of L(K, 1:K−1) and x(1:K−1) must be accumulated and subtracted
from b(K). The diagonal process, at the coordinate (K mod P_r, K mod P_c) of the process grid,
is responsible for solving x(K). Two counters, frecv and fmod, are used to facilitate the asynchronous
execution of different operations. frecv[K] counts the number of process updates to x(K)
still to be received by the diagonal process owning x(K). This is needed because L(K, 1:K−1) is distributed
among the row processes PROCR(K), and due to sparsity, not all processes in PROCR(K)
contribute to the update. When frecv(K) becomes zero, all the necessary updates to x(K) are
complete and x(K) is solved. fmod(K) counts the number of block modifications to be summed
into the local inner product update (stored in lsum(K)) to x(K). When fmod(K) becomes zero,
the partial sum lsum(K) is sent to the diagonal process that owns x(K).
The execution of the program is message-driven. A process may receive two types of messages,
one is the partial sum lsum(K), another is the solution subvector x(K). Appropriate action is
taken according to the message type. The asynchronous communication enables large overlapping
between communication and computation. This is very important because the communication to
computation ratio is much higher in triangular solve than in factorization.
The algorithm for the upper triangular solve is similar to that illustrated in Figure 9. However,
because of the row oriented storage scheme used for matrix U , there is a slight complication in the
actual implementation. Namely, we have to build two vertical linked lists to enable rapid access of
the matrix entries in a block column of U .
Let mycol (myrow) be my process column (row) number in the process grid
Let PROCC(K) be the column processes that own block column K
for each block K that I own
    if ( I am the diagonal process for K and frecv(K) == 0 )
        Solve x(K) from L(K, K) x(K) = b(K)
        Send x(K) to the column processes PROCC(K)
    endif
end for
while ( I have more work ) do
    Receive a message    (*)
    if ( message is lsum(K) )
        Subtract lsum(K) from b(K); decrement frecv(K)
        if ( frecv(K) == 0 )
            Solve x(K) from L(K, K) x(K) = b(K)
            Send x(K) to the column processes PROCC(K)
        endif
    else if ( message is x(K) )
        for each I > K with L(I, K) != 0 that I own
            lsum(I) = lsum(I) + L(I, K) x(K); decrement fmod(I)
            if ( fmod(I) == 0 )
                Send lsum(I) to the diagonal process who owns L(I, I)
            endif
        end for
    endif
end while
Figure 9: Distributed lower triangular solve Lx = b.
Table 3: LU factorization time in seconds (symbolic and numeric phases) and Megaflop rate on the 512-node T3E-900.
3.4 Parallel performance
Recall that we partition the blocks based on supernodes, so the largest block size equals the number
of columns of the largest supernode. For large matrices, this can be a few thousand, especially
towards the end of matrix L. Such a large granularity would lead to very poor parallelism and load
balance. Therefore, when this occurs, we break the large supernode into smaller chunks, so that
each chunk does not exceed our preset threshold, the maximum block size. By experimenting, we
found that a maximum block size between 20 and 30 is good on the Cray T3E. We used 24 for all
the performance results reported in this section.
Table 3 shows the performance of the factorization on the Cray T3E-900. The symbolic analysis
(steps (1) and (2) in Figure 1) is not yet parallel, so we start with a copy of the entire matrix on each
processor, and run steps (1) and (2) independently on each processor. Thus the time is independent
of the number of processors. The first column of Table 3 reports the time spent in the symbolic
analysis. The memory requirement of the symbolic analysis is small, because we only store and
manipulate the supernodal graph of L and the skeleton graph of U , which are much smaller than the
graphs of L and U . The subsequent columns in the table show the factorization time with a varying
number of processors. For four large matrices (BBMAT, ECL32, FIDAPM11 and WANG4), the
factorization time continues decreasing up to 512 processors, demonstrating good scalability. The
last column reports the numeric factorization rate in Mflops. More than 8 Gflops is achieved for
matrix ECL32. This is the fastest published result we have seen for any implementation of parallel
sparse Gaussian elimination.
Table 3 does not start at one processor because some of the examples could not run with fewer
processors. As a reference, we compare our distributed memory code to our shared memory SuperLU_MT
code using small numbers of processors. For example, using a 4-processor DEC AlphaServer
(each processor is the same as one T3E processor, except that there is a 4 MB tertiary cache),
the factorization times of SuperLU_MT for matrices AF23560 and EX11
are 19 and 23 seconds, respectively, comparable to the 4-processor T3E timings. This indicates
that our distributed data structure and message passing algorithm do not incur much overhead.
Table 4 shows the performance of the lower and upper triangular solves together. When the
number of processors continues increasing beyond 64, the solve time remains roughly the same.
Although triangular solves do not achieve high Megaflop rates, the time is usually much less than
that for factorization.
The efficiency of a parallel algorithm depends mainly on how the workload is distributed and
how much time is spent in communication. One way to measure load balance is as follows.
BBMAT 3.69 3.42 2.27 2.23 1.83 56
ECL32 2.95 2.60 1.66 1.57 1.17 128
Table 4: Triangular solve time in seconds and Megaflop rate on the T3E-900.
Table 5: Load balance factor B and the fraction of runtime spent in communication, for the factorization and solve phases, on 64 processors of a Cray T3E.
Let f_i denote the number of floating-point operations performed on process i, and let P be the
number of processes. We compute the load balance factor B = (Σ_i f_i / P) / max_i f_i.
In other words, B is the average workload divided by the maximum
workload. It is clear that B ≤ 1, and that a larger B indicates better load balance. The parallel
runtime is at least the runtime of the slowest process, whose workload is highest. In Table 5 we
present the load balance factor B for both factorization and solve phases. As can be seen from the
table, the distribution of workload is good for most matrices, except for TWOTONE.
In the same table, we also show the fraction of the runtime spent in communication. The numbers
were collected from the performance analysis tool called Apprentice on the T3E. The amount of
communication is quite excessive. Even for the matrices that scale well, such as BBMAT, ECL32,
FIDAPM11 and WANG4, more than 50% of the factorization time is spent in communication.
For the solve, which has much smaller amount of computation, communication takes more than
95% of the total time. We expect the percentage of communication will be even higher with more
processors, because the total amount of computation is more or less constant.
Although TWOTONE is a relatively large matrix, the factorization does not scale as well as
for the other large matrices. One reason is that the present submatrix to process mapping results
in very poor load distribution. Another reason is the long communication time. When we
look further into the communication time using Apprentice, we found that processes are idle 60% of
the time waiting to receive the column block of L sent from a process column on the left (step (1)
in Figure 8), and are idle 23% of the time waiting to receive the row block of U sent from a process
row from above (step (2) in Figure 8). Clearly, the critical path of the algorithm is in step (1), which
must preserve a certain precedence relation between iterations. Our pipelining method shortens the
critical path to some extent, but we expect the length of the critical path can be further reduced by
a more sophisticated DAG (task graph) scheduling. For the solve, we found that processes are idle
73% of the time waiting for a message to arrive (at line (*) in Figure 9). So on each process there
is not much work to do but a large amount of communication. These communication bottlenecks
also occur for the other matrices, but the problems are not so pronounced as TWOTONE.
Another problem with TWOTONE is that its supernode size (or block size) is very small, only 2.4
columns on average. This results in poor uniprocessor performance and a low Megaflop rate.
4 Concluding remarks and future work
We propose a number of techniques in place of partial pivoting to stabilize sparse Gaussian elimination.
Their effectiveness is demonstrated by numerical experiments. These techniques enable
static analysis of the nonzero structure of the factors and the communication pattern. As a result,
a more scalable implementation becomes feasible on large-scale distributed memory machines with
hundreds of processors. Our preliminary software is being used in a quantum chemistry application
at Lawrence Berkeley National Laboratory, where a complex unsymmetric system of order 200,000
has been solved within 2 minutes.
4.1 More techniques for numerical stability
Although the current GESP algorithm is successful for a large number of matrices, it fails to
solve one finite element matrix, AV41092, because the pivot growth is still too large with any
combination of the current techniques. We plan to investigate other complementary techniques to
further stabilize the algorithm. For example, we can use a judicious amount of extra precision to
store some matrix entries more accurately, and to perform internal computations more accurately.
This facility is available for free on Intel architectures, which perform all arithmetic most efficiently
in 80-bit registers, and at modest cost on other machines. The extra precision can be used in both
factorization and residual computation.
We can also mix static and partial pivoting by only pivoting within a diagonal block owned by
a single processor (or SMP within a cluster of SMPs). This can further enhance stability.
We can use a more aggressive pivot size control strategy in step (4) of the algorithm. That is,
instead of setting tiny pivots to √ε · ||A||, we may set them to the largest magnitude of the current
column. This incurs a non-trivial amount of rank-1 perturbation to the original matrix. In the
end, we use the Sherman-Morrison-Woodbury formula [7] to recover the inverse of the original matrix,
at the cost of a few more steps of inverse iteration.
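For reference, the Sherman-Morrison-Woodbury formula expresses the inverse of a rank-k modification A + U V^T of A in terms of A^{-1} (this is the standard identity, restated here; see [7]):

    (A + U V^T)^{-1} = A^{-1} − A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1} .

Each perturbed pivot contributes one rank-1 term, so the correction can be applied with a small number of additional solves against the already-computed factors.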
It remains to be seen in what circumstances these ideas should be employed in practice. There
are also theoretical questions to be answered.
4.2 High performance issues
In order to make the solver entirely scalable, we need to parallelize the symbolic algorithm. In this
case, we will start with the matrix initially distributed in some manner. The symbolic algorithm
then determines the best layout for the numeric algorithms, and redistributes the matrix if necessary.
This also requires us to provide a good interface so the user knows how to input the matrix in a
distributed manner.
For the LU factorization, we will investigate more general functions for matrix-to-process mapping
and scheduling of computation and communication by exploiting more knowledge from the
EDAGs. This is expected to relax much of the synchrony in the current factorization algorithm,
and reduce communication. We also consider switching to a dense factorization, such as the one
implemented in ScaLAPACK [4], when the submatrix at the lower right corner becomes sufficiently
dense. The uniprocessor performance can also be improved by amalgamating small supernodes into
large ones.
To speed up the sparse triangular solve, we may apply some graph coloring heuristic to reduce
the number of parallel steps [21]. There are also alternative algorithms other than substitutions,
such as those based on partitioned inversion [1] or selective inversion [24]. However, these algorithms
usually require preprocessing or different matrix distributions than the one used in our factorization.
It is unclear whether the preprocessing and redistribution will offset the benefit offered by these
algorithms, and will probably depend on the number of right-hand sides.
5 Related work
Duff and Koster [13] applied the techniques of permuting large entries to the diagonal in both
direct and iterative methods. In their direct method using a multifrontal approach, the numeric
factorization first proceeds with diagonal pivots as previously chosen by the analysis on the structure
of A + A^T. If a diagonal entry is not numerically stable, its elimination will be delayed, and a
larger frontal matrix will be passed to the later stage. They showed that, using the initial permutation,
the number of delayed pivots was greatly reduced in factorization. They experimented with
some iterative methods such as GMRES, BiCGSTAB and QMR using ILU preconditioners. The
convergence rate is substantially improved in many cases when the initial permutation is employed.
Amestoy, Duff and L'Excellent [2] implemented the above multifrontal approach for distributed
memory machines. The host performs the fill-reducing ordering, estimates each frontal matrix
structure, and statically maps the assembly tree, all based on the symmetric pattern of A + A^T,
and then sends the information to the other processors. During numerical factorization, each frontal
matrix is factorized by a master processor and one or more slave processors. Due to possible delayed
pivots, the frontal matrix size may be different than predicted by the analysis phase. So the master
processor dynamically determines how many slave processors will be actually used for each frontal
matrix. They showed good performance on an IBM SP2.
MCSPARSE [16] is a parallel unsymmetric linear system solver. The key component in the
solver is the reordering step, which transforms the matrix into a bordered block upper triangular
form. Their reordering first uses an unsymmetric ordering to put relatively large entries on the
diagonal. The algorithm is a modified version of Duff [11, 12]. After this unsymmetric ordering,
they use several symmetric permutations, which preserve the diagonal, to order the matrix into the
desired form. With large diagonal entries, there is a better chance of obtaining a stable factorization
by pivoting only within the diagonal blocks. The number of pivots from the border is thus reduced.
Large and medium grain parallelism is then exploited to factor the diagonal blocks and eliminate
the bordered blocks. They implemented the parallel factorization algorithm on
Cedar, an experimental shared memory machine.
Fu, Jiao and Yang [15] designed a parallel LU factorization algorithm based on the following
static information. The sparsity pattern of the Householder QR factorization of A contains the
union of all sparsity patterns of the LU factors of A for all possible pivot selections. This has been
used to do both memory allocation and computation conservatively (on possibly zero entries), but it
can be arbitrarily conservative, particularly for matrices arising from circuit and device simulations.
For several matrices that do not incur much overestimation, they showed good factorization speed
on a 128-processor Cray T3E.
It will be interesting to compare the performance of the different approaches.
6 Acknowledgement
We are grateful to Iain Duff for giving us access to the early version of the Harwell subroutine MC64,
which permutes large entries to the diagonal.
References
Highly parallel sparse triangular solution.
Multifrontal parallel distributed symmetric and unsymmetric solvers.
ScaLAPACK Users' Guide.
University of Florida sparse matrix collection.
Approximate minimum degree ordering for unsymmetric matrices.
Applied Numerical Linear Algebra.
A supernodal approach to sparse partial pivoting.
An asynchronous parallel supernodal algorithm for sparse Gaussian elimination.
Algorithm 575.
On algorithms for obtaining a maximum transversal.
The design and use of algorithms for permuting large entries to the diagonal of sparse matrices.
Users' guide for the Harwell-Boeing sparse matrix collection (release 1)
Efficient sparse LU factorization with partial pivoting on distributed memory architectures.
Solving large nonsymmetric sparse linear systems using MCSPARSE.
Nested dissection of a regular finite element mesh.
Elimination structures for unsymmetric sparse LU factors.
Matrix Computations.
Optimally scalable parallel sparse cholesky factorization.
Scalable iterative solution of sparse linear systems.
Sparse Gaussian elimination on high performance computers.
Modification of the minimum degree algorithm by multiple elimination.
Efficient parallel sparse triangular solution with selective inversion.
An efficient block-oriented approach to parallel sparse cholesky factorization
MPI;static pivoting;iterative refinement;2-D matrix decomposition;sparse unsymmetric linear systems
509094 | Tuning Strassen's matrix multiplication for memory efficiency. | Strassen's algorithm for matrix multiplication gains its lower arithmetic complexity at the expense of reduced locality of reference, which makes it challenging to implement the algorithm efficiently on a modern machine with a hierarchical memory system. We report on an implementation of this algorithm that uses several unconventional techniques to make the algorithm memory-friendly. First, the algorithm internally uses a non-standard array layout known as Morton order that is based on a quad-tree decomposition of the matrix. Second, we dynamically select the recursion truncation point to minimize padding without affecting the performance of the algorithm, which we can do by virtue of the cache behavior of the Morton ordering. Each technique is critical for performance, and their combination as done in our code multiplies their effectiveness. Performance comparisons of our implementation with that of competing implementations show that our implementation often outperforms the alternative techniques (up to 25%). However, we also observe wide variability across platforms and across matrix sizes, indicating that at this time, no single implementation is a clear choice for all platforms or matrix sizes. We also note that the time required to convert matrices to/from Morton order is a noticeable amount of execution time (5% to 15%). Eliminating this overhead further reduces our execution time. | Introduction
The central role of matrix multiplication as a building block in numerical codes has generated a
significant amount of research into techniques for improving the performance of this basic operation.
Several of these efforts [3, 6, 12, 13, 14, 19] focus on algorithms whose arithmetic complexity
This work supported in part by DARPA Grant DABT63-98-1-0001, NSF Grants CDA-97-2637 and CDA-95-12356,
Duke University, and an equipment donation through Intel Corporation's Technology for Education 2000
Program. Chatterjee is partially supported by NSF CAREER Award CCR-95-01979. Lebeck is partially supported
by NSF CAREER Award MIP-97-02547.
is O(n^{log_2 7}) instead of the conventional algorithm's O(n^3). Strassen's algorithm [23] for matrix
multiplication and its variants are the most practical of such algorithms, and are classic examples of
theoretically high-performance algorithms that are challenging to implement efficiently on modern
high-end computers with deep memory hierarchies.
Strassen's algorithm achieves its lower complexity using a divide-and-conquer approach. Unfor-
tunately, this technique has two potential drawbacks. First, if the division proceeds to the level of
single matrix elements, the recursion overhead (measured, for instance, by the recursion depth and
additional temporary storage) becomes significant and reduces performance. This overhead is generally
limited by stopping the recursion early and performing a conventional matrix multiplication
on submatrices that are below the recursion truncation point [13]. Second, the division step must
efficiently handle odd-sized matrices. This can be solved by one of several schemes: by embedding
the matrix inside a larger one (called static padding), by decomposing into submatrices that overlap
by a single row or column (called dynamic overlap), or by performing special case computation for
the boundary cases (called dynamic peeling).
Previous implementations have addressed these two drawbacks independently. We present a
novel solution that simultaneously addresses both issues. Specifically, dynamic peeling was introduced
as a method to avoid large amounts of static padding. The large amount of static padding
is an artifact of using a fixed recursion truncation point. We can minimize padding by dynamically
selecting the recursion truncation point from a range of sizes. However, this scheme can induce significant
variations in performance when using a canonical storage scheme (such as column-major)
for the matrices. By using a hierarchical matrix storage scheme, we can dynamically select the
recursion truncation point within a range of sizes that both ensures high and stable performance
of the computations at the leaf of the recursion tree and limits the amount of static padding.
We measured execution times of our implementation (MODGEMM) and two alternative implementations:
DGEFMM, which uses dynamic peeling [13], and DGEMMW, which uses dynamic overlap [6], on
both a DEC Alpha and a SUN UltraSPARC II. Our results show wide variability in the performance
of all three implementations. On the Alpha, our implementation (MODGEMM) ranges from
30% slower to 20% faster than DGEFMM for matrix sizes from 150 to 1024. On the Ultra, MOD-
GEMM is generally faster (up to 25%) than DGEFMM for large matrices (500 and larger), while
DGEFMM is generally faster for small matrices (up to 25%). We also determine the time to convert
matrices to/from Morton order ranges from 5% to 15% of total execution time. When eliminating
this conversion time, by assuming matrices are already in Morton order, MODGEMM outperforms
DGEFMM for nearly all matrix sizes on both the Alpha and Ultra, with greater benefits on the
Ultra.
The remainder of this paper is organized as follows. Section 2 reviews the conventional description
of Strassen's algorithm. Section 3 discusses the implementation issues that affect memory
efficiency, and our solutions to these issues. Section 4 presents performance results for our code.
Section 5 cites related work and compares our techniques to them. Section 6 presents conclusions
and future work.
Background
Strassen's original algorithm [23] is usually described in the following divide-and-conquer form. Let
A and B be two n × n matrices, where n is an even integer. Partition the two input matrices A
and B and the result matrix C into quadrants as follows.

\begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}
=
\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}
\bullet
\begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}
\qquad (1)
The symbol • in equation (1) represents matrix multiplication. We then compute the four quadrants
of the result matrix as follows.

P_1 = (A_{11} + A_{22}) • (B_{11} + B_{22})
P_2 = (A_{21} + A_{22}) • B_{11}
P_3 = A_{11} • (B_{12} − B_{22})
P_4 = A_{22} • (B_{21} − B_{11})
P_5 = (A_{11} + A_{12}) • B_{22}
P_6 = (A_{21} − A_{11}) • (B_{11} + B_{12})
P_7 = (A_{12} − A_{22}) • (B_{21} + B_{22})

C_{11} = P_1 + P_4 − P_5 + P_7
C_{12} = P_3 + P_5
C_{21} = P_2 + P_4
C_{22} = P_1 − P_2 + P_3 + P_6
In this paper, we discuss and implement Winograd's variant [7] of Strassen's algorithm, which
uses seven matrix multiplications and 15 matrix additions. It is well-known that this is the minimum
number of multiplications and additions possible for any recursive matrix multiplication algorithm
based on division into quadrants. The division of the matrices into quadrants follows equation (1).
The computation proceeds as follows.

S_1 = A_{21} + A_{22}        T_1 = B_{12} − B_{11}
S_2 = S_1 − A_{11}           T_2 = B_{22} − T_1
S_3 = A_{11} − A_{21}        T_3 = B_{22} − B_{12}
S_4 = A_{12} − S_2           T_4 = T_2 − B_{21}

P_1 = A_{11} • B_{11}        P_5 = S_1 • T_1
P_2 = A_{12} • B_{21}        P_6 = S_2 • T_2
P_3 = S_4 • B_{22}           P_7 = S_3 • T_3
P_4 = A_{22} • T_4

U_1 = P_1 + P_2              U_5 = U_4 + P_3
U_2 = P_1 + P_6              U_6 = U_3 − P_4
U_3 = U_2 + P_7              U_7 = U_3 + P_5
U_4 = U_2 + P_5

C_{11} = U_1,  C_{12} = U_5,  C_{21} = U_6,  C_{22} = U_7
Compared to the original algorithm, the noteworthy feature of Winograd's variant is its identification
and reuse of common subexpressions. These shared computations are responsible for
reducing the number of additions, but also contribute to worse locality of reference unless special
attention is given to this aspect of the computation.
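To make the data flow concrete, the following C sketch performs one level of this recursion; mat_add, mat_sub and mat_mul are assumed helper routines (not from any particular library), and a tuned implementation would reuse a handful of temporaries instead of allocating fifteen buffers as done here.

    #include <stdlib.h>

    /* Assumed helpers over contiguous h x h blocks; add/sub are simple
       elementwise loops, so an output may alias an input. mat_mul either
       recurses or calls the conventional kernel below the truncation point. */
    void mat_add(int h, const double *x, const double *y, double *z); /* z = x + y */
    void mat_sub(int h, const double *x, const double *y, double *z); /* z = x - y */
    void mat_mul(int h, const double *x, const double *y, double *z); /* z = x * y */

    /* One level of Winograd's variant: 7 multiplies, 15 additions. */
    void winograd_level(int h,
        const double *A11, const double *A12, const double *A21, const double *A22,
        const double *B11, const double *B12, const double *B21, const double *B22,
        double *C11, double *C12, double *C21, double *C22)
    {
        size_t q = (size_t)h * h;
        double *buf = malloc(15 * q * sizeof *buf);
        double *S1 = buf,        *S2 = buf + q,    *S3 = buf + 2*q,
               *S4 = buf + 3*q,  *T1 = buf + 4*q,  *T2 = buf + 5*q,
               *T3 = buf + 6*q,  *T4 = buf + 7*q,  *P1 = buf + 8*q,
               *P2 = buf + 9*q,  *P3 = buf + 10*q, *P4 = buf + 11*q,
               *P5 = buf + 12*q, *P6 = buf + 13*q, *P7 = buf + 14*q;

        mat_add(h, A21, A22, S1);  mat_sub(h, S1, A11, S2);  /* S1..S4 */
        mat_sub(h, A11, A21, S3);  mat_sub(h, A12, S2, S4);
        mat_sub(h, B12, B11, T1);  mat_sub(h, B22, T1, T2);  /* T1..T4 */
        mat_sub(h, B22, B12, T3);  mat_sub(h, T2, B21, T4);

        mat_mul(h, A11, B11, P1);  mat_mul(h, A12, B21, P2); /* 7 products */
        mat_mul(h, S4,  B22, P3);  mat_mul(h, A22, T4,  P4);
        mat_mul(h, S1,  T1,  P5);  mat_mul(h, S2,  T2,  P6);
        mat_mul(h, S3,  T3,  P7);

        mat_add(h, P1, P2, C11);   /* C11 = U1                 */
        mat_add(h, P1, P6, C22);   /* C22 holds U2 temporarily */
        mat_add(h, C22, P7, C21);  /* C21 holds U3 temporarily */
        mat_add(h, C22, P5, C12);  /* C12 holds U4 temporarily */
        mat_add(h, C12, P3, C12);  /* C12 = U5 (final)         */
        mat_add(h, C21, P5, C22);  /* C22 = U7 (final)         */
        mat_sub(h, C21, P4, C21);  /* C21 = U6 (final)         */
        free(buf);
    }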
We do not discuss in this paper numerical issues concerning these fast matrix multiplication
algorithms, as they are covered elsewhere [10].
2.1 Interface
In order to stay consistent with previous work in this area and to permit meaningful comparisons,
our implementation of Winograd's variant follows the same calling conventions as the dgemm subroutine
in the Level 3 BLAS library [5]. Thus, the implementation computes C ← α·op(A)·op(B) + β·C,
where α and β are scalars, op(A) is an m × k real matrix, op(B) is a k × n real
matrix, C is an m × n real matrix, and op(X) is either X or X^T. The matrices A, B, and C are
stored in column-major order, with leading dimensions ldA, ldB, and ldC respectively.
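For concreteness, this calling convention can be sketched in C as follows; the routine name and the C binding are illustrative, since the actual dgemm routine uses the Fortran interface with all arguments passed by reference:

    /* Computes C <- alpha * op(A) * op(B) + beta * C. */
    void strassen_gemm(char transa, char transb, /* 'N' or 'T': op(A), op(B)  */
                       int m, int n, int k,      /* op(A): m x k, op(B): k x n */
                       double alpha,
                       const double *A, int ldA, /* column-major inputs        */
                       const double *B, int ldB,
                       double beta,
                       double *C, int ldC);      /* C is m x n                 */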
3 Memory Efficiency: Issues and Techniques
To be useful in practice, an implementation of the Strassen-Winograd algorithm must answer three
questions: when to truncate the recursion, how to handle arbitrary rectangular matrices, and
how to lay out the array data in memory to promote better use of cache memory. These questions
are not independent, although past implementations have treated them thus. We now review these
implementation issues, identify possible solution strategies, and justify our specific choices.
3.1 Recursion truncation point
The seven products can be computed by recursively invoking Strassen's algorithm on smaller sub-
problems, and switching to the conventional algorithm at some matrix size T (called the recursion
truncation point [13]) at which Strassen's construction is no longer advantageous. If one were to
estimate running time by counting arithmetic operations, the recursion truncation point would
be around 16. However, the empirically observed value of this parameter is at least an order of
magnitude higher. This discrepancy is a direct result of the poor (algorithmic) locality of reference
of Strassen's algorithm.
All implementations of Strassen's algorithm that we have encountered use an empirically chosen
cutoff criterion for determining the matrix size T at which to terminate recursion.
3.2 Handling arbitrary matrices
Divide-and-conquer techniques are most effective when the matrices can be evenly partitioned at
each recursive invocation of the algorithm. We first note that rectangular matrices present no
particular problem for partitioning if all matrix dimensions are even. The trouble arises when we
encounter matrices with one or more dimensions of odd size. There are several possible solutions
to this problem.
• The simplest solution, static padding, is to pad the n × n matrices with additional rows and
columns containing zeros such that the padded n′ × n′ matrix satisfies the "even-dimensions"
condition at each level of recursion, i.e., n′ is divisible by 2^r, where r is the recursion depth.
This is Strassen's original solution and the solution most often quoted in algorithms textbooks.
However, for a statically predetermined value of T , the overhead of static padding can become
quite severe, adding almost three times the number of original matrix elements in the worst
case. Furthermore, in a naive implementation of this idea, the arithmetic done on these
additional zero elements is pure overhead. Finally, depending on the relation between n′ and
the cache parameters, interference phenomena can reduce performance. These interference effects can be
mitigated using non-standard data layouts, as we discuss further in Section 3.3.
• A second solution, dynamic peeling, peels off the extra row or column at each level, and
separately adds their contributions to the overall solution in a later fix-up computation [13].
This eliminates the need for extra padding, but reduces the portion of the matrix to which
Strassen's algorithm applies, thus reducing the potential benefits of the recursive strategy.
The fix-up computations are matrix-vector operations rather than matrix-matrix operations,
which limits the amount of reuse and reduces performance.
• A third solution, dynamic overlap, finesses the problem by subdividing the matrix into submatrices
that (conceptually) overlap by one row or column, computing the results for the shared
row or column in both subproblems, and ignoring one of the copies. This is an interesting
solution, but it complicates the control structure and performs some extra computations.
Ideally, we would like to avoid both the implementation complexity of dynamic peeling or dynamic
overlap and the possibility of excessive static padding.
Figure 1: Morton-ordered matrix layout. Each small square is a T × T tile that is laid out contiguously
in column-major order. The number in each tile gives its relative position in the sequence of
tiles.
3.3 Data layout
A significant fraction of the computation of the Strassen-Winograd algorithm occurs in the routine
that multiplies submatrices when the recursion truncates. The performance of this matrix product
is largely determined by its cache behavior. This issue has not been explicitly considered in previous
work on implementing fast recursive matrix multiplication algorithms, where the default column-major
layout of array data has been assumed.
A primary condition for performance is to choose a tile size T such that the tiles fit into the
first-level cache, thus avoiding capacity misses [11]. This is easily achieved using tile sizes in the
range shown in Figure 2. Second, to achieve performance stability as T varies, it is also important
to have the tile contiguous in memory, thus avoiding self-interference misses [17]. Given the
hierarchical nature of the algorithm (the decomposition is by quadrants within quadrants within
quadrants), hierarchical layouts such as Morton ordering [8] naturally suggest themselves for storing the
matrices.
An operational definition of Morton ordering is as follows. Divide the original matrix into four
quadrants, and lay out these quadrants in memory in the order NW, NE, SW, SE. A submatrix
whose side is larger than T is laid out recursively using the Morton ordering; a T × T tile is laid out
using column-major ordering. See Figure 1.
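To make the layout concrete, the position of a tile at tile coordinates (ti, tj) can be computed by interleaving the bits of the coordinates, with the row bit more significant so that quadrants come out in NW, NE, SW, SE order. The sketch below assumes a square padded matrix with a power-of-two number of tiles per side; the function name is illustrative.

    /* Linear index of tile (ti, tj) under Morton order; the tile's
       elements then start at offset morton_index(ti, tj) * T * T. */
    unsigned morton_index(unsigned ti, unsigned tj)
    {
        unsigned idx = 0;
        for (unsigned b = 0; b < 16; b++) {
            idx |= ((tj >> b) & 1u) << (2 * b);     /* column bit          */
            idx |= ((ti >> b) & 1u) << (2 * b + 1); /* row bit (high half) */
        }
        return idx;
    }

With 0-based coordinates, morton_index maps (0,0), (0,1), (1,0), (1,1) to 0, 1, 2, 3, matching the NW, NE, SW, SE order of Figure 1.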
A secondary benefit of keeping tiles contiguous in memory is that the matrix addition operations
can be performed with a single loop rather than two nested loops, thus reducing loop overheads.
3.4 The connections among the issues
We begin by observing that the recursion truncation point T determines the amount of padding
since the matrix must evenly divide at all recursive invocations of the algorithm. We
also note that at the recursion truncation point, we are multiplying T × T submatrices using the
conventional algorithm. By carefully selecting the truncation point, we can minimize the amount
of padding required.
Figure 2: Effect of tile size on padding (plotted: padding with tile size 32, best-case padding, and the tile size giving best-case padding, versus matrix size). When minimizing padding, tiles are chosen from a range of sizes up to 64.

Figure 2 shows the effects of tile size selection on padding. The four lines correspond to the
original matrix size n, the padded matrix size n′ with the tile size T chosen to minimize padding,
the padded matrix size n′ for a fixed tile size of 32, and the tile size T that achieves the minimum
padding. This figure demonstrates that the ability to select from a range of tile sizes can dramatically
reduce the amount of extra storage, making it independent of the matrix size n. In contrast,
a fixed tile size can require significant padding, proportional to the matrix size in the worst case.
Consider a square matrix size of 513. With a fixed tile size of 32, static padding requires a
padded matrix of size 1024, nearly twice the original matrix dimension. In contrast, flexibility in
choosing tile size allows us to select a tile size of 33, which requires padding with only 15 (our
worst case amount) extra elements in each dimension. The padded matrix size, 528, is recursively
divided four times to achieve 33 × 33 tiles.
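A direct search over the allowed tile sizes suffices to implement this selection. In the following sketch the range [33, 64] is our assumption (consistent with the 33 × 33 example above and the upper bound of 64 in Figure 2), and the names are illustrative:

    #define TMIN 33   /* assumed lower bound of the tile-size range */
    #define TMAX 64

    /* Choose tile size T and recursion depth r minimizing padding;
       returns the padded matrix dimension n' = T * 2^r >= n. */
    int choose_tile(int n, int *best_t, int *best_r)
    {
        int best_pad = -1, best_size = n;
        for (int t = TMIN; t <= TMAX; t++) {
            int r = 0, size = t;
            while (size < n) { size *= 2; r++; }  /* smallest t * 2^r >= n */
            if (best_pad < 0 || size - n < best_pad) {
                best_pad  = size - n;
                best_size = size;
                *best_t = t; *best_r = r;
            }
        }
        return best_size;
    }

For n = 513 this search selects T = 33 and r = 4, giving the padded size 528 of the example above.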
However, we can exploit this flexibility in tile size selection only if we can ensure that the
performance of the matrix multiplication algorithm for these tiles is not sensitive to the choice
of tile size. This is an important consideration, since a significant portion of the algorithm's
computation occurs in the routine that multiplies tiles. Figure 3a and Figure 3b show how the
performance of matrix multiplication, C ← A × B, varies as a function of the relation between tile
size and leading dimension. Each line in the graph corresponds to a different submatrix size T
(e.g., 24, 28, 32). The submatrices to operate on are chosen from a base matrix (M) as follows.
Non-contiguous submatrices are created
by setting the leading dimension of each submatrix to the leading dimension of the base matrix,
M, which corresponds to the x-axis. Contiguous submatrices are formed by setting the leading
dimension of each submatrix to the tile size, T, which corresponds to each line in the graph.
From Figure 3 we see that contiguous submatrices exhibit much more stable performance than
non-contiguous submatrices. As expected, the non-contiguous submatrices exhibit a dramatic drop
in performance when the base matrix is a power of two (256 in this case), due to self-interference.
On the Alpha, the contiguous submatrices clearly outperform the non-contiguous. The performance
differential is not as pronounced on the Ultra, however the instability of the non-contiguous
Figure 3: Effect of data layout on matrix multiply performance (MFlops versus matrix size, for contiguous and non-contiguous submatrices): (a) DEC Miata, (b) Sun Ultra.
submatrices still exists. These results justify the use of Morton ordering within our implementation.
3.5 Implementation details
We can envision three different alternatives for storage layout: the input and output matrices are
assumed to be laid out in Morton order at the interface level; the input and output matrices are
copied from column-major storage to Morton order at the interface level; and the conversion to
Morton order is done incrementally across the levels of recursion. The first alternative is not feasible
for a library implementation. Among the two other options, converting the matrices to and from
Morton order at the top level was easier to implement, and performed relatively fast in practice
(5% to 15% of total execution time, see Section 4.1).
We incorporate any necessary matrix transposition operations during the conversion from
column-major to Morton order. This is handy, because it requires only a single core routine
for the Strassen-Winograd algorithm. The alternative solution requires multiple code versions or
indirection using pointers to handle these cases correctly.
Choosing tile sizes from a range as described above will in general induce a small constant
amount of padding. In our implementation, we explicitly padded out the matrix with zeros and
performed redundant computation on the pad. We could afford to do this because the pad was
guaranteed to be small. The alternative scheme of avoiding this overhead would have created tiles
of uneven sizes and required more control overhead to keep track of tile start offsets and similar
pieces of information.
We handle rectangular cases by treating the two dimensions separately. Each tile dimension
is chosen to minimize padding in that dimension. This method of choosing each tile dimension
independently works when the ratio of columns to rows (or rows to columns) is within certain
limits. Highly rectangular matrices pose a problem because the two recommended tile dimensions
may require different levels of recursion. The following example illustrates the difficulties of this
method for such matrices.
Consider a highly rectangular matrix of dimensions 1024x256. We choose the tile dimensions
independently. First, we consider that the matrix has 1024 rows, and choose the number of rows
Figure 4: Handling of highly rectangular matrices: (a) lean A and wide B; (b) wide A and lean B.
in the tile that minimizes row padding (i.e., the number of additional rows). In this case, 32 is
chosen, and the recursion is required to unfold to a depth of 5. Next, we consider that the matrix
has 256 columns. The number of columns in the tile is chosen to minimize the number of columns
that are to be padded. Again, we choose 32, but the recursion must unfold to a depth of only 3.
Clearly, naively choosing the two tile dimensions independently does not work for highly rectangular
matrices, since we can not unfold the recursion to both a depth of 5 and to a depth of 3.
To overcome this limitation, the matrix is divided into submatrices such that all submatrices
require the same depth of recursion unfolding for both dimensions. The matrix product is
reconstructed in terms of the submatrix products.
A given matrix can be
• wide, meaning its columns-to-rows ratio exceeds the desired ratio,
• lean, meaning its rows-to-columns ratio exceeds the desired ratio, or
• well-behaved, meaning both its columns-to-rows ratio and rows-to-columns ratio are within
the desired ratio.
Since there are two input matrices (A and B), and each can take any one of the above forms, there are a
total of nine possible combinations. Figure 4a and Figure 4b show two examples of how the input
matrices (A and B) are divided, and how the result (C) is reconstructed from the results of submatrix
multiplications.
Finally, we note that 1 and 0 are common values for the α and β parameters. In order to avoid
performing extra arithmetic for these parameter values, the core routine for the Strassen-Winograd
algorithm computes D ← A • B, with D being set to C if α = 1 and β = 0, and to a temporary otherwise. We
then post-process to compute C ← α·D + β·C if post-processing is necessary.

4 Performance results
This section compares the performance of our implementation (MODGEMM) of Strassen's algorithm
to a previous implementation that uses dynamic peeling (DGEFMM) [13] (we use the au-
thor's original code), and to a previous implementation that uses dynamic overlap (DGEMMW) [6].
We measure the execution time of the various implementations on a 500 MHz DEC Alpha Miata
and a 300 MHz Sun Ultra 60. The Alpha machine has a 21164 processor with an 8KB direct-mapped
level 1 cache, a 96KB 3-way associative level 2 cache, a 2MB direct-mapped level 3 cache,
and 512MB of main memory. The Ultra has two UltraSPARC II processors, each with a
level 1 cache, a 2MB level 2 cache, and 512MB of main memory. We use only one processor on the
Ultra 60.
Figure 5: Performance of Strassen-Winograd implementations on the DEC Miata (normalized execution time versus matrix size): (a) MODGEMM vs DGEFMM, (b) DGEMMW vs DGEFMM.
We timed the execution of each implementation using the UNIX system call getrusage for
matrix sizes ranging from 150 to 1024. For DGEFMM we use the empirically
determined recursion truncation point of 64. For matrices smaller than 500 we compute the average
over repeated invocations of the algorithm to overcome limits in clock resolution. Execution times for larger
matrices are large enough to overcome these limitations. To further reduce experimental error, we
execute the above experiments three times for each matrix size, and use the minimum value for
comparison. The programs were compiled with vendor compilers (cc and f77) with the -fast option.
The Sun compilers are the Workshop Compilers 4.2, and the DEC compilers are DEC C V5.6-071
and DIGITAL Fortran 77 V5.0-138-3678F.
Figure 5 and Figure 6 show our results for the Alpha and UltraSPARC, respectively. We report
results in execution time normalized to the dynamic peeling implementation (DGEFMM). On
the Alpha we see that DGEFMM generally outperforms dynamic overlap (DGEMMW), see Figure
5b. In contrast, our implementation (MODGEMM) varies from 30% slower to 20% faster than
DGEFMM. We also observe that MODGEMM outperforms DGEFMM mostly in the range of matrix
sizes from 500 to 800, whereas DGEFMM is faster for smaller and larger matrices. Finally, by
comparing Figure 5a and Figure 5b, we see that MODGEMM generally outperforms DGEMMW.
The results are quite different on the Ultra (see Figure 6). The most striking difference is the
performance of DGEMMW (see Figure 6b), which outperforms both MODGEMM and DGEFMM
for most matrix sizes on the Ultra. Another significant difference is that MODGEMM is generally
faster than DGEFMM for large matrices (500 and larger), while DGEFMM is generally faster for
small matrices.
An important observation from the above results is the variability in performance both across
platforms and across matrix sizes. Our ongoing research efforts are targeted at understanding
these variations. Section 4.2 reports on some of our preliminary findings, and the following section
analyzes the penalty of converting to Morton order.
Figure 6: Performance of Strassen-Winograd implementations on the Sun Ultra (normalized execution time versus matrix size): (a) MODGEMM vs DGEFMM, (b) DGEMMW vs DGEFMM.
Figure 7: Morton conversion time as a percentage of total execution time: (a) DEC Miata, (b) Sun Ultra.
Figure 8: Performance of MODGEMM without matrix conversion (normalized execution time versus matrix size): (a) DEC Miata, (b) Sun Ultra.
4.1 Morton Conversion Time
An important aspect of our implementation is the recursive data layout, which provides stable
performance for dynamic tile size selection. The previous performance results include the time to
convert the two input matrices from column-major to Morton order, and to convert the output
matrix from Morton order back to column-major. Figure 7a and Figure 7b show the cost of this
conversion as a percentage of the entire matrix multiply for each of our platforms. From these
graphs we see that Morton conversion accounts for up to 15% of the overall execution time for
small matrices and approximately 5% for very large matrices.
These results show that Morton conversion is a noticeable fraction of the execution time. Eliminating
the conversion cost (i.e., assuming the matrices are already in Morton order) produces
commensurate improvements in the performance of our implementation. Figure 8a and Figure 8b
show that without conversion costs MODGEMM does indeed execute faster, and increases the
number of matrix sizes at which it outperforms DGEFMM. Specifically, on the Alpha (Figure 8a)
MODGEMM is superior for most matrix sizes larger than 500, and on the Ultra (Figure 8b) we see
that for only a few matrix sizes DGEFMM outperforms MODGEMM. Furthermore, without Morton
conversion, MODGEMM is very competitive with DGEMMW, and outperforms it for many
matrix sizes.
4.2 Cache Effects
Our initial efforts to gain further insight into the performance variability of our implementation begin
with analysis of its cache behavior. Here, we present preliminary results. We used ATOM [22]
to perform cache simulations of a 16KB direct-mapped cache for both the
DGEFMM and MODGEMM implementations. Figure 9 shows the miss ratios of each implementation
for matrix sizes ranging from 500 to 523. The first observation from this graph is that
MODGEMM miss ratios (6% to 2%) are lower than those of DGEFMM (8%), which matches our
expectations. The second observation is the unexpected dramatic drop in MODGEMM's miss ratio at
a matrix size of 513. Preliminary investigations using CProf [18] reveal that this drop is due to a
reduction in conflict misses.
Figure 9: Cache miss ratios for a 16KB direct-mapped cache, for matrix sizes 500 to 523.
To understand this phenomenon, consider that for matrix sizes of 505 to 512 the padded matrix
size is 512 and the recursion truncation point is at tile size 32. The conventional algorithm is
applied to submatrices that are each 8KB in size (32x32x8), and correspond to the four quadrants
of a 64x64 submatrix. With Morton ordering the quadrants are allocated contiguously in memory,
and quadrants separated by a multiple of the cache size (16KB) conflict in the cache. For example,
since the NW and SW quadrants are separated by the NE quadrant, they map to the same locations
in cache (i.e., the first elements of the NW and SW quadrants are separated by 16KB). Therefore,
any operations involving these two quadrants will incur a significant number of cache misses. We
are currently examining ways to eliminate these conflict misses.
5 Related Work
We discuss three separate areas of related work: implementations of Strassen-type algorithms,
hierarchical schemes for matrix storage, and compiler technology for improving the cache behavior
of loop nests.
5.1 Other implementations
Previous implementations of Strassen's algorithm include Bailey's implementation for the CRAY-
2 [3], the DGEMMW implementation by Douglas et al. [6], and the DGEFMM implementation
by Huss-Lederman et al. [13]. Bailey, coding in Fortran, used a static two-level unfolding of the
recursion by code duplication. Douglas et al. introduced the dynamic overlap method for handling
odd-sized dimensions. Huss-Lederman et al. introduced the dynamic peeling method. While all of
these implementations are careful to limit the amount of temporary storage, they do not specifically
consider the performance of the memory hierarchy on their code. In some cases (such as on the
CRAYs), this issue did not arise because the memory system was not cache-based. Section 4 gives
extensive performance comparisons of our implementation vs. DGEFMM and DGEMMW.
Kreczmar [16] proposes an elegant memory-efficient version of Strassen's algorithm based on
overwriting one of the input arguments. His scheme for space savings is not directly applicable for
two reasons: we cannot assume that the input matrices can be overwritten, and his scheme requires
several copying operations that reduce performance.
5.2 Hierarchical schemes for matrix storage
Wise and his coauthors [1, 24] have investigated the algorithmic advantages of quad-tree representations
of matrices. Morton ordering has also appeared in the parallel computing literature, where
it has been used for load balancing of irregular problems [20]. Most recently, Frens and Wise [8]
discuss an implementation of a recursive O(n^3) matrix multiplication algorithm using hierarchical
matrix layout, in which they sequence the recursive calls in an unusual manner to get better reuse
in cache. We do not carry the recursion to the level of single matrix elements as they do, but
truncate the recursion when we reach tile sizes that fit in the upper levels of the memory hierarchy.
5.3 Cache behavior
Several authors [17, 15, 4, 21] discuss loop transformations such as tiling that attempt to reduce the
number of cache misses incurred by a loop nest and thus improve its performance. While these loop
transformations are not specific to matrix multiplication, the conventional three-loop algorithm for
matrix multiplication falls into the category of codes that they can handle.
Lam, Rothberg, and Wolf [17] investigated and modeled the influence of cache interference on
the performance of tiled programs. They emphasized the importance of having tiles be contiguous
in memory to avoid self-interference misses, and proposed data copying to satisfy this condition.
Our top-level conversion between the column-major layout at the interface level and the Morton
ordering used internally can be viewed as a logical extension of this proposal.
Ghosh et al. [9] present an analytical representation of cache misses for perfect loop nests, which
they use to guide selected code optimizations. Their work, like all of the other work cited
above, relies on linear (row- or column-major) storage of arrays, and therefore does not immediately
apply to our code.
6 Conclusions and Future Work
Matrix multiplication is an important computational kernel, and its performance can dictate the
overall performance of many applications. Strassen's algorithm for matrix multiplication achieves
lower arithmetic complexity, O(n^{log_2 7}) ~ O(n^{2.81}), than the conventional algorithm, O(n^3), at the cost of worse
locality of reference. Furthermore, since Strassen's algorithm is based on divide-and-conquer, an
implementation must handle odd-size matrices, and reduce recursion overhead by terminating the
recursion before it reaches individual matrix elements. These issues make it difficult to obtain
efficient implementations of Strassen's algorithm.
In this paper we presented a practical implementation of Strassen's algorithm (Winograd vari-
ant) that exploits the ability to dynamically select the recursion truncation point based on matrix
size and efficiently handles odd-sized matrices. We achieve this by using a non-standard array
layout called Morton order; by converting from standard layouts (e.g., column-major) to internal
Morton layout at the interface level; and by exploiting dynamic selection of the recursion truncation
point to minimize padding.
We compare our implementation to two alternative implementations that use dynamic peeling
(DGEFMM) [13] and dynamic overlap (DGEMMW) [6]. Execution time measurements on a DEC
Alpha and a SUN UltraSPARC II reveal wide variability in the performance of all three imple-
mentations. On the Alpha, our implementation (MODGEMM) ranges from 30% slower to 20%
faster than DGEFMM for matrix sizes from 150 to 1024. On the Ultra, MODGEMM is generally
faster than DGEFMM for large matrices (500 and larger), while DGEFMM is generally faster for
small matrices. When eliminating the time to convert matrices to/from Morton order (5% to 15%
of total execution time), MODGEMM outperforms DGEFMM for nearly all matrix sizes on the
Ultra, and for most matrices on the Alpha.
Our future work includes investigating techniques to further improve the performance and
stability of Strassen's algorithm, while minimizing code complexity. We also plan to examine the
effects of rectangular input matrices. Since our implementation supports the same interface as the Level 3
BLAS dgemm routine [2], we plan to examine its performance for a variety of input parameters.
--R
Experiments with quadtree representation of matrices.
Extra high speed matrix multiplication on the Cray-2
Tile size selection using cache organization and data layout.
A set of level 3 basic linear algebra subprograms.
GEMMW: a portable level 3 BLAS Winograd variant of Strassen's matrix-matrix multiply algorithm
Efficient procedures for using matrix algorithms.
Cache miss equations: An analytical representation of cache misses.
Accuracy and Stability of Numerical Algorithms.
Evaluating associativity in CPU caches.
A tensor product formulation of Strassen's matrix multiplication algorithm.
Implementation of Strassen's algorithm for matrix multiplication.
IBM engineering and scientific subroutine library guide and reference
On memory requirements of Strassen's algorithms.
The cache performance and optimizations of blocked algorithms.
Cache profiling and the SPEC benchmarks: A case study.
Dynamic partitioning of non-uniform structured workloads with spacefilling curves
Data transformations for eliminating conflict misses.
ATOM: a system for building customized program analysis tools.
Gaussian elimination is not optimal.
Costs of quadtree representation of nondense matrices.
matrix multiply;strassen's algorithm;data layout;cache memory
509246 | Distributed Memory Parallel Architecture Based on Modular Linear Arrays for 2-D Separable Transforms Computation. | A framework for mapping systematically 2-dimensional (2-D) separable transforms into a parallel architecture consisting of fully pipelined linear array stages is presented. The resulting model architecture is characterized by its generality, high degree of modularity, high throughput, and the exclusive use of distributed memory and control. There is no central shared memory block to facilitate the transposition of intermediate results, as it is commonly the case in row-column image processing architectures. Avoiding shared central memory has positive implications for speed, area, power dissipation and scalability of the architecture. The architecture presented here may be used to realize any separable 2-D transform by only changing the coefficients stored in the processing elements. Pipelined linear arrays for computing the 2-D Discrete Fourier Transform and 2-D separable convolution are presented as examples and their performance is evaluated. | Introduction
Separable transforms play a fundamental role in digital signal and image processing. Nearly every
problem in DSP is based on the transformation of a time or space domain signal to alternative
spaces better suited for efficient storage, transmission, interpretation or estimation. The most
commonly employed 2-dimensional (2-D) signal transforms, such as the Discrete Cosine, Fourier,
Sine Transforms (DCT, DFT, and DST) and the Discrete Hough and Radon Transforms (DHT,
DRT), are known as separable due to a special type of symmetry in their kernel [1]. Separable
transforms are computationally less expensive than non-separable ones, having time complexity in
O(M) per output data element, as opposed to O(M^2) for non-separable forms, when applied to an
M \times M input image.
The majority of the available VLSI implementations for separable transforms are based on the
popular row-column approach (see for example Rao and Yip [2], Guo et al. [3], Y.-P. Lee et al. [4],
Bhaskaran and Konstantinides [5], Gertner and Shamash [6], and Chakrabarti and J'aj'a [7]), where
the 2-D transform is performed in three steps: (i) 1-D transformation of the input rows, followed
by (ii) intermediate result transposition (usually implemented with a transposition memory), and
(iii) 1-D transformation of the intermediate result columns, as illustrated in Fig. 1 (a). In this
figure, and throughout this paper, we assume that all image inputs become available in raster-scan
order, that is, as a continuous sequence of row vectors. Arrays that can accept and produce data
in raster-scan avoid expensive host interface memory buffering.
Each of the 1-D row and column processing blocks in Fig. 1 (a) may be realized either with highly
modular structures, such as standard pipelined arrays of identical Processing Elements (PEs), or
by some other type of architecture optimized for the targeted transform. However, one of the
main shortcomings of the conventional row-column architecture is that the central memory block
severely limits the modularity of the overall structure, despite the possible modularity of the 1-
D row and column arrays. In addition, central memory requires moderately complex address
generation circuitry, row and column decoders, sense amplifiers depending on its size, and control
for coordinating the concurrent access of the two row and column architectures to the shared
memory. Large central memories also have negative implications for low-power design.
In this paper we present a method for synthesizing fully pipelined modular arrays for 2-D
separable transforms. The derived computational structures use only small-size FIFO memories
local to the PEs which require neither address generation nor central control and memory-to-array
routing. Faster clock speed, smaller area, and low power can be achieved with architectures having
a regular set of simple PEs with distributed memory and control. Reducing the size of memory
structures has been shown to reduce power consumption [8], largely due to reduced effective bus
capacitances of smaller memories, and absence of sense amplifiers. The architectures synthesized
are general in the sense that any separable transform can be realized by programming in the PEs
[Figure 1 labels: (a) row processing, memory bank for transposition, address logic, column processing; (b) raster-scan input, row and column arrays.]
Figure 1: (a) Conventional architecture for raster-scan 2-D separable transforms relying on a shared
central memory bank for the transposition of intermediate results, and (b) the proposed alternative
architecture based on two linear arrays, where the central memory bank is replaced by FIFO queues
distributed to the PEs.
the appropriate set of kernel coefficients.
The benefits of eliminating memory transposition have been recognized by Lim and Swartzlander
[9] and by Wang and Chang [10]. In [9], the authors treat the DFT problem in the context of
multi-dimensional systolic arrays. Our method is distinct from [9] primarily in that it is restricted
to single input (SI) linear arrays using raster-scan I/O. Fisher and Kung [11] have shown that linear
arrays have bounded clock skew regardless of size, whereas higher than 2-D arrays of arbitrary size
may not be technologically feasible. (See Lee and Kedem [12] for a discussion on general aspects of
mapping algorithms to linear arrays.) Although the DCT arrays presented in [10] avoid transposition
by means of local PE storage, the resulting architectures require M^2 multipliers on an M \times M
input, which in many cases may be prohibitive. The arrays developed in this paper require only
2M multipliers.
Another important characteristic of the proposed architecture is that it uses localized communications
and its interconnection complexity is in O(M) (for M \times M separable 2-D transforms).
To the best of our knowledge, parallel architectures that may use less than O(M) PEs also require
O(M) input/output ports and may exhibit non-local communications among PEs, i.e. their interconnection
complexity may grow faster than O(M). Furthermore, the reduction in the number of
PEs becomes possible by exploiting the specific coefficient symmetries, i.e. these architectures may
not compute any desirable 2-D separable transform, something that the architecture developed here
is capable of.
In addition to the general computational structure, we also derive here as examples arrays for
the 2-D DFT and for 2-D separable convolution. Although purely from a computational efficiency
point of view the DFT is more expensive than the Fast Fourier Transform (FFT), from a VLSI
implementation point of view there are significant reasons why the DFT may be preferable to the
FFT (particularly with a small number of coefficients). The FFT's complex data routing limits
overall speed and is expensive in terms of chip area (see Swartzlander [13] and Thompson [14]). In
this paper we use the 1-D DFT array by Beraldin et al. [15] and derive a modular array structure
for computing the 2-D DFT without transposition memory.
The 2-D separable convolution has been studied extensively for many years (see for example
Dudgeon and Mersereau [16]) and has been used as a basis for constructing general transfer
functions. Abramatic et al. [17] have used separable FIR systems for constructing non-separable
IIR filters, and Treitel and Shanks [18] have used parallel separable filters for approximating non-separable
ones. Despite the limited range of frequency responses of separable filters, they are
attractive due to their low computational cost as compared to non-separable systems. In addi-
tion, separable convolution plays a central role in the Discrete Wavelet Transform (DWT) [19], as
currently most of the bases used for the 2-D DWT are separable.
The rest of this paper is organized as follows. In Section 2 we fix notation, define general
separable 2-D transforms, and discuss alternative views of conventional row-column 2-D processing
that will allow us to formulate equivalent algorithms suitable for raster scan I/O. In Section 3 the
derived algorithms are then transformed systematically to fully pipelined linear array structures
with distributed memory and control using appropriate linear space-time mapping operators. In
Sections 4 and 5 we apply the general method developed in Section 3 to derive modular linear
arrays for the 2-D DFT, and for 2-D separable convolution, respectively.
Transforms, Definitions and Notation
A 2-D transform of an M \times M signal x(i,k) is given by

y(l,m) = \sum_{i=0}^{M-1} \sum_{k=0}^{M-1} x(i,k) \bar{g}_{l,m}(i,k), 0 \le l,m \le M-1. (1)

Notice that in a general 2-dimensional (2-D) transform there are M^2 distinct 2-D orthonormal
basis functions spanning the transform space, namely g_{l,m}(i,k), 0 \le l,m \le M-1 (where \bar{g} denotes
the complex conjugate of g) [1, 20]. Separable and symmetric transforms have the property that
g_{l,m}(i,k) = g_l(i) g_m(k). Hence one can express y(l,m) as

y(l,m) = \sum_{i=0}^{M-1} \bar{g}_l(i) \sum_{k=0}^{M-1} x(i,k) \bar{g}_m(k). (2)
In order to formulate the transforms in matrix form, we adopt the following notation: Let
A = [a(i,k)] be an M_i \times M_k matrix of elements indexed from zero and interchangeably
also denoted as a(i,k) or a_{i,k}. Furthermore, let column vectors of size M_i \times 1 be denoted by lower-case
letters, as in a = (a_0, a_1, \ldots, a_{M_i-1})^t, where the superscript t denotes transposition. We adopt Golub
and Van Loan's [21] Matlab-like notation and denote submatrices as A(u,v), where u and v are
integer column vectors that pick out the rows and columns of A defining the submatrix. Also, index
vectors are specified by colon notation as u = p : r, implying u = (p, p+1, \ldots, r)^t. For example,
A(2:3, 5:9) is a 2 \times 5 submatrix containing rows 2 and 3, and columns 5 through 9 of matrix A.
Entire rows or columns can be specified by a single colon as A(:,k), which corresponds to A's kth
column, and A(i,:), denoting A's ith row. Index vectors with non-unit increments are specified as
u = p : q : r, denoting a count from p to r in length-q increments.

Let X and Y be the transform input and transform coefficient (output) M \times M matrices
respectively, and let the M \times M matrix G be the separable transformation kernel whose columns are
the M distinct 1-D basis vectors, such that G(i,l) = \bar{g}_l(i), 0 \le i,l \le M-1.
With this notation in place, Eqn. (2) can then be rewritten as

Y = G^t X G, (3)

where Y(l,m) = y(l,m). Consequently, a separable transform can be computed as
two consecutive matrix-matrix products. The first matrix product can be written as

Z = X G (4)
and can be viewed as operating with matrix G on the rows of input X to produce the rows of the
intermediate result Z. The second matrix product,
can be viewed as operating with matrix G t on the columns of Z to produce the columns of Y . This
interpretation forms the basis for the traditional row-column processing method that necessitates
the transposition of Z. In Section 3 we show how Y can be obtained by processing directly the rows
of Z in the order in which they become available, thus eliminating the need for transposition. We
now give the definition for raster scan ordering.
Definition 2.1 Given the set of matrix elements {X(i,k), 0 \le i,k \le M-1}, we say
that element X(i,k) precedes element X(i',k') in raster scan order if and
only if either i < i', or i = i' and k < k'.
Raster scan induces a total ordering on the input set, and is equivalent to row-major ordering.
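As a quick numerical sanity check (our sketch, not part of the paper), the matrix form Y = G^t X G of Eqn. (3) reproduces the double sum of Eqn. (2) for an arbitrary real kernel G (for which the conjugation is immaterial):

```python
import numpy as np

M = 4
rng = np.random.default_rng(0)
X = rng.standard_normal((M, M))
G = rng.standard_normal((M, M))   # column l holds the 1-D basis vector g_l

# element-wise double sum: y(l,m) = sum_i g_l(i) * sum_k x(i,k) * g_m(k)
Y_sum = np.array([[sum(G[i, l] * sum(X[i, k] * G[k, m] for k in range(M))
                       for i in range(M))
                   for m in range(M)] for l in range(M)])

Y_mat = G.T @ X @ G               # Eqn. (3): two consecutive matrix products
assert np.allclose(Y_sum, Y_mat)
```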
Let us also define the vectors x and z as x(iM + k) \equiv X(i,k)
and z(iM + k) \equiv Z(i,k), 0 \le i,k \le M-1, which are column vector views of matrices X and
Z respectively, suitable for raster scan processing. Eqn. (4) means that the pth row of Z can be
derived by multiplying the pth row of X times G, i.e. Z(p,:) = X(p,:) G, 0 \le p \le M-1.
This set of equations can be expressed compactly using Kronecker products [20, 22], by multiplying
the 1 \times M^2 input vector x^t times I_M \otimes G to obtain z^t = x^t (I_M \otimes G), or equivalently

z = (I_M \otimes G)^t x = (I_M \otimes G^t) x. (6)
The matrix-vector product Eqn. (6) is an alternative view of Eqn. (4) that allows us to derive
an algorithm of dimension two (i.e. an algorithm having index space of dimension two) for row
processing, rather than of dimension three as implied by the matrix-matrix product form of Eqn.
(4). Reformulating algorithms into lower dimensional index spaces, as we do here, is an effective
technique for simplifying the mapping of algorithms of high dimension into simple and efficient
mono-dimensional (linear) array structures, by avoiding the complexities of multiprojection [23].
The intermediate result column vector z in Eqn. (6) can be viewed either as being composed
of M consecutive length-M vectors z(iM : iM + M - 1), 0 \le i \le M-1, corresponding to the
rows of matrix Z, or as M vectors interspersed by a stride factor of M [21], corresponding to the
columns of matrix Z, and given by z(k : M : M(M-1) + k), 0 \le k \le M-1. By adopting
the latter view, the column 1-D transform Eqn. (5) can be expressed as

y = (G^t \otimes I_M) z, (7)

where vector y is defined similarly as x and z. 1
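The vectorized forms of Eqns. (6) and (7) can be verified the same way (our sketch, using numpy.kron):

```python
import numpy as np

M = 4
rng = np.random.default_rng(1)
X = rng.standard_normal((M, M))
G = rng.standard_normal((M, M))

x = X.ravel()                                  # raster-scan (row-major) view of X
z = np.kron(np.eye(M), G).T @ x                # Eqn. (6): z = (I_M kron G)^t x
assert np.allclose(z, (X @ G).ravel())         # rows of Z = X G, in raster scan

y = np.kron(G, np.eye(M)).T @ z                # Eqn. (7): y = (G^t kron I_M) z
assert np.allclose(y, (G.T @ X @ G).ravel())   # final result, no transposition
```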
3 Array Synthesis Framework for Separable Transforms
In this section we elaborate on the space time mapping for the two components of the general 2-D
separable transform array: one for row, and one for column processing. For generality, we treat 1-D
transformation as a matrix vector product and do not make use of any additional computational
savings that might be possible due to kernel symmetries. Rather than write a three dimensional
Single Assignment Form (SAF) [23] 2 algorithm for row processing as suggested by Eqn. (4) (con-
ventional M \Theta M matrix-matrix product), we work with Eqn. (6) (matrix-vector product). The
nested loop algorithm for Eqn. (6) is only of dimension two, and can be trivially mapped into
a linear array for processing inputs in raster-scan order, due to the way we constructed vectors x
and z.
1 As a check, recall that an alternate definition for a separable transform is that the basis matrix T of size M^2 \times M^2
be written as the Kronecker product of the basis M \times M matrix G times itself (see Akansu [1]). Substituting Eqn. (6)
into Eqn. (7), we get y = (G^t \otimes I_M)(I_M \otimes G^t) x = (G^t \otimes G^t) x = (G \otimes G)^t x, using the property
(A \otimes B)(C \otimes D) = (AC \otimes BD) [1, 20].
2 An algorithm in SAF is one in which all variables are assigned a single value throughout the entire execution, so
statements of the form x(i) \leftarrow x(i) + b are not allowed.
If we define the M^2 \times M^2 matrix \tilde{G} = I_M \otimes G, with entries \tilde{g}(k,i), then a SAF algorithm for
Eqn. (6) is given below

Algorithm 3.1
Inputs x(i) and \tilde{G}, 0 \le i \le M^2 - 1
Initially z_0(i) = 0
for k = 0 : M^2 - 1
  for i = 0 : M^2 - 1
    z_{k+1}(i) \leftarrow z_k(i) + \tilde{g}(k,i) x(k)
Outputs z(i) = z_{M^2}(i)
Algorithm 3.1 is defined over the 2-D index space I = {(i,k) : 0 \le i,k \le M^2 - 1}. Using linear
space-time mapping methods, under an appropriate linear space transformation this algorithm can be
transformed into an array with M Processing Elements (PEs), but with efficiency of only 50% due
to the block diagonal sparse structure of \tilde{G}. By applying a transformation on index variable i
defined by i' = i - cM, with c = \lfloor k/M \rfloor, we obtain an equivalent algorithm which has a rectangular Constant
Bounded Index Space (CBIS) [27] given by I' = {(i,k) : 0 \le i \le M-1, 0 \le k \le M^2-1}. This new
algorithm can be mapped under S = [1 0] into a linear array with M fully utilized (100% efficient)
PEs.
Localization of broadcast variables is a complex and rich problem [23, 28, 29]. For maintaining
focus we do not address here the details required for writing Algorithm 3.1 in localized form where
all variables have a common domain of support, but rather give below as Algorithm 3.2 one possible
localized algorithm suitable for raster scan processing. (Note that the variables \tilde{x}(i,k),
\tilde{g}(i,k) and \tilde{z}(i,k) are the localized equivalents of x(k), G and z(k), respectively.)
Algorithm 3.2
Inputs x(k) and G(l,i), 0 \le l,i \le M-1, 0 \le k \le M^2-1
for k = 0 : M^2 - 1
  for i = 0 : M - 1
    \tilde{x}(i,k) \leftarrow \tilde{x}(i-1,k), with \tilde{x}(-1,k) = x(k)
    \tilde{g}(i,k) \leftarrow G(k \bmod M, i)
    \tilde{z}(i,k) \leftarrow \tilde{z}(i,k-1) + \tilde{g}(i,k) \tilde{x}(i,k), with \tilde{z}(i, \lfloor k/M \rfloor M - 1) = 0
Outputs z(\lfloor k/M \rfloor M + i) = \tilde{z}(i,k), for k \bmod M = M - 1
Pictorially, the localized Algorithm 3.2 can be represented by its Dependence Graph (DG) [23], as
shown in Fig. 2 (a) for M = 4, where inputs x(k) appear along the horizontal k-axis and are shown
with the corresponding elements in matrix X. The operations modeled by a DG node are shown in
Fig. 2 (b). In this figure, arrows represent data dependencies, which are seen to be local among DG
nodes for variables x and z. Not shown in the figure are the dependence vectors associated with
variable G, which are horizontal and of length M; note that the coefficients of matrix G appear in
this DG replicated M times, one copy for each DG section.

Using the linear space map S = [1 0], the DG of Fig. 2 (a) is projected along the k direction
(so that PE_i executes all computations with index i), giving rise to the linear array in Fig. 2 (c).
The linear time map (schedule) used is t(i,k) = i + k, which is
consistent with raster scan processing. The input stream in raster scan is fed to PE_0, one data point
per schedule time period, and the output stream becomes available after an initial delay of M - 1 = 3
time units, also in raster scan. So for example, at t = 3, z(0) is available at PE_0,
at t = 4, z(1) at PE_1, and at t = 7, z(4) at PE_0 again. Note that although outputs
do not become available at a single PE, at every time instant only one PE produces a z value, and
consequently outputs can be collected by means of a bus connecting all PE outputs. 3
The local PE control needed to realize the output of partial results z in Algorithm 3.2 (i.e.,
to recognize the inter-row boundaries at k \bmod M = M - 1 in the DG of Fig. 2 (a)) can be
implemented simply by attaching a bit-tag (called decode bit in Fig. 2 (d)) to each input data token
of X. This bit is set only for the last element of every row of matrix X, and the PEs accumulate
locally the result of the multiply-accumulate operation on z only when the bit of the input is not
set. When a PE finds that an input X(i, M-1) arrives that has the decode bit set, it places
the result on an output link rather than in the internal z register, and then clears the internal z
register. A column of transform coefficients of G is stored in a bank of M registers, and these are fed to
the multiplier in sequence with wraparound.
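The following behavioral sketch (ours; it models the array's function, not its cycle-level systolic schedule) mimics the decode-bit mechanism: each PE accumulates one inner product per input row and flushes its z register when the tagged last row element arrives:

```python
import numpy as np

def row_array(X, G):
    """Behavioral model of the M-PE row array: emits z = (X @ G).ravel()."""
    M = X.shape[0]
    acc = np.zeros(M)                     # one z register per PE
    cnt = 0                               # models the per-PE modulo-M counter
    out = []
    for i in range(M):
        for k in range(M):
            decode = (k == M - 1)         # bit-tag set on the last row element
            x = X[i, k]                   # raster-scan input token
            for p in range(M):            # PE_p multiplies by coefficient G(cnt, p)
                acc[p] += G[cnt, p] * x
            cnt = (cnt + 1) % M
            if decode:                    # flush and clear the z registers
                out.extend(acc)
                acc = np.zeros(M)
    return np.array(out)

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4)); G = rng.standard_normal((4, 4))
assert np.allclose(row_array(X, G), (X @ G).ravel())
```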
Next we derive a SAF algorithm for the transformation Y = G^t Z, which can be viewed either as
(i) matrix G^t operating on column vectors Z(:,k) to produce column vectors Y(:,k), 0 \le k \le M-1,
or as (ii) matrix Z operating on row vectors G^t(i,:) to produce row vectors Y(i,:). Using the
latter interpretation, Y(i,:) = \sum_{k=0}^{M-1} G^t(i,k) Z(k,:), which in turn is equal to
\sum_{k=0}^{M-1} G(k,i) Z(k,:).
From this expression follows the nested-loop SAF algorithm given below, which is very similar to
Algorithm 3.1 except for the fact that the basic multiply-add operation is of the type "scalar
times vector plus vector." 4
Algorithm 3.3
Alternatively, outputs can also be systolically pipelined along with inputs X so that they become available at a
single PE_{M-1}. See Beraldin et al. [15] for a discussion on output pipelining and systolization for the DFT.
4 So-called saxpy operation in [21].
[Figure 2 labels: (a) row DG; (b) DG node; (c) row array; (d) internal PE structure, with input x, 2-bit counter, and output z to the column array.]
Figure 2: (a) Dependence Graph (DG) for the transformation Z = XG on the rows of X, with M = 4;
(b) DG node operations; (c) the resulting row array after applying the linear space-time mapping operators,
along with the I/O processor-time schedule; and (d) the internal structure
of processing element PE_2.
Inputs: matrices Z and G
Initially Y_0(i,:) = 0, 0 \le i \le M-1
for i = 0 : M - 1
  for k = 0 : M - 1
    Y_{k+1}(i,:) \leftarrow Y_k(i,:) + G(k,i) Z(k,:)
Outputs: Y(i,:) = Y_M(i,:)

Note that each update represents M scalar multiplications, or one multiplication of a scalar by a vector;
for example, for (i,k) = (0,1), scalar G(1,0) multiplies vector Z(1,:), resulting in a vector
which is then added to vector Y_1(0,:).
The most important aspect of this formulation is that it allows us to operate directly on the rows
of intermediate matrix Z thereby exploiting the raster scan order in which Z is becoming available
from the row array and, as a result, avoiding any intermediate matrix transposition. Furthermore,
although Eqn. (5) is a matrix-matrix product defined over a 3-D index space, we have constructed
Algorithm 3.3 as a matrix-vector product over a 2-D index space on variables (i; k) by using vector
data objects instead of scalars.
Similarly to Algorithm 3.2, Algorithm 3.3 can be localized by propagating variable Z(k; :) along
the i-axis and accumulating output Y (i; :) along the k-axis. The DG for localized Algorithm 3.3
is shown in Fig. 3 (a), where the basic DG node operation consists of M scalar multiplications and M
scalar additions, and the data dependencies among DG nodes consist of M parallel vectors (depicted
as one thick arc). Fig. 3 (b) shows the operation modeled by a DG node (saxpy).
We map the DG shown in Fig. 3 (a) using the same space transformation applied to the DG of
Algorithm 3.2, resulting in the column linear array shown in Fig. 3 (c).
Scheduling for this DG is accomplished by means of a function mapping computation l within
DG node (i,k) to a schedule time period t(i,k,l) = M(i + k) + l, where 0 \le l \le M-1 is an index that
picks the lth element in row Z(k,:), and i + k involves the same schedule vector used for the row
array. So for example, if the DG node depicted in Fig. 3 (b) is (i,k) = (1,1), its computations are
scheduled for execution on PE_1 at time periods t = 8, \ldots, 11,
and the first computation of DG node (i,k) = (2,1), namely the one on Z(1,0),
is scheduled on PE_2 at t = 12. Note that this timing function has two resolution
levels, one that assigns nodes in the DG to macro time steps, and one that assigns individual
multiply-add operations within a DG node to actual time steps (clock cycles).
The regularity of this scheduling allows a simple implementation with no additional local PE
control. And, very importantly, the width-M data dependencies for Z(k; :) pointing upwards in
the column DG of Fig. 3 (a) do not map to M physical links. Rather, only a single physical link
suffices to propagate the inputs z since the sequence of z values is produced at a rate of one scalar
token per schedule time period by the row array. And in addition, z tokens are consumed by the
vector array at the same rate they are produced.
[Figure 3 labels: (a) column DG; (b) DG node with partial results Y(i,0), ..., Y(i,3); (c) column array, with z arriving from the row array and the width-M dependencies mapping to a single physical link; (d) PE structure with FIFOs, 2-bit counter, and clock/4.]
Figure 3: (a) Dependence Graph (DG) for the transformation Y = G^t Z on the columns of Z, with
M = 4; (b) DG nodes represent scalar-vector multiply and vector-vector add; (c) the resulting column array
after applying the linear space-time mapping operators, along with the I/O
processor-time schedule; and (d) the internal structure of processing element PE_2.
Under the same mapping, the column array 5 has an identical structure to the row array in Fig. 2 (c),
and is shown in Fig. 3 (c) along with its I/O schedule. The difference is in the amount of memory
required by the column array for holding entire rows of Z and Y rather than single elements. In
Fig. 3 (d) we show the internal PE structure, where we see that the I/O registers for x and z in the row
array have been replaced by length-M FIFO queues for Z and Y in the column array, respectively.
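A matching behavioral sketch of the column array (again ours, and functional rather than cycle-accurate) consumes the z stream in exactly the raster order produced above, rotating per-PE FIFOs of length M so that no transposition memory is needed:

```python
import numpy as np
from collections import deque

def column_array(z_stream, G, M):
    """Consume z in raster order; emit y = (G^T @ Z).ravel(), no transposition."""
    fifo = [deque(np.zeros(M)) for _ in range(M)]   # one length-M Y FIFO per PE
    out = []
    for idx, zval in enumerate(z_stream):
        k, l = divmod(idx, M)                       # zval = Z(k, l)
        for p in range(M):                          # PE_p accumulates row Y(p, :)
            y = fifo[p].popleft() + G[k, p] * zval  # saxpy term on element l
            fifo[p].append(y)
        if k == M - 1 and l == M - 1:               # all rows of Z consumed
            for p in range(M):
                out.extend(fifo[p])
    return np.array(out)

rng = np.random.default_rng(3)
Z = rng.standard_normal((4, 4)); G = rng.standard_normal((4, 4))
assert np.allclose(column_array(Z.ravel(), G, 4), (G.T @ Z).ravel())
```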
3.1 Performance Characterization
We now calculate the latency and pipelining period [23] of the proposed array. Fig. 4 shows the
complete I/O schedule for both arrays (row and column) on a transform of size M = 4, with two
consecutive input matrices X_0 and X_1. In this figure we see that both arrays are perfectly space
and time matched: the output stream z generated by the row array can be immediately consumed by
the column array at the exact same rate of production. As a consequence, no intermediate buffering
is required.

The problem latency (total time) for a single separable transform with input X_0 is calculated
as follows: Assuming that the row array starts computation at t = 0, we know that under schedule
t(i,k) = i + k the time at which the first intermediate result z(0) is produced is t = M - 1, i.e. at t = 3 for M =
4. From that point on, one intermediate result becomes available per schedule time
period. Hence, the column array requires 28 time steps for processing the entire stream of z tokens
(the time between accepting z(0) and producing Y(3,3)), or in general M(2M - 1)
time units. Since the first input to the column array becomes available at t = M, the latency (i.e.
the total time required for the parallel architecture to compute the 2-D separable transform for a
single image) is M + M(2M - 1) = 2M^2.
When considering a continuous stream of problems with input matrices {X_0, X_1, \ldots}, the
relevant measure of performance is the block pipelining period \beta, i.e. the number of time steps between
initiations of two consecutive problems [23]. Referring again to Fig. 4, note first that the last data
token of the first input to enter the row array, X_0(M-1, M-1), does so at t = M^2 - 1 (at t = 15
in the figure), and thus the time at which the first element of the next input, X_1(0,0), may
enter the row array is t = M^2. Next, the time at which the column array may start
accepting inputs from the intermediate result Z_1 (of problem instance X_1) is t_{column} = M^2 + M.
Assuming that the row array does accept X_1(0,0) at the earliest possible time t = M^2,
then M time units later, the first data token of the second input stream, z_1(0),
becomes available to the column array, which is exactly t_{column}, the time at which
the column array is ready to accept it. Consequently, the resulting block pipelining period is \beta = M^2 (= 16 as in the example).
Loosely speaking we will name this second array column array, or vector array to remind us that it produces
the same result Y as the column array implementing the conventional row-column approach in Figure 1 (a). Note
however that in our array, Z is provided in raster scan order i.e. as a sequence of rows, not columns.
[Figure 4 rows: X_0 inputs to the row array; X_1 inputs to the row array; intermediate results Z_0 and Z_1; column array outputs Y_0; time axis, marking the last X_0 input and the first intermediate result of Z_1 entering the column array.]
Figure 4: Overall time schedule for both arrays considering two successive inputs X_0 and X_1, with M = 4.
Since there is a single input line to the row
array, and thus only one token can be fed per time step, the theoretically minimum \beta, i.e. the maximum
throughput, is attained.
Efficiency is given by the ratio of the total number of computations to the product of the latency
times the number of processors. Single-input efficiency is 50%, while for a fully pipelined stream of problems it is exactly
100%, implying that one output Y(i,k) becomes available every schedule time period.

Since the performance metrics depend on the particular schedule chosen here, there are a number
of available tradeoffs that can be explored. For instance, the non-systolic schedule t(i,k,l) = Mk + l
used for the column array (with the same space transformation) leads to a reduced latency of M^2 + M,
while the block pipelining period remains minimal, \beta = M^2. However, in this case M outputs (a
column of Y) become available simultaneously at every time step and should be collected in
parallel (Single Input Parallel Output, or SIPO, architecture).
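For reference, the closed-form metrics of this section evaluate as follows for the running example (our snippet):

```python
M = 4
latency = 2 * M * M    # total time for one M x M transform (Section 3.1)
beta = M * M           # block pipelining period: one new problem every M^2 steps
print(latency, beta)   # 32 and 16 for the M = 4 example
```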
4 Arrays for the 2-D Discrete Fourier Transform
In this Section we derive a parallel architecture for the 2-D Discrete Fourier Transform (DFT) which
is a special case of the computational structures derived in Section 3. Therefore the arrays presented
here are similar to those shown in Figs. 2 and 3, except that by using a DFT algorithm known as
Goertzel's DFT [15] we may reduce kernel coefficient storage and simplify control complexity.
4.1 Goertzel's DFT
The DFT of a 1-D, M-sample sequence x(k) is given by the set of M coefficients

z(m) = \sum_{k=0}^{M-1} x(k) W_M^{mk}, 0 \le m \le M-1, (9)

where W_M = e^{-j 2\pi / M}, and the DFT of a 2-D, M \times M-sample sequence x(i,k) is given by

y(l,m) = \sum_{i=0}^{M-1} \sum_{k=0}^{M-1} x(i,k) W_M^{li} W_M^{mk}, 0 \le l,m \le M-1. (10)

Clearly, as the kernel g_{lm}(i,k) = W_M^{li} W_M^{mk}, the DFT is separable on the 1-D kernel g_l(i) = W_M^{li}.

The 1-D DFT array by Beraldin et al. [15], based on the so-called Goertzel Algorithm, is derived
by applying Horner's polynomial multiplication rule. The following recursive equation is equivalent
to the 1-D DFT: 6

z(m,k) = W_M^{-m} [z(m,k-1) + x(k)], with z(m,-1) = 0, (11)

where z(m,k) denotes the mth DFT coefficient after the kth recursive step, and 0 \le m,k \le M-1,
so that z(m) = z(m, M-1). Consider, for example, the computation of DFT coefficient z(2) with M = 4, which, according to
Eqn. (9), is given by z(2) = x(0) + W_4^2 x(1) + W_4^4 x(2) + W_4^6 x(3). Now, by unraveling
the recursion in Eqn. (11) we have that z(2,3) = W_4^{-2}(W_4^{-2}(W_4^{-2}(W_4^{-2} x(0) + x(1)) + x(2)) + x(3)),
or just z(2,3) = W_4^{-8} x(0) + W_4^{-6} x(1) + W_4^{-4} x(2) + W_4^{-2} x(3), which indeed is equal to the DFT
coefficient z(2), since W_4^{-6} = W_4^{2}, W_4^{-4} = W_4^{4}, W_4^{-2} = W_4^{6}, and W_4^{-8} =
1. With Goertzel's algorithm, a DFT coefficient z(m)
is computed using a single twiddle factor W_M^{-m}, rather than M twiddle factors as with the direct
application of Eqn. (9).

6 In this equation, m is the "frequency" index, and k the "time" index.
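The recursion is easy to validate numerically (our sketch; it follows the sign convention used in the reconstruction of Eqns. (9) and (11) above):

```python
import numpy as np

M = 8
W = np.exp(-2j * np.pi / M)          # W_M; Eqn. (9): z(m) = sum_k x(k) W^{mk}
x = np.random.default_rng(4).standard_normal(M)

direct = np.array([sum(x[k] * W**(m * k) for k in range(M)) for m in range(M)])

# Goertzel recursion, Eqn. (11): z(m,k) = W^{-m} (z(m,k-1) + x(k)).
# One fixed twiddle factor W^{-m} per coefficient; all M run side by side here.
w = W ** (-np.arange(M))
z = np.zeros(M, dtype=complex)
for k in range(M):
    z = w * (z + x[k])

assert np.allclose(z, direct)                 # W^{-mM} = 1 closes the recursion
assert np.allclose(direct, np.fft.fft(x))     # same convention as the usual DFT
```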
4.2 Modular DFT Array Derivation
Let x and z be the input and intermediate result M^2 \times 1 vectors representing the corresponding
raster scan matrices X and Z, as in Section 2. Examination of Eqn. (11) reveals that, together with
the above definitions for x and z, an algorithm very similar to the localized SAF Algorithm 3.2 (see
Section 3) can be formulated, where W_M^{-i} plays the role of G(i,k), and in the statement for updating
z, addition takes place before multiplication. Hence, a localized SAF algorithm for Eqn. (11) is given
by

Algorithm 4.1
Inputs x(k) and W_M^{-i}, 0 \le i \le M-1, 0 \le k \le M^2-1
for k = 0 : M^2 - 1
  for i = 0 : M - 1
    \tilde{x}(i,k) \leftarrow \tilde{x}(i-1,k), with \tilde{x}(-1,k) = x(k)
    \tilde{g}(i,k) \leftarrow \tilde{g}(i,k-1), with \tilde{g}(i,-1) = W_M^{-i}
    \tilde{z}(i,k) \leftarrow \tilde{g}(i,k) [\tilde{z}(i,k-1) + \tilde{x}(i,k)], with \tilde{z}(i, \lfloor k/M \rfloor M - 1) = 0
Outputs z(\lfloor k/M \rfloor M + i) = \tilde{z}(i,k), for k \bmod M = M - 1

where \tilde{x}(i,k), \tilde{g}(i,k) and \tilde{z}(i,k) are the localized equivalents of the variables x(k), W_M^{-i} and z(k), respectively.
The DG for this algorithm is similar in structure to the one shown in Fig. 2 (a), except
that instead of M^2 G-coefficients propagated by length-M horizontal dependencies along the k-direction,
we have only M complex twiddle factors propagated by length-one dependencies along
the k-direction. Also, the order of the internal PE operations is reversed.

Algorithm 4.1 is mapped under the same space transformation and schedule used for Algorithm 3.2,
resulting in the row array shown in Fig. 5 (a). As shown in Fig. 5 (b), the internal PE structure
of the DFT row array is simpler than that of the array for general transforms, in that a single complex
twiddle factor is always used as the multiplicand. This sort of simplification can be further exploited
for designing a specialized multiplier for that particular twiddle factor. For example, depending
on the value of W_M^{-i}, or using approximations to W_M^{-i} which are factors of powers of two, a fast
shift-add accumulator can be used rather than an array multiplier.
The DG for the DFT column array, which is the vector version of Eqn. (11), is constructed
similarly to the one for the general case shown in Fig. 3 (a), with the DG node shown in Fig. 3 (b).
The vector DG for Goertzel's DFT column algorithm is similar to the general one in Fig. 3 (a),
with the exception that there are only M twiddle factors propagating in the k-direction, and the
operations in the PEs are exchanged.

In order to illustrate a different solution, having better latency performance than the arrays of
Section 3, we schedule the column DFT DG with a function T', mapping
computation l of DG node (i,k) to time period t(i,k,l) = Mk + l,
instead of t(i,k,l) = M(i+k) + l.
Mapping the vector DG under (S, T') results in the column array shown in Fig. 5 (c),
with the internal PE structure shown in Fig. 5 (d).

In this column array, by using schedule T' the need for input FIFO queues holding intermediate
results Z is eliminated, resulting in total latency and memory of nearly one half that of the general
version in Section 3. Schedule T' is also applicable to the general arrays of Section 3 and is specific
neither to the DFT nor to Goertzel's formulation. However, to realize the performance gains, M
parallel output ports are needed in order to collect in parallel one full column of Y results every
clock cycle.
[Figure 5 labels: (a) row array; (b) row PE structure with input x and output z; (c) column array; (d) column PE structure with output FIFO and output y.]
Figure 5: Parallel architecture for the 2-D DFT with M = 4. (a) The row array and processor-time I/O
schedule; (b) the internal structure of PE_2; (c) the column array and processor-time I/O schedule;
(d) the internal PE structure.
4.3 Performance Characterization
The latency of the overall architecture, under schedule t(i,k) = i + k for the row array and T' for
the column array, is M^2 + M, which is faster than that of the general architecture developed
in Section 3 (latency of 2M^2) by almost a factor of two, due to the broadcasting of intermediate
results z(i) to all PEs in the column array and the availability of M parallel output ports. As a
result, entire columns of the final result Y become available simultaneously, as shown in Fig. 5 (c),
at times t = M^2, \ldots, M^2 + M - 1. The block pipelining period is identical to that of the array in Section 3,
i.e. \beta = M^2. However, the smaller latency obtained with broadcasting comes at the expense of a
potentially larger duration for each time step, i.e. a reduced maximum rate at which we can
run the clock, as M increases.
5 Arrays for 2-D separable Convolution
5.1 2-D Convolution
Linear 2-D convolution of an L_i \times L_k kernel g(i,k) with an M_i \times M_k input x(i,k) is given by

y(i,k) = \sum_{i'=0}^{L_i-1} \sum_{k'=0}^{L_k-1} g(i',k') x(i-i', k-k'), (12)

with 0 \le i \le M_i - 1, 0 \le k \le M_k - 1. Notice that, in contrast to the general transform
case of Eqn. (1), the summation limits in convolution range over a region that depends on the
support of both functions g(\cdot,\cdot) and x(\cdot,\cdot). Since we deal here with the case where the kernel g
is of smaller support than the input x, the summation limits are defined over the kernel's support
region. 7

Separable filters have the property that

g(i',k') = g(i') g(k'). (13)

There is a substantial difference in complexity between non-separable and separable filters, the
former requiring L_i L_k multiplications and L_i L_k - 1 additions per output sample, while the latter
requires L_i + L_k multiplications and L_i + L_k - 2 additions per sample.

Eqn. (13) implies that input x(i,k) can be first processed in raster-scan order by performing
convolutions on its rows, resulting in the intermediate result z(i,k) = \sum_{k'=0}^{L_k-1} g(k') x(i, k-k').
The final result is then obtained by performing M_k 1-D column-wise convolutions on z(i,k), as
y(i,k) = \sum_{i'=0}^{L_i-1} g(i') z(i-i', k). For simplicity of exposition, but without loss of generality, from now
on we assume L_i = L_k = L.

7 The method is not restricted to small convolution kernels, but we focus here on VLSI arrays with a relatively
small number of PEs.
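A brute-force check (our sketch) that the row-then-column decomposition implied by Eqn. (13) reproduces the direct 2-D convolution of Eqn. (12) with the outer-product kernel g(i')g(k'):

```python
import numpy as np

M, L = 16, 3
rng = np.random.default_rng(1)
x = rng.standard_normal((M, M))
gi = rng.standard_normal(L)            # g(i'): vertical (column) 1-D kernel
gk = rng.standard_normal(L)            # g(k'): horizontal (row) 1-D kernel

def conv2d(x, g):                      # direct Eqn. (12), zero boundary
    y = np.zeros_like(x)
    for i in range(M):
        for k in range(M):
            for ii in range(L):
                for kk in range(L):
                    if i - ii >= 0 and k - kk >= 0:
                        y[i, k] += g[ii, kk] * x[i - ii, k - kk]
    return y

z = np.array([np.convolve(r, gk)[:M] for r in x])           # row pass
y = np.array([np.convolve(c, gi)[:M] for c in z.T]).T       # column pass
assert np.allclose(y, conv2d(x, np.outer(gi, gk)))          # separable = direct
```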
Figure 6 (a) shows an example of the DG for row-wise convolution on a raster-scan input x(i,k),
with M_k = 6 and L = 3. In this figure we have omitted the data dependence vector arrowheads
indicating the data flow direction for the broadcast variables x and g, and have included only those
pointing upwards for the localized accumulation variables z. In contrast to the general transform
case in Section 3 (see Fig. 2), in convolution there is a slight "spill-over" of outputs at row
interface boundaries. In our proposed solution, we let input rows be separated by L - 1
slots as they enter the array, and, as a result, there is no need to introduce any control mechanism
for row interface management. Typically, the inefficiency incurred by leaving empty computations
between rows in the schedule is small, in the order of 1%. 8
Let the linear space-time mapping operators, p = S(i,k)^t for the processor index
and t = T(i,k)^t for the schedule time period, be chosen so that the DG is
projected onto L processing elements and one input is consumed per time step.
The resulting array, along with its I/O schedule, is shown in Fig. 6 (b), where every
input token is made simultaneously available to all PEs. The arrows in the I/O schedule indicate
the progression of partial results that lead to the computation of output z(0,2). No row interface
control is needed as long as L - 1 zeros are padded at the end of each input row. For example, at
time instants t = 6, 7 (following input x(0,5)), two zeros are inserted into the input stream,
clearing the filter for the subsequent insertion of row x(1,k), 0 \le k \le M_k - 1.
Using the notation introduced in Section 2, we can write y(i,k) = \sum_{i'=0}^{L-1} g(i') z(i-i',k) as

Y(i,:) = \sum_{i'=0}^{L-1} g(i') Z(i-i',:),

which is convolution based on saxpy row operations. At the vector level, the DG in Fig. 7 (a)
accepts row inputs Z(i,:), 0 \le i \le M_i - 1, and
produces row outputs Y(i,:) (the filter coefficients g(i'), 0 \le i' \le L-1, are still
scalar values). The internal DG node structure shown in Fig. 7 (b) has M_k multiply-add operations,
each one accepting as arguments an element from row Z(i-i',:) and a filter coefficient g(i'), and
subsequently adding the product to one y partial result that is propagating from bottom to top,
leading to the computation of the corresponding element of Y(i,:). In Fig. 7 (c) we show the
complete 2-D convolution pipelined architecture consisting of two linear array stages: the first,
shown on the left and called the row array, for processing the DG of Fig. 6 (a), followed by another
linear array stage, called the column array, for processing the vector DG of Fig. 7 (a). Each PE is
8 An alternative to spacing rows by L - 1 slots is to introduce control tags carried by the last and first elements
of every row, resulting in perfect row pipelining. However, the efficiency improvement of perfect row pipelining over
the solution presented here is negligible.
[Figure 6 labels: input rows 0-2; panels (a) and (b).]
Figure 6: (a) Row-wise 1-D convolution on x(i,k), resulting in z(i,k), with M_k = 6 and
L = 3. Note that rows are spaced by L - 1 slots at the input. (b) The array for
row-wise convolution and processor-time snapshots of the array operation.
[Figure 7 labels: partial results y(i,0), y(i,1), ..., y(i,7); FIFOs; panels (a)-(d).]
Figure 7: (a) Column 1-D convolution DG in vector form. Dark nodes represent
scalar computations and thick edges represent data dependencies among vector variables.
(b) The internal structure of DG node (i, i'). (c) The
parallel architecture for computing 2-D separable convolutions: the first-stage linear array performs
the row processing and the second the column processing, respectively. (d) The internal row and column array
PE structure.
identified by two indices (p,q), where q indicates whether a PE is for row processing (q = 0) or for
column processing (q = 1).

The space transformation for the column array assigns DG node (i,i') to PE_{i',1}, and the timing function mapping a
computation to a schedule time period is chosen accordingly.
The choice of this timing function implies that PE_{i',1} serializes the execution of the M_k
scalar operations modeled by DG node (i,i') in the order
l = 0, 1, \ldots, M_k - 1. The two arrays are perfectly space and time matched because
this order corresponds exactly to the order in which the outputs z(i,l)
are produced from PE_{0,0}
of the row array (see Fig. 6). As a result, each intermediate output z(i,l) is broadcast to
the column array PEs via a bus as soon as it is produced, and there is no need for introducing any
additional control or memory to interface the two pipelined array stages.
Figure 7 (d) shows the internal structure of the PEs of both arrays. During every schedule time
period, PE_{p,q} performs the following three operations: (a) multiplication of the value on the input
bus by the local filter coefficient; (b) addition of the product to the incoming partial result from
PE_{p+1,q}; (c) propagation of the result to PE_{p-1,q}. With regard to partial result accumulation
in the column array, since every DG node produces M_k outputs, this is also the number of
registers in the FIFO queues required between every pair of PEs (see Fig. 7 (c)). Note, however,
that the buses for broadcasting, as well as the links for moving partial results y(i,k) into and out of the
FIFOs, carry scalar quantities, so their bit-width should be only one word (and does not depend on
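The role of the inter-PE FIFOs can be illustrated behaviorally (our sketch; it models the dataflow of the column stage, not its exact PE-level timing): each arriving z element contributes one saxpy term per tap, and partial output rows circulate through L - 1 FIFOs of length M_k:

```python
import numpy as np
from collections import deque

def column_conv_stream(z, g):
    """Column-wise convolution on a raster-scan stream of Z's rows,
    buffering partial output rows in L-1 FIFOs of length Mk."""
    Mi, Mk = z.shape
    L = len(g)
    fifos = [deque([0.0] * Mk) for _ in range(L - 1)]
    out = []
    for i in range(Mi + L - 1):                  # L-1 zero rows flush the tail
        for k in range(Mk):
            zval = z[i, k] if i < Mi else 0.0
            # y(i,k) completes: its oldest partial sum plus the g(0) term
            out.append(fifos[0].popleft() + g[0] * zval)
            # forward contributions of z(i,k) to the next L-1 output rows
            for p in range(1, L - 1):
                fifos[p - 1].append(fifos[p].popleft() + g[p] * zval)
            fifos[L - 2].append(g[L - 1] * zval)
    return np.array(out).reshape(Mi + L - 1, Mk)

z = np.arange(20.0).reshape(4, 5)
g = np.array([1.0, -2.0, 0.5])
ref = np.array([np.convolve(c, g) for c in z.T]).T   # full column convolutions
assert np.allclose(column_conv_stream(z, g), ref)
```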
5.2 Performance Characterization: Latency and Memory
Total computation time for a single input problem instance x(i,k) is as follows. The row array
computation time is T_{row} = M_i(M_k + L - 1). 9 The column array computation time is T_{vector} = T_{row}, since the column array consumes one z token per time period.
Since the communication between both structures is perfectly pipelined, there is only a delay of one
time unit in producing the first output y(0,0). As a result, the total computation time is T_{vector} + 1.
As an example, consider typical numbers for image processing, where M_i = M_k = 512 and L = 19,
which result in 96.6% efficiency (the loss being due to imperfect row pipelining), while with M_i = M_k = 1024 the
efficiency is 98.3%. 11 This relatively small efficiency loss
is the result of having arrays with virtually no control circuitry. Alternatively, this efficiency loss
9 There are M_i rows, each requiring processing time of M_k + L_k - 1; the insertion of zeros between rows is the cause of the
extra M_i(L - 1) steps.
11 The total number of computations is 2L M_i M_k, and since there are 2L processors the
efficiency is given by 2L M_i M_k / (2L (T_{vector} + 1)) = M_i M_k / (M_i(M_k + L - 1) + 1).
could be reduced by introducing control at the PE-level for managing the interfaces between rows
in the row array (as opposed to insertion of zeros).
The block pipelining period on an input sequence {X_0, X_1, \ldots} is \beta = M_i(M_k + L - 1),
implying that a complete output y(i,k) frame is available every \beta time steps, substantially improving
efficiency. Efficiency for the case when M_i = M_k = 1024 and L = 19 is then 98.3%.
Next we prove a Proposition on the lower bound on memory of any architecture that processes
inputs under raster scan order, and show that the 2-D convolution array proposed here nearly
attains this lower bound. This Proposition shows that this array does not require larger memory
storage than the conventional transposition memory solutions, a result holding also for the
structures presented in Sections 3 and 4.
In the following I/O model, we draw a boundary around the circuit under consideration, and
assume that the input stream x(i; k) crosses this boundary in raster scan, or row-major order. 12
Once an input has entered the circuit, it is held in local storage until used in some computation for
the last time, and then it is discarded. It is further assumed that once input x(i; k) has appeared
in the input stream, the circuit cannot request x(i; k) from the host again, and has to manage it
throughout its lifetime. (This analysis is similar to that in [30].)
Proposition 5.1 The lower bound on memory for 2-D convolution by separable L-tap and by non-separable
L \times L filters with row-scan input is (M_k + 1)(L - 1).

Proof: Consider output y(i,k). By Eqn. (12) (non-separable) or Eqn. (13) (separable) we know
that the computation of y(i,k) requires the set of L^2 input elements {x(i',k'), i-L+1 \le i' \le i, k-L+1 \le k' \le k}.
Under the row-scan order imposed by Def. 2.1, the last element in this set to enter the circuit
boundary is x(i,k). The minimum time x(i,k) spends inside the boundary can be calculated by
looking for the last output element that requires x(i,k) for its computation, which can be shown
to be y(i + L - 1, k + L - 1). Making the assumption that the circuit is operating under perfect
pipelining, and hence that one output element is available every single time period, the minimum
time x(i,k) needs to spend inside the boundary is t = (M_k + 1)(L - 1), since between y(i,k) and
y(i + L - 1, k + L - 1) there are L - 1 rows, each with M_k elements, plus L - 1 additional elements. Finally,
during time t, at least (M_k + 1)(L - 1) new input elements entered the boundary, and therefore the
minimum memory is lower bounded by t storage elements. 2
Total memory requirements for the proposed convolution array of Fig. 7 (c) are dominated by
the column FIFOs, accounting for a total of (M_k + 1)(L - 1) memory registers, and meeting
exactly the lower memory bound in Proposition 5.1. In addition, there is a total of 2L registers
for locally storing filter coefficients (which could be reduced to L by having corresponding PEs in
each array share their coefficient), and L registers for the accumulation of z(i,k) results in
the row array. The array proposed in Section 4 also meets this lower memory bound. In the case
of a global transform (i.e. the DFT) with equal support regions for kernel and input, Proposition 5.1
reduces to the trivial case where storage for an entire M \times M frame is required.

12 The derivation also holds if we substitute column-major for row-major order, as long as we are consistent.
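The bound itself is a one-line computation (our snippet; the example values are illustrative):

```python
def memory_lower_bound(Mk, L):
    # x(i,k) is last used by y(i+L-1, k+L-1); under one-output-per-step
    # pipelining, that output trails x(i,k) by (L-1) rows plus (L-1) columns,
    # during which (Mk + 1)(L - 1) newer inputs must also be held.
    return (Mk + 1) * (L - 1)

print(memory_lower_bound(512, 8))   # e.g. 3591 registers for Mk = 512, L = 8
```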
6 Conclusions
In this paper we have shown that the basis for constructing modular architectures for 2-D separable
transforms is an array for the 1-D transform, and that the resulting overall architecture can reflect
the modularity of that array. The only restriction required by the 1-D transform array is that it
accepts input and generates output in raster scan order. Then a regular architecture consisting of
two fully pipelined linear array stages with no shared memory can be derived, based on the key idea
that the DG for the first linear array (performing the "row" processing) is structurally equivalent
to the DG of the second linear array (for the "column" processing), but with the difference that the
latter DG operates at a coarser level of granularity (vector instead of scalar computations). This
idea can be exploited even for separable transforms having a complex computation structure, since,
in principle, avoiding transposition and shared memory does not depend on the 1-D transform
structural details. Rather, it depends only on (i) there being a raster scan order for the 1-D
transform and (ii) a change in abstraction level from scalar to vector operations. The 2-D DFT and
the 2-D convolution have been used as case studies to demonstrate the generality and effectiveness
of the proposed architecture. We further conjecture that the same method can be used to derive
pipelined array structures for multi-dimensional separable transforms of any dimension.
We have also applied this methodology to the Discrete Wavelet Transform, a time-scale transform
that represents finite energy signals with a set of basis functions that are shifts and translates
of a single prototype mother wavelet. Modular arrays for computing the 1-D DWT have been proposed
in the literature [24, 25, 26], but 2-D solutions usually rely on transposition, or some form of
complex data routing mechanism that destroys modularity. Since most of the current applications
and research on wavelets is based on separable FIR wavelet filters, we have been able to apply the
results from Section 5 to the 1-D arrays presented in [24, 25].
All Dependence Graphs and space-time transformation pairs derived here can be supplied as
input to the rapid prototyping tool DG2VHDL developed in our group [31, 32], which can automatically
generate high-quality Hardware Description Language models for the individual PEs and for the
overall processor array architectures. The generated VHDL models can be used to perform behavioral
simulation of the array and can then be synthesized (using industrial-strength behavioral compilers,
such as Synopsys [35]) to rapidly generate hardware implementations of the algorithm targeting a
variety of devices, including the increasingly popular Field Programmable Gate Arrays (FPGAs)
[33, 34] or Application Specific Integrated Circuits (ASICs). Using the ideas presented here,
modular linear array cores for the 2-D Discrete Cosine Transform (DCT) supporting a variety
of different I/O patterns have been synthesized and compared in [36].
--R
"Multiresolution Signal Decomposition"
"Discrete Cosine Transform: Algorithms, Advantages, and Applica- tions"
"An Novel CORDIC-based Array Architecture for the Multidimensional Discrete Hartley Transform"
"A Cost-Effective Architecture for 8x8 Two-Dimensional DCT/IDCT Using Direct Method"
"Image and Video Compression Standars"
"VLSI Architectures for Multidimensional Fourier Transform Processing"
"VLSI Architectures for Multidimensional Transforms"
"Low-Power Video Encoder/Decoder Using Wavelet/TSVQ With Conditional Replenishment"
"Multidimensional Systolic Arrays for Multidimensional DFTs"
"Highly Parallel VLSI Architectures for the 2-D DCT and IDCT Computations"
"Synchronizing Large Systolic Arrays"
"Synthesizing Linear Array Algorithms from Nested For Loop Algo- rithms"
"VLSI Signal Processing Systems"
"Fourier Transforms in VLSI"
"Efficient One-Dimensional Systolic Array Realization of the Discrete Fourier Transform"
"Multidimensional Digital Signal Processing"
"Design of 2-D Recursive Filters with Separable Denominator Transfer Functions"
"The Design of Multistage Separable Planar Filters"
"Parallel Algorithms and Architectures for Discrete Wavelet Transforms"
"Fundamentals of Digital Image Processing"
"Matrix Computations"
"Algorithms for Discrete Fourier Transform and Convolution"
VLSI Array Processors.
"On the Synthesis of Regular VLSI Architectures for the 1-D Discrete Wavelet Transform"
"Distributed Memory and Control VLSI Architectures for the 1-D Discrete Wavelet Transform"
"1-D Discrete Wavelet Transform: Data Dependence Analysis and Synthesis of Distributed Memory and Control Array Architectures"
"On Time Mapping of Uniform Dependence Algorithms into Lower Dimensional Processor Arrays"
"Systolic Array Implemenation of Nested Loop Pro- grams"
"Synthesizing Systolic Arrays with Control Signals from Recurrence Equa- tions"
"Computational Aspects of VLSI"
"DG2VHDL: A tool to facilitate the high level synthesis of parallel processing array architectures"
"DG2VHDL: a tool to facilitate the synthesis of parallel VLSI architectures"
"Using DG2VHDL to Synthesize an FPGA Implementation of the 1-D Discrete Wavelet Transform"
Synthesis of Array Architectures of Block Matching Motion Estimation: Design Exploration using the tool DG2VHDL.
Behavioral Synthesis.
Design and Synthesis of maximum throughput parallel array architectures for real-time image transforms
| VLSI architectures;parallel processing;2-D separable transforms |
509251 | A High Speed VLSI Architecture for Handwriting Recognition. | This article presents PAPRICA-3, a VLSI-oriented architecture for real-time processing of images, and its implementation on HACRE, a high-speed, cascadable, 32-processor VLSI slice. The architecture is based on an array of programmable processing elements with an instruction set tailored to image processing, mathematical morphology, and neural network emulation. Dedicated hardware features allow simultaneous image acquisition, processing, neural network emulation, and a straightforward interface with a hosting PC. HACRE has been fabricated and successfully tested at a clock frequency of 50 MHz. A board hosting up to four chips and providing a 33 MHz PCI interface has been manufactured and used to build BEATRIX, a system for the recognition of handwritten check amounts, by integrating image processing and neural network algorithms (on the board) with context analysis techniques (on the hosting PC). | Introduction
Handwriting recognition [1, 2, 3] is a major issue in a wide range of application areas, including
mailing address interpretation, document analysis, signature verification and, in particular, bank
check processing. Handwritten text recognition has to deal with many problems such as the
apparent similarity of some characters with each other, the unlimited variety of writing styles
and habits of different writers, and also the high variability of character shapes issued by the
same writer over time. Furthermore, the relatively low quality of the text image and the unavoidable presence of background noise and various kinds of distortions (for instance, poorly written, degraded, or overlapping characters) can make the recognition process even more difficult.
(This work has been partially supported by the EEC MEPI initiative DIM-103 Handwritten Character Recognition for Banking Documents.)
The amount of computations required for a reliable recognition of handwritten text is therefore
very high, and real-time constraints can only be satisfied either by using very powerful and
expensive processors, or by developing ad-hoc VLSI devices.
To cope with the tight cost and size constraints we had, we decided to develop HACRE, a massively parallel VLSI image processor with an instruction set dedicated to traditional image processing (such as filtering and image enhancement), mathematical morphology [5] (such as opening, closing, and skeletonization), and several types of neuro-fuzzy networks [6] (such as perceptrons, self-organizing maps, cellular networks, and fuzzy systems). We have tailored the recognition algorithms to the architecture's capabilities.
HACRE is based on PAPRICA-3, a dedicated architecture derived from enhancements of previous works [8, 9, 10]. It is designed to be used both in a minimum-size configuration (consisting of 1 chip, 1 external RAM, plus some glue logic for microprocessor interfacing) and in larger-size configurations (consisting of as many cascaded chips as required, as many external RAM's, some additional global interconnection logic, and a host interface, typically PCI).
A configurable PCI board hosting up to four HACRE chips, together with the required
RAM chips (for image and program memories), camera and monitor interfaces, controllers, PCI
interface and all the required logic, has been designed, manufactured and successfully tested.
One such board, populated with two HACRE chips, plugged into a hosting PC, has been
used to build and test BEATRIX, a complete system for high-speed recognition of the amount
on banking checks, which mixes image processing algorithms, neural networks and a context
analysis subsystem [3].
The VLSI device and the overall system (board and PC interface) have been designed in parallel with the development of the algorithms (BEATRIX), so that the hardware and software designs have influenced each other very much. This resulted in an efficient yet flexible implementation for a wide class of applications.
Section 2 describes the driving application, while Sections 3 and 4 describe the PAPRICA-3 architecture and the HACRE chip, respectively. Section 5 describes the PCI board, while Section 6 briefly describes the BEATRIX check recognizer. Finally, Section 7 gives the measured performance of the system and compares it with that of a commercial PC.
2 Driving Application
The aim of our work was to recognize real-world checks, where handwriting is assumed to be
unboxed and usually unsegmented, so that characters in a word may touch or even overlap. The
amount is written twice: the legal amount (namely, the literal one), and the courtesy amount
(namely, the numerical one).
The two fields are placed in well-known areas of the check, and an approximate localization
of these two areas can be obtained from the information contained in the code-line printed at
the bottom of the check.
2.1 Application Requirements
Our aim was to cope with the tight cost and performance requirements which might make our
system commercially relevant. From a preliminary system analysis we pointed out the following
requirements: average processing speed of at least 3 checks per second (5 per second, peak),
with a rejection rate significantly lower than 5% and an error rate approaching zero. This
corresponds to a sustained recognition speed of about 500 characters per second (both digits and
alphanumeric) and a character accuracy rate in excess of 99%.
Cost requirements imposed an upper bound of 50 US$ per chip, with a power dissipation of at most 1 W per chip. In addition, one particular application in a low-speed but low-cost system called for a single-chip embedded solution.
As far as the host processor is concerned, we selected a commercial PC with a standard PCI
bus. The operating system could either be Microsoft Windows or Linux.
The choice of such a general-purpose hybrid architecture (a PAPRICA-3 system tightly
interconnected to a host PC) was driven by the observation that in many image processing
systems the first processing steps (at the bitmap level) mostly require simple operations on a
large amount of data, whereas, as processing proceeds, less and less data requires more and more
complex operations. In our approach, PAPRICA-3 is tailored to fast and repetitive processing of the elementary pixels of an image, while the PC is best used for more complex, symbol-oriented operations on a much reduced amount of data. We feel that such a cooperation between the two systems can provide the best cost-performance ratio.
2.2 Chip Requirements
The HACRE chip has been designed bearing in mind the following application-dependent constraints/requirements:
• cascadability: HACRE implements a slice of 32 Processing Elements (PE's) (all that could fit in 100 mm²), but additional chips can be cascaded and connected to as many RAM chips;
• neural network mapping: one PE per neuron, to achieve the highest efficiency;
• a simple 1-bit Processing Element; operations with higher resolution can be computed by means of a bit-serial approach; this provided the best complexity/flexibility/performance trade-off among all the architectures which were analyzed;
• provisions for image processing, mathematical morphology [5], neural network emulation [6], image acquisition, etc.;
• an easy interface with a host processor (possibly a PC), to improve overall system performance;
• the highest degree of programmability, while keeping the hardware complexity at a low level; this has been achieved by letting the host PC (instead of HACRE) perform a number of operations which normally occur at a lower rate;
• both "local" and "global" instructions; the former are used to implement a sort of pixel neighborhood, while the latter are available to compute global operators, such as summations, maxima, minima, winner-takes-all, etc.;
• provisions for simple handling of external look-up tables (external RAM's or ROM's);
Figure 1: General architecture of the Processor Array.
• a set of external "status registers" where HACRE can accumulate neuron outputs and which can be read (or written into) in parallel by the host PC;
• a set of direct binary I/O channels (6+6) through which HACRE can either interrupt the host PC or activate stepper motors, CCD cameras, etc.
3 Architecture Description
PAPRICA-3 is the latest component of a family of massively parallel processor architectures
[8, 9, 10] designed at the Politecnico di Torino. As shown in Figure 1, the kernel of PAPRICA-3
is composed of a linear array of identical Processing Elements (PEs) connected to a memory
via a bidirectional bus.
The memory stores the image and the intermediate results of the computation and is organized
in words (each one associated to an address) whose length matches that of the array of
processors. Each word in the memory contains information relative to one binary pixel plane
(also called layer) of one line of an image. Because the width of the bus that connects the array
and the memory is the same as the number of processing elements (and therefore it is the same
as the word length), a single memory cycle is required to load or store an entire line of the image
to/from the PE's internal registers.
A Control Unit executes the program stored in the instruction memory (called the Writable Control Store) and generates the signals that control the operations of the processing elements, the image memory, and the other architectural elements that will be described below. The typical flow of operation consists of first transferring one line of the image from the memory to the array, then processing the data, and finally storing back the results into memory, according to a LOAD - EXECUTE - STORE processing paradigm typical of RISC processors. The same cycle is repeated for each line until the entire image has been processed.
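As an informal illustration (with invented names and sizes, not taken from the actual instruction set), the LOAD - EXECUTE - STORE cycle over a bit-plane memory could be modeled in Python as follows:

import numpy as np

N_PE = 32                                  # one PE per pixel of a line
N_LINES = 100
# Image memory: one word per line of each binary pixel plane (layer).
memory = {0: np.random.randint(0, 2, (N_LINES, N_PE), dtype=np.uint8),
          1: np.zeros((N_LINES, N_PE), dtype=np.uint8)}

def run(src_layer, dst_layer, op):
    for line in range(N_LINES):
        reg = memory[src_layer][line].copy()   # LOAD: one word = one line
        reg = op(reg)                          # EXECUTE: all PEs in parallel
        memory[dst_layer][line] = reg          # STORE: write the line back

run(0, 1, lambda r: 1 - r)                     # e.g. invert the binary plane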
When executing a program, the correspondence between the line number, the pixel plane of a given image, and the absolute memory address is computed by means of data structures called Image Descriptors. An Image Descriptor is a memory pointer that consists of two parts: a base address, which usually represents the first line of the image, and two line counters, which can be reset or increased by a specified amount during program execution and are used as indices to scan different portions of the image.
The instruction set includes several ways to modify the sequential flow of control. Branches
can be taken unconditionally or on the basis of conditions drawn over the control unit internal
registers. In addition, any instruction can be prefixed by an enabling condition. One register in the control unit is dedicated to the implementation of hardware loops: given the iterative nature of the algorithms employed in image processing, this feature greatly enhances the performance of the architecture. Two additional conventional counters can be used as indices in the outer loops; instructions are provided to preset, increase, and test their value.
3.1 Processing Elements
Each PE is composed of a Register File and a 1-bit Execution Unit, and processes one pixel
of each line. The core of the instruction set is based on morphological operators [5]: the result
of an operation depends, for each processor, on the value assumed by the pixels of a given
neighborhood, which in the case of PAPRICA-3 is a reduced 5 × 5 box, as sketched by the grey squares in Figure 1. The morphological function can be selected by changing the value of a template which encodes, for each pixel of the neighborhood, the kind of required boolean combination.
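A minimal software model of such a template-driven morphological operation (the 3-valued encoding below is an assumption for illustration, not the chip's actual template format):

import numpy as np

def match(image, template):
    # template: 5x5 list of 0, 1 or None (don't care); output is 1 where
    # the pixel's 5x5 neighborhood matches the template.
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            out[y, x] = int(all(
                template[i][j] is None
                or image[y - 2 + i, x - 2 + j] == template[i][j]
                for i in range(5) for j in range(5)))
    return out

# Example template: detect isolated foreground pixels (all 8-neighbors 0).
DC = None
tpl = [[DC] * 5,
       [DC, 0, 0, 0, DC],
       [DC, 0, 1, 0, DC],
       [DC, 0, 0, 0, DC],
       [DC] * 5]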
The instruction set also includes logical and algebraic operations (AND, OR, NOT, EXOR, etc.), which can be used either to match input patterns against predefined templates or to compute algebraic operations such as sums, differences, multiplications, etc. As PE's are 1-bit computing elements, all algebraic operations have to be computed using a bit-serial approach.
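For instance, a bit-serial addition can be sketched as follows: the operands are stored LSB-first as binary layers, and each (conceptual) PE keeps a 1-bit carry while the layers are combined one per step (a software illustration, not the actual microcode):

import numpy as np

def bitserial_add(a_layers, b_layers):
    # a_layers/b_layers: lists of binary layers, least significant bit first;
    # each layer holds one bit of every pixel, so PEs work in lockstep.
    carry = np.zeros_like(a_layers[0])
    out = []
    for a, b in zip(a_layers, b_layers):
        out.append(a ^ b ^ carry)              # per-PE full-adder sum bit
        carry = (a & b) | (carry & (a ^ b))    # per-PE carry bit
    out.append(carry)                          # final carry layer
    return out

a = [np.array([1, 0, 1]), np.array([1, 1, 0])]  # pixel values 3, 2, 1
b = [np.array([1, 1, 1]), np.array([0, 0, 0])]  # pixel values 1, 1, 1
print(bitserial_add(a, b))                      # sums 4, 3, 2, bit by bit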
For each Processing Element, the value of the pixels located in the East and West directions (either 1 or 2 pixels away) is obtained by a direct connection to the neighboring PE's, while the value of the pixels in the North and South directions corresponds to that of previously processed or still-to-be-processed lines.
To obtain the outlined neighborhood in the chip implementation, a number of internal registers (16 per each PE, at present), called Morphological Registers (MOR), have a structure which is more complex than that of a simple memory cell, and are actually composed of five 1-bit cells with a S→N shift register connection. When a load operation from memory is performed, all data are shifted northwards by one position and the south-most position is taken by the new line from memory. In this way, data from a 5 × 5 neighborhood are available inside the array for each PE, at the expense of a two-line latency. A second set of registers (48 per each PE, at present), called Logical Registers (LOR), is only 1-bit wide and is used for logical and algebraic operations only.
3.2 Video Interface
An important characteristic of the system is the integration of a serial-to-parallel I/O device, called Video InterFace (VIF), which can be connected to a linear CCD array for direct image input (and, optionally, to a monitor for direct image output). The interface is composed of two 8-bit shift registers which serially and asynchronously load/store a new line of the input/output image during the processing of the previous/next line. Two instructions activate the bidirectional transfer between the PE's internal registers and the VIF, ensuring also proper synchronization with the CCD and the monitor.
Figure 2: Inter-processor communication mechanisms: a) Status Evaluation Network; b) Inter-processor Communication Network.
3.3 Inter-processor communication
Two inter-processor communication mechanisms are available to exchange information among
PE's which are not directly connected.
The first mechanism consists of a network (called the Status Evaluation Network), shown in Figure 2a, which spans the extent of the array; each processor sends the 1-bit content of one of its registers, and the global network provides a Status Word which summarizes the status of the array. The word is divided into two fields: the first field is composed of two global flags,
named SET and RESET, which are true when the contents of the specified registers are all '1's
or all '0's, respectively; the second field is the COUNT field which is set equal to the number
of processing elements in which the content of the specified register is '1'. This inter-processor
communication mechanism can be used to compute global functions such as maxima, minima
(e.g., for emulation of fuzzy systems), logical OR and AND of boolean neurons, neighborhood
communications, neuron summations in perceptrons, external look-up tables, winner-takes-all,
seed propagation algorithms, and many others.
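In software terms, the reduction performed by the network can be sketched as follows (an illustrative model; register selection and masking are omitted):

import numpy as np

def status_word(bits):
    # bits: the 1-bit contribution of every PE (one register bit each).
    return {"SET": int(bits.all()),        # all contents are '1'
            "RESET": int(not bits.any()),  # all contents are '0'
            "COUNT": int(bits.sum())}      # number of PEs holding '1'

r = np.array([0, 1, 1, 0, 1], dtype=np.uint8)
print(status_word(r))                      # {'SET': 0, 'RESET': 0, 'COUNT': 3}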
This global information may also be accumulated and stored in an external Status Register
File and used for further processing, or to conditionally modify program flow using the mechanism
of prefix conditions. Status Registers can also be read by the host processor. For instance,
Status Registers have been used in the example of Section 2 to implement a neural network by
computing the degree of matching between an image and a set of templates (weight and center
matrices).
The second communication mechanism is an Inter-processor Communication Network, shown
in Figure 2b, which allows global and multiple communications among clusters of PE's. The
topology of the communication network may be varied at run-time: each PE controls a switch
that enables or disables the connection with one of its adjacent processors. The PE's may thus
be dynamically grouped into clusters, and each PE can broadcast a register value to the whole
cluster with a single instruction. This feature can be very useful in algorithms involving seed-propagation techniques, in the emulation of pyramidal (hierarchical) processing, and for cellular neural networks or local communication (short-range neighborhoods).
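A toy model of this run-time clustering (assumed semantics: switch[i] set closes the link between PE i and PE i+1, and each cluster holds one broadcasting source):

def cluster_broadcast(values, is_source, switch):
    # PEs on a linear array; runs of closed switches form clusters, and the
    # source value of each cluster is broadcast to all of its members.
    n, out, start = len(values), list(values), 0
    for end in range(n):
        if end == n - 1 or switch[end] == 0:      # cluster boundary
            srcs = [values[i] for i in range(start, end + 1) if is_source[i]]
            if srcs:
                for i in range(start, end + 1):
                    out[i] = srcs[0]
            start = end + 1
    return out

# PEs 0-2 form one cluster, PEs 3-4 another.
print(cluster_broadcast([9, 0, 0, 0, 7], [1, 0, 0, 0, 1], [1, 1, 0, 1]))
# -> [9, 9, 9, 7, 7]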
3.4 Host interface
A Host Interface allows the host processor to access the WCS and a few internal configuration
registers when HACRE is in STOP mode. The access is through a conventional 32-bit data bus
with associated address and control lines. The same lines are controlled by HACRE in RUN
mode and used to access the private external Image Memory.
Some additional control and status bits are used to exchange information with the host
processor: these include a START input line and a RUNNING output line, plus another six
input and six output lines called Host Communication Channels (HCC's). HCC input lines
can be tested during program execution to modify program flow, while HCC output lines can
be used as flags to signal certain conditions to the host processor (for instance, interrupts).
4 The HACRE Chip

The kernel of the hardware implementation of the PAPRICA-3 system is the HACRE integrated circuit, whose block diagram is shown in Figure 3.
The main components are:
• The Processor Array (PA), which is composed of 32 processing elements; the internal architecture and the main features of the PA and of the PE's are described in detail in Section 4.1.
Figure 3: Block diagram of the chip.
The PA has a direct and fast parallel communication channel towards the Image Memory (IM), which in the current implementation is external to the chip. The decision to keep the IM outside of the chip was a very critical issue. Direct and fast access to a large internal memory would have made it possible to execute a LOAD or STORE operation at each clock cycle, with the same timing as other instructions and with a high processing throughput; on the other hand, the cost of implementing a large memory with a standard CMOS technology would have been too high. In fact, architectures such as the IMAP system [15], which have large on-chip data memories, employ dedicated, memory-oriented fabrication processes. Therefore we decided to implement the IM external to the chip and to reduce the processing overhead by heavily pipelining internal operations, which allows a memory access to partially overlap with the execution of subsequent instructions.
• The Control Unit (CU), which executes the instructions by dispatching the appropriate control signals to the PA. The choice of implementing the CU on the same chip as the PA is a key characteristic of the PAPRICA-3 architecture with respect to other mono- and bi-dimensional SIMD machines [11, 12, 14, 15, 16], in which the controller is a unit which is physically distinct from the array. In that case the maximum clock speed is limited to a few tens of MHz by the propagation delay of the control signals from the controller to the different processing units. This may be a non-critical limit in systems with a large number of PE's but, since our application was aimed at real-time embedded systems, we preferred to push the performance of even single-chip systems to the limit by integrating the CU with the array. This means that in multiple-chip systems the CU is replicated in each chip, with an obvious loss in silicon area; in our case this has been considered a small price to pay with respect to the possible increase in performance, since our design goal was an operating frequency of 100 MHz. Section 4.2 analyzes in detail the design choices of the CU and its internal architecture.
• The Writable Control Store (WCS, 1K words × 32 bits), in which a block of instructions is stored and from which the CU fetches instructions. The choice of a WCS is a compromise between different constraints. First, the goal of executing one instruction per cycle required the Instruction Memory to reside on the same integrated circuit as the CU. Since the amount of memory which may be placed on a chip with a standard CMOS technology is limited, the optimal solution would have been a fast associative cache. A preliminary feasibility study showed that the cache approach would have been too expensive in terms of silicon area and too slow to match the target 10 ns cycle time. Most image processing algorithms consist of sequences of low-level steps, such as filters, convolutions, etc., to be performed line by line over the whole image, which means that the same block of instructions has to be repeated many times. Hence we chose to pre-load each block of instructions into the WCS, a fast static memory, and to fetch instructions from there. After the instructions of one block have been executed for all the lines of an image, a new block of instructions is loaded into the WCS. If the ratio between the loading time and the processing time is small, then the performance of a fast cache with a hit ratio close to 1 can be obtained, at a fraction of the cost and complexity.
• The ICN and SEN communication networks. The former is fully distributed in the PE's; the latter is composed of two parts: the first part is distributed and is the collection of EVAL units (see Section 4.1) integrated in the PE's, while the second one is centralized and is composed of one Count Ones unit and two 32-AND units for the evaluation of the SET and RESET flags.
Figure 4: Microphotograph of the chip.
• The Host Interface, which allows a host processor to access the WCS and a few internal configuration registers. Access is through a conventional 32-bit data bus with associated address and control lines. In addition, the interface handles the external protocol for the communication between the PA and the external Image Memory.
A microphotograph of the complete chip is shown in Figure 4. The chip has been implemented in a 0.8 μm, single-poly, dual-metal CMOS technology and has a total area of 99 mm².
Multiple chips may be cascaded to build systems with a larger number of processing elements,
as explained in detail in Section 5.
Figure 5: Block diagram of the processing element.
4.1 Processing Array
The Processing Array has been implemented using a full custom design style in order to take
advantage of its regular structure and optimize its area. In fact each Processing Element is a
layout slice and, since all PE's operate according to a SIMD paradigm, the control signals are
common to all of them and may be broadcast by simple abutment of PE cells. Unlike the block
diagram of Figure 3 where the array is shown as a single entity, in the implementation
the array is divided into two 16-PE sub-blocks which are clearly visible at the top and at the
bottom of the photograph of Figure 4. In this way the capacitive load and the peak current on the control lines are reduced and the delay is optimized; in addition, each 16-PE block has a
better aspect ratio and is more easily routed by automatic tools. The block diagram of a PE
is shown in Figure 5. Its main components are the Register File (RF) and the Execution Unit
(EU).
The RF, introduced in Section 3.1, is composed of two sections, corresponding to the 48
LOR registers and to the 16 MOR registers. Address decoding is centralized for each
block in order to optimize its area and the decoded selection lines are run over the whole block.
LOR's are implemented as 3-port static RAM cells, allowing the simultaneous execution of 3 operations (2 reads and 1 write) on the RF at any given time.
Figure 6: Structure of a MOR register.
Each MOR register is composed of five 1-bit RAM cells, as shown in Figure 6. The central
cell is similar to a LOR cell with 3 ports, while the other four are static RAM cells with a single
read port. In addition all cells have an input port (SH) which allows data to be shifted from
each cell to its right neighbor in a single clock cycle.
When executing a LOAD operation between the Image Memory or the VIF and a LOR register, the value of the 1-bit pixel is simply transferred into the register through one of the ports. When the same operation is performed on a MOR, the value from the memory (or the VIF) is loaded through the SH port into the leftmost cell and the contents of all the other cells are shifted one position to the right. In this way the central cell contains the current pixel of the image, the right cells the north neighbors at distance 1 and 2, and the left cells the south neighbors. When a read operation is executed on port 1 of a MOR, the values of all cells are sent to the EU for the execution of neighborhood-based operations. The execution time of a read/write operation on the RF is in the worst case lower than 10 ns, and the data is latched at the output by the global system clock.
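Seen across the whole PE row, one MOR register therefore behaves like a 5-line window over a binary plane; a minimal Python model of this behavior (illustrative only):

import numpy as np

class MOR:
    # A 5-line window of one binary plane: cells[0] is the line two rows
    # above the current center line, cells[4] two rows below it.
    def __init__(self, n_pe):
        self.cells = np.zeros((5, n_pe), dtype=np.uint8)

    def load(self, line):
        self.cells[:-1] = self.cells[1:]   # shift one position (northwards)
        self.cells[-1] = line              # new line enters at the south

    def center(self):
        return self.cells[2]               # note the two-line load latency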
The register file of each PE also integrates one stage of the VIF. From a global point of view, the VIF is a shift register with a number of stages equal to the number of PE's and with a 16-bit word width. It is divided into an Input and an Output section, each 8 bits wide, which are connected respectively to a pixel-serial input and output device and may be clocked independently.
The operations of the VIF and of the Processor Array are independent and may proceed in
parallel, overlapping I/O and processing. However they synchronize with each other when one
of the two following events takes place:
• A full line of input data has been shifted into the Input section and the CU has fetched a Load From Camera instruction. In this case, in each PE the 8 bits of the Input section of the local VIF stage are transferred to the south-most position of the first 8 MOR registers.
• A full line of data has been shifted out of the Output section of the VIF and a Store To Video instruction has been fetched by the CU. When this happens, the values of the first 8 LOR registers are transferred in parallel into the Output section of the local stage of the VIF.
In order to minimize the transfer time and the interconnection area required, the Input and
Output sections of the local VIF stage have been implemented directly inside the RF, close to
the corresponding LOR and MOR registers.
The EU performs the operations corresponding to the instructions of the program on the
data from the RF, under the control of the CU. It is composed of 4 blocks:
• The LOP unit, which is responsible for logical and arithmetic operations. It has been synthesized using a standard-cell approach, and the final layout has been routed manually to optimize the area.
• The MATCH unit, which is responsible for MATCH operations, the basis of all morphological and low-level image processing operations. Its basic function is a comparison between a given neighborhood of the pixel and a vector of 3-valued templates (0, 1, and don't-care) provided by the CU. The neighborhood of the pixel is obtained from the contents of a MOR register of the PE and from the four neighboring PE's. In order to reduce the silicon area and execute the operation in a single clock cycle, the unit has been implemented with custom dynamic logic.
• The COMM unit, which implements in standard cells one section of the ICN communication network depicted in Figure 2b.
• The EVAL unit which, when the corresponding instruction is executed, takes the contents of one register of the PE, masks it with the contents of another register, and sends the result to the centralized part of the SEN for the evaluation of the different fields of the Status Word.

Figure 7: Layout of a processing element.
As explained in the next section, the Control Unit is able to concurrently execute more than one instruction, which may in turn activate different functional units in the PE's. Hence, in
order to obtain a correct execution, a data pipeline that reflects that of the CU had to be put
in place to separate the units in the data path.
Figure 7 shows the layout of one PE, which occupies approximately 1 mm² of silicon area. As is clearly visible, all functional units have the same vertical dimension (horizontal in Fig. 4). Control signals run in the vertical direction across the PE, which may be connected by abutment to its neighbors. The layout also shows how tightly the register file and the VIF I/O structure are integrated in order to obtain a high throughput in I/O operations.
4.2 The Control Unit
As already mentioned, PAPRICA-3 exploits both spatial parallelism, due to its massively parallel architecture, and instruction-level parallelism, due to the pipelined design of the control unit.
Because of the very different nature of the instructions executed by the array and those executed
directly by the control unit, the pipeline had to be designed with particular care both in its
architecture and its implementation in order to obtain the best trade-off between complexity
and performance.
In HACRE the Control Unit is located on each chip of a multi-chip implementation, together with the WCS, thus decentralizing the issuing of control signals. The WCS is loaded only once per algorithm, while each instruction is usually executed many times (often thousands of times) per image. In most cases the overhead is thus reduced by orders of magnitude. The drawback of such an implementation is the chip area spent on the duplicated logic: while this is still significant in the technology used for the design (0.8 μm), we anticipate a lower impact in deep sub-micron technologies, where the interconnections, and not the logic, play the major role.
The main drive in the design of the pipeline is to obtain the highest performance for those sequences of instructions that are most used in image processing algorithms; these include, among others, sequences of morphological operations, loops over the entire image, and bit-serial computations. Although the relative frequency of global and scalar instructions is far lower than that of array instructions, their sometimes long latency, due to their global nature, may impose severe constraints on the execution of other instructions, negatively impacting the overall performance; neglecting their execution would therefore certainly lead to a sub-optimal design.
For these reasons the pipeline has been logically divided into two sections which have been
partially collapsed in the implementation to better exploit the hardware resources. The first
section deals with scalar and global operations and has been designed as a conventional mono-
functional queue; the second section controls the execution of array operations and employs a
more complex multi-functional pipeline (which increases instruction level parallelism) and a more
sophisticated conflict resolution mechanism. Despite the implementation as a single continuous
structure, the two sections are designed to operate independently and they synchronize by means
of mutual exclusion on shared resources during the execution of instructions that involve both
array and global or scalar operations.
4.2.1 Pipeline implementation
Figure 8 shows the first section of the pipeline together with the supporting registers. Dotted lines represent data flow, while block lines represent the flow of the instruction codes.
The first stage of the pipeline (IF) is responsible for loading the instructions from the
program memory. In order to achieve a high clock rate in spite of a relatively slow memory,
instructions are fetched two at a time (a technique known as double fetch). The entire mechanism
is handled by this stage, so that to the rest of the pipeline it appears as if the memory were able
to deliver one instruction per clock cycle.

Figure 8: First section of the pipeline.
The drawback of this technique is an increased penalty due to control conflicts that require invalidating the initial stages of the pipeline, for example when a branch is taken (this problem is typical of superpipelined architectures where the queue is very deep). If the loop control in bit-serial and morphological computations were subject to this problem, the benefits obtained by the higher clock rate achievable with the double-fetch technique would certainly be offset by the increased penalty. For this reason, the stage J of the pipeline is dedicated to the support of a hardware loop mechanism that controls the program counter and minimizes these negative effects. When the number of instructions in the loop is even (or if there is only one instruction), the pipeline is able to deliver one instruction per clock cycle with no control overhead. In all other cases the penalty is just one clock cycle per iteration. In the ID stage the scalar instructions get executed and leave the pipeline, while the array instructions are dispatched to the AR queue, which computes the effective addresses for the internal registers and the image memory.
Figure 9: Second section of the pipeline.

Figure 9 shows the second section of the pipeline. The stage CD is dedicated to conflict resolution, as will be explained later. The rest of the pipeline is a multi-functional queue, where each branch of the queue is dedicated to a particular feature of the array. In particular, SE controls the Status Evaluation; CO the inter-processor communication mechanism; the
sequence OR-BP-EX-OW executes the array operations by first reading the internal registers,
then propagating the data to the neighborhood, then computing the result of the operation and
finally writing back the result into the internal registers; MU handles the access to the image memory; and finally SL-LV and SS-SV are the paths followed by the instructions that control the VIF.
Because of the multi-functional nature of the pipeline and the presence of different execution
stages (including ID for the scalar instructions), it is possible that instructions be executed out of
order (although the architecture still issues no more than one instruction per clock cycle). This
feature enhances temporal parallelism and therefore performance. In addition, as mentioned
earlier, the first and the second sections of the pipeline are decoupled, so that a stall in the
second doesn't halt the execution of scalar instructions in the first. By doing this, the impact
of the sequential part of the algorithm is minimized.
4.2.2 Conflict Resolution
The concurrent execution of instructions gives rise to conflicts in the pipeline that must be
resolved in the most efficient way. In this section we will present the mechanism employed to
resolve conflicts involving the internal registers. In a multi-functional queue there are essentially
two ways to detect a conflict condition:
F. Gregoretti et al.: A High Speed VLSI Architecture for Handwriting Recognition
Scoreboard method: In the scoreboard method, all conflicts are resolved in a single pipeline
stage. A table, called the scoreboard, contains an entry for each resource in the architecture
(e.g. registers). Every instruction, before entering the execution stages, verifies the
availability of the source and destination operands, and once the permission is obtained,
it gets possession of them by registering with the scoreboard. The resources are released
only after the execution is complete.
Tomasulo algorithm: In the Tomasulo algorithm the conflict resolution is distributed in the
pipeline. For example, the stage responsible for reading the operands checks their availability, as does the stage that writes them. Since the resolution is distributed, resources are
reserved only when they are actually needed, so that an instruction is allowed to proceed
even if only part of the required operands are available.
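A minimal sketch of the first scheme, a plain scoreboard (simplified; the real unit tracks many more resources than this toy register set):

class Scoreboard:
    def __init__(self):
        self.busy = set()                       # registers owned by in-flight ops

    def try_issue(self, srcs, dsts):
        if self.busy & (set(srcs) | set(dsts)):
            return False                        # conflict: instruction stalls
        self.busy |= set(dsts)                  # take possession of destinations
        return True

    def retire(self, dsts):
        self.busy -= set(dsts)                  # release when execution completes

sb = Scoreboard()
assert sb.try_issue(["R1"], ["R2"])             # issues
assert not sb.try_issue(["R2"], ["R3"])         # hazard on R2: stalls
sb.retire(["R2"])
assert sb.try_issue(["R2"], ["R3"])             # now proceeds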
The advantage of the scoreboard method is a much simpler implementation, which is traded off against the higher throughput achievable with the Tomasulo algorithm. HACRE adopts a
combination of the two approaches. In a traditional implementation the scoreboard is basically
a map where a set of flags signals the availability of a particular register for reading or for
writing. Given the number of registers in our architecture, the distributed access to this map
that would be used in the Tomasulo algorithm results in a big and slow implementation. Our
solution is to build a map that does not represent the registers, but rather the interaction among
the instructions in the different stages of the pipeline.
Once in the CD stage, each instruction compares the effective addresses of its operands to
those of the instructions in the following stages of the pipeline (in the picture, those to the
right); in doing so, the instruction builds a map where each stage of the pipeline is marked with
a flag: an active flag in the map means that there is a potential conflict between the instruction
which owns the map and the instruction in the stage of the pipeline corresponding to the flag.
The conflict is only potential because the instruction that builds the map has yet to be routed
to the stages where the operands are needed: at that later time, the instruction that gives rise
to the conflict might have already left the pipeline.
submitted to Journal of VLSI Signal Processing 21
When our instruction leaves the CD stage, it brings the map with it; in addition, at each
cycle, the flags in the map are updated to reflect the change in state of the pipeline, i.e. each
flag is routed in the map to the stage where the corresponding instruction ends up. When an
instruction corresponding to a flag leaves the pipeline, the flag is cleared in the maps. Note that
each instruction has its own distinct copy of the map, reflecting the fact that conflicts are the expression of a precedence relation which involves two instructions at a time.
The data collected by each instruction in the map can then be used to establish whether
a certain operand is available at a certain time, thus implementing the Tomasulo technique.
Note however that by optimizing this approach not only is the collective size of the maps much
smaller than that of a register map (approximately one third in our case), but the update of
the flags is much simpler to implement than a multiple indexed access to the scoreboard in all
stages.
Assuming a 2 clock cycle access time to the image memory, the pipeline has been measured to
perform at about 2 clock cycles per instruction on real applications. Under these conditions, the
speed-up obtained with respect to a conventional, non-pipelined controller running at a similar
clock speed is close to 6. The use of more aggressive optimization techniques for the software
may further improve these numbers.
5 The PCI Board
We designed a PCI board hosting up to four HACRE chips, a large image memory, a program
memory, a system controller, the Status Register File and the PCI interface. A piggy-back
connector provides a direct interface to an input imaging device, such as a linear scanner or a
video camera and to an output device, such as a video monitor. Some bus drivers and the clock
generation and synchronization logic are also part of the board. A block diagram of the board
is depicted in Figure 10.
The Image Memory is a fast static RAM (15 ns access time), 256K × 32 × n bits, where n is the number of HACRE chips installed. One 32-bit memory module is associated with each of the chips. The memory is for the exclusive use of the array during program execution, while it is
memory-mapped on the PC through the PCI interface when the array is in STOP mode. Isolation is achieved by means of bidirectional drivers.

Figure 10: System block diagram.
The External Program Memory is a static memory module analogous to those used for the
Image Memory and is used to store the complete HACRE program. It is mapped on the PC
address space while in STOP mode. The System Controller autonomously transfers to the
internal Writable Control Store the correct instruction block during program execution, so that
a continuously repeated program, even if very long, can be downloaded from the PC only once.
The array outputs a completion word to the System Controller every time it completes the
execution of the active program block. The completion word contains a code that directs the
system controller to load a new block at a specific address or to signal program completion or
an anomaly to the PC.
The System Controller provides several functions. It contains a set of registers, memory-mapped on the PC address space and always accessible, that enable system management. It is also responsible for controlling the piggy-back board dedicated to image acquisition and display. When debugging an application it is sometimes desirable to test an algorithm on a well-defined set of images. To be able to do so without having to modify the HACRE program, the System Controller provides means of excluding the piggy-back I/O board and giving the PC access to the VIF. The PC writes data to a register and the System Controller shifts them into the VIF. The VIF output can also be redirected to a PC memory location for test purposes.
Different types of Video I/O Boards can be connected through the piggy-back video connector. This connector makes available to the interface board all the video signals from the VIF structure, the Host Communication Channels, and several control lines from the System Controller. We defined a simple interface both for digital video input from a camera or a scanner and for digital video output to a monitor. Input data are shifted into HACRE asynchronously, using a pixel clock provided by the video source. At each end of line, a handshake protocol ensures that no data are missed or overwritten. Another handshake protocol allows the interface to read the output video lines from the array as soon as they are available. A frame buffer memory, a DAC, and some simple scan logic are needed to provide an analog video signal for an output monitor.
The Status Register File (SRF) is used to accumulate and broadcast the global information collected by the Status Evaluation Network described in Section 3. When executing the EVAL instruction, every HACRE chip, using the Status Evaluation Network, calculates its Status Word, asserts a dedicated output line, and suspends program execution. The EVAL line of the first chip is connected to the System Controller. The System Controller, using a bus, reads the Status Word from every chip. The COUNT field is accumulated, while the SET and RESET fields are ANDed to detect the all-ones and all-zeros conditions. The address field of the EVAL instruction is then retrieved from the first chip and used to store the result in the corresponding Status Register. The result is also propagated to the chips, which resume program execution and can test the result in conditional instructions. The same basic mechanism is used to reset, accumulate, normalize, or read the contents of a specific register. A special SRF operation interrupts the PC in correspondence with a Status Register write, thus allowing non-linear or custom functions, implemented in software on the PC, to be used when accumulating values in a Status Register. The SRF is very useful in object classification algorithms. Every register stores the degree of matching of the image with a different template. By analyzing the SRF, the PC can interpret the image contents accordingly.
We designed the PCI board to cope both with the available chips, limited in frequency to 45-50 MHz, and with future ones running at 100 MHz. The clock is obtained from the system PCI clock. This was done to avoid synchronization problems between the PCI interface and the rest of the system, which have to interact. The array clock is derived from the PCI clock by a PLL multiplier. The PLL can generate 33, 50, 66, or 99 MHz frequencies (jumper-configurable) from the 33 MHz PCI clock, with a known phase relationship. This high-frequency signal feeds
the different HACRE chips. To be able to compensate for the different PCB line lengths of the clock signals, and to comply with the stringent requirements on clock loading and delay in the PCI specifications, we found the adoption of skew buffers very useful to control the clock signal feeding the different devices. These are PLL devices with four outputs at the same frequency as the input one, but with delays presettable via jumpers. In this way it is possible to compensate for PCB transmission delays. All clock signals are one-to-one connections to avoid multiple reflections. All lines longer than a few centimeters are series-terminated at the source to match the PCB line impedance. All high-frequency signals run on internal layers of the PCB, shielded by ground or power supply planes, to limit EMI.
The PCI interface has PCI target capabilities and conforms to PCI 2.1 specifications. All
of the board registers and memories are mapped on the PCI memory address space, to form a
contiguous 8 MB memory block. The board can interrupt the PC to signal error conditions or end of run, and to permit non-standard SRF operations. The PCI interface, the System Controller,
and the SRF were all implemented using a single FPGA (Altera 10K30), providing internal
memory blocks and the availability of a large number of gates. This solution was adopted for
both cost reasons and flexibility. A photograph of the PCI board is shown in Figure 11.

Figure 11: The PCI board.
6 Application Implementation
This section describes BEATRIX, an implementation of the proposed handwriting recognizer on the PCI board populated with: two HACRE chips (namely, 64 PE's); two high-speed RAM chips (for a total of 2 MB of image memory); 256 KWords of program memory; 256 Status Registers; 6+6 direct I/O channels; a direct interface to an image scanner; and a PCI interface to a hosting 100 MHz Pentium PC.
The system has been tested and Section 7 shows its performance. See also [3] for additional
details on the complete recognizer algorithm and performance.
6.1 System Description
BEATRIX integrates four logical subsystems in cascade [3]: a mechanical and optical scanner, to acquire a bit-map image of the check; an image preprocessor, for preliminary image filtering, scaling, and thresholding; a neural subsystem, based on an ensemble of morphological feature extractors and neuro-fuzzy networks, which detects character centers and provides hypotheses of recognition for each detected character; and a context analysis subsystem, based on a lexical and syntactic analyzer.
The neural subsystem carries out a pre-recognition of the individual characters, based on an
integrated segmentation and recognition technique [7].
Legal and courtesy amounts are preprocessed and recognized independently (at the character
level) and then the two streams of information are sent to the common context analysis
subsystem, which exploits all the mutual redundancy.
The context analysis subsystem combines the candidate characters and, guided by the mutual
redundancy present in the legal and courtesy amounts, produces hypotheses about the amount
so as to correct errors made by the neural subsystem alone.
The image preprocessor and the neural subsystem are executed on the PAPRICA-3 system (namely, the PCI board), while the context analysis subsystem is executed by an external Pentium processor, which can implement these types of algorithms more efficiently.
6.2 Image Preprocessor
The first subsystem is the image preprocessor, which consists of the blocks described below.
• A WINDOW EXTRACTOR acquires the input image from the SCANNER, at a resolution of approximately 200 dpi with 16 gray levels. The scanner is an 876-pixel CCD line camera scanned mechanically over the image, from right to left (due to practical reasons), at a speed of 2 m/s (which is equivalent to about 700 characters/s); Fig. 12.a. Image acquisition is performed by the VIF, in parallel with processing, and a whole image line is acquired in just one clock cycle.
• A FILTER block computes a simple low-pass filter with a 3 × 3 pixel kernel (Fig. 12.b), while a BRIGHTNESS block compensates for the non-uniform detector sensitivity and paper color (Fig. 12.c).
• A THRESHOLD block converts the gray-scale image into a B/W image by comparison with an adaptive threshold (Fig. 12.d).
• A THINNING block reduces the width of all strokes to 1 pixel (Fig. 12.f). Thinning is a morphological operator [5] which reduces the width of lines while preserving stroke connectivity.
• A BASELINE block detects the baseline of the handwritten text, which is a horizontal stripe intersecting the text in a known position (Fig. 12.g).
• A FEATURES block detects and extracts from the image a set of 12 stroke features, which are helpful for the subsequent character recognition. As shown in Fig. 12.h (crosses), this block detects the four left, right, top, and bottom concavities, and the terminal strokes in the eight main directions.
• A FEATURE REDUCTION, a ZOOM, and a COMPRESS block reduce, respectively, the number of features (by removing both redundant and useless ones), the vertical size of the manuscript (to approximately 25-30 pixels), and the overall size of the manuscript (by a linear factor of 2), by means of ad-hoc topological transformations which do not preserve image shape, although they do preserve its connectivity (Fig. 12.i).

Figure 12: Preprocessing steps of handwritten images (an "easy" example): a) original image, 200 dpi, 16 gray levels; b) low-pass filtered image; c) compensated for brightness; d) thresholded; e) after spot-noise removal; f) thinned, after 6 steps; g) finding the baseline (at the left side of the image); h) feature detection (features are tagged by small crosses); i) compressed.
After all the preprocessing steps, the B/W image is ready for the subsequent neural recognition steps (see Section 6.3). The image is reduced in size (down to 14 × 18 (= 252) pixels for the courtesy and the legal amounts), in number of gray levels (2), and in stroke thickness (1 pixel), and noise is removed. Table 1 lists the execution times of the individual blocks.
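For reference, the first two blocks can be approximated in a few lines of NumPy (the 3 × 3 kernel matches the FILTER block; the block-mean threshold rule below is only an illustrative stand-in for the adaptive threshold actually used):

import numpy as np

def lowpass_3x3(img):
    # Simple 3x3 mean filter over a gray-scale image.
    p = np.pad(img.astype(float), 1, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def adaptive_threshold(img, block=16, bias=2):
    # Compare each pixel with the mean of its block, minus a small bias;
    # 1 = ink (dark pixel), 0 = background.
    out = np.zeros(img.shape, dtype=np.uint8)
    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            tile = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile < tile.mean() - bias
    return out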
6.3 Neural subsystem
The neural subsystem is made of two cascaded subsystems, namely a CENTERING DETECTOR and a CHARACTER RECOGNIZER. See [3] for further details on the algorithms and the implementation of each block. Table 1 lists the execution times of the individual blocks.
• The CENTERING DETECTOR scans the preprocessed and compressed image from right to left (for mechanical reasons) and extracts a sliding window of fixed size. It then tries to locate the characters by detecting the position of their centers, based on the type, quantity, and mutual positions of the detected features. Note that windows without strokes are immediately skipped, as they contain no useful information. Lines can be skipped in as little as one clock cycle.
• The CHARACTER RECOGNIZER recognizes each individual pseudo-character using a hybrid approach which mixes feature-based [2] and neural [1] recognizers.
First of all, features extracted by the FEATURES block are used to identify all easy-to-recognize
characters. For instance, most "0", "6", "9" digits (but not only these) are written well enough
that a straightforward and fast analysis of the main features and strokes is sufficient to recognize
those characters with high accuracy.
Other characters are more difficult to recognize using only features; for instance, digits "4",
"7" and some types of "1" can be recognized more easily using neural techniques. All characters
which have not been recognized using features are isolated and passed to a neural network (an ad-hoc 2-layer WRBF [4]) trained on an appropriate training set.
Therefore the CHARACTER RECOGNIZER is made of two blocks, namely a feature-based recognizer
and a neural recognizer, each one optimized to recognize a particular subset of the whole
alphabet.
The CHARACTER RECOGNIZER is "triggered" for each pseudo-character center detected by the
CENTERING DETECTOR. As shown in Table 1, the CHARACTER RECOGNIZER is the slowest piece
of code. Fortunately it is run at a relatively low rate, namely once every 15 lines on average, so its effect on computing time is limited.
7 Performance

Table 1 lists the execution times of the various processing blocks for the example presented in
Section 2; figures are given for a system with 64 PE's (namely, 2 chips), running at 33 MHz. All
the programs were also tested on both a Pentium at 100 MHz and a Sparc Station 10, using
the same algorithms based on mathematical morphology, which are well suited to the specific
problems of bitmap processing and character recognition.
Some programs (FILTER, BRIGHTNESS, THRESHOLD, ZOOM, CENTERING DETECTOR, CHARACTER
RECOGNIZER) could be implemented more efficiently on a sequential computer using more traditional
methods (ad-hoc programs). These were also implemented on the Pentium and their performance is listed in Table 1 for comparison.
It can be seen that PAPRICA-3 is 100 to 1000 times faster than the 100 MHz Pentium and the Sparc Station for nearly all the programs considered. This improvement factor is reduced by at most a factor of five when a 500 MHz Pentium is used.
For further comparison, Table 2 lists the execution times of a single-chip PAPRICA-3 system
running at 100 MHz, for other well-known neural algorithms such as Multi-Layer Perceptrons (MLP) and
Kohonen maps [6]. As all mathematical operations are implemented in a bit-serial fashion,
system performance depends heavily on input and weight resolution. Furthermore, the best
performance can be obtained when the number of either neurons or inputs matches the number
of PE's.
The overall recognition accuracy of BEATRIX (not quoted here; see [3]) is comparable with that of other existing handwriting recognition systems, but our system achieves it at a much lower price.
                        PAPRICA-3 (worst case)    Pentium 100 MHz          Sparc 10
                                                  morphol.     ad-hoc      morphol.
Image preprocessor      μs/line     ms/check      μs/line      μs/line     μs/line
BRIGHTNESS                5.90        11.8          1,970        320         1,660
FEATURES                  6.10        12.2          9,490         -         10,430
ZOOM                      4.48         8.96           820        160           760
COMPRESS †               61.6        123.2         21,350         -         24,950

                        PAPRICA-3 (worst case)    Pentium 100 MHz
                                                  morphol.     ad-hoc
Neural subsystem        ms/psd-char  ms/check     ms/check     ms/check
RECOGNIZER (NEURAL) ††    3.36       161.4            -         13,440
TOTAL RECOGNIZER         11.4        545.4         27,840

Table 1: Average execution times of the various processing steps, while processing the courtesy amount. † COMPRESS acts on an image zoomed by an average factor 3.2, therefore processing times are scaled accordingly. †† The CHARACTER RECOGNIZER acts a few times per each pseudo-character, namely once every 15 lines on average.
Table 2: Performance of the BEATRIX system in single-chip configuration, running at 100 MHz, with either internal weights (max. 60 bits/neuron) or external weights (no size limitation). MCPS and MCUPS stand for, respectively, mega connections per second and mega connection updates per second. The table reports MCPS and MCUPS figures, with internal and external weights, for MLP with adaptive learning rate and for Kohonen maps (8 inputs, 8 bits/input, 5 × 5 neighborhood).
8 Conclusion
From the hardware implementation point of view, not all the original goals have been reached. In particular, all the main full-custom blocks (memory, PA) have been designed and verified by simulation to operate within the target clock cycle in the worst case, but the whole chip is fully functional only up to a maximum frequency of 50 MHz. This is due to strict deadlines on the availability of funds for chip fabrication, which reduced the time available for the optimization of the CU layout on the basis of back-annotated simulation.
Moreover, as clearly visible in the microphotograph of Figure 4, a large portion of the chip area is wasted. This is mainly due to the limitations of the tools employed in the placing and routing phase of the CU, which was synthesized into a standard cell library from an HDL description. This has led to a large increase of the ratio between the CU area and the PA area. Preliminary tests with new tools have shown that the current layout size could be reduced by approximately 15%. This would make it possible to place on board a single-chip system an integrated version of the Status Register File [17], which has been designed in order to minimize the components of a single-chip system.
With the current technological evolution of VLSI circuits, preliminary evaluations made with a 0.35 μm technology have shown that a single-chip system could integrate 64 PE's, the SRF and 64 Kbit of image memory, making a fully integrated system for handwritten character recognition possible.
As far as system performance is concerned, the BEATRIX system has shown that the proposed PAPRICA-3 architecture, even in a medium-size configuration, outperforms the Pentium processor by much more than a factor of ten. In addition, the recognition accuracy of BEATRIX is comparable with that of other, much more expensive systems.

Finally, the development environment and the image processing language (not described here), which have been developed explicitly for PAPRICA-3, allow a straightforward design of new mathematical morphology and image processing algorithms, reducing the design time of new algorithms.
References

- "Off-line cursive script word recognition"
- "Invariant handwriting features useful in cursive-script recognition"
- "High-Speed Recognition of Handwritten Amounts on Italian Checks"
- "Weighted Radial Basis Functions for Improved Pattern Recognition and Signal Processing"
- "Image Analysis and Mathematical Morphology"
- "Neural Networks: A Comprehensive Foundation"
- "A Chip Set Implementation of a Parallel Cellular Architecture"
- "Design and Implementation of the PAPRICA Parallel Architecture"
- "Implementation of a SliM Array Processor"
- "A Linear Array Parallel Image Processor: SliM-II"
- "The PAPRICA SIMD Array: Critical Reviews and Perspectives"
- "Efficient Image Processing Algorithms on the Scan Line Array Processor"
- "A 3.84 GIPS Integrated Memory Array Processor with 64 Processing Elements and a 2-MB SRAM"
- "An Array Processor for General Purpose Digital Image Compression"
- "An Intelligent Register File for Neural Processing Applications"
| parallel architectures;image processing;handwriting recognition;artificial neural networks;VLSI implementations |
509594 | The effects of communication parameters on end performance of shared virtual memory clusters. | Recently there has been a lot of effort in providing cost-effective Shared Memory systems by employing software only solutions on clusters of high-end workstations coupled with high-bandwidth, low-latency commodity networks. Much of the work so far has focused on improving protocols, and there has been some work on restructuring applications to perform better on SVM systems. The result of this progress has been the promise for good performance on a range of applications at least in the 16-32 processor range. New system area networks and network interfaces provide significantly lower overhead, lower latency and higher bandwidth communication in clusters, inexpensive SMPs have become common as the nodes of these clusters, and SVM protocols are now quite mature. With this progress, it is now useful to examine what are the important system bottlenecks that stand in the way of effective parallel performance; in particular, which parameters of the communication architecture are most important to improve further relative to processor speed, which ones are already adequate on modern systems for most applications, and how will this change with technology in the future. Such information can assist system designers in determining where to focus their energies in improving performance, and users in determining what system characteristics are appropriate for their applications.We find that the most important system cost to improve is the overhead of generating and delivering interrupts. Improving network interface (and I/O bus) bandwidth relative to processor speed helps some bandwidth-bound applications, but currently available ratios of bandwidth to processor speed are already adequate for many others. Surprisingly, neither the processor overhead for handling messages nor the occupancy of the communication interface in preparing and pushing packets through the network appear to require much improvement. | Introduction
With the success of hardware cache-coherent distributed shared memory (DSM), a lot of effort has
been made to support the programming model of a coherent shared address space using commodity-
oriented communication architectures in addition to commodity nodes. The techniques for the communication
architecture range from using less customized and integrated controllers [17, 2] to supporting
shared virtual memory (SVM) at page level through the operating system [12, 8]. While these techniques
reduce cost, unfortunately they usually lower performance as well. A great deal of research effort has
been made to improve these systems for large classes of applications. Our focus in this paper is on SVM
systems.
In the last few years there has been much improvement of SVM protocols and systems, and several
applications have been restructured to improve performance [12, 8, 25, 14]. With this progress, it is now
interesting to examine what are the important system bottlenecks that stand in the way of effective parallel
performance; in particular, which parameters of the communication architecture are most important
to improve further relative to processor speed, which are already adequate on modern systems for most
applications, and how will this change with technology in the future. Such studies can hopefully assist
system designers in determining where to focus their energies in improving performance, and users in
determining what system characteristics are appropriate for their applications.
This paper examines these questions through detailed architectural simulation using applications with
widely different behavior. We simulate a cluster architecture with SMP nodes and a fast system area
interconnect with a programmable network interface (i.e. Myrinet). We use a home-based SVM protocol
that has been demonstrated to have comparable or better performance than other families of SVM
protocols. The base case of the protocol, called home-based lazy release consistency (HLRC) does not
require any additional hardware support. We later examine a variant, called automatic update release
consistency (AURC) that uses automatic (hardware) propagation of writes to remote nodes to perform
updates to shared data, and also extend our analysis to the use of uniprocessor nodes where this is useful.
The major performance parameters we consider are the host processor overhead to send a message, the
network interface occupancy to prepare and transfer a packet, the node-to-network bandwidth (often
limited by I/O bus bandwidth), and the interrupt cost. We do not consider network link latency, since
it is a small and usually constant part of the end-to-end latency, in system area networks (SAN). After
dealing with performance parameters, we also briefly examine the impact of key granularity parameters
of the communication architecture. These are the page size, which is the granularity of coherence, and
the number of processors per node.
Assuming a realistic system that can be quite easily implemented today, and a range of applications
that are well optimized for SVM systems [10], we see (Figure 1) that, for most applications, protocol
and communication overheads are substantial. The speedups obtained in the realistic implementation
are much lower than in the ideal case, where all communication costs are zero.

Figure 1. Ideal and realistic speedups for each application. The ideal speedup is computed as the ratio of the uniprocessor execution time divided by the sum of the compute and the local cache stall time in the parallel execution, i.e. ignoring all communication and synchronization costs. The realistic speedup corresponds to a realistic set of values (see Section 3) of communication architecture parameters today, in a configuration with four processors per node.

This motivates the current research, whose goal is twofold. First, we want to understand how performance changes as
the parameters of the communication architecture are varied relative to processor speed, both to see
where we should invest systems energy and to understand the likely evolution of system performance
as technology evolves. For this, we use a wide range of values for each parameter. Second, we focus
on three specific points in the parameter space. The first is the point for which the system generally
achieves its best performance within the ranges of parameter values we examine. The performance on
an application at this point is called its best performance. The second point is an aggressive set of values
that the communication parameters can have in current or near-future systems, especially if certain
operating system features are well optimized. The performance at this point in the space is called the
achievable performance. The third point is the ideal point, which represents a hypothetical system that
incurs no communication or synchronization overheads, taking into consideration only compute time
and stall time on local data accesses. Our goal is to understand the gaps between the achievable, best
and ideal performance by identifying which parameters contribute most to the performance differences.
This leads us to the primary communication parameters that need to be improved to close the gaps.
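In symbols, using only the quantities defined above, the ideal speedup used throughout the paper is

    S_{\mathrm{ideal}} = \frac{T_{\mathrm{uniprocessor}}}{T_{\mathrm{compute}} + T_{\mathrm{local\,stall}}}

where the compute and local cache stall times are those of the parallel execution; all communication and synchronization costs are excluded by construction.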
We find, somewhat surprisingly, that host overhead to send messages and per-packet network interface
occupancy are not critical to application performance. In most cases, interrupt cost is by far the dominant
performance bottleneck, even though our protocol is designed to be very aggressive in reducing the
occurrence of interrupts. Node-to-network bandwidth, typically limited by the I/O bus, is also significant
for a few applications, but interrupt cost is important for all the applications we study. These results
suggest that system designers should focus on reducing interrupt costs to support SVM well, and SVM
protocol designers should try to avoid interrupts as possible, perhaps by using polling or by using a
programmable communication assist to run part of the protocol avoiding the need to interrupt the main
processor.
Our results show the relative effect of each of the parameters, i.e. relative to the processor speed and to one another. While the absolute parameter values that are used for the achievable set match what we consider achievable with aggressive current or near-future systems, viewing the parameters relative to processor speed allows us to understand what the behavior will be as technology trends evolve. For instance, if the ratio of bandwidth to processor speed changes, we can use these results to reason about system performance.

Figure 2. Simulated node architecture.
Section 2 presents the architectural simulator that we use in this work. In Section 3 we discuss the
parameters we use and the methodology of the study. Section 4 presents the applications suite. Sections 5
and 6 present our results for the communication performance parameters. In Section 5 we examine the
effects of each parameter on system performance and in Section 6 we discuss for each application the
parameters that limit its performance. Section 7 presents the effects of page size and degree of clustering
on system performance. We discuss related work in Section 8. Finally we discuss future work directions
and conclusions in Sections 9 and 10 respectively.
2 Simulation Environment
The simulation environment we use is built on top of augmint [18], an execution driven simulator
using the x86 instruction set, and runs on x86 systems. In this section we present the architectural
parameters that we do not vary.
The simulated architecture (Figure 2) assumes a cluster of c-processor SMPs connected with a commodity
interconnect like Myrinet [3]. Contention is modeled at all levels except in the network links
and switches themselves. The processor has a P6-like instruction set, and is assumed to be a 1 IPC pro-
cessor. The data cache hierarchy consists of a 8 KBytes first-level direct mapped write-through cache
and a 512 KBytes second-level two-way set associative cache, each with a line size of 32 Bytes. The write buffer [19] has 26 entries, each one cache line wide, and a retire-at-4 policy. Write buffer stalls are
simulated. The read hit cost is one cycle if satisfied in the write buffer and first level cache, and 10 cycles
if satisfied in the second-level cache. The memory subsystem is fully pipelined.
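The stated read-hit costs can be summarized in a small cost function. This is our own sketch, not code from the simulator, and the miss latency constant is an assumption, since the paper does not state it:

    /* Sketch of the stated load-cost model (not the authors' simulator code). */
    enum { MEMORY_MISS_CYCLES = 60 };   /* assumed; the paper does not state it */

    int load_cost_cycles(int hit_write_buffer, int hit_l1, int hit_l2)
    {
        if (hit_write_buffer || hit_l1)
            return 1;                   /* write buffer or first-level hit */
        if (hit_l2)
            return 10;                  /* second-level cache hit */
        return MEMORY_MISS_CYCLES;      /* served by the pipelined memory system */
    }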
Each network interface (NI) has two 1 MByte memory queues, to hold incoming and outgoing packets.
The size of the queues is such that they do not constitute a bottleneck in the communication subsystem.
If the network queues fill, the NI interrupts the main processor and delays it to allow queues to drain.
Network links operate at processor speed and are 16 bits wide. We assume a fast messaging system [5,
16, 4] as the basic communication library.
The memory bus is split-transaction, 64 bits wide, with a clock cycle four times slower than the
processor clock. Arbitration takes one bus cycle, and the priorities are, in decreasing order: outgoing
network path of the NI, second level cache, write buffer, memory, incoming path of the NI. The I/O bus is narrower than the memory bus; the relative bus bandwidths and processor speed match those on modern systems. If we assume that the processor has a 200 MHz clock, the memory bus provides 400 MBytes/s.
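As a quick arithmetic check of the figure above (our own, using the stated width and clock ratio):

    #include <stdio.h>

    /* A 64-bit (8-Byte) bus clocked at one quarter of a 200 MHz processor
     * clock moves 8 Bytes x 50 MHz = 400 MBytes/s, matching the text. */
    int main(void)
    {
        double proc_mhz = 200.0;
        double bus_mhz  = proc_mhz / 4.0;   /* bus clock is four times slower */
        double width_b  = 8.0;              /* 64 bits = 8 Bytes */
        printf("memory bus bandwidth: %.0f MBytes/s\n", width_b * bus_mhz);
        return 0;
    }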
Protocol handlers themselves cost a variable number of cycles. While the code for the protocol handlers
can not be simulated since the simulator itself is not multi-threaded, we use for each handler an
estimate of the cost of its code sequence. The cost to access the TLB from a handler running in the
kernel is 50 processor cycles. The cost of creating and applying a diff is 10 cycles for every word that
needs to be compared and 10 additional cycles for each word actually included in the diff.
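The diff cost model just described translates directly into code. The following is a minimal sketch under the stated per-word costs; the function name and types are ours, not the simulator's:

    /* 10 cycles per word compared against the twin copy, plus 10 cycles
     * for each word that actually differs and is included in the diff. */
    unsigned diff_cost_cycles(const unsigned *page, const unsigned *twin,
                              unsigned nwords)
    {
        unsigned cost = 0;
        for (unsigned i = 0; i < nwords; i++) {
            cost += 10;                 /* comparison of one word */
            if (page[i] != twin[i])
                cost += 10;             /* word included in the diff */
        }
        return cost;
    }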
The protocols we use are two versions of a home-based protocol, HLRC and AURC [8, 25]. These
protocols either use hardware support for automatic write propagation (AURC) or traditional software
diffs (HLRC) to propagate updates to the home node of each page at a release point. The necessary pages
are invalidated only at acquire points according to lazy release consistency (LRC). At a subsequent page
fault, the whole page is fetched from the home, where it is guaranteed to be up to date according to
lazy release consistency [8]. The protocol for SMP nodes attempts to utilize the hardware sharing
and synchronization within an SMP as much as possible, reducing software involvement [1]. The
optimizations used include the use of hierarchical barriers and the avoidance of interrupts as much as
possible. Interrupts are used only when remote requests for pages and locks arrive at a node. Requests
are synchronous (RPC like), to avoid interrupts when replies arrive at the requesting node. Barriers are
implemented with synchronous messages and no interrupts. Interrupts are delivered to processor 0 in
each node. More complicated schemes (i.e. round robin, random assignment) that result in better load
balance in interrupt handling can be used if the operating system provides the necessary support. These
schemes, however, may increase the cost of delivering interrupts. In this paper we also examine a round
robin scheme.
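To make the fault path concrete, here is a much-simplified sketch of an HLRC page fault handler as described above. All names and signatures are hypothetical; a real implementation also has to deal with write notices, timestamps and protection changes:

    /* Hypothetical runtime hooks; illustrative signatures only. */
    extern int   home_node(int page_id);
    extern int   my_node_id(void);
    extern void *local_copy(int page_id);
    extern void  rpc_fetch_page(int home, int page_id, void *dst); /* synchronous */
    extern void  map_page_readable(int page_id);

    void hlrc_page_fault(int page_id)
    {
        int home = home_node(page_id);
        if (home != my_node_id()) {
            /* RPC-like request: the remote node is interrupted to serve it;
             * the reply is deposited in host memory without interrupting us. */
            rpc_fetch_page(home, page_id, local_copy(page_id));
        }
        map_page_readable(page_id);     /* page is now up to date per LRC */
    }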
3 Communication Parameters

As mentioned earlier, we focus on the following performance parameters of the communication archi-
tecture: host overhead, I/O bus bandwidth, network interface occupancy, and interrupt cost. We do not
examine network link latency, since it is a small and usually constant part of the end-to-end latency, in
system area networks (SAN). These parameters describe the basic features of the communication sub-
system. The rest of the parameters in the system, for example cache and memory configuration, total
number of processors, etc. remain constant.
When a message is exchanged between two hosts, it is put in a post queue at the network interface.
In an asynchronous send operation, which we assume, the sender is free to continue with useful work.
The network interface processes the request, prepares packets, and queues them in an outgoing network
queue, incurring an occupancy per packet. After transmission, each packet enters an incoming network
queue at the receiver, where it is processed by the network interface and then deposited directly in host
memory without causing an interrupt [2, 4]. Thus, the interrupt cost is an overhead related not so much
to data transfer but to processing requests.
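The four costs above compose into a simple per-message model. The following back-of-the-envelope function is ours, not the paper's, and deliberately ignores contention, which the simulator does model:

    /* Modeled time for one message that requires remote service.
     * With bandwidth in MBytes/s, bytes / bw is already in microseconds
     * (1 MByte/s = 1 Byte/us). */
    double message_time_us(double host_overhead_us, double ni_occupancy_us,
                           double io_bw_mbytes_s, double interrupt_us,
                           double bytes, int packets)
    {
        return host_overhead_us              /* sender posts the message */
             + packets * ni_occupancy_us     /* NI prepares each packet  */
             + bytes / io_bw_mbytes_s        /* I/O-bus-limited transfer */
             + interrupt_us;                 /* remote CPU is interrupted */
    }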
While we examine a range of values for each parameter, in varying a parameter we usually keep
the others fixed at the set of achievable values. Recall that these are the values we might consider
achievable currently, on systems that provide optimized operating system support for interrupts. We
choose relatively aggressive fixed values so that the effects of the parameter being varied are observed.
In more detail:
• Host Overhead is the time the host processor itself is busy sending a message. The range of this
parameter is from a few cycles to post a send in systems that support asynchronous sends, up to
the time needed to transfer the message data from the host memory to the network interface when
synchronous sends are used. If asynchronous sends are available, an achievable value for the host
overhead is a few hundred processor cycles. Recall that there is no processor overhead for a data
transfer at the destination end. The range of values we consider is between 0 (or almost 0) cycles and 10000 processor cycles (about 50 μs with a 5 ns processor clock). Systems that
support asynchronous sends will probably be closer to the smaller values and systems with synchronous
sends will be closer to the higher values depending on the message size. The achievable
value we use is an overhead of 600 processor cycles per message.
• The I/O Bus Bandwidth determines the host-to-network bandwidth (relative to processor speed). In
contemporary systems this is the limiting hardware component for the available node-to-network bandwidth; network links and memory buses tend to be much faster. The range of values for the
I/O bus bandwidth is from 0.25 MBytes per processor clock MHz up to 2 MBytes per processor
clock MHz (or 50 MBytes/s to 400 MBytes/s assuming a 200 MHz processor clock). The
achievable value is 0.5 MBytes/MHz, or 100 MBytes/s assuming a 200 MHz processor clock.
• Network Interface Occupancy is the time spent on the network interface preparing each packet.
Network interfaces employ either custom state machines or network processors (general purpose
or custom designs) to perform this processing. Thus, processing costs on the network interface
vary widely. We vary the occupancy of the network interface from almost 0 to 10000 processor
cycles (about 50 μs with a 5 ns processor clock) per packet. The achievable value we use is 1000
main processor cycles, or about 5-s assuming a 200 MHz processor clock. This value is realistic
for the currently available programmable NIs, given that the programmable communication assist
on the NI is usually much slower than the main processor.
• Interrupt cost is the cost to issue an interrupt between two processors in the same SMP node, or the
cost to interrupt a processor from the network interface. It includes the cost of context switches
and operating system processing. Although the interrupt cost is not a parameter of the communication
subsystem, it is an important aspect of SVM systems. Interrupt cost depends on the operating
system used; it can vary greatly from system to system, affecting the performance portability of
SVM across different platforms. We therefore vary the interrupt cost from free interrupts (0 processor
cycles) to 50000 processor cycles for both issuing and delivering an interrupt (total 100000
processor cycles or 500 μs with a 5 ns processor clock). The achievable value we use is 500 processor
cycles, which results in a cost of 1000 cycles for a null interrupt. This choice is significantly
more aggressive than what current operating systems provide. However it is achievable with fast
interrupt technology [21]. We use it as the achievable value when varying other parameters to
ensure that interrupt cost does not swamp out the effects of varying those parameters.
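For reference, the parameter space and the two distinguished points discussed above can be transcribed into code form (the values are taken from the text and Table 1; the struct itself is ours):

    /* Cycles are main-processor cycles; I/O bandwidth is in MBytes per MHz
     * of processor clock (0.5 corresponds to 100 MBytes/s at 200 MHz). */
    struct comm_params {
        unsigned host_overhead_cycles;    /* per message            */
        double   io_bw_mbytes_per_mhz;    /* node-to-network        */
        unsigned ni_occupancy_cycles;     /* per packet             */
        unsigned interrupt_cost_cycles;   /* issue + delivery total */
    };

    static const struct comm_params achievable = { 600, 0.5, 1000, 1000 };
    static const struct comm_params best       = {   0, 2.0,  200,    0 };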
To capture the effects of each parameter separately, we keep the other parameters fixed at their achievable
values. Where necessary, we also perform additional guided simulations to further clarify the results.

In addition to the results obtained by varying parameters and the results obtained for the achievable
parameter values, an interesting result is the speedup obtained by using the best value in our range for
each parameter. This limits the performance that can be obtained by improving the communication architecture
within our range of parameters. The parameter values for the best configuration are: host
overhead 0 processor cycles, I/O bus bandwidth equal to the memory bus bandwidth, network interface
occupancy per packet 200 processor cycles and total interrupt cost 0 processor cycles. In this best con-
figuration, contention is still modeled since the values for the other system parameters are still nonzero.
Table 1 summarizes the values of each parameter. With a 200 MHz processor, the achievable set of values discussed above assumes the parameter values: host overhead 600 processor cycles, memory bus bandwidth 400 MBytes/s, I/O bus bandwidth 100 MBytes/s, network interface occupancy per packet 1000 processor cycles and total interrupt cost 1000 processor cycles.

Parameter                          Range       Achievable   Best
Host Overhead (cycles)             0-10000     600          0
I/O Bus Bandwidth (MBytes/MHz)     0.25-2      0.5          2
NI Occupancy (cycles)              0-10000     1000         200
Interrupt Cost (cycles, total)     0-100000    1000         0

Table 1. Ranges and achievable and best values of the communication parameters under consideration.
4 Applications
We use the SPLASH-2 [22] application suite. This section briefly describes the basic characteristics of
each application relevant to this study. A more detailed classification and description of the application
behavior for SVM systems with uniprocessor nodes is provided in the context of AURC and LRC in [9].
The applications can be divided in two groups, regular and irregular.
4.1 Regular Applications
The applications in this category are FFT, LU and Ocean. Their common characteristic is that they
are optimized to be single-writer applications; a given word of data is written only by the processor to
which it is assigned. Given appropriate data structures they are single-writer at page granularity as well,
and pages can be allocated among nodes such that writes to shared data are almost all local. In HLRC
we do not need to compute diffs, and in AURC we do not need to use a write through cache policy.
Protocol action is required only to fetch pages. The applications have different inherent and induced
communication patterns [22, 9], which affect their performance and the impact on SMP nodes.
                        Page Faults           Page Fetches          Local Lock Acquires   Remote Lock Acquires   Barriers
Application             1      4      8      1      4      8       1      4      8       1      4      8
Water(nsquared) (512)   69.19  22.06  8.04   68.26  19.01  7.29
Water(spatial) (512)    97.86  21.42  9.23   93.81  17.73  6.04    0.01   1.83   2.60    3.94   2.16   1.39    4.19
Volrend (head)          105.09 44.06  34.49  104.78 29.35  6.53    0.00   29.34  43.80   44.34  17.64  3.97    1.61
Raytrace (car)          89.80  25.64  6.83   89.79  25.57  6.76    0.03   2.21   3.96    4.89   3.26   1.34    0.10
Barnes(rebuild)
Barnes(space)

Table 2. Number of page faults, page fetches, local and remote lock acquires and barriers per processor per 10^7 cycles for each application, for 1, 4 and 8 processors per node.
FFT: The all-to-all, read-based communication in FFT is essentially a transposition of a matrix of
complex numbers. We use two problem sizes, 256K(512x512) and 1M(1024x1024) elements. FFT has
a high inherent communication to computation ratio.
LU: We use the contiguous version of LU, which allocates on each page data assigned to only one
processor. LU exhibits a very small communication to computation ratio but is inherently imbalanced.
We used a 512x512 matrix.
Ocean: The communication pattern in the Ocean application is largely nearest-neighbor and iterative
on a regular grid. We run the contiguous (4-d array) version of Ocean on a 514x514 grid with an error
tolerance of 0.001.
4.2 Irregular Applications
The irregular applications in our suite are Barnes, Radix, Raytrace, Volrend and Water.
Barnes: We ran experiments for different data set sizes, but present results for 8K particles. Access
patterns in Barnes are irregular and fine-grained. We use two versions of Barnes, which differ in the
manner they build the shared tree at each time step. In the first version (Barnes-rebuild, which is the one
in SPLASH-2) processors load the particles that were assigned to them for force calculation directly into
the shared tree, locking (frequently) as necessary. The second version, Barnes-space [10], is optimized
for SVM, and it avoids locking as much as possible. It uses a different tree-building algorithm, in which
disjoint subspaces that match tree cells are assigned to different processors. These subspaces include
particles which are not the same as the particles that are assigned to the processors for force calculation.
Each processor builds its own partial tree, and all partial trees are merged to the global tree without
locking.
Radix: Radix sorts a series of integer keys. It is a very irregular application with highly scattered
writes to remotely allocated data and a high inherent communication to computation ratio. We use the
unmodified SPLASH-2 version.
Figure 3. Number of messages sent per processor per 10^7 compute cycles for each application, for 1, 4 and 8 processors per node.
Raytrace: Raytrace renders complex scenes in computer graphics. The version we use is modified
from the SPLASH-2 version to run more efficiently on SVM systems. A global lock that was not
necessary was removed, and task queues are implemented better for SVM and SMP [10]. Inherent
communication is small.
Volrend: The version we use is slightly modified from the SPLASH-2 version, to provide a better
initial assignment of tasks to processes before stealing [10]. This improves SVM performance greatly.
Inherent communication volume is small.
Water: We use both versions of Water from SPLASH-2, Water-nsquared and Water-spatial. Water-
nsquared can be categorized as a regular application, but we put it here to ease the comparison with
Water-spatial. In both versions, updates to water molecules positions and velocities are first accumulated
locally by processors and then performed to the shared data once at the end of each iteration. The
inherent communication to computation ratio is small. We use a data set size of 512 molecules.
Table 2 and Figures 3 and 4 can be used to characterize the applications. Table 2 presents counts of
protocol events for each application, for 1, 4 and 8 processors per node (16 processors total in all cases).
Figures 3 and 4 show the numbers of messages and MBytes of data (both application and protocol) that
are sent by each processor in the system. These characteristics are measured per 10^7 cycles of application
compute time per processor, and are averaged over all processors in the system. We can use them to
categorize the applications in terms of the communication they exhibit. Both the number of messages
and MBytes of data exchanged are important to performance; if we use the geometric mean of these properties, which captures their multiplicative effect, as a metric, then we can divide the applications into three groups. In the first group are Barnes-rebuild, FFT and Radix, which exhibit a lot of communication. In the second group belong Water-nsquared and Volrend, which exhibit less communication, and in the third group are the rest of the applications (LU, Ocean, Water-spatial, Raytrace and Barnes-space), which exhibit very little communication. It is important to note that this categorization holds for the 4-processors-per-node configuration. Changing the number of processors in the node can dramatically change the behavior of some applications, and the picture can be very different. For instance, Ocean exhibits very high communication with 1 processor per node.

Figure 4. Number of MBytes sent per processor per 10^7 compute cycles for each application, for 1, 4 and 8 processors per node.
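The grouping metric described above (the geometric mean of message count and data volume) is easy to state in code; the thresholds one would pick to separate the three groups are not given in the paper, so only the metric itself is shown:

    #include <math.h>

    /* Geometric mean of messages and MBytes sent per 10^7 compute cycles,
     * capturing their multiplicative effect on communication cost. */
    double comm_intensity(double messages, double mbytes)
    {
        return sqrt(messages * mbytes);
    }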
5 Effects of Communication Parameters
In this section we present the effects of each parameter on the performance of an all-software HLRC
protocol for a range of values. Table 3 presents the maximum slowdowns for each application for the parameters
under consideration. The maximum slowdown is computed from the speedups for the smallest
and biggest values considered for each parameter, keeping all other parameters at their achievable val-
ues. Negative numbers indicate speedups. The rest of this section discusses the parameters one by one.
For each parameter we also identify the application characteristics that most closely predict the effect of
that parameter. The next section will take a different cut, looking at the bottlenecks on a per-application
rather than per-parameter basis. At the end of this section we also present results for AURC.
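One plausible formalization of the entries in Table 3 (the paper states only that the maximum slowdown is computed from the speedups at the two ends of each parameter range, with negative numbers indicating speedups) is

    \mathrm{slowdown} = \frac{S_{\mathrm{min\,cost}} - S_{\mathrm{max\,cost}}}{S_{\mathrm{min\,cost}}}

where the two speedups are measured at the cheapest and most expensive setting of the parameter, all other parameters held at their achievable values.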
Application        Host Overhead  NI Occupancy  I/O Bus Bandwidth  Interrupt Cost  Page Size  Procs/Node
FFT                      22.6%        11.9%          40.8%             86.6%         72.6%       13.8%
LU(contiguous)           17.9%         7.5%          15.9%             70.8%         34.4%      -35.3%
Ocean(contiguous)         4.5%         2.8%           6.5%             35.2%         19.6%       63.2%
Water(nsquared)          32.4%        16.6%          10.8%             83.2%         62.2%      -87.1%
Water(spatial)           23.7%         8.5%           8.9%             67.9%         51.0%      -87.5%
Radix                    35.8%       -31.8%          77.6%             58.7%       -368.2%     -699.4%
Volrend                  34.7%        12.8%          15.7%             91.3%         63.9%      -68.1%
Raytrace                  8.2%         2.9%           8.9%             52.3%          9.1%      -16.1%
Barnes(rebuild)          40.7%        21.8%          44.8%             80.3%         71.5%     -383.4%
Barnes(space)             4.4%        -0.6%          27.5%             59.0%       -109.6%      -49.4%

Table 3. Maximum slowdowns with respect to the various communication parameters, for the range of values with which we experiment. Negative numbers indicate speedups.

Host Overhead: Figure 5 shows that the slowdown due to the host overhead is generally low, especially for realistic values of asynchronous message overheads. However, it varies among applications from less than 10% for Barnes-space, Ocean-contiguous and Raytrace to more than 35% for Volrend, Radix and Barnes-rebuild across the entire range of values. In general, applications that send more messages exhibit a higher dependency on the host overhead. This can be seen in Figure 6, which shows two
curves. One is the slowdown of each application between the smallest and highest host overheads that
we simulate, normalized to the biggest of these slowdowns. The second curve is the number of messages
sent by each processor per 10^6 compute cycles, normalized to the biggest of these numbers of messages.
Note that with asynchronous messages, host overheads will be on the low side of our range, so we can
conclude that host overhead for sending messages is not a major performance factor for coarse grain
SVM systems and is unlikely to become so in the near future.
Network Interface Occupancy: Figure 7 shows that network interface occupancy has an even smaller effect than host overhead on performance, for realistic occupancies. Most applications are insensitive
to it, with the exception of a couple of applications that send a large number of messages. For these
applications, slowdowns of up to 22% are observed at the highest occupancy values. The speedup
observed for Radix is in reality caused by timing issues (contention is the bottleneck in Radix).
I/O Bus Bandwidth: Figure 8 shows the effect of I/O bandwidth on application performance. Reducing
the bandwidth results in slowdowns of up to 82%, with 4 out of 11 applications exhibiting slowdowns
of more than 40%. However, many other applications are not so dependent on bandwidth, and only FFT,
Radix, and Barnes-rebuild benefit much from increasing the I/O bus bandwidth beyond the achievable
relationship to processor speed today. Of course, this does not mean that it is not important to worry
about improving bandwidth. As processor speed increases, if bandwidth trends do not keep up, we will
quickly find ourselves at the relationship reflected by the lower bandwidth case we examine (or even
worse). What it does mean is that if bandwidth keeps up with processor speed, it is not likely to be the
major limitation on SVM systems for applications.
Figure 9 shows the dependency between bandwidth and the number of bytes sent per processor for
each application. As before, units are normalized to the maximum of the numbers presented for each
curve. Applications that exchange a lot of data, not necessarily a lot of messages, need higher bandwidth.
Figure 5. Effects of host overhead on application performance. The data points for each application correspond to a host overhead of 0, 600, 1000, 5000, and 10000 processor cycles.

Interrupt Cost: Figure 10 shows that interrupt cost is a very important parameter in the system. Unlike bandwidth, it affects the performance of all applications dramatically, and in many cases a relatively
small increase in interrupt cost leads to a big performance degradation. For most applications, interrupt
costs of up to about 2000 processor cycles for each of initiation and delivery do not seem to hurt much.
However, commercial systems typically have much higher interrupt costs. Increasing the interrupt cost
beyond this point begins to hurt sharply. All applications have a slowdown of more than 50% when
the interrupt cost varies from 0 to 50000 processor cycles (except Ocean-contiguous that exhibits an
anomaly since the way pages are distributed among processors changes with interrupt cost). This suggests
that architectures and operating systems should work harder at reducing interrupt costs if they are to support SVM well, and SVM protocols should try to avoid interrupts as much as possible. Figure 11 shows that the slowdown due to the interrupt cost is closely related to the number of protocol events that cause interrupts: page fetches and remote lock acquires.
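One way to avoid interrupts altogether, suggested later in the paper, is to poll for incoming protocol requests, either from instrumented application code or from a processor reserved for protocol processing on an SMP node. A hedged sketch, with all names hypothetical:

    /* Spin on an NI-visible flag and service requests inline instead of
     * taking an interrupt. Illustrative only; a real system would bound
     * the polling frequency or interleave it with application work. */
    extern volatile int ni_request_pending;     /* set by the network interface */
    extern void service_protocol_request(void); /* page fetch, lock grant, ...  */

    void protocol_poll_loop(void)
    {
        for (;;) {
            if (ni_request_pending) {
                ni_request_pending = 0;
                service_protocol_request();
            }
        }
    }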
With SMP nodes there are many options for how interrupts may be handled within a node. Our protocol
uses one particular method. Systems with uniprocessor nodes have fewer options, so we experimented
with such configurations as well. We found that interrupt cost is important in that case as well. The only
difference is that the system seems to be a little less sensitive to interrupt costs of between 2500 and
5000 cycles. After this range, performance degrades quickly as in the SMP configuration.
We also experimented with round robin interrupt delivery and the results look similar to the case where
all interrupts are delivered to a fixed processor in each SMP. Overall performance seems to increase
slightly, compared to the static interrupt scheme, but as in the static scheme it degrades quickly as
cost increases. Moreover implementing such a scheme in a real system may be complicated
and may incur additional costs.
Figure 6. Relation between the slowdown due to host overhead and the number of messages sent. The two curves show the slowdown due to host overhead (normalized to the largest slowdown) and the number of messages sent per processor per 10^6 compute cycles (normalized to the largest).
AURC: As mentioned in the introduction, besides HLRC, we also used AURC to study the effect of
the communication parameters when using hardware support for automatic write propagation instead
of software diffs. The results look very similar to HLRC, with the exception that network interface
occupancy is much more important in AURC. The automatic update mechanism may generate more
traffic through the network interface because new values for the same data may be sent multiple times
to the home node before a release. More importantly, the number of packets may increase significantly
since updates are sent at a much finer granularity, so if they are apart in space or time they may not be
coalesced well into packets. Figure 12 shows how performance changes as the NI overhead increases
for both regular and irregular applications.
6 Limitations on Application Performance
In this section we examine the difference in performance between the best configuration and an ideal
system (where the speedup is computed only from the compute and local stall times, ignoring communication
and synchronization costs), and the difference in performance between the achievable and the
best configuration on a per application basis. Recall that best stands for the configuration where all
communication parameters assume their best value, and achievable stands for the configuration where
the communication parameters assume their achievable values. The goal is to identify the application
properties and architectural parameters that are responsible for the difference between the best and the
ideal performance, and the parameters that are responsible for the difference between the achievable and the best performance. The speedups for each configuration will be called ideal, best and achievable, respectively. Table 4 shows these speedups for all applications. In many cases, the achievable speedup is close to the best speedup. However, in some cases (FFT, Radix, Barnes) there remains a gap. The performance with the best configuration is often quite far from the ideal speedup. To understand these effects, let us examine each application separately.

Figure 7. Effects of network interface occupancy on application performance. The data points for each application correspond to a network occupancy of 50, 250, 500, 1000, 2000, and 10000 processor cycles.
FFT: The best speedup for FFT is about 13.5. The difference from the ideal speedup of 16.2 comes
from data wait time at page faults, which have a cost even for the best configuration, despite the very
high bandwidth and the zero-cost interrupts. The achievable speedup is about 7.7. There are two major
parameters responsible for this drop in performance: the cost of interrupts and the bandwidth of the I/O
bus. Making the interrupt cost 0 results in a speedup of 11, while increasing the I/O bus bandwidth to
the memory bus bandwidth gives a speedup of 10. Modifying both parameters at the same time gives a
speedup almost the same as the best speedup.
LU: The best speedup is 13.7. The difference from the ideal speedup is due to load imbalances in
communication and due to barrier cost. The achievable speedup for LU is about the same as the best
speedup, since this application has very low communication to computation ratio, so communication is
not the problem.
Figure 8. Effects of I/O bandwidth on application performance. The data points for each application correspond to an I/O bandwidth of 2, 1, 0.5 and 0.25 MBytes per processor clock MHz, or 400, 200, 100, and 50 MBytes/s assuming a 200 MHz processor.

Ocean: The best speedup for Ocean is 10.55. The reason for this is that when the interrupt cost is 0, an anomaly is observed in first-touch page allocation and the speedup is very low due to a large number of
page faults. The achievable speedup is 13.0, with the main cost being that of barrier synchronization. It
is worth noting that speedups in Ocean are artificially high because of local cache effects: a processor's
working set does not fit in the cache on a uniprocessor, but does fit in the cache when the data is divided among 16 processors. Thus the sequential version performs poorly due to the high cache stall time.
Barnes-rebuild: The best speedup for Barnes-rebuild is 5.90. The difference from the ideal is because
of page faults in the large number of critical sections (locks). The achievable speedup is 3.9. The difference
between the best and achievable speedups in the presence of page faults is because synchronization
wait time is even higher due to the increased protocol costs. These increased costs are mostly because
of the host overhead (a loss of about 1 in the speedup) and the NI occupancy (about 0.8). To verify all
these we disabled remote page fetches in the simulator so that all page faults appear to be local. The
speedup becomes 14.64 in the best and 10.62 in the achievable cases respectively. The gap between the
best and the achievable speedups is again due to host and NI overheads.
Barnes-space: The second version of Barnes we run is an improved version with minimal locking [10].
The best speedup is 14.5, close to the ideal. The achievable speedup is 12.5. The difference between
these two is mainly because of the lower available I/O bandwidth in the achievable case. This increases
the data wait time in an imbalanced way.
Figure 9. Relation between the slowdown due to I/O bus bandwidth and the number of bytes transferred. The two curves show the slowdown due to I/O bus bandwidth (normalized to the largest slowdown) and the number of bytes sent per processor per 10^6 compute cycles (normalized to the largest).

Water-Nsquared: The best speedup for Water-Nsquared is 9.9 and the achievable speedup is about 9. The reason for the modest best speedup is page faults that occur in contended critical sections,
greatly increasing serialization at locks. If we artificially disable remote page faults the best speedup
increases from 9.9 to 14.1. The cost for locks in this artificial case is very small and the non-ideal
speedup is due to imbalances in the computation itself.
Water-Spatial: The best speedup is 13.75. The difference from ideal is mainly due to small imbalances
in the computation and lock wait time. Data wait time is very small. The achievable speedup is
about 13.3.
Radix: The best speedup for Radix is 7. The difference from the ideal speedup of 16.1 is due to
data wait time, which is exaggerated by contention even at the best parameter values, and the resulting
imbalances among processors which lead to high synchronization time. The imbalances are observed to
be due to contention in the network interface. The achievable speedup is only 3. The difference from
the best speedup is due to the same factors: data wait time is much higher and much more imbalanced
due to much greater contention effects. The main parameter responsible for this is I/O bus bandwidth.
For instance, if we quadruple I/O bus bandwidth the achievable speedup for Radix becomes 7, just like
the best speedup.
Raytrace: Raytrace performs very well. The best speedup is 15.64 and the achievable speedup 14.80.
Figure 10. Effects of interrupt cost on application performance. The seven bars for each application correspond to an interrupt cost of 0, 500, 1000, 2500, 5000, 10000, and 50000 processor cycles.
Volrend: The best speedup is 10.95. The reason for this low number is imbalances in the computation
itself due to the cost of task stealing, and large lock wait times due to page faults in critical sections. If we
artificially eliminate all remote page faults, then computation is perfectly balanced and synchronization
costs are negligible (speedup is 14.9 in this fictional case). The achievable speedup is 9.40, close to the
best speedup.
We see that the difference between ideal and best performance is due to page faults that occur in
critical sections, I/O bandwidth limitations and imbalances in the communication and computation, and
the difference between best and achievable performance is primarily due to the interrupt cost and I/O
bandwidth limitations and less due to the host overhead. Overall, application performance on SVM
systems today appears to be limited primarily by interrupt cost, and next by I/O bus bandwidth. Host
overhead and NI occupancy per packet are substantially less significant, and in that order.
7 Page Size and Degree of Clustering
In addition to the performance parameters of the communication architecture discussed above, the
granularities of coherence and data transfer-i.e. the page size-and the number of processors per node
are two other important parameters that affect the behavior of the system. They play an important role
in determining the amount of communication that takes place in the system, the cost of which is then
determined by the performance parameters.
Figure 11. Relation between the slowdown due to interrupt cost and the number of page fetches and remote lock acquires. The two curves show the slowdown due to interrupt cost (normalized to the largest slowdown) and the number of page fetches and remote lock acquires (normalized to the largest).

Page Size: The page size in the system is important for many reasons. It defines the size of data transfers, since in all-software protocols data fetches are performed at page granularity. It also affects the
amount of false sharing in the system, which is very important for SVM. These two aspects of the
page size conflict with each other: bigger pages reduce the number of messages in the system if spatial
locality is well exploited in communication, but they increase the amount of false sharing, and vice
versa. Moreover, different page sizes lead to different amounts of fragmentation in memory, which may
result in wasted resources. Figure 13 shows that the effects of page size on applications vary a lot. Most
applications seem to favor smaller page sizes, with the exception of Radix that benefits a lot from bigger
pages. We vary the page size between 2 KBytes and 32 KBytes. Most systems today support
either 4 KBytes or 8 KBytes pages. We should note two caveats in our study with respect to page size.
First, we did not tune the applications specifically to the different page sizes. Second, the effects of the
page size are often related to the problem sizes that are used. For applications in which the amount of
false sharing and fragmentation (i.e. the granularity of access interleaving in memory from different
processors) changes with problem size, larger problems that run on real systems may benefit from larger
pages (i.e. FFT).
Figure 12. Effects of network interface occupancy on application performance for AURC. The data points for each application correspond to a network occupancy of 50, 250, 500, 1000, 2000, and 10000 processor cycles.

Cluster Size: The degree of clustering is the number of processors per node. Figure 14 shows that for most applications greater clustering helps even if the memory configuration and bandwidths are kept the same. (This assumption, of keeping the memory subsystem the same while increasing the number of processors per node, is not very realistic, since systems with higher degrees of clustering usually have a more aggressive memory subsystem as well, and are likely to provide greater node-to-network bandwidth.) We use cluster sizes of 1, 4, 8 and 16 processors, always keeping the total number of processors in the system at 16. These configurations cover the range from a uniprocessor node configuration to a
cache-coherent, bus-based multiprocessor. Typical SVM systems today use either uniprocessor or 4-way
nodes. A couple of interesting points emerge. First, unlike most applications, for Ocean-contiguous
the optimal clustering is four processors per node. The reason is that Ocean-contiguous generates a lot
of local traffic on the memory bus due to capacity and conflict misses, and more processors on the bus
exacerbate this problem. On the other hand, Ocean-contiguous benefits a lot from clustering because of
the communication pattern. Thus when four processors per node are used, the performance improvement
over one processor per node comes from sharing. When the system has more than four processors
per node, the memory bus is saturated and although the system benefits from sharing, performance is
degrading because of memory bus contention. Radix and FFT also put greatly increased pressure on
the shared bus. The cross-node SVM communication however, is very high and the reduction in it via
increased spatial locality at page grain due to clustering outweighs this problem. The second important
point is that the applications that perform very poorly under SVM do very well on a shared bus system
at this scale. The reason is that these applications either exhibit a lot of synchronization or make fine
grain accesses, both of which are much cheaper on a hardware-coherent shared bus architecture. For
example, applications where the problem in SVM is page faults within critical sections (i.e. Barnes-
rebuild) perform much better on this architecture. These results show that bus bandwidth is not the
most significant problem for these applications at this scale, and the use of hardware coherence and
synchronization outweighs the problems of sharing a bus.
Application        Best   Achievable   Ideal
FFT                13.5      7.7       16.2
Ocean              10.5     13.0       16.0
Water(nsquared)     9.9      9.0       15.8
Water(spatial)     13.7     13.3       15.8
Radix               7.0      3.0       16.1
Volrend            10.9      9.4       15.4
Raytrace           15.6     14.8       16.4
Barnes(rebuild)     5.9      3.9       15.4
Barnes(space)      14.5     12.5       15.6

Table 4. Best and Achievable Speedups for each application.
8 Related Work
Our work is similar in spirit to some earlier studies, conducted in [15, 7], but in a different context.
In [15], the authors examine the impact of communication parameters on end performance of a network
of workstations with the applications being written in Split-C on top of Generic Active Messages. They
find that application performance demonstrates a linear dependence on host overhead and on the gap
between transmissions of fine-grain messages. For SVM, we find these parameters not to be so important
since their cost is usually amortized over page granularity. Applications were found to be quite tolerant
to latency and bulk transfer bandwidth in the Split-C study as well.
In [7], Holt et al. find that the occupancy of the communication controller is critical to good performance
in DSM machines that provide communication and coherence at cache line granularity. Overhead
is not so significant there (unlike in [15]) since it is very small.
In [11], Karlsson et al. find that the latency and bandwidth of an ATM switch is acceptable in a
clustered SVM architecture. In [13] a Lazy Release Consistency protocol for hardware cache-coherence
is presented. In a very different context, they find that applications are more sensitive to the bandwidth
than the latency component of communication.
Several studies have also examined the performance of different SVM systems across multiprocessor
nodes and compared it with the performance of configurations with uniprocessor nodes. Erlichson et
al. [6] find that clustering helps shared memory applications. Yeung et al. in [23] find this to be true for
SVM systems in which each node is a hardware coherent DSM machine. In [1], they find that the same
is true in general for all software SVM systems, and for SVM systems with support for automatic write
propagation.
9 Discussion and Future Work
This work shows that there is room for improving SVM cluster performance in various directions:
Figure 13. Effects of page size on application performance. The data points for each
application (FFT, LU-contiguous, Ocean-contiguous, Water-nsquared, Water-spatial, Radix,
Volrend, Raytrace, Barnes-space, Barnes-rebuild) correspond to page sizes of 2 KBytes,
4 KBytes, 8 KBytes, 16 KBytes, and 32 KBytes.

- Interrupts. Since reducing the cost of interrupts in the system can improve performance significantly,
an important direction for future work is to design SVM systems that reduce the frequency
and/or the cost of interrupts. Polling, better operating system support, or support for remote fetches
that do not involve the remote processor are mechanisms that can help in this direction. Operating
system and architectural support for inexpensive interrupts would improve system performance.
Unfortunately this is not always achieved, especially in commercial systems. In these cases, protocol
modifications (using non-interrupting remote fetch operations) or implementation optimizations
(using polling instead of interrupts) can improve system performance and lead to more predictable
and portable performance across different architectures and operating systems. Polling
can be done either by instrumenting the applications or (in SMP systems) by reserving one processor
for protocol processing. Recent results for interrupts versus polling in SVM systems vary.
One study finds that polling may add significant overhead, leading to worse performance than
interrupts for page-grain SVM systems [24]. On the other hand, Stets et al. find that polling
generally gives better results than interrupts [20]. We believe more research is needed on modern
systems to understand the role of polling. Another interesting direction that we are exploring is
moving some of the protocol processing itself to the network processor found in programmable
network interfaces such as Myrinet, thus reducing the need to interrupt the main processor.
- System bandwidth. Providing high bandwidth is also important to keep up with increasing processor
speeds. Although fast system interconnects are available, software performance is, in practice,
rarely close to what the hardware provides. Low-level communication libraries fail to deliver close
to raw hardware performance in many cases. Further work on low-level communication interfaces
may also be helpful in providing low-cost, high-performance SVM systems. Using multiple network
interfaces per node is another approach that can increase the available bandwidth. In this case,
protocol changes may be necessary to ensure proper event ordering.
Figure 14. Effects of cluster size on application performance. The data points for each
application (FFT, LU-contiguous, Ocean-contiguous, Water-nsquared, Water-spatial, Radix,
Volrend, Raytrace, Barnes-space, Barnes-rebuild) correspond to a cluster size of 1, 4, 8,
and 16 processors per node.
- Clustering. Up to the scale we examined, adding more processors per node helps in almost all
cases. In applications where performance does not increase quickly with the cluster size, scaling
other system parameters, such as memory bus and I/O bandwidth, can have the desired effect.
- Applications. In doing this work we found that restructuring applications is an area that can make
a big difference. Understanding how an application behaves and restructuring it properly can
improve performance dramatically, far beyond the improvement obtainable from better system
parameters or protocols [10]. This, however, is not always easy; unfortunately, few tools are
available in parallel systems to help discover the cause of bottlenecks and obtain insight about
application restructuring needs, especially when contention is a major problem, as it often is in
commodity-based communication architectures. Architectural simulators are one of the few tools
that can currently be used to understand in detail how an application behaves.
We should point out that this work is limited to a certain family of home-based SVM protocols. Other
systems, for instance fine-grain SVM systems, may exhibit different behavior and different dependencies
on communication parameters. Similar studies for other protocols and architectures can help us better
understand the differences and similarities among SVM systems.
This work was based on a 16-processor system. To address the question of what happens in bigger systems,
we ran some experiments with a 32-processor configuration and compared the number of protocol
events between the two configurations. Table 5 shows the ratios of protocol events and communication
traffic between a 32- and a 16-processor configuration. In most cases the event counts scale proportionally
with the size of the system, which leads us to believe that the results presented so far will hold for
bigger configurations as well (at least up to 32 processors). Moreover, with larger problem sizes the
problems related to the communication architecture are usually alleviated. However, more sophisticated
scaling models that take into account the problem size may be necessary for more detailed and accurate
predictions.

Application      Page Faults  Page Fetches  Remote Lock Acq.  Local Lock Acq.  Barriers  MBytes Sent  Messages Sent
LU                  1.94          2.53            1.86             2.00           1.90      12.90           3.66
Ocean               0.75          0.53            2.77             1.57           1.99       2.50           1.95
Water-nsquared      2.89          2.63            1.40             2.50           1.99       2.80           2.37
Water-spatial       1.85          2.05            1.68             2.26           1.98       2.00           2.08
Radix               1.83          2.43            2.70             4.10           1.99       2.19           2.38
Volrend               -             -               -                -              -          -              -
Raytrace            2.08          2.08            1.33             2.40           2.00       2.08           1.83

Table 5. Ratios of protocol events for a 32- and a 16-processor configuration (4 processors per node).
Another important question is how these communication parameters are going to scale over time. It
seems that the parameters that closely follow hardware performance (host overhead, network interface
occupancy, bandwidth) have more potential for improving (relative to processor speeds) than interrupt
cost, which depends on the operating system and on special architectural support.
10 Conclusions
We have examined the effects of communication parameters on a family of SVM protocols. Through
detailed architectural simulations of a cluster of SMPs and a variety of applications, we find that most
applications are very sensitive to interrupt cost, and a few would benefit from improvements in bandwidth
relative to processor speed as well. Unbalanced systems with relatively high interrupt costs and
low I/O bandwidth can result in substantial losses in application performance. In these cases we observe
slowdowns of more than 90% (a factor of 10 longer execution time). However, most applications are not
sensitive to host overhead and network interface occupancy.
Most regular applications can achieve very good SVM performance under the best configuration of
parameters. For irregular applications, though, even this best performance can be low. This is mainly
due to serialization effects in critical sections, i.e., page faults incurred inside critical sections,
which dilate the critical sections and increase serialization. For example, by reducing the amount of
locking with a different algorithm for parallel tree building, the performance of Barnes improves by
a factor of 2-3. Overall, the achievable application performance today is limited primarily by interrupt
cost and then by node-to-network bandwidth. Host overhead and NI occupancy appear less important to
improve relative to processor speed. If interrupts are free and bandwidth is high relative to processor
speed, then the achievable performance approaches the best performance in most cases.
Acknowledgments
We thank Hongzhang Shan for making available to us the improved version of Barnes, and the anonymous
reviewers for their comments and feedback.
--R
Comparison of shared virtual memory across uniprocessor and SMP nodes.
A virtual memory mapped network interface for the shrimp multicomputer.
A gigabit-per-second local area network
Design and implementation of virtual memory-mapped communication on myrinet
Active messages: A mechanism for integrated communication and computation.
The benefits of clustering in shared address space multiprocessors: An applications-driven investigation
The effects of latency
Improving release-consistent shared virtual memory using automatic update
Understanding application performance on shared virtual memory.
Application restructuring and performance portability on shared virtual memory and hardware-coherent multiprocessors
Performance evaluation of cluster-based multiprocessor built from atm switches and bus-based multiprocessor servers
Distributed shared memory on standard workstations and operating systems.
Lazy release consistency for hardware-coherent multi- processors
Effect of communication latency
The Fast Messages (FM) 2.0 streaming interface.
Tempest and typhoon: User-level shared memory
Augmint: a multiprocessor simulation environment for intel x86 architectures.
Design issues and tradeoffs for write buffers.
Fast interrupt priority management in operating system kernels.
Methodological considerations and characterization of the SPLASH-2 parallel application suite
MGS: a multigrain shared memory system.
Relaxed consistency and coherence granularity in DSM systems: A performance evaluation.
Performance evaluation of two home-based lazy release consistency protocols for shared virtual memory systems
--TR
Active messages
Virtual memory mapped network interface for the SHRIMP multicomputer
Tempest and typhoon
The benefits of clustering in shared address space multiprocessors
Lazy release consistency for hardware-coherent multiprocessors
MGS
Understanding application performance on shared virtual memory systems
Performance evaluation of two home-based lazy release consistency protocols for shared virtual memory systems
Application restructuring and performance portability on shared virtual memory and hardware-coherent multiprocessors
VM-based shared memory on low-latency, remote-memory-access networks
Myrinet
Design and Implementation of Virtual Memory-Mapped Communication on Myrinet
Fast Interrupt Priority Management in Operating System Kernels
Improving Release-Consistent Shared Virtual Memory using Automatic Update
Performance Evaluation of a Cluster-Based Multiprocessor Built from ATM Switches and Bus-Based Multiprocessor Servers
Design Issues and Tradeoffs for Write Buffers
The Effects of Latency, Occupancy, and Bandwidth in Distributed Shared Memory Multiprocessors
Effect of Communication Latency, Overhead, and Bandwidth on a Cluster
--CTR
Mainak Chaudhuri , Mark Heinrich , Chris Holt , Jaswinder Pal Singh , Edward Rothberg , John Hennessy, Latency, Occupancy, and Bandwidth in DSM Multiprocessors: A Performance Evaluation, IEEE Transactions on Computers, v.52 n.7, p.862-880, July 2003
Cheng Liao , Dongming Jiang , Liviu Iftode , Margaret Martonosi , Douglas W. Clark, Monitoring shared virtual memory performance on a Myrinet-based PC cluster, Proceedings of the 12th international conference on Supercomputing, p.251-258, July 1998, Melbourne, Australia
Soichiro Araki , Angelos Bilas , Cezary Dubnicki , Jan Edler , Koichi Konishi , James Philbin, User-space communication: a quantitative study, Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), p.1-16, November 07-13, 1998, San Jose, CA
Angelos Bilas , Courtney R. Gibson , Reza Azimi , Rosalia Christodoulopoulou , Peter Jamieson, Using System Emulation to Model Next-Generation Shared Virtual Memory Clusters, Cluster Computing, v.6 n.4, p.325-338, October 2003
Angelos Bilas , Liviu Iftode , Jaswinder Pal Singh, Evaluation of hardware write propagation support for next-generation shared virtual memory clusters, Proceedings of the 12th international conference on Supercomputing, p.274-281, July 1998, Melbourne, Australia
Angelos Bilas , Dongming Jiang , Jaswinder Pal Singh, Accelerating shared virtual memory via general-purpose network interface support, ACM Transactions on Computer Systems (TOCS), v.19 n.1, p.1-35, Feb. 2001
Sanjeev Kumar , Yitzhak Mandelbaum , Xiang Yu , Kai Li, ESP: a language for programmable devices, ACM SIGPLAN Notices, v.36 n.5, p.309-320, May 2001
Zoran Radovi , Erik Hagersten, Removing the overhead from software-based shared memory, Proceedings of the 2001 ACM/IEEE conference on Supercomputing (CDROM), p.56-56, November 10-16, 2001, Denver, Colorado
Angelos Bilas , Cheng Liao , Jaswinder Pal Singh, Using network interface support to avoid asynchronous protocol processing in shared virtual memory systems, ACM SIGARCH Computer Architecture News, v.27 n.2, p.282-293, May 1999
Salvador Petit , Julio Sahuquillo , Ana Pont , David Kaeli, Addressing a workload characterization study to the design of consistency protocols, The Journal of Supercomputing, v.38 n.1, p.49-72, October 2006 | distributed memory;bandwidth;latency;network occupancy;shared memory;host overhead;communication parameters;interrupt cost;clustering |
509603 | Compiling parallel code for sparse matrix applications. | We have developed a framework based on relational algebra for compiling efficient sparse matrix code from dense DO-ANY loops and a specification of the representation of the sparse matrix. In this paper, we show how this framework can be used to generate parallel code, and present experimental data that demonstrates that the code generated by our Bernoulli compiler achieves performance competitive with that of hand-written codes for important computational kernels. | Introduction
Sparse matrix computations are ubiquitous in computational science. However, the development
of high-performance software for sparse matrix computations is a tedious and
error-prone task, for two reasons. First, there is no standard way of storing sparse matri-
ces, since a variety of formats are used to avoid storing zeros, and the best choice for the
format is dependent on the problem and the architecture. Second, for most algorithms, it
takes a lot of code reorganization to produce an efficient sparse program that is tuned to a
particular format. We illustrate these points by describing two formats, a classical format
called Compressed Column Storage (CCS) [10] and a modern one used in the BlockSolve
library [11], which will serve as running examples in this abstract.
CCS format is illustrated in Fig. 1. The matrix is compressed along the columns and is
stored using three arrays: COLP, VALS and ROWIND. The values of the non-zero elements of
each column j are stored in the array section VALS(COLP(j) : COLP(j+1)-1). The row
indices for the non-zero elements of column j are stored in ROWIND(COLP(j) : COLP(j+1)-1).
This is illustrated in Fig. 1(b). If a matrix has many zero columns, then the zero
columns are not stored, which results in what is called Compressed Compressed Column
Storage format (CCCS). In this case, another level of indirection is added (the COLIND
array) to compress the column dimension, as well (Fig. 1(c)).
Figure 1: Illustration of Compressed Column Storage format. (a) An example matrix; (b) CCS
format, using the COLP, VALS, and ROWIND arrays; (c) CCCS format, with the additional
COLIND array.
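To make the access pattern concrete, here is a minimal Fortran 90 sketch (ours, not from the
original text; the array names follow the description above) that walks every stored element of a
CCS matrix, using column sums as a stand-in computation:

    subroutine ccs_colsums(ncols, colp, vals, s)
      ! Sketch only: the non-zeros of column j occupy positions
      ! COLP(j) .. COLP(j+1)-1 of the VALS and ROWIND arrays.
      integer, intent(in)  :: ncols, colp(ncols+1)
      real,    intent(in)  :: vals(*)
      real,    intent(out) :: s(ncols)
      integer :: j, k
      do j = 1, ncols
         s(j) = 0.0
         do k = colp(j), colp(j+1) - 1
            s(j) = s(j) + vals(k)     ! element (ROWIND(k), j) has value VALS(k)
         end do
      end do
    end subroutine ccs_colsums

The same two-level traversal underlies every CCS computation; only the loop body changes.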
This is a very general and simple format. However, it does not exploit any application-specific
structure in the matrix. The format used in the BlockSolve library exploits structure
present in sparse matrices that arise in the solution of PDEs with multiple degrees of
freedom. Figure 2(a) (adapted from [11]) illustrates a grid that would arise from a 2-D, linear,
multi-component finite-element model with three degrees of freedom at each discretization
point. The degrees of freedom are illustrated by the three dots at each discretization point.
The stiffness matrix for such a model would have groups of rows with identical column structure
called i-nodes ("identical nodes"). Non-zero values for each i-node can be gathered into
a dense matrix as shown in Fig. 2(c).
Such matrices are also rich in cliques (a partition into cliques is shown in Fig. 2(a) using
dashed rectangles). The library colors the contracted graph induced by the cliques and
reorders the matrix as shown in Fig. 2(b). For symmetric matrices, only the lower half is
stored together with the diagonal. Black triangles along the diagonal correspond to dense
matrices induced by the cliques. Gray off-diagonal blocks correspond to sparse blocks of the
matrix (stored using i-nodes). Notice that the matrix is stored as a collection of smaller
dense matrices. This fact helps reduce sparse storage overhead and improve performance of
matrix-vector products.
For parallel execution, each color is divided among the processors. Therefore each processor
receives several blocks of contiguous rows. On each processor, the off-diagonal blocks are
actually stored by column (in column i-nodes). When performing a matrix-vector product,
this storage organization makes the processing of messages containing non-local values of the
vector more efficient. In addition, this allows the overlap of computation and communication
by separating matrix-vector product into a portion which accesses only local data and one
that deals with non-local data in incoming messages.
The main algorithm we will consider in this paper is the matrix-vector product, which is
the core computation in iterative solvers for linear systems. Consider the performance (in
Mflops) of sparse matrix-vector product on a single processor of an IBM SP-2 for a variety
of matrices and storage formats, shown in Table 1 (descriptions of the matrices and the
formats can be found in Appendix A). Boxed numbers indicate the highest performance for
a given matrix. It is clear from this set of experiments that there is no single format that
is appropriate for all kinds of problems. This demonstrates the difficulty of developing a
"sparse BLAS" for sparse matrix computations. Even if we limit ourselves to the formats in
Table 1, one still has to provide at least 6 versions of sparse matrix-matrix product
(assuming that the result is stored in a single format)!

Figure 2: Illustration of the BlockSolve format. (a) A subgraph generated by a 2-D linear
finite-element model; (b) color/clique reordering in the BlockSolve library; (c) i-node storage.
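To suggest why i-nodes reduce storage overhead and speed up matrix-vector products, here is a
hedged Fortran 90 sketch of the product for a single i-node (our illustration, with invented names:
NROWS contiguous rows starting at FIRST_ROW share the NNZ column indices in COLIND, and DENSE holds
the gathered non-zero values):

    subroutine inode_matvec(nrows, nnz, first_row, colind, dense, x, y)
      integer, intent(in)    :: nrows, nnz, first_row, colind(nnz)
      real,    intent(in)    :: dense(nrows, nnz), x(*)
      real,    intent(inout) :: y(*)
      real    :: xt(nnz)
      integer :: r, k
      do k = 1, nnz
         xt(k) = x(colind(k))   ! one gather of x serves every row of the i-node
      end do
      do r = 1, nrows           ! then a small dense matrix-vector product
         do k = 1, nnz
            y(first_row + r - 1) = y(first_row + r - 1) + dense(r, k) * xt(k)
         end do
      end do
    end subroutine inode_matvec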
The lack of extensibility in the sparse BLAS approach has been addressed by object-oriented
solver libraries, like the PETSc library from Argonne [4]. Such libraries provide
templates for a certain class of solvers (for example, Krylov space iterative solvers) and
allow a user to add new formats by providing hooks for the implementations of some algebraic
operations (such as matrix-vector product). However, in many cases the implementations of
matrix-vector products themselves are quite tedious (as is the case in the BlockSolve library).
Also, these libraries are not very useful in developing new algorithms.
A radically different solution is to generate sparse matrix programs by using restructuring
compiler technology. The compiler is given a dense matrix program with declarations about
which matrices are actually sparse, and it is responsible for choosing appropriate storage
formats and for generating sparse matrix programs. This idea has been explored by Bik and
Wijshoff [6, 7], but their approach is limited to simple sparse matrix formats that are not
representative of those used in high-performance codes. Intuitively, they trade the ability to
handle a variety of formats for the ability to compile arbitrary loop nests.
We have taken a different approach. Previously, we have shown how efficient sparse
sequential code can be generated for a variety of storage formats for DOALL loops and
loops with reductions [13, 14]. Our approach is based on viewing arrays as relations, and the
execution of loop nests as evaluation of relational queries. We have demonstrated that our
method of describing storage formats through access methods is general enough to specify
Name        Diagonal   Coordinate   CRS      ITPACK   JDiag    BS95
small       21.972      8.595       16.000    7.446   21.818   2.921
685_bus      1.133      5.379       20.421    4.869   31.406   2.475
gr              -           -            -        -        -       -
memplus      0.268      4.648       15.299    0.250   12.111   4.138
sherman1        -           -            -        -        -       -

Table 1: Performance of sparse matrix-vector product
a variety of formats yet specific enough to allow important optimizations. Since the class
of "DOANY" loops covers not only matrix-vector and matrix-matrix products, but also
important kernels within high-performance implementations of direct solvers and incomplete
preconditioners, this allows us to address the needs of a number of important applications.
One can think of our sparse code generator as providing an extensible set of sparse BLAS
codes, which can be used to implement a variety of applications, just like dense BLAS
routines.
For parallel execution, one needs to specify how data and computation are partitioned.
Such information (we call it the distribution relation) can come in a variety of formats. Just as
is the case with sparse matrix formats, the distribution relation formats are also application
dependent. In the case of regular block/cyclic distributions the distribution relations can
be specified by a closed-form formula. This allows ownership information to be computed
at compile-time. However, regular distributions might not provide adequate load-balance in
many irregularly structured applications.
The HPF-2 standard [9] provides for two kinds of irregular distributions: generalized
block and indirect. In the generalized block distribution, each processor receives a single block of
contiguous rows. It is suggested in the standard that each processor should hold the block
sizes for all processors; that is, the distribution relation should be replicated. This permits
ownership to be determined without communication. Indirect distributions are the most
general: the user provides an array MAP such that the element MAP(i) gives the processor to
which the ith row is assigned. The MAP array itself can be distributed in a variety of ways.
However, this can require communication to determine ownership of non-local data.
The Chaos library [15] allows the user to specify partitioning information by providing
the list of row indices assigned to each processor. The lists of indices are transformed into a
distributed translation table, which is equivalent to having a MAP array partitioned blockwise.
This scheme is as general as the indirect scheme used in HPF-2, and it also requires
communication to determine ownership and to build the translation table.
As we have already discussed, the partitioning scheme used in the BlockSolve library is
somewhat different. It is more general than the generalized block distribution provided by
HPF-2, yet it has more structure than the indirect distribution. Furthermore, the distribution
relation in the BlockSolve library is replicated, since each processor usually receives
only a small number of blocks of contiguous rows.
Our goal is to provide a parallel code generation strategy with the following properties:
- The strategy should not depend on having a fixed set of sparse matrix formats.
- It should not depend on having a fixed set of distributions.
- The system should be extensible. That is, it should be possible to add new formats
without changing the overall code generation mechanism.
- At the same time, the generality should not come at the expense of performance. The
compiler must exploit structure available in sparse matrix and partitioning formats.
To solve this problem we extend our relational approach to the generation of parallel
sparse code starting from dense code, a specification of sparse matrix formats and data
partitioning information. We view arrays as distributed relations and parallel loop execution
as distributed query evaluation. In addition, different ways of representing partitioning
information (regular and irregular) are unified by viewing distribution maps themselves as
relations.
Here is the outline of the rest of the paper. In Section 2, we outline our relational
approach to sequential sparse code generation. In Section 3, we describe our sparse parallel
code generation algorithm. In Section 4, we present experimental evidence of the advantages
of our approach. Section 5 presents a comparison with previous work. Section 6 presents
conclusions and ongoing work.
2 Relational Model of Sparse Code Generation
Consider the matrix-vector product Y = AX, computed by the loop nest:

    do i = 1, n
       do j = 1, n
          Y(i) = Y(i) + A(i, j) * X(j)
       end do
    end do
Suppose that the matrix A and the vector x are sparse, and that the vector y is dense. To
execute this code efficiently, it is necessary to perform only those iterations (i,j) for which
A(i, j) and X(j) are not zero. This set of iterations can be described by the following set
of constraints:

    1 ≤ i ≤ n,    1 ≤ j ≤ n
    A(i, j, a),   X(j, x),   Y(i, y)                                      (1)
    a ≠ 0,        x ≠ 0
The first row represents the loop bounds. The constraints in the second row associate values
with array indices: for example, the predicate A(i, j, a) constrains a to be the value of
A(i, j). Finally, the constraints in the third row specify which iterations update Y with
non-zero values.
Our problem is to compute an efficient enumeration of the set of iterations specified by
the constraints (1). For these iterations, we need efficient access to the corresponding entries
in the matrices and vectors. Since the constraints are not linear and the sets being computed
are not convex, we cannot use methods based on polyhedral algebra, such as Fourier-Motzkin
elimination [2], to enumerate these sets.
Our approach is based on relational algebra, and it models A, X and Y as relations
(tables) that hold tuples of array indices and values. Conceptually, the relation corresponding
to a sparse matrix contains both zero and non-zero values. We view the iteration space of
the loop as a relation I of ⟨i, j⟩ tuples. Then we can write the first two rows of constraints
from (1) as the following relational query (relational algebra notation is summarized in
Appendix B):

    I(i, j) ⋈ A(i, j, a) ⋈ X(j, x) ⋈ Y(i, y)                              (2)
To test if elements of the sparse arrays A and X are non-zero, we use predicates NZ(A(i, j))
and NZ(X(j)). Notice that because Y is dense, NZ(Y(i)) evaluates to true for all array
indices i. Therefore, the constraints in the third row of (1) can now be rewritten as:

    P ≡ NZ(A(i, j)) ∧ NZ(X(j))                                            (3)
The predicate P is called the sparsity predicate. We use the algorithm of Bik and Wijshoff [6,
7] to compute the sparsity predicate in general.
Using the definition of the sparsity predicate, we can finally write down the query which
defines the indices and values in the sparse computation:

    σ_P( I(i, j) ⋈ A(i, j, a) ⋈ X(j, x) ⋈ Y(i, y) )                       (4)

(σ is the relational algebra selection operator.)
We have now reduced the problem of efficiently enumerating the iteration points that
satisfy the system of constraints (1) to the problem of efficiently computing a relational
query involving selections and joins. This problem in turn is solved by determining an
efficient order in which the joins in (4) should be performed and determining how each of
the joins should be implemented. These decisions depend on the storage formats used for
the sparse arrays.
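For instance, when A is stored in CCS format and the test on x is kept explicit, one reasonable
join order for query (4) yields a loop nest of the following shape (a sketch of the kind of code
that can be generated, not the compiler's literal output):

    subroutine sparse_matvec(n, colp, rowind, vals, x, y)
      integer, intent(in)    :: n, colp(n+1), rowind(*)
      real,    intent(in)    :: vals(*), x(n)
      real,    intent(inout) :: y(n)
      integer :: j, k
      do j = 1, n
         if (x(j) /= 0.0) then             ! the selection: the NZ(X(j)) part of P
            do k = colp(j), colp(j+1) - 1  ! the join with A: only stored non-zeros of column j
               y(rowind(k)) = y(rowind(k)) + vals(k) * x(j)
            end do
         end if
      end do
    end subroutine sparse_matvec

Note that the NZ(A(i, j)) part of P costs nothing here: enumerating the CCS structure visits
only the non-zeros.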
2.1 Describing Storage Formats
Following ideas from relational database literature [16, 20], each sparse storage format is
described in terms of its access methods and their properties. Unlike database relations,
which are usually stored as "flat" collections of tuples, most sparse storage formats have
hierarchical structure, which must be exploited for efficiency. For example, the CCS format
does not provide a way of enumerating row indices without first accessing a particular column.
We use the following notation to describe such hierarchical structure of array indices:

    J → (I, V)

which means that for a given column index j we can access a set of ⟨i, v⟩ tuples of row indices
and values of the matrix. The → operator is used to denote the hierarchy of array indices.
For each term in the hierarchy (J and (I, V) in the example), the programmer must
provide methods to search and enumerate the indices at that level, and must specify the
properties of these methods such as the cost of the search or whether the enumeration
produces sorted output. These methods and their properties are used to determine good
join orders and join implementations for each relational query extracted from the program,
as described in [14].
This way of describing storage formats to the compiler through access methods and
properties solves the extensibility problem: a variety of storage formats can be described to
the compiler, and the compilation strategy does not depend on a fixed set of formats. For
the details on how the formats are specified to the compiler, see [13].
2.2 Permutations

Permutations and other kinds of index translations can be easily incorporated into
our framework. Suppose we have a permutation P which is stored using two integer arrays,
PERM and IPERM, which represent the permutation and its inverse. We can view P as a
relation of tuples ⟨i, i'⟩, where i is the original index and i' is the permuted index.
Now suppose that the rows of the matrix in our example have been permuted using P. Then
we can view A as a relation of ⟨i', j, a⟩ tuples, and the query for sparse matrix-vector product
is:

    σ_Q( I(i, j) ⋈ P(i, i') ⋈ A(i', j, a) ⋈ X(j, x) ⋈ Y(i, y) )           (5)

where the sparsity predicate Q is:

    Q ≡ NZ(A(i', j)) ∧ NZ(X(j))                                           (6)
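As a small consistency check on this view (our sketch; PERM and IPERM as above), P contains the
tuple ⟨i, i'⟩ exactly when PERM(i) = i', which forces the two arrays to be mutual inverses:

    logical function perm_consistent(n, perm, iperm)
      integer, intent(in) :: n, perm(n), iperm(n)
      integer :: i
      perm_consistent = .true.
      do i = 1, n
         if (iperm(perm(i)) /= i) perm_consistent = .false.
      end do
    end function perm_consistent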
2.3 Summary

Here are the highlights of our approach:

- Arrays (sparse and dense) are relations.
- Access methods define the relation as a view of the data structures that implement a
particular format.
- We view loop execution as relational query evaluation.
- The query optimization algorithm only needs to know the high-level structure of the
relations as provided by the access methods, and not the actual implementation (e.g.,
the role of the COLP and ROWIND arrays in CCS storage).
- Permutations can also be handled by our compiler.
- The compilation algorithms are independent of any particular set of storage formats,
and new storage formats can be added to the compiler.
3 Generating parallel code
Ancourt et al. [1] have described how the problem of generating SPMD code for dense HPF
programs can be reduced to the computation of expressions in polyhedral algebra. We now
describe how the problem of generating sparse SPMD code for a loop nest can be reduced to
the problem of evaluating relational algebra queries over distributed relations. Section 3.1
describes how distributed arrays are represented. Section 3.2 describes how a distributed
query is translated into a sequence of local queries and communication statements. In
Section 3.3 we discuss how our code generation algorithm is used in the context of the
BlockSolve data structures.
3.1 Representing distributed arrays
In the uniprocessor case, relations are high-level views of the underlying data structures.
In the parallel case, each relation is a view of the partitions (or fragments) stored on each
processor. The formats for the fragments are defined using access methods as outlined in
Sec. 2.1. The problem we must address is that of describing distributed relations from the
fragments.
Let's start with the following simple example:
- The matrix A is partitioned by row. Each processor p gets a fragment matrix A^(p).
- Let i and j be the row and column indices of an array element in the original matrix,
and let i' and j' be the corresponding indices in a fragment A^(p). Because the partition
is by row, the column indices are the same (j = j'); i is the global
row index, whereas i' can be thought of as the local row offset. To translate between i
and i', each processor keeps an integer array IND^(p) such that IND^(p)(i') = i; that is,
each processor keeps the list of global row indices assigned to it.

How do we represent this partition?
Notice that on each processor p the array IND^(p) can be viewed as a relation IND^(p)(i, i').
The local fragment of the matrix can also be viewed as a relation: A^(p)(i', j, a). We can
define the global matrix as follows:

    A(i, j, a) = ∪_p π_{i,j,a}( IND^(p)(i, i') ⋈ A^(p)(i', j, a) )        (7)
(The projection operator π is defined in Appendix B.)
In this case, each processor p carries the information that translates its own fragment
A^(p) into the contribution to the global relation. But there are other situations, where a
processor other than p might own the translation information for the fragment stored on p.
A good example is the distributed translation table used in the Chaos library [15]. Suppose
that the global indices fall into the range 1 ≤ i ≤ N, and let P be the
number of processors. Let B = ⌈N/P⌉. Then for a given global index i, the index of the
owner processor p and the local offset i' are stored on processor

    q = ⌈i/B⌉                                                             (8)

at the offset

    h = i - (q - 1)B                                                      (9)

Each processor q holds the array of ⟨p, i'⟩ tuples indexed by h.
We need a general way of representing such index translation schemes. The key is to
view the index translation relation itself as a distributed relation. Then, in the first case this
global relation is defined as:

    IND(i, p, i') = ∪_p π_{i,p,i'}( {p} × IND^(p)(i, i') )                (10)

In the example from the Chaos library, the relation is defined by:

    IND(i, p, i') = ∪_q π_{i,p,i'}( BLOCK(i, q, h) ⋈ IND^(q)(h, p, i') )  (11)

where IND^(q)(h, p, i') is the view of the above-mentioned array of ⟨p, i'⟩ tuples, and the
relation BLOCK(i, q, h) is shorthand for the constraints in (8) and (9).
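Under this reading of (8) and (9), the BLOCK relation is just integer arithmetic; a Fortran 90
sketch (ours):

    subroutine block_map(i, b, q, h)
      integer, intent(in)  :: i, b   ! global index i and block size B = ceiling(N/P)
      integer, intent(out) :: q, h
      q = (i - 1) / b + 1            ! (8): processor storing the translation entry for i
      h = i - (q - 1) * b            ! (9): offset of that entry on processor q
    end subroutine block_map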
Once we have defined the index translation relation IND(i, p, i'), we can define A by:

    A(i, j, a) = ∪_p π_{i,j,a}( IND(i, p, i') ⋈ A^(p)(i', j, a) )         (12)

where IND can be defined by, for example, (10) or (11).
Similarly, we can define the global relations X and Y for the vectors in the matrix-vector
product (assuming they are distributed the same way as the rows of A):

    X(j, x) = ∪_p π_{j,x}( IND(j, p, j') ⋈ X^(p)(j', x) )                 (13)

    Y(i, y) = ∪_p π_{i,y}( IND(i, p, i') ⋈ Y^(p)(i', y) )                 (14)

In general, distributed relations are described by:

    R(a, v) = ∪_p π_{a,v}( IND(a, p, a') ⋈ R^(p)(a', v) )                 (15)

where R is the distributed relation, R^(p) is the fragment on processor p, and IND is the
global-to-local index translation relation. The index translation relation can be different for
different arrays, but we assume that it always specifies a 1-1 mapping between the global
index a and the pair ⟨p, a'⟩. Notice that our example partitionings of the IND relation in
(10) and (11) themselves satisfy definition (15). We call (15) the fragmentation equation.
How do we specify the distribution of computation? Recall that the iteration set of the
loop is also represented as a relation, I(i, j), in our matrix-vector product example.

Figure 3: Flow of information in HPF and the Bernoulli Compiler. In the Bernoulli Compiler,
access methods define local fragments (e.g., A^(p)) as views of the low-level data structures
(COLP, VALS, ROWIND), and the fragmentation equation defines the global relation A as a view
of the fragments; in HPF, distributed arrays are described by alignment/distribution directives
plus the compiler.

We
could require the user to supply the full fragmentation equation for I. But this would be
too burdensome: the user would have to provide the local iteration set I^(p), but this set
should really be determined by the compiler using some policy (such as the owner-computes
rule). In addition, because the relation I is not stored, there is no need to allow multiple
storage formats for it. Our mechanisms are independent of the policy used to determine the
distribution relation for iterations; given any distribution relation IND, we can define the
local iteration set by:

    I^(p)(i, j) = π_{i,j}( IND(i, p, i') ⋈ I(i, j) )                      (16)
This simple definition allows us to treat the iteration set relation I uniformly together with
other relations in question.
Notice that the fragmentation equation (15) is more explicit than the alignment-distribution
scheme used in HPF. In the Bernoulli compiler, global relations are described through
a hierarchy of views: first, local fragments are defined through access methods as views
of the low-level data structures; then, the global relations are defined as views of the local
fragments through the fragmentation equation.
In HPF, alignment and distribution provide the mapping from global indices to proces-
sors, but not the full global-to-local index translation. Local storage layout (and the full
index translation) is derived by the compiler. This removes from the user the responsibility
for (and flexibility in) defining local storage formats. The difference in the flow of information
between HPF and Bernoulli Compiler is illustrated in Fig. 3.
By mistake, the user may specify inconsistent distribution relations IND. These incon-
sistencies, in general, can only be detected at run-time. For example, it can only be verified
at run-time whether a user-specified distribution relation IND in fact provides a 1-1 and onto
map. This problem is not unique to our framework: HPF with value-based distributions
[21] has a similar problem. Basically, if a function is specified by its values at run-time, its
properties can only be checked at run-time. It is possible to generate a "debugging" version
of the code, that will check the consistency of the distributions, but this is beyond the scope
of this paper.
3.2 Translating distributed queries
Let us return to the query for sparse matrix-vector product:

    σ_P( I(i, j) ⋈ A(i, j, a) ⋈ X(j, x) ⋈ Y(i, y) )                       (17)
The relations A, X and Y are defined by (12), (13) and (14). We translate the distributed
query (17) into a sequence of local queries and communication statements by expanding the
definitions of the distributed relations and doing some algebraic simplification, as follows.
3.2.1 General strategy
In the distributed query literature the optimization problem is: find the sites that will evaluate
parts of the query (17). In the context of, say, a banking database spread across branches
of the bank, the partitioning of the relations is fixed, and may not be optimal for each query
submitted to the system. This is why the choice of sites might be non-trivial in such ap-
plications. See [20] for a detailed discussion of the general distributed query optimization
problem.
In our case, we expect that the placement of the relations is correlated with the query
itself and is given to us by the user. In particular, the placement of the iteration space
relation I tells us where the query should be processed. That is, the query to be evaluated
on each processor p is:

    σ_P( I^(p)(i, j) ⋈ A(i, j, a) ⋈ X(j, x) ⋈ Y(i, y) )                   (18)

where I^(p) is the set of iterations assigned to processor p. We resolve the references to the
global relations A, X and Y by first exploiting the fact that the joins between some of
them (in this case A and Y) do not require any communication at all and can be directly
translated into joins between the local fragments. Then, we resolve the remaining references
by computing communication sets (and performing the actual communication) for
other relations (X in our example).
We now outline the major steps.
3.2.2 Exploiting collocation
In order to expose the fact that the join between A and Y can be done without communi-
cation, we expand the join using the definitions of the relations:

    A(i, j, a) ⋈ Y(i, y) =
        ∪_p ∪_q π( IND(i, p, i') ⋈ A^(p)(i', j, a) ⋈ IND(i, q, i'') ⋈ Y^(q)(i'', y) )   (19)

Because we have assumed that the index translation relation IND provides a 1-1 mapping
between global indices and processor numbers, we can deduce that p = q. This is nothing
more than the statement of the fact that A and Y are aligned [3, 5]. So the join between A
and Y can be translated into:

    A^(p)(i', j, a) ⋈ Y^(p)(i', y)                                        (20)

Notice that the join on the global index i has been translated into a join on the local
offsets i'. The sparsity predicate P originally refers to the distributed relations A and X;
in the translated query, we replace the references to the global
relations with references to the local relations.
3.2.3 Generating communication
The query:

    Used^(p)(j) = π_j( I^(p)(i, j) ⋈ IND(i, p, i') ⋈ A^(p)(i', j, a) )    (21)

computes the set of global indices j of X that are referenced by each processor. The join of
this set with the index translation relation will tell us where to get each element:

    RecvInd^(p)(q, j, j') = Used^(p)(j) ⋈ IND(j, q, j')                   (22)
This tells us which elements of X must be communicated to processor p from processor
q. If the IND relation is distributed (as is the case in the Chaos library), then evaluation
of the query (22) might itself require communication. This communication can also be
expressed and computed in our framework by applying the parallel code generation algorithm
recursively.
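As a feel for the inspector that (21) and (22) describe, here is a hedged Fortran 90 sketch for
the simple case of a replicated distribution relation (all names are illustrative, not the generated
code; OWNER is a replicated map from global index to owning processor, and the local CCS fragment
keeps global column indices):

    subroutine build_recv_counts(n, colp, owner, myproc, nprocs, nrecv)
      integer, intent(in)  :: n, colp(n+1), owner(n), myproc, nprocs
      integer, intent(out) :: nrecv(nprocs)   ! values of x to fetch from each processor
      integer :: j
      nrecv = 0
      do j = 1, n
         ! Used^(p): column j is referenced iff the fragment stores a non-zero in it
         if (colp(j) < colp(j+1) .and. owner(j) /= myproc) then
            nrecv(owner(j)) = nrecv(owner(j)) + 1
         end if
      end do
    end subroutine build_recv_counts

When the distribution relation is itself distributed, the OWNER lookup above becomes the
communication step discussed in the text.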
3.2.4 Summary

Here is the summary of our approach:

- We represent distributed arrays as distributed relations.
- We represent global-to-local index translation relations as distributed relations.
- We represent parallel DOANY loop execution as distributed query evaluation.
- For compiling dense HPF programs, Ancourt et al. [1] describe how the computation
sets, communication sets, etc. can be described by expressions in polyhedral algebra.
We derive similar results for sparse programs, using relational algebra.
3.3 Compiling for the BlockSolve formats.
As was discussed in the introduction, the BlockSolve library splits the matrix into two
disjoint data structures: the collection of dense matrices along the diagonal, shown using
black triangles in Figure 2(b), and the off-diagonal sparse portions of the matrix stored using
i-node format (Figure 2(c)).
In the computation of a matrix-vector product y = Ax, the dense matrices along the
diagonal refer only to the local portions of the vector x. Also, the off-diagonal sparse blocks
are stored in a way that makes it easy to enumerate separately over those elements of the
matrix that refer only to the local elements of x and over those that require communication.
Altogether, we can view a matrix A stored in the BlockSolve library format as a sum
A_D + A_SL + A_SNL, where:

- A_D represents the dense blocks along the diagonal;
- A_SL represents the portions of the sparse blocks that refer to local elements of x;
- A_SNL represents the portions of the sparse blocks that refer to non-local elements of x.

A_D, A_SL and A_SNL are all partitioned by row. The distribution in the library assigns
a small number of blocks of contiguous rows to each processor. The distribution relation is also
replicated, thus reducing the cost of computing the ownership information.
The hand-written library code does not have to compute any communication sets or
index translations for the products involving A_D and A_SL: these portions of the matrix
directly access the local elements of x.
How can we use our code generation technology to produce code competitive with the
hand-written code?
The straightforward approach is to start from the sequential dense-matrix data-parallel
program for matrix-vector product. Since the matrix is represented as three fragments (A_D,
A_SL and A_SNL), our approach essentially computes three matrix-vector products:

    y = A_D x + A_SL x + A_SNL x                                          (23)

The performance of this code is discussed in the next section. Careful comparison of this
code with the hand-written code reveals that the performance of our code suffers from the fact
that even though the products involving A_D and A_SL do not require any communication,
they still require global-to-local index translation for the elements of x that are used in the
computation. If we view A_D and A_SL as global relations that store global row and column
indices, then we hide the fact that the local indices of x can be determined directly from the
data structures for A_D and A_SL. This redundant index translation introduces an extra level of
indirection in the accesses to x and degrades node program performance. At this point, we
have no automatic approach to handling this problem.

We can, however, circumvent the problem at the cost of increasing the complexity of the
input program by specifying the code for the products with A_D and A_SL at the node-program
level. The code for the product with A_SNL is still specified at the global (data-parallel) level:

    local:   y^(p) = A_D^(p) x^(p)
    local:   y^(p) = y^(p) + A_SL^(p) x^(p)                               (24)
    global:  y = y + A_SNL x

where y^(p), etc., are the local portions of the arrays, and y, A_SNL and x are the global views.
The compiler then generates the necessary communication and index translations for the
product with A_SNL. This mixed specification (both data-parallel and node-level programs)
is not unique to our approach. For example, HPF allows the programmer to "escape" to the
node program level by using extrinsics [9].
In general, sophisticated composite sparse formats, such as the one used in the BlockSolve
library, might require algorithm specification at a different level than just a dense loop. We
are currently exploring ways of specifying storage formats so that we can get good sequential
performance without having to drop down to node level programs for some parts of the
application.
4 Experiments
In this section, we present preliminary performance measurements on the IBM SP-2. The
algorithm we studied is a parallel Conjugate Gradient [18] solver with diagonal preconditioning
(CG), which solves large sparse systems of linear equations iteratively. Following
the terminology from the Chaos project, the parallel implementation of the algorithm can be
divided into the inspector phase and the executor phase [15]. The inspector determines
the set of values to be communicated and performs some other preprocessing. The executor
performs the actual computation and communication. In iterative applications the cost of
the inspector can usually be amortized over several iterations of the executor.
In order to verify the quality of the compiler-generated code and to demonstrate the
benefit of using the mixed local/global specification (24) of the algorithm in this application
we have measured the performance of the inspector and the executor in the following
implementations of the CG algorithm:
- BlockSolve is the hand-written code from the BlockSolve library.
- Bernoulli-Mixed is the code generated by the compiler starting from the mixed
local/global specification in (24).
- Bernoulli is the "naive" code generated by the compiler starting from the fully
data-parallel specification (23).
We ran the different implementations of the solver on a set of synthetic three-dimensional
grid problems. The connectivity of the resulting sparse matrix corresponds to a 7-point
stencil with 5 degrees of freedom at each discretization point. Then, we ran the solver on 2,
4, 8, 16, 32 and 64 processors of the IBM SP-2 at Cornell Theory Center. During each run
we kept the problem size per processor constant at 30 × 30 × 30. This places 135 × 10^3 rows
with about 4.5 × 10^6 non-zeroes total on each processor. We limited the number of solver
iterations to 10. Tab. 2 shows the times (in seconds) for the numerical solution phase (the
executor). Tab. 3 shows the overhead of the inspector phase as the ratio of the time taken
by the inspector to the time taken by a single iteration of the executor.
The comparative performance of the Bernoulli-Mixed and BlockSolve versions verifies
the quality of the compiler generated code. The 2-4% difference is due to aggressive
overlapping of communication and computation done in the hand-written code.

               BlockSolve    Bernoulli-Mixed      Bernoulli
                  sec          sec    diff.      sec    diff.

Table 2: Numerical computation times (10 iterations)

               BlockSolve   Bernoulli-Mixed   Bernoulli   Indirect-Mixed   Indirect

Table 3: Inspector overhead

Currently,
the Bernoulli compiler generates simpler code, which first exchanges the non-local values of
x and then does the computation. While the inspector in the Bernoulli-Mixed code is about
twice as expensive as that in the BlockSolve code, its cost is still quite negligible (2.7% of
the executor time with 10 iterations).
The comparison of the Bernoulli and Bernoulli-Mixed code illustrates the importance
of using the mixed local/global specification (24). The Bernoulli code has to perform
redundant work in order to discover that most of the references to x are in fact local and do
not require communication. The amount of this work is proportional to the problem size (the
number of unknowns) and is much larger than the number of elements of x that are actually
communicated. As a result, the inspector in the Bernoulli code is an order of magnitude
more expensive than the one in the BlockSolve or Bernoulli-Mixed implementations. The
performance of the executor also suffers because of the redundant global-to-local translation,
which introduces an extra level of indirection in the final code even for the local references
to x. As a result, the executor in the Bernoulli code is about 10% slower than in the
Bernoulli-Mixed code.
To demonstrate the benefit of exposing structure in distribution relations, we have measured
the inspector overhead for using the indirect distribution format from the HPF-2
standard [9]. We have implemented two versions of the inspectors using the support for the
indirect distribution in the Chaos library [15]:
- Indirect-Mixed is the inspector for the mixed local/global specification of (24).
- Indirect is the inspector for the fully data-parallel specification.
Tab. 3 shows the ratio of the time taken by the Indirect-* inspectors to the time taken by
a single iteration of the Bernoulli-* executors; the executor code is exactly the same in
both cases, and we have only measured the executors for the Bernoulli-* implementations.
The order of magnitude difference between the performance of Indirect-Mixed and
Bernoulli-Mixed inspectors is due to the fact that the Indirect-Mixed inspector has to
perform asymptotically more work and requires expensive communication. Setting up the
distributed translation table in the Indirect-Mixed inspector, which is necessary to resolve
non-local references, requires a round of all-to-all communication with volume proportional
to the problem size (i.e., the number of unknowns). Additionally, querying the
translation table (in order to determine the ownership information) again requires all-to-all
communication: for each global index j, the processor that holds the translation entry for j is
queried for the ownership information, even though the communication pattern for our
problems has limited "nearest-neighbor" connectivity.
The difference between the Indirect and Bernoulli inspectors is not as pronounced: the
number of references that have to be translated is proportional to the problem size. Still, the
Indirect inspector has to perform all-to-all communication to determine the ownership of
the non-local data.
The relative effect of the inspector performance on the overall solver performance de-
pends, of course, on the number of iterations taken by the solver, which, in turn, depends on
the condition number of the input matrix. To get a better idea of the relative performance of
the Bernoulli-Mixed and Indirect-Mixed implementations for a range of problems, we have
plotted in Fig. 4 the ratios of the time that the Indirect-Mixed implementation would take
to the time that the Bernoulli-Mixed implementation would take on 8 and 64 processors
for a range of iteration counts 5 ≤ k ≤ 100. The lines in Fig. 4 plot the values of the ratio:

    (r_I + k) / (r_B + k)

where r_B is the inspector overhead for the Bernoulli-Mixed version, r_I is the inspector overhead
for the Indirect-Mixed version, and k is the iteration count. A simple calculation shows that
it would take 77 iterations of an Indirect-Mixed solver on 64 processors to get within 10%
of the performance of the Bernoulli-Mixed. On 8 processors the number is 43 iterations.
To get within 20% it would take 21 and 39 iterations on 8 and 64 processors, respectively.
These data demonstrate that, while the inspector cost is somewhat amortized in an
iterative solver, it is still important to exploit the structure in distribution relations: it can
lead to order-of-magnitude savings in the inspector cost and improve the overall performance
of the solver.
It should also be noted that the Indirect-Mixed version is not only slower than the two
Bernoulli versions but also requires more programming effort. Our compiler starts with the
specification at the level of dense loops both in (23) and (24), whereas an HPF compiler
needs sequential sparse code as input. For our target class of problems (sparse DOANY
loops), our approach results in better-quality parallel code while reducing programming
effort.
Figure 4: Effect of problem conditioning on the relative performance (ratio of Indirect-Mixed
to Bernoulli-Mixed execution time versus the number of iterations, on 8 and 64 processors).
5 Previous work
The closest alternative to our work is a combination of Bik's sparse compiler [6, 7] and the
work on specifying and compiling sparse codes in HPF Fortran [19, 21, 22]. One could use
the sparse compiler to translate dense sequential loops into sparse loops. Then, the Fortran
D or Vienna Fortran compiler can be used to compile these sparse loops. However, both
Bik's work and the work done by Ujaldon et al. on reducing inspector overheads in sparse
codes limit the user to a fixed set of sparse matrix storage and distribution formats, thus
reducing the possibilities for exploiting problem-specific structure.
6 Conclusions
We have presented an approach for compiling parallel sparse codes for user-defined data struc-
tures, starting from DOANY loops. Our approach is based on viewing parallel DOANY loop
execution as relational query evaluation, and sparse matrices and distribution information
as distributed relations. This relational approach is general enough to represent a variety of
storage formats.
However, this generality does not come at the expense of performance. We are able
to exploit the properties of the distribution relation in order to produce inexpensive
inspectors, as well as to produce quality numerical code for the executors. Our experimental
evidence shows that both are important for achieving performance competitive with hand-written
library codes.
So far, we have focused our efforts on versions of iterative solvers, such as the Conjugate
Gradient algorithm, which do not use incomplete factorization preconditioners. The
core operation in such solvers is the sparse matrix-vector product or the product of a sparse
matrix and a skinny dense matrix. We are currently investigating how our techniques can be
used in the automatic generation of high-performance codes for such operations as matrix
factorizations (full and incomplete) and triangular linear system solution.
--R
A linear algebra framework for static hpf code distribution.
Scanning polyhedra with do loops.
Global optimizations for parallelism and locality on scalable parallel machines.
Solving alignment using elementary linear algebra.
Advanced compiler optimizations for sparse computations.
Automatic data structure selection and transformation for sparse matrix computations.
The Quality of Numerical Software: Assessment and Enhancement
High Performance Fortran Forum.
Computer Solution of Large Sparse Positive Definite Systems.
BlockSolve95 users manual: Scalable library software for the parallel solution of sparse linear systems.
Algorithm 586 ITPACK 2C: A FORTRAN package for solving large sparse linear systems by adaptive accelerated iterative methods.
Compiling parallel sparse code for user-defined data structures
A relational approach to sparse matrix compilation.
Database Management Systems.
Solving Elliptic Problems Using ELLPACK.
Krylov subspace methods on supercomputers.
New data-parallel language features for sparse matrix computations
Principles of Database and Knowledge-Base Systems
Distributed memory compiler design for sparse problems.
--TR
Solving elliptic problems using ELLPACK
Principles of database and knowledge-base systems, Vol. I
Krylov subspace methods on supercomputers
Scanning polyhedra with DO loops
Global optimizations for parallelism and locality on scalable parallel machines
Runtime compilation techniques for data partitioning and communication schedule reuse
Advanced compiler optimizations for sparse computations
Automatic Data Structure Selection and Transformation for Sparse Matrix Computations
Database management systems
Matrix market
Algorithm 586: ITPACK 2C: A FORTRAN Package for Solving Large Sparse Linear Systems by Adaptive Accelerated Iterative Methods
Computer Solution of Large Sparse Positive Definite
Distributed Memory Compiler Design For Sparse Problems
Solving Alignment Using Elementary Linear Algebra
--CTR
Chun-Yuan Lin , Yeh-Ching Chung , Jen-Shiuh Liu, Efficient Data Compression Methods for Multidimensional Sparse Array Operations Based on the EKMR Scheme, IEEE Transactions on Computers, v.52 n.12, p.1640-1646, December
Yuan Lin , David Padua, On the automatic parallelization of sparse and irregular Fortran programs, Scientific Programming, v.7 n.3-4, p.231-246, August 1999
Roxane Adle , Marc Aiguier , Franck Delaplace, Toward an automatic parallelization of sparse matrix computations, Journal of Parallel and Distributed Computing, v.65 n.3, p.313-330, March 2005
Chun-Yuan Lin , Yeh-Ching Chung, Data distribution schemes of sparse arrays on distributed memory multicomputers, The Journal of Supercomputing, v.41 n.1, p.63-87, July 2007
Eun-Jin Im , Katherine Yelick , Richard Vuduc, Sparsity: Optimization Framework for Sparse Matrix Kernels, International Journal of High Performance Computing Applications, v.18 n.1, p.135-158, February 2004
Chun-Yuan Lin , Jen-Shiuh Liu , Yeh-Ching Chung, Efficient Representation Scheme for Multidimensional Array Operations, IEEE Transactions on Computers, v.51 n.3, p.327-345, March 2002
Chun-Yuan Lin , Yeh-Ching Chung , Jen-Shiuh Liu, Efficient Data Parallel Algorithms for Multidimensional Array Operations Based on the EKMR Scheme for Distributed Memory Multicomputers, IEEE Transactions on Parallel and Distributed Systems, v.14 n.7, p.625-639, July | sparse matrix computations;parallelizing compilers |
509605 | Compiling stencils in high performance Fortran. | For many Fortran90 and HPF programs performing dense matrix computations, the main computational portion of the program belongs to a class of kernels known as stencils. Stencil computations are commonly used in solving partial differential equations, image processing, and geometric modeling. The efficient handling of such stencils is critical for achieving high performance on distributed-memory machines. Compiling stencils into efficient code is viewed as so important that some companies have built special-purpose compilers for handling them and others have added stencil-recognizers to existing compilers. In this paper we present a general compilation strategy for stencils written using Fortran90 array constructs. Our strategy is capable of optimizing single- or multi-statement stencils and applies equally well to stencils specified with shift intrinsics or with array syntax. The strategy eliminates the need for pattern-recognition algorithms by orchestrating a set of optimizations that address the overhead of both intraprocessor and interprocessor data movement that results from the translation of Fortran90 array constructs. Our experimental results show that code produced by this strategy beats or matches the best code produced by the special-purpose compilers or pattern-recognition schemes that are known to us. In addition, our strategy produces highly optimized code in situations where the others fail, yielding several orders of magnitude performance improvement, and thus provides a stencil compilation strategy that is more robust than its predecessors. | Introduction
High-Performance Fortran (HPF)[14], an extension of Fortran90, has attracted considerable
attention as a promising language for writing portable parallel programs. HPF offers a simple
programming model shielding programmers from the intricacies of concurrent programming
and managing distributed data. Programmers express data parallelism using Fortran90 array
operations and use data layout directives to direct partitioning of the data and computation
among the processors of a parallel machine.
In many programs performing dense matrix computations, the main computational portion
of the program belongs to a class of kernels known as stencils. For HPF to gain acceptance
as a vehicle for parallel scientific programming, it must achieve high performance on
this important class of problems. Compiling stencils into efficient code is viewed as so important
that some companies have built special-purpose compilers for handling them [4, 5, 6]
and others have added stencil-recognizers to existing HPF compilers [1, 2]. Each of these
previous approaches to stencil compilation had significant limitations that restricted the
types of stencils that they could handle.
In this paper, we focus on the problem of optimizing stencil computations, no matter
how they are instantiated by the programmer, for execution on distributed-memory archi-
tectures. Our strategy orchestrates a set of optimizations that address the overhead of both
intraprocessor and interprocessor data movement that results from the translation of Fortran90
array constructs. Additional optimizations address the issues of scalarizing array
assignment statements, loop fusion, and data locality.
In the next section we briefly discuss stencil computations and their execution cost on
distributed-memory machines. In Section 3 we give an overview of our compilation strategy,
and then discuss the individual optimizations. In Section 4 we present an extended example
to show how our strategy handles a difficult case. Experimental results are given in Section 5,
and in Section 6 we compare this strategy with other known efforts.
2 Stencil Computations
In this section we introduce stencil computations and give an overview of their execution
cost on distributed-memory machines. We also introduce the normalized intermediate form
which our compiler uses for all stencils.
2.1 Stencils
A stencil is a stylized matrix computation in which a group of neighboring data elements
are combined to calculate a new value. They are typically combined in the form of a sum
of products. This type of computation is common in solving partial differential equations,
image processing, and geometric modeling. The Fortran90 array assignment statement in
Figure 1 is commonly referred to as a 5-point stencil. In this statement src and dst are
arrays, and C1-C5 are either scalars or arrays. Each interior element of the result array dst
is computed from the corresponding element of the source array src and the neighboring
Figure 1: 5-point stencil computation.
Figure 2: 9-point stencil computation.
Figure 3: Problem 9 from the Purdue Set.
elements of src on the North, West, South, and East. A 9-point stencil that computes all
grid elements by exploiting the cshift intrinsic might be specified as shown in Figure 2.
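The code bodies of Figures 1 and 2 did not survive extraction. A minimal sketch of the two
forms, with illustrative bounds and coefficients rather than the paper's exact figures, might
read:

   ! 5-point stencil in Fortran90 array syntax (cf. Figure 1);
   ! interior elements only, coefficients C1-C5 as in the text
   dst(2:N-1,2:N-1) = C1*src(2:N-1,2:N-1)                       &
                    + C2*src(1:N-2,2:N-1) + C3*src(2:N-1,1:N-2) &
                    + C4*src(3:N,2:N-1)   + C5*src(2:N-1,3:N)

   ! 9-point stencil over all grid elements using cshift (cf. Figure 2);
   ! the four corner terms use nested cshifts, giving twelve calls in total
   dst = C1*src                                                 &
       + C2*cshift(src,-1,1) + C3*cshift(src,1,1)               &
       + C4*cshift(src,-1,2) + C5*cshift(src,1,2)               &
       + C6*cshift(cshift(src,-1,1),-1,2) + C7*cshift(cshift(src,-1,1),1,2) &
       + C8*cshift(cshift(src,1,1),-1,2)  + C9*cshift(cshift(src,1,1),1,2)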
In the previous two examples the stencils were specified as a single array assignment
statement, but this need not always be the case. Consider again the 9-point stencil above.
If the programmer attempted to optimize the program by hand, or if the stencil was pre-processed
by other optimization phases of the compiler, we might be presented with the code
shown in Figure 3 (see footnote 1).
A goal of our work is to generate the same, highly-optimized code for all stencil computa-
tions, regardless of how they have been written in HPF. For this reason, we have designed our
optimizer to target the most general, normalized input form. All stencil and stencil-like computations
can be translated into this normal form by factoring expressions and introducing
temporary arrays. In fact, this is the intermediate form used by several distributed-memory
compilers [18, 23, 3]. The normal form has several distinguishing characteristics:
- cshift intrinsics and temporary arrays have been inserted to perform data movement
needed for operations on array sections that have different processor mappings.
- Each cshift intrinsic occurs as a singleton operation on the right-hand side of an array
assignment statement and is only applied to whole arrays.
- The expression that actually computes the stencil operates on operands that are perfectly
aligned, and thus no communication operations are required.
Footnote 1: This example was taken from Problem 9 of the Purdue Set [21] as adapted for Fortran D
benchmarking by Thomas Haupt of NPAC.
Figure 4: Intermediate form of 5-point stencil computation.
For example, given the 5-point stencil computation presented in Figure 1, the CM Fortran
compiler would translate it into the sequence of statements shown in Figure 4.
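Figure 4 itself was lost in extraction; under the normal form just described, a plausible
rendering (the temporary names are assumptions) is:

   tmp1 = cshift(src,-1,1)
   tmp2 = cshift(src, 1,1)
   tmp3 = cshift(src,-1,2)
   tmp4 = cshift(src, 1,2)
   dst  = C1*src + C2*tmp1 + C3*tmp2 + C4*tmp3 + C5*tmp4

Each cshift is a singleton operation on the right-hand side, applied to a whole array, and the
final expression operates only on perfectly aligned operands, as the three characteristics
require.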
For the rest of this paper we assume that all stencil computations have been normalized
into this form, and that all arrays are distributed in a block fashion. And although we
concentrate on stencils expressed using the cshift intrinsic, the techniques presented can
be generalized to handle the eoshift intrinsic as well.
2.2 Stencil Execution
The execution of a stencil computation on a distributed-memory machine has two major
components: the data movement associated with a set of cshift operations and the calculation
of the sum of products.
In the first phase of a stencil computation, all data movement associated with cshift
operations is performed. We illustrate the data movement for a single cshift using an
example. Figure 5 shows the effects of a cshift by -1 along the second dimension of a
two-dimensional block-distributed array. When a cshift operation is performed on a
distributed array, two major actions take place:
1. Data elements that must be shifted across processing element (PE) boundaries are sent
to the appropriate neighboring PE. This is the interprocessor component of the shift.
In Figure 5, the dashed lines represent this type of data movement, in this case the
transfer of a column of data between neighboring processors.
2. Data elements shifted within a PE are copied to the appropriate locations in the
destination array. This is the intraprocessor component of the shift. The solid lines in
Figure 5 represent this data movement.
Figure 5: Data movement for a cshift by -1 along the second dimension of a block-distributed array.
Following data movement, the second phase of a stencil computation is the execution
of a loop nest to calculate a sum of products. The loop nest for a stencil computation is
constructed during compilation in two steps. First the compiler applies scalarization [24] to
replace Fortran 90 array operations with a serial loop nest that operates on individual data
elements. Next, the compiler transforms this loop nest into SPMD code [8]. The SPMD
code is synthesized by reducing the loop bounds so that each PE computes values only for
the data it owns. A copy of this transformed loop nest, known as the subgrid loop nest,
executes on each PE of the parallel machine.
Due to the nature of stencils which make many distinct array references, these subgrid
loops can easily become memory bound. In such loops, the CPU must often sit idle while it
waits for the array elements to be fetched from memory.
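To make this concrete, the scalarized SPMD subgrid loop for the normalized 5-point stencil
might look as follows, where the bounds mylo1/myhi1 and mylo2/myhi2 delimiting the locally
owned subgrid are hypothetical names:

   do j = mylo2, myhi2
      do i = mylo1, myhi1
         dst(i,j) = C1*src(i,j)  + C2*tmp1(i,j) + C3*tmp2(i,j) &
                  + C4*tmp3(i,j) + C5*tmp4(i,j)
      end do
   end do

The five distinct array streams read in the inner loop illustrate how quickly such loops become
memory bound.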
3 Compilation Strategy
In this section we start with an overview of our compilation strategy, and then present the
individual component optimizations.
Given a stencil computation in normal form (as described in Section 2.1), we optimize
it by applying a sequence of four optimizations. The first addresses the intraprocessor data
movement associated with the cshift operations, eliminating it when possible. The second
rearranges the statements into separate blocks of computation operations and communication
operations. This optimizes the stencil by promoting loop fusion for the computation
operations and it prepares the communication operations for further optimization by the
following phase. Next, the interprocessor data movement of the cshift operations is optimized
by eliminating redundant and partially-redundant communication. Finally, loop-level
transformations are applied to optimize the computation.
3.1 Optimizing Intraprocessor Data Movement
Intraprocessor data movement associated with shift intrinsics is completely eliminated when
possible. This is accomplished by an optimization we call offset arrays [15]. This optimization
determines when the source array (src) and the destination array (dst) of the cshift can
share the same memory locations. If this is the case only the interprocessor data movement
needs to occur. We exploit overlap areas [11] to receive the data that is copied between
processors. After this has been accomplished, appropriate references to the destination
array can be rewritten to refer to the source array with indices offset by the shift amount.
The principal challenge then is to determine when the source and destination arrays can
share storage. We have established a set of criteria to determine when it is safe and profitable
to create an offset array. These criteria, and an algorithm used to verify them are described
in detail elsewhere [15, 22]. In general, our approach allows the source and destination arrays
of a shift operation to share storage between destructive updates to either array when the
shift offset is a small constant.
Once we have determined that the destination array of an assignment statement DST =
CSHIFT(SRC,SHIFT,DIM) may be an offset array, we perform the following transformations
on the code. These transformations take advantage of the data that may be shared between
the source array src and destination array dst and move only the required data between
the PEs.
First we replace the shift operation with a call to a routine that moves the off-processor
data of SRC into an overlap area: CALL OVERLAP_SHIFT(SRC,SHIFT,DIM). We then replace
all uses of the array dst, that are reached from this definition, with a use of the array src.
The newly created references to src carry along special annotations representing the values
of shift and dim. Finally, when creating subgrid loops during the scalarization phase, we
alter the subscript indices used for the offset arrays. The array subscript used for the offset
reference to src is identical to the subscript that would have been generated for dst with
the exception that the dim-th dimension has been incremented by the shift amount.
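A sketch of the before and after code for one shifted operand of the 5-point stencil, assuming
the OVERLAP_SHIFT runtime interface named above, is:

   ! before: normal form with a temporary array
   tmp1 = cshift(src, 1, 1)
   dst  = C1*src + C2*tmp1

   ! after: only the interprocessor component moves data, into the overlap
   ! area of src; the use of tmp1 becomes an offset reference into src
   call overlap_shift(src, 1, 1)
   do j = mylo2, myhi2
      do i = mylo1, myhi1
         dst(i,j) = C1*src(i,j) + C2*src(i+1,j)
      end do
   end do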
The algorithm that we have devised for verifying the criteria and for performing the
above transformations is based upon the static single assignment (SSA) intermediate representation
[9]. The algorithm, after validating the use of an offset array at a shift operation,
transforms the program and propagates that information in an optimistic manner. The propagation
continues until there are no more references to transform or one of the criteria has
been violated. When a criterion has been violated, it may be necessary to insert an array copy
statement into the program to maintain its original semantics. The inserted copy statement
performs the intraprocessor data movement that was avoided with the overlap shift.
Due to the offset array algorithm's optimistic nature, it is able to eliminate intraprocessor
data movement associated with shift operations in many difficult situations. In particular,
it can determine when offset arrays can be exploited even when their definition and uses are
separated by program control flow. This allows our stencil compilation strategy to eliminate
intraprocessor data movement in situations where other strategies fail.
3.2 Statement Reordering
We follow the offset array optimization with our context partitioning optimization [17].
This optimization partitions a set of Fortran90 statements into groups of congruent array
statements (array statements are congruent if they operate on arrays with identical distributions
and cover the same iteration space), scalar expressions, and communication operations. This assists the compilation
of stencils in the following two ways:
1. First, by grouping congruent array statements together, we ensure that as subgrid
loops are generated, via scalarization and loop fusion, as much computation as possible
is placed within each loop without causing the loops to be over-fused [22]. Loops
are over-fused when the code produced for the resulting parallel loops exhibits worse
performance than the code for the separate parallel loops. Also, the structure of the
subgrid loops produced is very regular. These characteristics increase the chances that
loop transformations performed later are successful in exploiting data reuse and data
locality.
2. Second, by grouping together communication operations, we simplify the task of reducing
the amount of interprocessor data movement, which we discuss in the next
subsection.
To accomplish context partitioning, we use an algorithm proposed by Kennedy and
McKinley [16]. While this algorithm was developed to partition parallel and serial loops
into fusible groups, we use it to partition Fortran90 statements into congruence classes.
The algorithm works on the data dependence graph (ddg), which must be acyclic. Since we
apply it to a set of statements within a basic block, our dependence graph contains only
loop-independent dependences and thus is acyclic. A complete description of our context
partitioning algorithm is available elsewhere [17, 22], along with a discussion of its advantages
for both SIMD and MIMD machines.
Context partitioning is key to our ability to optimize multi-statement stencils as fully as
single-statement stencils. No other stencil compilation strategy has this capability.
3.3 Minimizing Interprocessor Data Movement
Once intraprocessor data movement has been eliminated and we have partitioned the statements
into groups of congruent operations, we focus our attention on the interprocessor data
movement that occurs during the calls to cshift. Due to the nature of offset arrays, we
are presented with many opportunities to eliminate redundant and partially redundant data
movement. We call this optimization communication unioning [22], since it combines a set
of communication operations to produce a smaller set of operations.
There are two key observations that allow us to find and eliminate redundant inter-processor
data movement. First, shift operations, including overlap shift, are commutative.
Thus, for arrays that are shifted more than once, we can order the shift operations in any
manner we like without affecting the result. Second, since all overlap shifts move data
into the overlap areas of the subgrids, a shift of a large amount in a given direction and
dimension may subsume all shifts of smaller amounts in the same direction and dimension.
More formally, an overlap shift of amount i in dimension k is redundant if there exists
an overlap shift of amount j in dimension k such that |j| >= |i| and i and j have the same sign.
Since we have already applied our context partitioning optimization to the program, we can
restrict our focus to the individual groups of calls to overlap shift.
To eliminate redundant data movement using communication unioning, we first use the
commutative property to rewrite all the shifts for multi-offset arrays such that the overlap
shifts for the lower dimensions occur first and are used as input to the overlap shifts
for higher dimensions. We then reorder all the calls to overlap shift, sorting them by
the shifted dimension, lowest to highest. We now scan the overlap shifts for the lowest
dimension and keep only the largest shift amount in each direction. All others can be
eliminated as redundant.
Communication unioning then proceeds to process the overlap shifts for each higher
dimension in ascending order by performing the following three actions:
1. We scan the overlap shifts for the given dimension to determine the largest shift
amount in each direction.
2. We look for source arrays that are already offset arrays, indicating a multi-offset array.
For these, we use the annotations associated with the source array to create an RSD
(regular section descriptor) to
be used as an optional fourth argument in the call to overlap shift. The argument
indicates those data elements from the adjacent overlap areas that should also be moved
during the shift operation. Mapping the annotations to the RSD is simply a matter of
adding the annotations to the corresponding RSD dimension; the annotation is added
to the lower bound of the RSD if the shift amount is negative, otherwise it is added to
the upper bound. As with shift amounts, larger RSDs subsume smaller RSDs.
3. We generate a single overlap shift in each direction, using the largest shift amount
and including the RSD as needed - all other overlap shifts for that dimension can
be eliminated.
This procedure eliminates all redundant offset-shift communication, including partially redundant
data movement associated with accessing "corner elements" of stencils.
This algorithm is unique in that it is based upon the understanding and analysis of the
shift intrinsics, rather than being based upon pattern-matching as is done in many stencil
compilers. This optimization eliminates all communication for a shifted array, except for a
single message in each direction of each dimension. The number of messages for the stencil
is thus minimized.
As an example, consider again the 9-point stencil computation that we presented in
Figure 2. The original stencil specification required twelve cshift intrinsics. After applying
communication unioning, only the four calls to overlap shift shown in Figure 6 are
required.
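The body of Figure 6 was also lost in extraction. Based on the description above, the four
remaining calls would take roughly the following form; the encoding of the optional RSD
argument is our assumption, since it is not specified in the surviving text:

   call overlap_shift(src,  1, 1)
   call overlap_shift(src, -1, 1)
   call overlap_shift(src,  1, 2, rsd_lo)   ! the RSDs extend the dimension-2 shifts
   call overlap_shift(src, -1, 2, rsd_hi)   ! to carry the corner data deposited in
                                            ! the overlap areas by the first two calls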
Figure 6: Result of communication unioning for the 9-point stencil.
Figure 7: First half of 9-point stencil communication.
Figure 8: Result of communication operation.
Figures 7-10 display the data movement that results from these calls. The figures contain
a 5 x 5 subgrid (solid lines) surrounded by its overlap area (dashed lines). Portions of the
adjacent subgrids are also shown. Figure 7 depicts the data movement specified by the first
two calls. The result of that data movement is shown in Figure 8, where the overlap areas
have been properly filled in. The data movement of the last two calls is shown in Figure 9.
Notice how the last two calls pick up data from the overlap areas that were filled in by the
first two calls, and thus they populate all overlap area elements needed for the subsequent
computation, as shown in Figure 10.
Figure 9: Second half of 9-point stencil communication.
Figure 10: Result of communication operation.
3.4 Optimizing the Computation
Finally, after scalarization has produced a subgrid loop nest, we can optimize it by applying
a set of loop-level transformations designed to improve the performance of memory-bound
programs. These transformations include unroll-and-jam, which addresses memory refer-
ences, and loop permutation, which addresses cache references. Each of these optimize the
program by exploiting reuse of data values. These optimizations are described in detail
elsewhere [7, 19] and are not addressed in this paper.
4 An Extended Example
In this section, we trace our compilation strategy through an extended example. This detailed
examination shows how our strategy is able to produce code that matches or beats hand-optimized
code. It also demonstrates how we are able to handle stencil computations that
cause other methods to fail.
For this exercise, we have chosen to use Problem 9 of the Purdue Set [21], as adapted for
Fortran D benchmarking by Thomas Haupt of NPAC [20, 13]. The program kernel is shown
in Figure 3. The arrays T, U, RIP, and RIN are all two-dimensional and have been distributed
in a (block,block) fashion. This kernel computes a standard 9-point stencil, identical to
that computed by the single-statement stencil shown in Figure 2. The reason it has been
written in this fashion is to reduce memory requirements. Given the single-statement 9-point
stencil, most Fortran90 compilers will generate 12 temporary arrays, one for each cshift.
This greatly restricts the size of the problem that can be solved on a given machine. In
contrast, the Problem 9 specification can be computed with only 3 temporary arrays since
the live-ranges of the last 6 cshifts do not overlap. This reduces the temporary storage
requirements by a factor of four! Additionally, the assignments of the cshifts into RIP and
RIN perform a common subexpression elimination, removing four duplicate cshifts from
the original specification of the stencil.
Figure 11 shows a comparison of execution times for the single-statement cshift stencil
in Figure 2 and the multi-statement Problem 9 stencil in Figure 3. The programs were
compiled with IBM's xlhpf compiler and executed on a 4-processor SP-2 for varying problem
sizes. As can be seen, the single-statement stencil specification exhausted the available
memory for the larger problem sizes, even though each PE had 256Mbytes of real RAM.
4.1 Program Normalization
We now step through the compilation of the stencil code in Figure 3 using the strategy
presented in this paper. Figure 12 shows the stencil code after normalization. The six
cshifts that are subexpressions in the assignment statements to array T are hoisted from
the statements and assigned to compiler-generated temporary arrays. Since the live ranges
of the temporary arrays do not overlap, a single temporary can be shared among all the
statements. Alternatively, each cshift could receive its own temporary array - that would
not affect the results of our stencil compilation strategy.
Figure 11: Comparison of two 9-point stencil specifications: execution time (ms) versus squared
subgrid size, for the CSHIFT specification and Purdue Problem 9; the CSHIFT version exceeded
memory at the larger sizes.
Figure 12: Problem 9 after normalization.
4.2 Offset Array Optimization
Once all shift operations have been identified and hoisted into their own assignment state-
ments, we apply our offset array optimization. For this example, our algorithm determines
that all the shifted arrays can be made into offset arrays. As can be seen in Figure 13, all
the cshift operations have been changed into overlap shift operations, and references
to the assigned arrays have been replaced with offset references to the source array U. All
intraprocessor data movement has thus been eliminated.
In addition, notice how the temporary arrays, both the compiler-generated TMP array
Figure 13: Problem 9 after offset array optimization.
Figure 14: Problem 9 after context partitioning optimization.
and the user-defined RIP and RIN, are no longer needed to compute the stencil. If there are
no other uses of these arrays in the routine, they need not be allocated. This reduction in
storage requirements allows for larger problems to be solved on a given machine.
4.3 Context Partitioning Optimization
After offset array optimization, we apply our context partitioning algorithm. This algorithm
begins by determining the congruence classes present in the section of code. In
this example there are only two congruence classes: the array statements, which are all
congruent, and the communication statements. The dependence graph is computed next.
There are only two types of dependences that exist in the code: true dependences from the
overlap shift operations to the expressions that use the offset arrays, and the true and
anti-dependences that exist between the multiple occurrences of the array T. Since all the
Figure 15: Problem 9 after communication unioning optimization.
dependences between the two classes are from statements in the communication class to
statements in the congruent array class, the context partitioning algorithm is able to partition
the statements perfectly into two groups. The result is shown in Figure 14. Since the
array statements are now adjacent, scalarization will be able to fuse them into a single loop
nest. Similarly, the communication statements are adjacent and communication unioning
will be successful at its task.
4.4 Communication Unioning Optimization
We now turn our attention to the interprocessor data movement specified in the overlap
shift operations. As described in Section 3.3, we first exploit the commutativity
of overlap shift operations and rewrite multi-dimensional overlap shifts so that the
lower dimensions are shifted first. No rewriting is necessary for this example since all the
dimension 1 shifts occur first, as can be seen in Figure 14.
Next we look at the shifts across the first dimension. Since there is only a single shift
of distance one in each direction, there is no redundant communication to eliminate. In
the second dimension we again find only shifts of distance one. However, we discover four
multi-offset arrays. Examining the annotations of the offset arrays, we create RSD's that
summarize the overlap areas that are necessary. We generate the two calls to overlap shift
that include the RSD's and then eliminate all other overlap shift calls for the second
dimension. The resulting code is shown in Figure 15. Communication unioning has reduced
the amount of communication to a minimum: a single communication operation for each
dimension in each direction.
4.5 Scalarization and Memory Optimizations
Figure 16 shows the code after scalarization. The code now contains only 4 interprocessor
communication operations, and no intraprocessor data movement is performed. Final transformations
refine the loop bounds to generate a node program that only accesses the subgrids
local to each PE. Our strategy has generated a single loop nest which, due to the nature
of stencil computations, is ripe with opportunities for memory hierarchy optimization. We
hand the final code to an optimizing node compiler that performs loop-level transformations
such as scalar replacement and unroll-and-jam.
Figure 16: Problem 9 after scalarization.
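The body of Figure 16 did not survive extraction apart from its closing ENDDO lines. A
plausible reconstruction of the single fused subgrid loop nest, with illustrative coefficient
and bound names, is:

   do j = mylo2, myhi2
      do i = mylo1, myhi1
         t(i,j) = c1*u(i,j)                                            &
                + c2*(u(i-1,j)   + u(i+1,j)   + u(i,j-1)   + u(i,j+1)) &
                + c3*(u(i-1,j-1) + u(i-1,j+1) + u(i+1,j-1) + u(i+1,j+1))
      end do
   end do

All nine references read the array u directly; the eight neighbor values come either from the
local subgrid or from the overlap areas filled by the four overlap_shift calls.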
It is important to note that our strategy also produces the exact same code when given
the single-statement 9-point stencil from Figure 2. This example shows how our stencil compilation
algorithm is capable of fully optimizing stencils, no matter how they are instantiated
by the programmer.
5 Experimental Results
To measure the performance boost supplied by each step of our stencil compilation strategy,
we ran a set of tests on a 4-processor IBM SP-2. We started by generating a naive translation
of the Problem 9 test case into Fortran77+MPI. This is considered our "original" version.
We then successively applied the transformations as outlined in the preceding section and
measured the execution time. The results are shown in Figure 17.
Before analyzing the results in Figure 17, it is worthwhile to compare them to the results
shown in Figure 11 for the Problem 9 code. The performance of our "original" MPI version
of the code for this example is already an order of magnitude faster than the code produced
by IBM's xlhpf compiler: 0.475 seconds versus 4.77 seconds for the largest problem size.
After applying our offset array optimization to the Fortran77+MPI test case as shown
in Figure 13, execution time improves by 45%, equivalent to a speedup of 1.80. Next, after
applying context partitioning, as shown in Figure 14, scalarization was able to merge all of
the computation into a single loop nest, improving execution time an additional 31%. At
this point, we have reduced the execution time of the original program by 62%, a speedup
of 2.64.
As shown in Figure 15, our communication unioning optimization eliminates four communication
operations, which reduces the execution time by 41% when compared to the
context-optimized version. Applying memory optimizations such as scalar replacement and
unroll-and-jam further reduce the execution time another 14%. The execution time of the
original program has been trimmed by 81%, equivalent to a speedup of 5.19. Comparing our
code to the code produced by IBM's xlhpf compiler shows a speedup by a factor of 52!
Lest someone think that we have chosen IBM's xlhpf compiler as a straw man, we have
collected some additional performance numbers. We generated a third version of a 9-point
Figure 17: Step-wise results from the stencil compilation strategy on Problem 9 when executed
on an SP-2: execution time (ms) versus squared subgrid size for the original program, offset
arrays, context partitioning, communication unioning, and memory optimizations.
stencil computation, this one using array syntax similar to the 5-point stencil shown in
Figure 1. This 9-point stencil computation only computes the interior elements of the matrix;
that is, elements 2:N-1 in each dimension. A graph comparing its execution time to the other
two 9-point stencil specifications is given in Figure 18. The IBM xlhpf compiler was used
in all cases. It is interesting to note that for the array syntax stencil the xlhpf compiler
produced performance numbers that tracked our best performance numbers for all problem
sizes except the largest, where we had a 10% advantage.
It is important to note that the stencil compilation strategy that we have presented
handles all three specifications of the 9-point stencil equally well. That is because our
algorithm is based upon the analysis and optimization of the base constructs upon which
stencils are built. Our algorithm is designed to handle the lowest common denominator - a
form into which our compiler can transform all stencil computations.
6 Related Work
One of the first major efforts to specifically address the compilation of stencil computations
for a distributed-memory machine was the stencil compiler for the CM-2, also known as the
convolution compiler [4, 5, 6]. The compiler eliminated intraprocessor data movement and
optimized the interprocessor data movement by exploiting the CM-2's polyshift communication
[10]. The final computation was performed by hand-optimized library microcode that
took advantage of several loop transformations and a specialized register allocation scheme.
Our general compilation methodology produces the same style code as this specialized
compiler. We both eliminate intraprocessor data movement and minimize interprocessor
Figure 18: Comparison of three 9-point stencil specifications: execution time (ms) versus squared
subgrid size for the CSHIFT specification, Purdue Problem 9, and array syntax; the CSHIFT version
exceeded memory at the larger sizes.
data movement. Finally, our use of a loop-level optimizer to perform the unroll-and-jam
optimization accomplishes the same data reuse as the stencil compiler's "multi-stencil swath".
The CM-2 stencil compiler had many limitations, however. It could only handle single-
statement stencils. The stencil had to be specified using the cshift intrinsic; no array-syntax
stencils would be accepted. Since the compiler relied upon pattern matching, the stencil had
to be in a very specific form: a sum of terms, each of which is a coefficient multiplying a
shift expression. No variations were possible. And finally, the programmer had to recognize
the stencil computation, extract it from the program and place it in its own subroutine to
be compiled by the stencil compiler.
Our compilation scheme handles a strict superset of patterns handled by the CM-2 stencil
compiler. In their own words, they "avoid the general problem by restricting the domain
of applicability." [6] We have placed no such restrictions upon our work. Our strategy
optimizes single-statement stencils, multi-statement stencils, cshift intrinsic stencils, and
array-syntax stencils all equally well. And since our optimizations were designed to be incorporated
into an HPF compiler, they benefit those computations that only slightly resemble
stencils.
There are also some other commercially available compilers that can handle certain styl-
ized, single-statement stencils. The MasPar Fortran compiler avoids intraprocessor data
movement for single-statement stencils written using array notation. This is accomplished
by scalarizing the Fortran90 expression (avoiding the generation of cshifts) and then using
dependence analysis to find loop-carried dependences that indicate interprocessor data
movement. Only the interprocessor data is moved, and no local copying is required. How-
ever, the compiler still performs all the data movement for single-statement stencils written
using shift intrinsics. This strategy is shared by many Fortran90/HPF compilers that focus
on handling scalarized code. As with the CM-2 stencil compiler, our methodology is a strict
superset of this strategy.
Gupta, et al. [12], in describing IBM's xlhpf compiler, state that they are able to reduce
the number of messages for multi-dimensional shifts by exploiting methods similar to ours.
However, they do not describe their algorithm for accomplishing this, and it is unknown
whether they would be able to eliminate the redundant communication that arises from
shifts over the same dimension and direction but of different distances.
The Portland Group's pghpf compiler, as described by Bozkus, et al. [1, 2], performs
stencil recognition and optimizes the computation by using overlap shift communication.
They also perform a subset of our communication unioning optimization. However, they are
limited to single-statement expressions in both cases.
In general, there have been several different methods for handling specific forms of stencil
computations. Our strategy handles a more general form of stencil computations than these
earlier methods.
7 Conclusion
In this paper, we presented a general compilation scheme for compiling HPF stencil computations
for distributed-memory architectures. The strategy optimizes such computations
by orchestrating a unique set of optimizations. These optimizations eliminate unnecessary
intraprocessor data movement resulting from cshift intrinsics, rearrange the array statements
to promote profitable loop-fusion, eliminate redundant interprocessor data movement,
and optimize memory accesses via loop-level transformations. The optimizations are general
enough to be included in a general-purpose HPF/Fortran90 compiler as they will benefit
many computations, not just those that fit a stencil pattern.
The strength of these optimizations is that they operate on a normal form into which
all stencil computations can readily be translated. This enables us to optimize all stencil
computations regardless of whether they are written using array syntax or explicit shift in-
trinsics, or whether the stencil is computed by a single statement or multiple statements.
This approach is significantly more general than stencil compilation approaches in previous
compilers. Even though we focused on the compilation of stencils for distributed-memory
machines in this paper, the techniques presented are equally applicable to optimizing stencil
computations on shared-memory and scalar machines (with the exception of reducing
interprocessor movement).
Acknowledgments
This work has been supported in part by the IBM Corporation, the Center for Research
on Parallel Computation (an NSF Science and Technology Center), and DARPA Contract
DABT63-92-C-0038. This work was also supported in part by the Defense Advanced Re-search
Projects Agency and Rome Laboratory, Air Force Materiel Command, USAF, under
agreement number F30602-96-1-0159. The U.S. Government is authorized to reproduce and
distribute reprints for Governmental purposes notwithstanding any copyright annotation
thereon. The views and conclusions contained herein are those of the authors and should
not be interpreted as representing the official policies or endorsements, either expressed or
implied, of the Defense Advanced Research Projects Agency and Rome Laboratory or the
U.S. Government.
--R
Techniques for compiling and executing HPF programs on shared-memory and distributed-memory parallel systems
Compiling data parallel programs to message passing programs for massively parallel MIMD systems.
A stencil compiler for the Connection Machine models CM-2/200
A stencil compiler for the Connection Machine model CM-5
Compiling Fortran 77D and 90D for MIMD distributed-memory machines
Efficiently computing static single assignment form and the control dependence graph.
POLYSHIFT communications software for the Connection Machine systems CM-2 and CM-200
Updating distributed variables in local computations.
An HPF compiler for the IBM SP2.
Low level HPF compiler benchmark suite.
High Performance Fortran Forum.
Optimizing Fortran 90 shift operations on distributed-memory multicomputers
Context optimization for SIMD execution.
Optimization techniques for SIMD Fortran compilers.
Improving data locality with loop transfor- mations
Applications benchmark set for Fortran-D and High Performance For- tran
Problems to test parallel and vector languages.
Optimizing Fortran90D/HPF for Distributed-Memory Computers
A compiler for a massively parallel distributed memory MIMD computer.
Optimizing Supercompilers for Supercomputers.
--TR
Updating distributed variables in local computations
Fortran at ten gigaflops
Efficiently computing static single assignment form and the control dependence graph
Compiler optimizations for improving data locality
POLYSHIFT communications software for the connection machine system CM-200
An HPF compiler for the IBM SP2
Improving data locality with loop transformations
PGHPF: an optimizing High Performance Fortran compiler for distributed memory machines
Optimizing Fortran90D/HPF for distributed-memory computers
Optimizing Supercompilers for Supercomputers
Optimizing Fortran 90 Shift Operations on Distributed-Memory Multicomputers
--CTR
David Wonnacott, Achieving Scalable Locality with Time Skewing, International Journal of Parallel Programming, v.30 n.3, p.181-221, June 2002
M. Kandemir, 2D data locality: definition, abstraction, and application, Proceedings of the 2005 IEEE/ACM International conference on Computer-aided design, p.275-278, November 06-10, 2005, San Jose, CA
Hitoshi Sakagami , Hitoshi Murai , Yoshiki Seo , Mitsuo Yokokawa, 14.9 TFLOPS three-dimensional fluid simulation for fusion science with HPF on the Earth Simulator, Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p.1-14, November 16, 2002, Baltimore, Maryland
G. Chen , M. Kandemir, Optimizing inter-processor data locality on embedded chip multiprocessors, Proceedings of the 5th ACM international conference on Embedded software, September 18-22, 2005, Jersey City, NJ, USA
Armando Solar-Lezama , Gilad Arnold , Liviu Tancau , Rastislav Bodik , Vijay Saraswat , Sanjit Seshia, Sketching stencils, ACM SIGPLAN Notices, v.42 n.6, June 2007
Steven J. Deitz , Bradford L. Chamberlain , Lawrence Snyder, Eliminating redundancies in sum-of-product array computations, Proceedings of the 15th international conference on Supercomputing, p.65-77, June 2001, Sorrento, Italy
Gerald Roth , Ken Kennedy, Loop fusion in high performance Fortran, Proceedings of the 12th international conference on Supercomputing, p.125-132, July 1998, Melbourne, Australia
A. Ya. Kalinov , A. L. Lastovetsky , I. N. Ledovskikh , M. A. Posypkin, Compilation of Vector Statements of C[] Language for Architectures with Multilevel Memory Hierarchy, Programming and Computing Software, v.27 n.3, p.111-122, May-June 2001
Mary Jane Irwin, Compiler-directed proactive power management for networks, Proceedings of the 2005 international conference on Compilers, architectures and synthesis for embedded systems, September 24-27, 2005, San Francisco, California, USA
Zhang , Zhengqian Kuang , Baiming Feng , Jichang Kang, Auto-CFD-NOW: A pre-compiler for effectively parallelizing CFD applications on networks of workstations, The Journal of Supercomputing, v.38 n.2, p.189-217, November 2006
Daniel J. Rosenkrantz , Lenore R. Mullin , Harry B. Hunt III, On minimizing materializations of array-valued temporaries, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.6, p.1145-1177, November 2006 | stencil compilation;high performance Fortran;communication unioning;statement partitioning;shift optimization |
509611 | Portable performance of data parallel languages. | A portable program executes on different platforms and yields consistent performance. With the focus on portability, this paper presents an in-depth study of the performance of three NAS benchmarks (EP, MG, FT) compiled with three commercial HPF compilers (APR, PGI, IBM) on the IBM SP2. Each benchmark is evaluated in two versions: using DO loops and using F90 constructs and/or HPF's Forall statement. Base-line comparison is provided by versions of the benchmarks written in Fortran/MPI and ZPL, a data parallel language developed at the University of Washington.While some F90/Forall programs achieve scalable performance with some compilers, the results indicate a considerable portability problem in HPF programs. Two sources for the problem are identified. First, Fortran's semantics require extensive analysis and optimization to arrive at a parallel program; therefore relying on the compiler's capability alone leads to unpredictable performance. Second, the wide differences in the parallelization strategies used by each compiler may require an HPF program to be customized for the particular compiler. While improving compiler optimizations may help to reduce some performance variations, the results suggest that the foremost criteria for portability is a concise performance model that the compiler must adhere to and that the users can rely on. | Introduction
Portability is defined as the ability to use the same program on different platforms and to achieve
consistent performance. Developing a parallel program that is both portable and scalable is
well recognized as a challenging endeavor. However, the difficulty is not necessarily an intrinsic
property of parallel computing. This assertion is especially clear in the case of data parallel
algorithms which provide abundant parallelism and tend to involve computation that is very
regular. The data parallel model is not adequate for general parallel programming. However,
its simplicity coupled with the prevalence of data parallel problems in scientific applications has
motivated the development of many data parallel languages, all with the goal of simplifying
programming while achieving scalable and portable performance.
Footnote: This research was supported by the IBM Resident Study Program and DARPA Grant N00014-92-J-4041 and ...
Of these languages, High Performance Fortran [10] constitutes the most widespread effort,
involving a large consortium of companies and universities. One of HPF's distinctions is that
it is the first parallel language with a recognized standard - indeed, HPF can be regarded as
the integration of several similar data parallel languages including Fortran D, Vienna Fortran
and CM Fortran [6, 14, 22]. The attractions of HPF are manifold. First, its use of Fortran
as a base language promises quick user acceptance since the language is well established in the
target community. Second, the use of directives to parallelize sequential programs implies ease
of programming since the directives can be added incrementally without affecting the program's
correctness. In particular cases, the compiler may even be able to parallelize the program without
user assistance.
On the other hand, HPF also has potential disadvantages. First, as an extension of a sequential
language, it is likely to inherit language features that are either incompatible with parallelization
or difficult for a compiler to analyze. Second, the portability of a program must not be affected
by differences in the technology of the compilers or the machines since the principal purpose for
creating a standard is to ensure that programs are portable. HPF's design presents some potential
conflicts with the goal of portability. For instance, hiding most aspects of communication from
the programmer is convenient, but it forces the user to rely completely on the compiler for
generating efficient communication.
Differences between compilers will always be present. However, to maintain the program portability
in the language, the differences must not force the users to make program modifications to
accommodate a specific compiler. In other words, the user should be able to use any compiler
to develop a program that scales, then have the option of migrating to a different machine or
compiler for better scalar performance. This requires a tight coupling between the language
specification and the compiler in the sense that the compiler implementations must provide a
consistent behavior for the abstractions provided in the language. To this end, the language
specification must serve as a consistent contract between the compiler and the programmer. We
call this contract the performance model of the language [18]. A robust performance model
has a dual effect: the program performance is (1) predictable to the user and (2) portable across
different platforms.
With the focus on the portability issue, we study in-depth the performance of three NAS
benchmarks compiled with three commercial HPF compilers on the IBM SP2. The benchmarks
are: Embarrassingly Parallel (EP), Multigrid (MG), and Fourier Transform (FT). The HPF compilers
include Applied Parallel Research, Portland Group, and IBM. To evaluate the effect of data
dependences on compiler analysis, we consider two versions of each benchmark: one programmed
using DO loops, and the second using F90 constructs and/or HPF's Forall statement.
For the comparison, we also consider the performance of each benchmark written in MPI and
ZPL [16], a data parallel language developed at the University of Washington. Since message
passing programs yield scalable performance but are not convenient, the MPI results represent a
level of performance that the HPF programs should use as a point of reference. The motivation
for including the ZPL results is as follows. ZPL is a data parallel language developed from first
principles. The lack of a parent language allows ZPL to introduce new language constructs and
incorporate a robust performance model, creating a concrete delineation between parallel and
sequential execution. Consequently, the programming model presented to the user is clear, and
the compiler is relatively unhindered by artificial dependencies and complex interactions between
language features. One may expect that it is both easier to develop a ZPL compiler and to write
a ZPL program that scales well. Naturally, the downside of designing a new language without a
legacy is the challenge of gaining user acceptance. For this study, the ZPL measurement gives an
indication as to whether consistent and scalable performance can be achieved when the compiler
is not hampered by language features unrelated to parallel computation.
Our results show that programs that scale well using a particular HPF compiler may not
perform similarly with a different compiler, indicating a lack of portability. Some F90/Forall
programs achieve scalable performance, but the results are not uniform. For the other programs,
the results suggest that Fortran's sequential nature leads to considerable difficulties in the compil-
er's analysis and optimization of the communication. By analyzing in detail the implementations
by the HPF compilers, we find that the wide differences in the parallelization strategies and their
varying degrees of success contribute to the portability problem of HPF programs. While improving
compiler optimizations may help to reduce some performance variations, it is clear that
a robust solution will require more than a mature compiler technology. The results suggest that
the foremost criteria for portability is a concise performance model that the compiler must adhere
to and that the users can rely on. This performance model will serve as an effective contract
between the users and the compiler.
In related work, APR published the performance of its HPF compiler for a suite of HPF pro-
grams, along with detailed descriptions of their program restructuring process using the APR
FORGE tool to improve the codes [3, 11]. The programs are well tuned to the APR compiler and
in many cases rely on the use of APR-specific directives rather than standard HPF directives.
Although the approach that APR advocates (program development followed by profiler-based
program restructuring) is successful for these instances, the resulting programs may not be
portable with respect to performance, particularly in cases that employ APR directives. There-
fore, we believe that the suite of APR benchmarks is not well suited for evaluating HPF compilers
in general.
Similarly, papers by vendors describing their individual HPF compilers typically show some
performance numbers; however it remains difficult to make comparisons across compilers [8, 12,
13].
Lin et al. used the APR benchmark suite to compare the performance of ZPL versions of the
programs against the corresponding HPF performance published by APR and found that ZPL
generally outperforms HPF [17]. However, without access to the APR compiler at the time,
detailed analysis was not possible, limiting the comparison to the aggregate timings.
This paper makes the following contributions:
1. An in-depth comparison and analysis of the performance of HPF programs with three
current HPF compilers and alternative approaches (MPI, ZPL).
2. A comparison of the DO loop with the F90 array syntax and the Forall construct.
3. An assessment of the parallel programming model presented by HPF.
The remainder of the paper is organized as follows: Section 2 describes the methodology for
the study, including a description of the algorithms and the benchmark implementations. In
Section 3, we examine and analyze the benchmarks' performance, detailing the communication
generated in each implementation and quantifying the effects of data dependences in the HPF
programs. Section 4 provides our observations and our conclusions.
2.1 ZPL Overview
ZPL is an array language designed at the University of Washington expressly for parallel ex-
ecution. In the context of this paper, it serves two purposes. First, it sets a bound on the
performance that can be expected from a high level data parallel language that is not an extension
of an existing sequential language. Second, it illustrates the importance of the performance
model in a parallel language.
ZPL is implicitly parallel - i.e. there are no directives. The concurrency is derived entirely
from the semantics of the array operations. Array decompositions, specified at run time, partition
arrays into either 1D or 2D blocks. Processors perform the computations for the values they
own. Scalars are replicated on all processors and kept coherent by redundantly computing scalar
computations.
ZPL introduces a new abstraction called region which is used to allocate distributed arrays
and to specify distributed computation. ZPL provides a full complement of operations to define
regions relative to each other (e.g. [east of R]), to refer to adjacent elements (e.g. A@west), to
perform full and partial prefix operations (e.g. big := max !! A), to express strided computations,
to establish boundary conditions (e.g. wrap A), and to accomplish other powerful operations (e.g.
flooding). ZPL also contains standard operators, data types and control structures, using a syntax
similar to Modula-2.
2.2 Benchmark selection
To emphasize the portability issue, we establish these criteria:
1. The benchmarks should be derived from an independent source to insure objectivity.
2. A message passing version should be included in the study to establish the target performance.
3. For HPF, there should be separate versions that employ F77 DO loop and F90/Forall
because there is a significant difference between the two types of constructs.
4. The algorithm should be parallel and there should be no algorithmic differences between
versions of the same benchmark.
5. Tuning must adhere to the language specification rather than any specific compiler capability.
6. Because support for HPF features is not uniform, the benchmarks should not require any
feature that is not supported by all HPF compilers.
Following these criteria proves to be challenging given the disparate benchmark availabil-
ity. The NAS benchmark version 1.0 (NPB1) was implemented by computer vendors and was
intended to measure the best performance possible on a parallel machine without regard to
portability. The available sources for NPB1 are generally sequential implementations. Although
they are valid HPF programs, the sequential nature of the algorithms may be too difficult for
the compilers to parallelize and may not reflect a natural approach to parallel programming. For
instance, a programmer may simply choose a specific parallel algorithm to implement in HPF.
The NAS version 2.1 (NPB2.1), intended to measure portable performance, is a better choice
since the programs implement inherently parallel algorithms and they use the same MPI interface
as the compilers in the study. NPB2.1 contains 7 benchmarks (see footnote 1), all of which should ideally
be included in the study. Unfortunately, a portable HPF version of these benchmarks is not
available, severely limiting an independent comparison. While APR and other HPF vendors
publish the benchmarks used to acquire their performance measurements, these benchmarks
are generally tuned to the specific compiler and are not portable. This limitation forces us
to carefully derive HPF versions from the NPB2.1 sources with the focus on portability while
avoiding external effects such as algorithmic differences.
Among the benchmarks, CG is not available in NPB2.1. SP, BT and LU are not included in
this study because they require block cyclic and 3-D data distributions that are not supported
by all HPF compilers. These limitations do not prevent these benchmarks from being implemented
in HPF and ZPL; however, the implementations will have an algorithmic difference that
cannot be factored from the performance. This leaves only FT and MG as potential candidates.
Fortunately, EP is by definition highly parallel; therefore its sequential implementation can be
trivially parallelized.
2.3 Benchmark implementation
The HPF implementations are derived by reverse engineering the MPI programs: communication
calls are removed and the local loop bounds are replaced with global loop bounds. To
parallelize the programs, HPF directives are then added to recreate the data partitioning of the
MPI versions. The HPF compilers are thus presented with a program that is fully data parallel
(not sequential) and ready to be parallelized. Conceptually, the task for the compilers is to
repartition the problem as specified by the HPF directives and to regenerate the communication.
The benchmarks we chose require only the basic BLOCK distribution which is supported by all
three compilers and use only the basic HPF intrinsics. Therefore they can stress the compilers
without exceeding their capability.
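For instance, recreating a two-dimensional block partitioning of the MPI version amounts to
declarations of roughly the following form, where the array and processor-grid names are
illustrative:

      double precision u(nx, ny)
!HPF$ PROCESSORS procs(2, 2)
!HPF$ DISTRIBUTE u(BLOCK, BLOCK) ONTO procs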
The implementations in the ZPL language are derived from the same source as the HPF
implementations, but in the following manner: the sequential computation is translated directly
from Fortran to the corresponding ZPL syntax, while the parallel execution is expressed using
ZPL's parallel constructs.
Footnote 1: One was added recently.
2.3.1 EP
The benchmark EP generates N pairs of pseudo-random floating point values (x_j, y_j) in the
interval (0,1) according to the specified algorithm, then redistributes each value x_j and y_j onto
the range (-1,1) by scaling them as x_j := 2x_j - 1 and y_j := 2y_j - 1. Each pair is tested for the
condition:
   t_j = x_j^2 + y_j^2 <= 1
If true, the independent Gaussian deviates are computed:
   X_k = x_j sqrt(-2 ln(t_j) / t_j),   Y_k = y_j sqrt(-2 ln(t_j) / t_j)
Then the new pair (X, Y) is tested to see if it falls within one of the 10 square annuli, and a total
count is tabulated for each annulus:
   l <= max(|X|, |Y|) < l + 1,   l = 0, 1, ..., 9
The pseudo-random numbers are generated according to the following linear congruential recursion:
   x_{k+1} = a x_k (mod 2^46)
The values in a pair are consecutive values of the recursion. To scale to the (0,1) range,
the value x_k is divided by 2^46.
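In code, the per-pair test and tabulation amount to the following sketch; the variable names are
ours, not the benchmark's:

   double precision xj, yj, x, y, t, s, gx, gy
   integer l, q(0:9)
   x = 2.0d0*xj - 1.0d0
   y = 2.0d0*yj - 1.0d0
   t = x*x + y*y
   if (t .le. 1.0d0) then
      s  = sqrt(-2.0d0*log(t)/t)
      gx = x*s                          ! independent Gaussian deviates
      gy = y*s
      l  = int(max(abs(gx), abs(gy)))   ! annulus index, 0 <= l <= 9
      q(l) = q(l) + 1                   ! per-annulus count
   end if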
The computation for each pair of Gaussian deviates can proceed independently. Each processor
would maintain its own counts of the Gaussian deviates and communicate at the end to obtain
the global sum. The random number generation, however, presents a challenge. There are two
ways to compute a random value x_k:
1. x_k can be computed quickly from the preceding value x_{k-1} using only one multiplication
and one mod operation, leading to a complexity of O(n). However, the major drawback is
the true data dependence on the value x_{k-1}.
2. x_k can be computed independently using k and the defined values of a and x_0. This will
result in an overall complexity of O(n^2). Fortunately, the property of the mod operation
allows x_k to be computed in O(log k) steps by using a binary exponentiation algorithm [4].
The goal then is to balance between method (1) and (2) to achieve parallelism while maintaining
the O(n) cost. Because EP is not available in the NPB2.1 suite, we use the implementation
provided by APR as the DO loop version. This version is structured to achieve the balance
between (1) and (2) by batching: the random values are generated in one sequential batch at a
time and saved; the seed of the batch is computed using the more expensive method (2), and the
remaining values are computed using the less expensive method (1). A DO loop then iterates
to compute the number of batches required, and this constitutes the opportunity for parallel
execution.
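The seed computation of method (2) can be sketched as follows (a sketch with hypothetical routine names; the 23-bit splitting mirrors the scheme documented in the NPB specification [4] so that the mod-$2^{46}$ products stay within 64-bit integers):

    module lcg_sketch
      implicit none
      integer(8), parameter :: a = 5_8**13                 ! NPB multiplier
      integer(8), parameter :: r23 = 2_8**23, r46 = 2_8**46
    contains
      function lcg_mult(u, v) result(w)   ! (u*v) mod 2**46 without overflow
        integer(8), intent(in) :: u, v
        integer(8) :: w, u1, u2, v1, v2, t
        u1 = u / r23;  u2 = mod(u, r23)   ! split operands into 23-bit halves
        v1 = v / r23;  v2 = mod(v, r23)
        t = mod(u1*v2 + u2*v1, r23)       ! middle partial product mod 2**23
        w = mod(t*r23 + u2*v2, r46)
      end function lcg_mult
      function lcg_skip(k, x0) result(x)  ! x_k = a**k * x0 mod 2**46 in O(log k)
        integer(8), intent(in) :: k, x0
        integer(8) :: x, t, m
        t = a;  x = x0;  m = k
        do while (m > 0)                  ! binary exponentiation
          if (mod(m, 2_8) == 1) x = lcg_mult(t, x)
          t = lcg_mult(t, t)
          m = m / 2
        end do
      end function lcg_skip
    end module lcg_sketch

    program seed_demo
      use lcg_sketch
      print *, lcg_skip(1000000_8, 314159265_8)   ! seed of the 10**6-th value
    end program seed_demo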
The F90/Forall version is derived from the DO loop version with the following modifications:
- All variables in the main DO loop that cause an output dependence are expanded into arrays
of the size of the loop iteration. In other words, the output dependence is eliminated by
essentially renaming the variables so that the computation can be expressed in a fully data
parallel manner. Since the iteration count is just the number of sequential batches, the
expansion is not excessive.
- Directives are added to partition the arrays onto a 1-D processor grid.
- The DO loop for the final summation is also recoded using the HPF reduction intrinsic.
A complication arises involving the subroutine call within the Forall loop, which must be free
of side effects in order for the loop to be distributed. Some slight code rearrangement was done
to remove a side effect in the original subroutine, then the PURE directives were added to assert
freedom from side effects.
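The resulting pattern can be sketched as follows (hypothetical names and a placeholder batch body; the point is only that the procedure referenced inside the FORALL must be declared PURE):

    program ep_forall_sketch
      implicit none
      integer, parameter :: nbatch = 64
      integer(8) :: seeds(nbatch), counts(nbatch), total
      integer :: b
    !HPF$ DISTRIBUTE (BLOCK) :: seeds, counts
      seeds = (/ (int(b, 8), b = 1, nbatch) /)   ! placeholder batch seeds
      forall (b = 1:nbatch) counts(b) = batch_count(seeds(b))
      total = sum(counts)                        ! the HPF reduction intrinsic
      print *, total
    contains
      pure function batch_count(seed) result(c)  ! PURE: asserts no side effects
        integer(8), intent(in) :: seed
        integer(8) :: c
        c = seed                                 ! placeholder for the batch work
      end function batch_count
    end program ep_forall_sketch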
The ZPL version is translated in a straightforward manner from the DO loop version. The
only notable difference is the use of the ZPL region construct to express the independent batch
computations.
2.3.2 MG
Multigrid is interesting for several reasons.
First, it illustrates the need for data parallel languages such as HPF or ZPL. The NPB2.1
implementation contains over 700 lines of code for the communication - about 30% of the program
- which are eliminated when the program is written in a data parallel language.
Second, since the main computation is a 27-point stencil, the reference pattern that requires
communication is simply a shift by a constant, which results in a simple neighbor exchange
in the processor grid. All compilers (ZPL and HPF) recognize this pattern well and employ
optimizations such as message vectorization and storage preallocation for the nonlocal data
[3, 8, 9, 12]. Therefore, although the benchmark is rather complex, the initial indication is that
both HPF and ZPL should be able to produce efficient parallel programs.
The benchmark is a V-cycle multigrid algorithm for computing an approximate solution u to
the discrete Poisson problem
$\nabla^2 u = v$,
where $\nabla^2$ is the Laplacian operator $\partial^2/\partial x^2 + \partial^2/\partial y^2 + \partial^2/\partial z^2$.
The algorithm consists of 4 iterations of the following three steps:
$r = v - A u$   (evaluate residual)
$z = M^k r$   (compute correction)
$u = u + z$   (apply correction)
where A is the trilinear finite element discretization of the Laplace operator $\nabla^2$, and $M^k$ is the V-cycle
multigrid operator as defined in the NPB1 benchmark specification [1, 4].
The algorithm implemented in the NPB2.1 version consists of three phases: the first phase
computes the residual, the second phase is a set of steps that applies the M k operator to compute
the correction while the last phase applies the correction.
The HPF DO loop version is derived from the NPB2.1 implementation as follows:
- The MPI calls are removed.
- The local loop bounds are replaced with the global bounds.
- The use of a COMMON block of storage to hold the set of hierarchical arrays (different
sizes) is incompatible with HPF; therefore the arrays are renamed and declared explicitly.
- HPF directives are added to partition the arrays onto a 3-D processor grid. The array
distribution is maintained across subroutine calls by using the transcriptive directives to
prevent unnecessary redistribution.
The HPF F90/Forall version requires the additional step of rewriting all data parallel loops in
F90 syntax.
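As an illustration of this rewriting step (a hypothetical fragment with placeholder weights, shown with a 7-point stencil for brevity; the benchmark stencil has 27 points), the DO loops become a single array assignment in which every operand is a shift by a constant:

    program resid_sketch
      implicit none
      integer, parameter :: n = 64
      real :: u(n, n, n), v(n, n, n), r(n, n, n)
      real, parameter :: a0 = 2.0, a1 = -1.0 / 6.0   ! hypothetical weights
    !HPF$ DISTRIBUTE (BLOCK, BLOCK, BLOCK) :: u, v, r
      u = 1.0
      v = 0.0
      ! Every operand below is a shift by a constant -- the reference pattern
      ! the compilers vectorize into one neighbor exchange per direction.
      r(2:n-1, 2:n-1, 2:n-1) = v(2:n-1, 2:n-1, 2:n-1)              &
          - a0 * u(2:n-1, 2:n-1, 2:n-1)                            &
          - a1 * (u(1:n-2, 2:n-1, 2:n-1) + u(3:n, 2:n-1, 2:n-1)    &
                + u(2:n-1, 1:n-2, 2:n-1) + u(2:n-1, 3:n, 2:n-1)    &
                + u(2:n-1, 2:n-1, 1:n-2) + u(2:n-1, 2:n-1, 3:n))
    end program resid_sketch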
The ZPL version has a similar structure to the HPF F90/Forall version, the notable difference
being the use of strided region to express the hierarchy of 3-D grids. A strided region is a sparse
index set over which data can be declared and computation can be specified.
2.3.3 FT
Consider the partial differential equation for a point x in 3-D space:
$\partial u(x, t)/\partial t = \alpha \nabla^2 u(x, t)$.
The FT benchmark solves the PDE by (1) computing the forward 3-D Fourier Transform
of u(x; 0), (2) multiplying the result by a set of exponential values, and (3) computing the inverse
3-D Fourier Transform. The problem statement requires 6 solutions, therefore the benchmark
consists of 1 forward FFT and 6 pairs of dot products and inverse FFTs.
The NPB2.1 implementation follows a standard parallelization scheme [5, 2]. The 3-D FFT
computation consists of traversing and applying the 1-D FFT along each of the three dimensions.
The 3-D array is partitioned along the third dimension to allow each processor to independently
carry out the 1-D FFT along the first and second dimension. Then the array is transposed to
enable the traversal of the third dimension. The transpose operation constitutes most of the
communication in the program. The program requires moving the third dimension to the first
dimension in the transpose so that the memory stride is favorable for the 1-D FFT; therefore
the HPF REDISTRIBUTE function alone is not sufficient (data distribution specifies the partition-to-processor mapping, not the memory layout).
The HPF DO loop implementation is derived with the following modifications:
1. HPF directives are added to distribute the arrays along the appropriate dimension. Tran-
scriptive directives are used at subroutine boundaries to prevent unnecessary redistribution.
2. The communication for the transpose step is replaced with a global assignment statement.
3. A scratch array that is recast into arrays of different ranks and sizes between subroutines is
replaced with multiple arrays of constant rank and size. Although passing an array section
in a formal argument is legitimate in HPF, some HPF compilers have difficulty managing
array sections.
The HPF F90/Forall version requires the additional step of rewriting all data parallel loops in
F90 syntax.
The ZPL implementation allocates the 3-D arrays as regions of 2-D arrays; the transpose
operation is realized with the ZPL permute operator.
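Item 2 above can be sketched as follows (hypothetical array names and sizes): the transpose is written as one global assignment that moves the third dimension to the first, and the compiler must translate this single statement into the all-to-all communication it implies.

    program transpose_sketch
      implicit none
      integer, parameter :: n1 = 64, n2 = 64, n3 = 64
      complex :: u(n1, n2, n3), w(n3, n1, n2)
      integer :: i, j, k
    !HPF$ DISTRIBUTE u(*, *, BLOCK)
    !HPF$ DISTRIBUTE w(*, *, BLOCK)
      u = (0.0, 0.0)
      ! Moving the third dimension to the first gives the next 1-D FFT pass
      ! a favorable (unit) memory stride, as required by the program.
      forall (i = 1:n1, j = 1:n2, k = 1:n3) w(k, i, j) = u(i, j, k)
    end program transpose_sketch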
2.4 Platform
Program portability should be evaluated across multiple platforms of different compilers and
machines. In this study, we elect to factor out the difference due to the machine architecture and
focus on the compiler difference. Although the machine architecture sets the upper bound on the
possible performance, a language and its compiler determine the achievable performance. The
targeted parallel platform is the IBM SP2 at the Cornell Theory Center. The HPF compilers
used in the study include:
- Portland Group pghpf version 2.1
- IBM xlhpf version 1.0
- Applied Parallel Research xhpf version 2.0
All compilers generate MPI calls for the communication and use the same MPI library, ensuring
that the communication fabric is identical for all measurements. The PGI and APR HPF
compilers generate intermediate Fortran code that is then processed by the standard IBM Fortran
compiler (xlf). The IBM HPF compiler generates machine code directly but otherwise is based
on the same xlf compiler. The ZPL compiler generates intermediate C code. All measurements
use the same compiler options and system environment that NAS and APR specified in their
publications (using SP2 wide nodes only), and spot checks confirmed that the published NAS and
APR performances are reproduced.
3 Parallel Performance
In this section we examine the performance of the programs. Figure 1 shows the aggregate
timing for all versions (MPI, HPF, ZPL) and for the small and large problem size (class S, class
A). Notes:
- Because the execution time may be excessive depending on the success of the compilers, we
first examine the small problem size (class S), then examine only the programs with reasonable
performance and speedup at the large problem size (class A).
- The time axis uses a log scale because of the wide performance spread.
- For EP, an MPI version is not used; for class S, the F90/Forall version is not shown.
- For MG class A, the APR F90/Forall version does not scale and is not included.
- For FT class A, only one data point is available for the IBM F90/Forall version.
[Figure 1: Performance for EP, MG and FT (execution time in seconds, log scale, versus number
of processors). Panels: (a) EP class S, (b) EP class A, (c) MG class S, (d) MG class A, (e) FT
class S, (f) FT class A. Series: MPI, APR_do, IBM_do, PGI_do, ZPL, APR_F90, IBM_F90,
PGI_F90. See notes in Section 3.]
3.1 NAS EP benchmark
In Figure 1(a), the first surprising observation is that the IBM and PGI compilers achieve no
speedup with the HPF DO loop version although the APR compiler produces a program that
scales well (recall that the EP DO loop version is from the APR suite). Inspecting the code
reveals that no distribution directives were specified for the arrays, resulting in a default data
distribution. Although the default distribution is implementation dependent, the conventional
choice is to replicate the array. The IBM and PGI compilers distribute the computation strictly
by the owner-computes rule; therefore, in order for the program to be parallelized, some data
structures must be distributed. Since the arrays in EP are replicated by default, no computation
is partitioned among the processors: each processor executes the full program and achieves no
speedup. By contrast, the APR parallelization strategy does not strictly adhere to the owner-computes
rule. This allows the main loop to be partitioned despite the fact that none of the
arrays within the loop are distributed. Note that the HPF language specification does not
specify the default distribution for the data nor the partitioning scheme for the computation.
The omission was likely intended to maximize the opportunity for the compiler to optimize;
however the observation for EP suggests that the different schemes adopted by the compilers
may result in a portability problem with HPF programs.
When directives were inserted to distribute the arrays, it was found that the main array in EP
is intended to hold pseudo-random values generated sequentially, therefore there exists a true
dependence in the loop computing the values. If the array is distributed, the compiler will adjust
the loop bounds to the local partition, but the computation will be serialized.
The HPF F90/Forall version corrects this problem by explicitly distributing the arrays, and
the IBM and PGI compilers were able to parallelize it. The class A performance in Figure 1(b)
shows that all compilers achieve the expected linear speedup. However, expanding the arrays to
express the computation in a more data parallel form seems to introduce overhead and degrade
the scalar performance. It is possible for advanced compiler optimizations such as loop fusion
and array contraction to remove this overhead, but these optimizations were either not available
or not successful in this case.
The ZPL version scales linearly as expected and the scalar performance is slightly better than
the APR version.
3.2 NAS MG benchmark
Compared to EP, MG allows a more rigorous test of the languages and compilers. We first discuss
the performance for the class S in Figure 1(c).
The p=1 column shows considerable variation in the scalar performance with all versions
showing overhead of 1 to 2 orders of magnitude over the MPI performance.
For the base cases, both the original MPI program and the ZPL version scale well. The ZPL
compiler partitions the problem in a straightforward manner according to the region and strided
region semantics, and the communication is vectorized with little effort. The scalar performance
does however show over a 6x overhead compared to the MPI version.
The HPF DO loop version clearly does not scale with any HPF compiler, as explained below.
The PGI compiler performs poorly in vectorizing the communication when the computation
is expressed with DO loops: the communication calls tend to remain in the innermost loop,
resulting in a very large number of small messages being generated. In addition, the program
uses guards within the loop instead of adjusting the loop bound.
The APR compiler only supports a 1-D processor grid, therefore the 3-D distribution specified
in the HPF directives is collapsed by default to a 1-D distribution. This limitation affects the
asymptotic speedup but does not necessarily limit the parallelization of the 27-point stencil
computation. For one subroutine, the compiler detects through interprocedural analysis an alias
between two formal arguments, which constitutes an inhibitor for the loop parallelization within
the subroutine. However, the analysis does not go further to detect from the index expressions
of the array references that no dependence actually exists. For most of the major loops in the
program, the APR compiler correctly partitions the computation along the distributed array
dimension, but generates very conservative communication before the loop to obtain the latest
value for the RHS and after the loop to update the LHS. As a result, the performance degrades
with the number of processors.
The IBM compiler does not parallelize because it detects an output dependence on a number
of variables although the arrays are replicated. In this case, the compiler appears to be overly
conservative in maintaining the consistency of the replicated variables. Other loops do not
parallelize because they contain an IF statement.
The INDEPENDENT directive is treated differently by the compilers. The PGI compiler
interprets the directive literally and parallelizes the loop as directed. The IBM compiler, on the
other hand, ensures correctness by nevertheless performing a more rigorous dependence check;
because it detects a dependence, it does not parallelize the loop.
For the HPF F90/Forall version, the IBM and PGI compilers are more successful: the IBM
compiler performance and scalability approach ZPL's, while the PGI compiler now experiences
little problem in vectorizing the communication; indeed its scalar performance exceeds IBM's.
The APR compiler does not result in slowdown but does not achieve any speedup either. It
partitions the computation in the F90/Forall version similarly to the DO loop version, but is
able to reduce the amount of communication. It continues to be limited by its 1-D distribution
as well as an alias problem with one subroutine. Note that the version of MG from the APR
suite employs APR's directives to suppress unnecessary communication. These directives are not
used in our study because they are not a part of HPF, but it is worth noting that it is possible
to use APR's tools to analyze the program and manually insert APR's directives to improve the
speedup with the APR compiler.
Given that the DO loop version fails to scale with any compiler, one may ask whether
the program could be written differently to aid the compilers. The specific causes for each compiler
described above suggest that the APR compiler would be more successful if APR's directives are
used, that the PGI compiler may benefit from the HPF INDEPENDENT directive, and that the
IBM compiler would require actual removal of some data dependences. Therefore, it does not
appear that any single solution is portable across the compilers.
3.3 NAS FT benchmark
FT presents a different challenge to the HPF compilers. In terms of the reference pattern, FT
consists of a dot product and the FFT butterfly pattern. The former requires no communication
and is readily parallelized by all compilers. For the latter, the index expression is far too complex
to optimize the communication, but fortunately the index variable is limited to one dimension at
a time; therefore the task for the compiler is to partition the computation along the appropriate
dimensions. The intended data distribution is 1-D and is thus within the capability of the APR
compiler.
Figure 1(e) shows the full set of performance results for the small problem size. As with MG,
the MPI and ZPL versions scale well and the scalar performance of all data parallel implementations
shows an overhead of 1 to 2 orders of magnitude over the MPI implementation.
For the HPF DO loop version, the APR compiler exhibits the same problem as with MG: it
generates very conservative communication before and after many loops. In addition, the APR
compiler does not choose the correct loop to parallelize. The discrepancy arises because APR's
strategy is to choose the partitioning based on the array references within the loop. In this case
the main computation and thus the array references are packaged in a subroutine called from the
loop, the intention being that the loop is parallelized and the subroutine operates on the local
data. When the compiler proceeds to analyze the loops in this subroutine (1-D FFT), it finds
that the loops are not parallelizable.
The PGI compiler also generates suboptimal communication, although its principal limitation
is in vectorizing the messages. The IBM compiler does not parallelize because of assignments to
replicated variables.
The HPF F90/Forall version requires considerable experimentation and code restructuring to
arrive at a version that is accepted by all compilers, partly because of differences in supported
features among the compilers and partly because of the nested subroutine structure of the
original program. All HPF compilers achieve speedup to varying degrees. APR is particularly
successful since the principal parallel loop has been moved to the innermost subroutine. Its
scalar performance approaches MPI performance, although communication overhead limits the
speedup. PGI shows good speedup while IBM's speedup is more limited.
3.4 Communication
The communication generated by the compilers is a useful indication of the effectiveness of the
parallelized programs. For the versions of the benchmarks that scale, table 1 shows the total
number of MPI message passing calls and the differences in the communication scheme employed
by each compiler. The APR and PGI compilers only use the generic send and receive while the
IBM compiler also uses the nonblocking calls and the collective communication; this may
have ramifications in the portability of the compiler to other platforms. The ZPL compiler uses
nonblocking MPI calls to overlap computation with communication as well as MPI collective
communication.
Benchmark       version    point-to-point   collective   type of MPI calls
EP (class A)    --         --               --           Allreduce, Barrier
                IBM F90    70               120          Send, Recv, Bcast
MG (class S)    MPI        2736             40           Send, Irecv, Allreduce, Barrier
                ZPL        9504             56           Isend, Recv, Barrier
                APR F90    126775           8            Send, Recv, Barrier
                --         --               --           Bcast
FT (class S)    MPI        0                104          Alltoall, Reduce
                APR F90    58877            8            Send, Recv, Barrier
                IBM F90    728              258048       Send, Irecv, Bcast
Table 1: Dynamic communication statistics for EP class A, MG class S and FT class S: p=8
3.5 Data Dependences
HPF compilers derive parallelism from the data distribution and the loops that operate on the
data. Loops with no dependences are readily parallelized by adjusting the loop bounds to the
local bounds. Loops with dependences may still be parallelizable but will require analysis; for
instance, the IBM compiler can recognize the dependence in a loop that performs a reduction
and generate the appropriate HPF reduction intrinsic. In other instances, loop distribution may
isolate the portion containing the dependence to allow the remainder of the original loop to be
parallelized. To approximately quantify the degree of difficulty that a program presents to the
parallelizing compiler in terms of dependence analysis, we use the following simple metric:
$m/n$ = (count of all loops with dependences) / (count of all loops)
A value of 0 would indicate that all loops can be trivially parallelized, while a value of 1 would
indicate that any parallelizable loops depend on the analysis capability of the compiler. Using
the KAPF tool, we collect the loop statistics from the benchmarks for the major subroutines;
they are listed in Table 2. This metric is not complete since it does not account for the data
distribution; for instance, for 3 nested loops and a 1-D distribution, only 1 loop needs to be
partitioned to parallelize the program and 2 loops may contain dependences with no ill effects.
In addition, this metric is static and may not correlate directly with the dynamic characteristics
of the programs. However, the metric gives a coarse indication for the demand on the compiler.
The loop dependence statistics show clear trends that correlate directly with the performance
data. We observe the expected reduction in dependences from the DO loop version to the
F90/Forall version. The reduction greatly aids the compilers in parallelizing the F90/Forall
programs, but also highlights the difficulty with parallelizing programs with DO loops.
benchmark   subroutine       DO     F90/Forall
EP          embar            3/5    1/31
            get start seed   -      1/1
FT          fftpde           2/16   2/16
            cfft3            0/6    0/6
            cfftz            3/5    1/4
MG          hmg              1/2    1/27
            psinv            4/4    0/6
            resid            4/4    0/6
            rprj3            4/4    0/6
            norm2u3          3/3    0/0
Table 2: Statistics on dependences for EP, MG and FT, reported as m/n, where m is the count of loops with
data dependences or subroutine calls, and n is the total loop count; for F90/Forall, the counts
are obtained after the array statements have been scalarized
For MG, the difference is significant; the array syntax eliminates the dependence in most cases.
Some HPF compilers implement optimizations for array references that are affine functions of
the DO loop indices, particularly for functions with constants. These optimizations should have
been effective for the MG DO loop version, however it does not appear that they were successful.
Note that the loops in the subroutine norm2u3 are replaced altogether with the HPF reduction
intrinsics.
For FT, the low number of dependences in fftpde comes from the dot-products which are
easily parallelized. The top-down order of the subroutines listed also represents the nesting level
of the subroutines. The increasing dependences in the inner subroutine reflect the need to achieve
parallelism at the higher level. As explained earlier, this proves to be a challenge to the APR
compiler which focuses on analyzing individual loops to partition the work.
4 Conclusion
For data parallel applications, the recent progress in languages and compilers allows us to experimentally
evaluate an important issue: program portability. We recognize that many factors
affect the development and success of a parallel language and our study only focuses on the
portability factor.
Three NAS benchmarks were studied across three current HPF compilers. We examined
different styles of expressing the computation in HPF, and we also considered the same benchmarks
written in MPI and ZPL to understand the interaction between performance, portability and
convenient programming.
The HPF compilers show a general difficulty in detecting parallelism from DO loops. The
compilers are more successful with the F90 array syntax and Forall construct, although even
in this case the success in parallelization is not uniform. Significant variation in the scalar
performance also exists between the compilers.
While the HPF directives and constructs provide information on the data and computation
partitioning, the sequential semantics of Fortran leave many potential dependences in the pro-
gram. An HPF compiler must analyze these dependences, and when unable to do so, it must
make a conservative assumption. This analysis capability differentiates the various vendor imple-
mentations. However, because it is difficult for the compilers to parallelize reliably, a user cannot
consistently estimate the parallel behavior and thus the speedup of the program. In addition,
because the parallelization strategy by the compilers varies widely, different ways to express the
same computation can lead to drastically different performance. The unpredictable variations
reflect a shortcoming in the performance model of HPF; as a result, a user needs to continually
experiment with each compiler to learn the actual behavior. In doing so, the user is effectively
supplementing the performance model provided by the language with empirical information. Yet,
such an enhanced model tends to be platform specific and not portable.
ZPL programs show consistent scalable performance, illustrating that it is possible to incorporate
a robust performance model in a high level language. The language design ensures that
the language abstractions behave in a predictable manner with respect to parallel performance.
Although ZPL is supported on multiple parallel systems, the results in this study do not directly
show ZPL's portability across platforms because multiple independent compiler implementations
for ZPL are not available. However, the existence of the performance model, evident in the predictable
performance behavior, ensures that ZPL programs will be portable across independently
developed platforms.
The results also show that significant overhead in the scalar performance remains in all implementations
compared to the MPI programs. One source for the overhead is the large number of
temporary arrays generated by the compiler across subroutine calls and parallelized loops. They
require dynamic allocation/deallocation and copying, and generally degrade the cache
performance. The index computation also contributes significantly to the overhead. It is clear that
to become a viable alternative to explicit message passing, compilers for data parallel languages
must achieve a much lower scalar overhead.
--R
David Klepacki, Rick Lawrence.
An efficient parallel algorithm for the 3-D FFT NAS parallel benchmark.
Applied Parallel Research.
The NAS parallel benchmarks.
Rob van der Wijngaart.
Vienna Fortran 90.
Compiling Fortran 90D/HPF for distributed memory MIMD computers.
Compiling High Performance Fortran.
Factor-Join: A unique approach to compiling array languages for parallel machines.
High Performance Fortran Forum.
Fortran parallelization handbook.
An HPF compiler for the IBM SP2.
Compiling High Performance Fortran for distributed-memory systems.
Evaluating compiler optimizations for Fortran D.
ZPL language reference manual.
ZPL: An array sublanguage.
ZPL vs. HPF: A comparison of performance and programming style.
The Role of Performance Models in Parallel Programming and Languages.
On the influence of programming models on shared memory computer performance.
NAS parallel benchmark 2.1 results: 8/96.
A ZPL programming guide.
CM Fortran Programming Guide.
| MPI;performance model;NAS;HPF;ZPL;data parallel language
509627 | Massively parallel simulations of diffusion in dense polymeric structures. | An original computational technique to generate close-to-equilibrium dense polymeric structures is proposed. Diffusion of small gases is studied on the equilibrated structures using massively parallel molecular dynamics simulations running on the Intel Teraflops (9216 Pentium Pro processors) and Intel Paragon (1840 processors). Compared to the current state-of-the-art equilibration methods, this new technique appears to be faster by some orders of magnitude. The main advantage of the technique is that one can circumvent the bottlenecks in configuration space that inhibit relaxation in molecular dynamics simulations. The technique is based on the fact that tetravalent atoms (such as carbon and silicon) fit in the center of a regular tetrahedron and that regular tetrahedrons can be used to mesh the three-dimensional space. Thus, the problem of polymer equilibration described by continuous equations in molecular dynamics is reduced to a discrete problem where solutions are approximated by simple algorithms. Practical modeling applications include the construction of butyl rubber and ethylene-propylene-dimer-monomer (EPDM) models for oxygen and water diffusion calculations. Butyl and EPDM are used in O-ring systems and serve as sealing joints in many manufactured objects. Diffusion coefficients of small gases have been measured experimentally on both polymeric systems, and in general the diffusion coefficients in EPDM are an order of magnitude larger than in butyl. In order to better understand the diffusion phenomena, 10,000-atom models were generated and equilibrated for butyl and EPDM. The models were submitted to a massively parallel molecular dynamics simulation to monitor the trajectories of the diffusing species. The massively parallel molecular dynamics code used in this paper achieves parallelism by a spatial decomposition of the workload, which enables it to run large problems in a scalable way where both memory cost and per-timestep execution speed scale linearly with the number of atoms being simulated. It runs efficiently on several parallel platforms, including the Intel Teraflops at Sandia. There are several diffusion modes observed depending on whether the diffusion is probed at short time scales (anomalous mode) or long time scales (normal mode). Ultimately, the diffusion coefficient that needs to be compared with experimental data corresponds to the normal mode. The dynamics trajectories obtained with butyl and EPDM demonstrated that the normal mode was reached for diffusion within one nanosecond of simulation. In agreement with experimental evidence, the oxygen and water diffusion coefficients were found to be larger for EPDM than for butyl. | Introduction
Many technological processes depend on the design of polymers with desired permeation characteristics of small molecules.
Examples include gas separation with polymeric membranes, food packaging, and encapsulant of electronic components in
polymers that act as barriers to atmospheric gases and moisture. To solve the polymer design problem successfully, one needs
to relate the chemical composition of the polymer to the diffusivities of the penetrant molecules within it. One way of
establishing structure-property relationships is the development of empirical relationships (namely QSPRs - quantitative
structure property relationships). A more sophisticated approach is the application of simulation techniques that rely directly
upon fundamental molecular science. While QSPRs can generally be established from the monomer configurations,
simulations involve the generation of configurations representing the whole polymer. From these configurations, structural,
thermodynamics, and transport properties are estimated. In principle, simulations can provide exact results for a given model
representation of the polymer/penetrant system. In practice, computer time considerations necessitate the introduction of
approximations and, particularly for polymeric systems, the use of high performance computing.
In this paper, we propose an original technique to generate close-to-equilibrium polymeric structures. We are presenting data
that suggest our technique is some orders of magnitude faster than the current state-of-the-art methods used to prepare and
equilibrate dense polymeric systems. The technique is used to generate initial model structures for polymers present in O-ring
systems. Oxygen and water diffusion through the O-ring is then probed using massively parallel molecular dynamics.
Results are compared with similar simulations and experimental data.
Methodology
As pointed out in the introduction, the ability to represent the molecular level structure and mobility of polymeric structures
is a prerequisite for simulating diffusion in them. One fundamental parameter characterizing molecular structures is the
potential energy. Polymer chains in an equilibrium melt or amorphous glass remain essentially at minimum energy
configurations. The potential energy is the sum of bond and bond angle distortion terms, torsional potentials, as well as
intermolecular and intramolecular repulsive and attractive interactions (i.e., van der Waals), and electrostatic (i.e.,
Coulombic) interactions. These energy terms are expressed as a function of the coordinates of the atoms constituting the polymer
and a set of parameters computed from experimental data or quantum mechanics calculations. The functional forms of the
energy terms and their associated parameters are called a forcefield. In the present study, we are making use of the
commercially available CHARMm forcefield [1], as well as forcefield parameters taken from Muller-Plathe et al. [2]
Molecular mechanics and molecular dynamics. Molecular mechanics is the procedure by which one locates local minima
of energy. Molecular mechanics simply consists of a minimization routine (conjugate gradient, steepest descent, etc.) that
finds the first energy minimum from a starting configuration. In the present paper, a parallel implementation of conjugate
gradient is utilized [3]. The main disadvantage of molecular mechanics is that thermal fluctuations are not explicitly taken
into account, and therefore the method cannot be used for diffusion calculations. Moreover, the procedure followed in generating
minimum energy configurations does not correspond to any physical process of polymer formation. Nonetheless, static
minimum energy structures provide satisfactory starting configurations for molecular dynamics simulations.
Molecular dynamics (MD) follows the temporal evolution of a microscopic model system through numerical integration of
the equations of motions for all the degrees of freedom. MD simulations can be performed in the microcanonical ensemble
(constant number of molecules, volume, and total energy, or NVE), the canonical ensemble (constant number of molecules,
volume, and temperature, or NVT), as well as the isothermal-isobaric ensemble (constant number of molecules, pressure, and
temperature, or NPT). The major advantage of MD is that it provides detailed information of short-time molecular motions.
Its limitation resides in computer time considerations. Hundreds of CPU hours on a vector supercomputer are required to
simulate a nanosecond of actual atomistic motions. However, computer time can be decreased by making use of massively
parallel processing. In the present work we are using a large-scale atomic/molecular massively parallel simulator (LAMMPS)
[4]. LAMMPS is a new parallel MD code suitable for modeling large molecular systems. LAMMPS has been written as part
of a CRADA (Cooperative Research and Development Agreement) between two DOE labs: Sandia and Lawrence Livermore,
and three industrial partners: Bristol-Myers Squibb, Cray Research, and Dupont. LAMMPS is capable of modeling a variety
of molecular systems such as bio-membranes, polymers, liquid-crystals, and zeolites. The code computes two kinds of forces:
(1) short-range forces such as those due to van der Waals interactions and molecular bond stretching, bending, and torsions,
and (2) long-range forces due to Coulombic effects. In the latter case, LAMMPS uses either Ewald or
particle-particle/particle-mesh (PPPM) techniques to speed the calculation [5].
LAMMPS achieves parallelism by a spatial-decomposition of the workload which enables it to run large problems in a
scalable way where both memory cost and per-timestep execution speed scale linearly with the number of atoms being
simulated [6]. It runs efficiently on several parallel platforms, including the Intel Teraflops at Sandia (9216 processors), the
large Intel Paragon at Sandia (1840 processors) and large Cray T3D machines at Cray Research.
Generate initial structure. Creating an atomic-level model of a completely equilibrated dense polymer melt is a challenging
task, given current computational capabilities. van der Vegt et al. [7] discuss the various approaches to this problem. The
common techniques used in simulations of liquids, which consist of melting an idealized structure, generally take tens of
picoseconds of equilibration using MD. Unfortunately, the equilibration time for dense polymers is many orders of magnitude
larger than is feasible with MD [8]. Consequently, one needs other efficient ways of preparing initial polymer structures
that resemble the equilibrium structure. One of the most common methods has been to pack chains into the simulation box
at the experimental density, either by random placement followed by energy minimization or with Monte Carlo chain growth
techniques. However, these methods have tended to produce structures which are rather inhomogeneous and which lead to an
overestimation of solubility values for small permeants. In the present paper, we are proposing and comparing two alternative
construction methods: the compression box technique and the lattice construction technique.
Compression box technique. van der Vegt et al. [7] suggest that a more efficient way of producing near-equilibrium
structures is by starting with a dilute model polymer and compressing it slowly until the target experimental density is
reached. A set of chains is built with the correct dihedral angle distributions, at about 1/8 of the experimental density. Using
an NPT MD technique with a pressure ramp, the model system is compressed to the desired density over ~ 500 picoseconds.
During this compression, only the repulsive part of the nonbond (van der Waals) interaction is used, to avoid "clustering" of
the polymer. Then the structure is further equilibrated with the full nonbond interactions at the new density, for ~ 1000
picoseconds. van der Vegt et al. [7] observed that a model of poly(dimethylsiloxane) built with this "slow-compression"
technique had significantly less stress in the dihedral angles than a model built with a "random packing" method.
Furthermore, the slowly compressed model yielded small-permeant solubility results which were in much better agreement
with experiment.
Lattice construction technique. Many natural and synthetic polymers including all structures considered in this paper are
formed with hydrocarbon chains. More precisely, these polymers are composed of linear chains of tetravalent carbon atoms
to which are attached molecular groups (methyl, phenyl, etc.). The lattice technique is based on the fact that any tetravalent
atom (such as carbon and silicon) fits in the center of a regular tetrahedron where its four bonds are perpendicular to the faces
of the tetrahedron (cf. Figure 1a). It is well known that regular tetrahedrons can be used to mesh the three-dimensional space.
Figure 1. Tetrahedron meshing. a) Carbon atom in center of tetrahedron. b) Two-dimensional projection of meshed space.
Using the previous observation, the experimental polymer volume is meshed and the polymer chains are constructed by
generating random walks following adjacent tetrahedrons (cf. Figure 1.b). One may notice from Figure 1 that the distance
between two adjacent tetrahedrons is equal to a carbon-carbon bond length, and the angle between three adjacent tetrahedrons
is equal to a carbon-carbon-carbon angle. Therefore, bond and angle energy terms are directly minimized by the lattice
construction procedure. Excluded volume interactions are treated by keeping track of the occupancy of each tetrahedron. As
the chain construction progresses, the next atom position is always chosen in a non-occupied tetrahedron. In the improbable
event where all adjacent tetrahedrons are already occupied, the construction routine back-tracks to the previous position. In
order to keep a homogeneous density, the first atom of the chain is chosen at random in the least occupied region of the volume
box. The goal of the above excluded volume procedure is to keep the intermolecular and intramolecular repulsive forces to a
minimum value.
The lattice construction technique can be used with cubic cells instead of tetrahedrons. In such a case, each cell represents a
monomer, and the cell length, width, and height are those of the boundary box of the monomer. The advantage of the cubic
cells lattice is a reduction of the computational complexity since the objects manipulated are monomers instead of atoms (for
instance, for butyl, there are 12 atoms per monomer). The disadvantage of the cubic lattice is that bond lengths and bond
angles between monomers are no longer necessarily valid. Therefore, structures generated with cubic lattices must be energy
minimized prior to using MD simulations in order to restore the correct bond lengths and bond angles.
An important parameter characterizing equilibrium melts or amorphous glasses is the mean square end-to-end distance of the
polymer chains. According to Flory's "random coil hypothesis", for which there is ample experimental support [9], at
equilibrium the mean square end-to-end distance of any polymer chain in the bulk is related to the number n of skeletal bonds
of length l as $\langle r^2 \rangle = C_\infty n l^2$, where $C_\infty$ is the characteristic ratio of the polymer. It is also known that the correct end-to-end
distance starting from any random initial configuration can be reached by MD using a simulation time proportional to $O(n^3)$
[10]. Our lattice construction technique makes use of Flory's result while avoiding the computational complexity of MD
simulations. For each chain to be constructed the initial atom position is chosen at random (in the least occupied region) and
the final atom position is chosen within a distance equal to the value given by the Flory equation. Then, a path is constructed
between the two chosen positions. At each step of the construction the non-occupied adjacent tetrahedrons are ranked using
their respective distance to the final position. The tetrahedron or cubic cell corresponding to the shortest distance is chosen as
the next position. When the last position is reached, if the path length is greater than n, then the chain is deleted and two new
positions are chosen. Note that this situation is unlikely to occur. However, in the likely event where the path
length is smaller than n, additional atoms are added to the chain until the correct length is reached. As illustrated in Figure 2,
the addition of new atoms is carried out by deleting at random a bond along the chain, thus creating two non-bonded atoms
from which the chain is extended. Thus, the chain extension is grown using the same path construction as above with the
exception that the initial and final positions are not chosen at random but are the location of two non-bonded atoms.
Figure 2. Increase of polymer chain length. a) Bond i-j is deleted. b) Chain extension is carried out by growing a new chain from positions i and j.
Note that the lattice construction procedure allows one to construct cross-linked polymers since the basic operation is to
generate a chain between two given positions. Hence, cross-linking chains are constructed by generating chains between pairs
of branch-points.
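The greedy path construction described above can be sketched as follows (a simplified sketch on a cubic lattice with placeholder sizes and positions; the production code works on the tetrahedral mesh and back-tracks at dead ends):

    program lattice_walk_sketch
      implicit none
      integer, parameter :: L = 32
      integer, dimension(3, 6), parameter :: nbr = reshape( &
           (/ 1,0,0, -1,0,0, 0,1,0, 0,-1,0, 0,0,1, 0,0,-1 /), (/ 3, 6 /) )
      logical :: occ(L, L, L)
      integer :: pos(3), goal(3), best(3), cand(3), step, k
      real :: bestdist, dist
      occ = .false.
      pos = (/ 2, 2, 2 /);   goal = (/ 30, 30, 30 /)
      occ(pos(1), pos(2), pos(3)) = .true.
      do step = 1, 200
        bestdist = huge(1.0)
        do k = 1, 6
          cand = pos + nbr(:, k)
          if (any(cand < 1) .or. any(cand > L)) cycle
          if (occ(cand(1), cand(2), cand(3))) cycle    ! excluded volume
          dist = sqrt(real(sum((cand - goal)**2)))     ! rank by distance to goal
          if (dist < bestdist) then
            bestdist = dist;   best = cand
          end if
        end do
        if (bestdist == huge(1.0)) exit   ! dead end: the full code back-tracks here
        pos = best
        occ(pos(1), pos(2), pos(3)) = .true.
        if (all(pos == goal)) exit
      end do
    end program lattice_walk_sketch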
Once the polymeric structures have been constructed, minimization followed by NVT MD simulations are used to further
equilibrate the structures. While minimization is carried out using the aforementioned conjugate gradient algorithm, MD is
performed using the massively parallel LAMMPS code.
Diffusion calculations. Once polymeric structures have been generated and equilibrated using the compression box or lattice
construction technique, diffusion calculations can be carried out. Microscopically, the diffusion coefficient can be calculated
from the motion of the diffusing particles, provided they have been traced long enough so that they perform random walks.
There are several formulations to derive coefficients of diffusion. With MD simulations the most often used expression is the
so-called Einstein relation
$D = \lim_{t \to \infty} \frac{1}{2 d t} \langle |r(t) - r(0)|^2 \rangle$   (1)
where D is the diffusion coefficient, d is the number of spatial dimensions, t is the time, and r(t) is the position vector of a
particle at time t. The angle brackets denote the ensemble average, which in MD simulations is realized by averaging over all
particles of the diffusing species. The calculation of diffusion coefficients rests on the fact that for sufficiently long times the
mean-square displacement of a diffusing particle increases linearly with time. There are, however, cases in which the
mean-square displacement is not linearly proportional to time, but obeys a different power law
$\langle |r(t) - r(0)|^2 \rangle \propto t^n$   (2)
where n is lower than 1 (n = 1 corresponds to normal diffusion). This process is called anomalous diffusion. It is caused by some mechanism
which forces the particles onto a path that is not a random walk. These can be obstacles in the way of the diffusant and thus
inhibit random motion, such as to force the diffusant to remain inside a cavity. Anomalous diffusion persists only on short
time scales. At long enough time scales (and hence length scales) the trajectory of the diffusant becomes randomized and a
change to normal diffusion occurs.
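A minimal sketch of this procedure (with placeholder trajectory data and a hypothetical frame spacing; the production analysis reads positions saved by LAMMPS) computes the mean-square displacement and extracts D from the long-time slope according to eq. 1:

    program msd_sketch
      implicit none
      integer, parameter :: np = 100, nt = 1000
      real(8) :: pos(3, np, 0:nt)            ! stored trajectory of the diffusants
      real(8) :: msd(nt), d_coef
      real(8), parameter :: dt_ps = 1.0d0    ! frame spacing in ps (hypothetical)
      integer :: t, i
      call random_number(pos)                ! placeholder data for the sketch
      do t = 1, nt
        msd(t) = 0.0d0
        do i = 1, np
          msd(t) = msd(t) + sum((pos(:, i, t) - pos(:, i, 0))**2)
        end do
        msd(t) = msd(t) / np                 ! ensemble average over diffusants
      end do
      ! Einstein relation (eq. 1) with d = 3, evaluated in the long-time
      ! (normal-diffusion) regime: D = <|r(t)-r(0)|^2> / (6 t)
      d_coef = msd(nt) / (6.0d0 * real(nt, 8) * dt_ps)
      print *, d_coef
    end program msd_sketch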
Results and discussion
The main goal of our study is to measure oxygen and water diffusion coefficients in two polymeric materials, butyl rubber and
ethylene-propylene-dimer-monomer (EPDM). Butyl and EPDM are used in O-ring systems and serve as sealing joints in
many manufactured objects. Butyl rubber is a copolymer of isobutene and isoprene (Figure 3). EPDM is a terpolymer of
ethylene, propylene, and 1,4 hexadiene (Figure 3). The experimental density for both butyl and EPDM is 0.91 g/cm^3. Diffusion
coefficients of small gases have been measured experimentally on both polymeric systems, and in
general the diffusion coefficients in EPDM are an order of magnitude larger than in butyl. Presently this
discrepancy is poorly understood.
[Figure 3: Butyl rubber and EPDM structural formulas.]
The results outlined in this section were obtained using the lattice construction code and the energy minimization program both running on a SGI
R10000 platform. MD simulations were carried out using the LAMMPS MD code running on both the 9216-processor Intel Teraflops and the
1840-processor Intel Paragon. All butyl MD runs were carried out on the Paragon using 216 nodes; EPDM and polyisobutylene MD runs were
performed on the Teraflops using 125 nodes. We observed a speed up factor of 7 between the Teraflops and the Paragon for the same number of
nodes.
Polymer construction. Prior to the diffusion coefficient calculations, we probed the performance of the lattice construction code for dense polymeric
systems. It has been suggested by van der Vegt et al. [7] that polymer building techniques which create models directly at the experimental density
may introduce significant nonequilibrium strain in the angle bending and dihedral torsion degrees of freedom. Furthermore, the time required to
anneal this strain may be beyond MD time scales, thus prohibiting the generation of true equilibrium polymer structures. van der Vegt et al. [7]
showed that model polymers built by slow compression from a low density initial state exhibit less strain in the angle and dihedral degrees of
freedom, and they suggest that these structures may be more representative of an equilibrated polymer. Since the lattice construction technique we
introduce here is novel, and it creates models directly at experimental densities, we thought it appropriate to compare it to the "slow-compression"
building method. The polymer used in this study was poly(isobutylene), or PIB; this polymer is essentially a non-cross-linked butyl rubber. The
forcefield parameters were taken from Muller-Plathe et al. [2] The Lorentz-Berthelot mixing rules were used for nonbond interactions
between different atomic species, and all studies were performed at 300K.
Model A was built with the slow-compression method described in the Methodology section. First, a set of polymer chains was built at a density
of 0.11 g/cm^3, which is approximately 1/8 of the experimental density. There were 21 chains with
lengths distributed randomly between 80 and 120 monomer units. The full-atom model consisted
of 25104 atoms, 25076 bonds, 50040 angles, and 74594 dihedrals. To anneal out nonbond (van der
Waals) overlaps in the initial configuration, an MD simulation with a small timestep was run for a
few picoseconds (ps). We then performed two MD stages as suggested by van der Vegt et al. [7]
The first stage was a compression using only the repulsive part of the nonbond potential (i.e., the
Lennard-Jones interaction between species i and j was truncated at $\sigma_{ij}$) but full bonded
interactions (bonds, angles, dihedrals). The model was compressed from its initial density to the
experimental density, 0.91 g/cm^3, over 525 picoseconds, using constant-temperature MD with a
pressure ramp. The Nose-Hoover method was used to control both temperature and pressure,
with time constants of 0.1 and 0.5 ps, respectively. The second stage was initiated from the last
configuration of the first stage, with the attractive nonbond interactions turned on by increasing
the Lennard-Jones cutoffs to 74.5 nm. A constant pressure MD simulation was performed such
that the experimental density (0.91 g/cm^3) was maintained. This run lasted for about 120
picoseconds. About 20 picoseconds into the second stage, the polymer seemed to be equilibrated;
changes in the various components of potential energy were not significant or systematic.
Model B was built at the experimental density of 0.91 g/cm^3 using the lattice construction
technique described earlier. It was created to be close in size to Sample A, although it ended up
2.8% smaller. It contained 21 chains with lengths distributed randomly between 80 and 120
monomer units. It was a full-atom model consisting of 24408 atoms, 24387 bonds, 48690 angles,
and 72658 dihedrals. Prior to running dynamic simulations, this model was minimized to reduce
nonbond overlaps using the procedure described in the Methodology section. A MD run at
constant volume and temperature was then performed for 380 picoseconds. The Nose-Hoover
method with a time constant of 0.1 ps was used to maintain temperature at 300K. Equilibration
was reached within the first 10 picoseconds.
Our first comparison of the two polymer models (i.e., PIB models A and B) is given in Table I, which
shows the various contributions to potential energy in the systems. The averages and standard
deviations were taken over the last 10 ps of the dynamics for Model A and for the last 80
picoseconds of the dynamics for Model B. All values represent total amounts for each
contribution; since the two models were approximately the same size, we did not normalize the
values. Interestingly, we observe differences between the two models in the van der Waals, angle,
and dihedral contributions; in each case the model created with the lattice construction technique
(Model B) has a higher energy. The total potential energy difference between the models is 8650
kJ/mol, or 0.14 $k_B T$ per atom. The largest differences are found in the angle and dihedral energies,
accounting for about 80% of the total difference. Both of these observations are consistent with those
of van der Vegt et al. [7] for poly(dimethylsiloxane) polymer models. They observed a total per-atom
difference between models made by direct packing and slow compression methods, with
about 70% of that due to angle, dihedral, 1-4 nonbond, and 1-5 nonbond contributions.
Table I. Potential energy contributions in PIB models (kcal/mol)
PIB model   van der Waals   bonds   angles   dihedrals
Since the concern of this paper is the calculation of diffusion coefficients of small molecules in
polymers, we are interested in how the differences between Models A and B will affect such
calculations. We studied the diffusion of helium (He) through each of the polymer models, using the
Lennard-Jones parameters given by Muller-Plathe et al. [2] for He. In the case of model A, a
polymer configuration with a density of exactly 0.91 g/cm^3 was taken from the end of the second-stage runs. Then 100 He
atoms were added to the configuration. In the case of model B, 100 He atoms were added to the final configuration. The overlaps of the new He atoms
with the polymer were relaxed using constant volume and temperature MD with a small time step. Diffusion MD runs were then performed at constant
volume and temperature (300K) for 360 picoseconds for model A and 770 picoseconds for model B. Figure 4 visualizes the mean-square displacement of
He molecules in models A and B. The straight lines are drawn with slopes of exactly unity, and one can see that at long times eq. 2 is valid for n = 1. The
normal mode is reached for both systems within 1000 picoseconds. The diffusion coefficients for models A and B are both
close to the value found by Muller-Plathe et al. [2]; this is expected since we are using the
same forcefield. Most importantly, the differences in diffusion coefficients between systems A and B are not significant. Hence, the lattice construction
technique appears to generate equilibrated polymer structures that behave the same as those created by the slow compression method in so far
as diffusion calculations are concerned.
Figure 4. Trajectories of helium particles versus time in PIB models A & B (cf. text). The straight line represents the normal diffusion regime.
We further probed the performance of the lattice construction technique with PIB systems of increasing sizes. As shown in Figure 5, the computational
complexity appears to scale linearly with the number of atoms. This is a substantial gain compared to other MD techniques, in which the number of steps
of the simulation must be at least $n^3$, where n is the number of atoms of the polymer chains (cf. Methodology section). As a consequence, we were able to
generate PIB structures of up to 1,380,000 atoms. To the best of our knowledge, these models are the largest non-crystalline bonded atomic systems ever
generated and in general are several orders of magnitude larger than the current models used in polymer science. It is also important to note that the
structures generated by the lattice construction program are equilibrated using, at most, ten-picosecond MD simulations, rather than several hundred
picoseconds with other techniques.
Figure 5. Computational running time of the lattice construction code versus number of atoms and number of monomers.
Oxygen and water diffusion calculations. The lattice construction code was used to generate initial structures for butyl and EPDM. Both structures
contained approximately 10,000 atoms and were generated in a cubic box of 4.5 nm size, which leads to a density of 0.91 g/cm^3. In both systems, oxygen
molecules were added up to 3% of the total weight, while water molecules were added up to 9% weight. Although 3% weight is the experimental value for
both oxygen and water in butyl and EPDM, a higher value for water was chosen to obtain a better statistical average when calculating diffusion
coefficients. Once the diffusing species were added, the two resulting structures were energy minimized and equilibrated with MD simulations. The
simulation time used for equilibration did not exceed 10 picoseconds.
In order to compute diffusion coefficients using eq. 1, MD was run for up to 1000 picoseconds. This large simulation time was chosen with the
hope of being able to reach the normal diffusion mode (cf. eq. 2 and discussion below). Figures 6 and 7 visualize the mean-square displacement of
oxygen and water molecules in butyl and EPDM as a function of time. With the exception of water in EPDM the normal mode was reached in all cases.
The diffusion coefficients D were evaluated from the positions of these lines.
Figure 6a. Trajectories of oxygen particles versus time in butyl rubber. 6b. Trajectories of oxygen particles versus time in EPDM. The straight line represents the normal diffusion regime.
Figure 7a. Trajectories of water particles versus time in butyl rubber. 7b. Trajectories of water particles versus time in EPDM.
The non-convergence of the diffusion calculations for water in EPDM may be attributed to the fact that several diffusing molecules can eventually fit into
EPDM cavities. Indeed, when visualizing the solvated EPDM model, it was observed that some cavities contained several water molecules. Because of the
electrostatic attractions between water molecules, one can hypothesize that when several penetrants are present in one cavity, it is energetically favorable
for the penetrants to remain inside the cavity rather than jumping to neighboring cavities. Since 9% weight for water is overestimated compared to the
corresponding experimental value, a new EPDM model was constructed containing only 3% water. With this new model, the normal mode was reached
in less than 1000 picoseconds. The diffusion coefficient values are listed in Table II.
Table II. Diffusion coefficients for O2 and water (10^-6 cm^2/s)
Model system | Diffusion coef. (this work) | Diffusion coef. (experiment) [11]
O2 in butyl rubber | 0.285 | 0.081
H2O in butyl rubber | 0.159 | not reported
O2 in EPDM | 0.781 | 0.177
H2O in EPDM | 1.921 | not reported
The diffusion coefficients for EPDM are larger than for butyl, as expected from experiments. A tentative explanation was provided when visualizing the
free volumes of the equilibrated polymers. Free volumes are void spaces in model structures that are accessible by penetrant molecules. The free
volumes were computed using a program developed by one of the authors of this paper. [12] The determination of the free volume within a given model
system is conducted as follows. The radii of all the polymer atoms are augmented by a length equal to the penetrant radius, and the unoccupied volume
of the resulting model system is then calculated. For both butyl and EPDM it was observed that the free volumes did not percolate for penetrants having a
diameter greater than 0.30 nm (the diameter of atomic oxygen is 0.35 nm). In other words, all cavities having an entrance size greater than 0.30 nm were not
connected. This observation is consistent with the hopping mechanism picture proposed by some authors. [13][14][15] Penetrant molecules spend
relatively long times in cavities before performing infrequent jumps between adjacent cavities. The same authors have performed visual inspection of
polymer models in the vicinity of penetrants, and it appeared that jumping events occur after channels between neighboring cavities are formed. Once
the channels are formed, the penetrants slip through them without much effort. If such a picture is true, since EPDM has higher diffusion coefficients than
butyl, EPDM should have a greater number of channels than butyl. We probed the free volume distribution for butyl and EPDM versus the cavity
entrance size. As shown in Figure 8, butyl and EPDM have different free volume distributions. EPDM contains significantly more free volumes having
an entrance size smaller than 0.30 nm than butyl does. It is important to note that these small free volumes provide links between larger cavities, since the free
volume network percolates for entrance sizes smaller than 0.30 nm. Consequently, the curves presented in Figure 8 are consistent with the hopping
mechanism picture, and according to this picture, EPDM has higher diffusion coefficients because it comprises more channels between cavities.
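The augmented-radius determination described above can be sketched as a grid-occupancy computation; the following code is illustrative only (the Atom layout and the grid resolution are assumptions of the sketch), not the authors' free-volume program [12].

    /* Sketch of the augmented-radius method: the box is sampled on a
     * regular grid, and a grid point counts as occupied if it falls within
     * any atom radius augmented by the penetrant radius. */
    typedef struct { double x, y, z, r; } Atom;

    double free_volume(const Atom *atoms, int natoms,
                       double box, double penetrant_r, int grid)
    {
        double h = box / grid;
        long empty = 0;
        for (int i = 0; i < grid; i++)
        for (int j = 0; j < grid; j++)
        for (int k = 0; k < grid; k++) {
            double px = (i + 0.5) * h, py = (j + 0.5) * h, pz = (k + 0.5) * h;
            int occupied = 0;
            for (int a = 0; a < natoms && !occupied; a++) {
                double dx = px - atoms[a].x;
                double dy = py - atoms[a].y;
                double dz = pz - atoms[a].z;
                double reff = atoms[a].r + penetrant_r;  /* augmented radius */
                occupied = (dx * dx + dy * dy + dz * dz < reff * reff);
            }
            if (!occupied)
                empty++;
        }
        return (double)empty * h * h * h;  /* volume accessible to the penetrant */
    }

A percolation analysis such as the one reported in Figure 8 would then classify the connectivity of these empty grid cells as a function of the probe diameter.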
Figure 8. Free volume distribution in butyl rubber and EPDM.
Conclusion
We have proposed a new technique to generate close-to-equilibrium dense polymeric structures. This technique leads to diffusion results that are similar
to results obtained by previously published methods, while being several orders of magnitude faster. We have shown the new technique to scale linearly
with the number of atoms, and have used it to generate dense polymer models comprising up to 1,380,000 atoms. To the best of our knowledge these
models are the largest non-crystalline bonded atomic systems ever generated and are many times larger than the current models used in polymer science.
We have used the new technique to construct 10,000-atom models of butyl rubber and EPDM for the purpose of simulating the diffusion of oxygen and
water molecules. The simulations were carried out using the LAMMPS molecular dynamics code, and were run on Sandia's Intel Teraflop and Sandia's
Intel Paragon. In agreement with experimental results, the diffusion coefficients in EPDM were found to be an order of magnitude larger than in butyl. A
tentative explanation of the differences between the coefficients was advanced when comparing the free volume distributions of the two polymer models:
diffusion is facilitated in EPDM due to a larger number of channels between cavities than in butyl.
Acknowledgments
We are pleased to acknowledge the funding provided by the Accelerated Strategic Computing Initiative (ASCI) of the U.S. Department of Energy, Sandia
National Laboratories under contract DE-AC04-76DP00789.
--R
"CHARMm: A program for macromolecular energy, minimization and dynamics calculations"
Personal communication
"Particle-Mesh Ewald and rRESPA for Parallel Molecular Dynamics Simulations"
Principles of Polymer Chemistry
Scaling Concepts in Polymer Physics
in Polymer Handbook
--TR | gas diffusion;teraflop;molecular builder;molecular dynamics;polymer |
509641 | A scalable mark-sweep garbage collector on large-scale shared-memory machines. | This work describes the implementation of a mark-sweep garbage collector (GC) for shared-memory machines and reports its performance. It is a simple "parallel" collector in which all processors cooperatively traverse objects in the global shared heap. The collector stops the application program during a collection and assumes a uniform access cost to all locations in the shared heap. The implementation is based on the Boehm-Demers-Weiser conservative GC (Boehm GC). Experiments have been done on Ultra Enterprise 10000 (UltraSPARC processor, 250 MHz, 64 processors). We wrote two applications, BH (an N-body problem solver) and CKY (a context-free grammar parser), in a parallel extension to C++. Through the experiments, we observe that load balancing is the key to achieving scalability. A naive collector without load redistribution hardly exhibits speed-up (at most fourfold speed-up on 64 processors). Performance can be improved by dynamic load balancing, which exchanges objects to be scanned between processors, but we still observe that a straightforward implementation severely limits performance. First, large objects become a source of significant load imbalance, because the unit of load redistribution is a single object. Performance is improved by splitting a large object into small pieces before pushing it onto the mark stack. Next, processors spend a significant amount of time uselessly because of a serializing method for termination detection using a shared counter. This problem suddenly appeared on 32 or more processors. By implementing a non-serializing method for termination detection, the idle time is eliminated and performance is improved. With all these careful implementations, we achieved average speed-up of 28.0 in BH and 28.6 in CKY on 64 processors. | Introduction
Shared-memory architecture is an attractive platform for the implementation of general-purpose
parallel programming languages that support irregular, pointer-based data
structures [4, 20]. The recent progress in scalable shared-memory technologies is also
making these architectures attractive for high-performance, massively parallel computing.
However, one of the important issues not yet addressed in the implementation of general-purpose
parallel programming languages is a scalable garbage collection (GC) technique
for shared heaps. Most previous work on GC for shared-memory machines is concurrent
GC [6, 10, 17], by which we mean that the collector on a dedicated processor
runs concurrently with application programs, but does not perform collection itself in
parallel. The focus has been on shortening pause time of applications by overlapping
the collection and the applications on different processors. Having a large number of
processors, however, such collectors may not be able to catch up allocation speed of
applications. To achieve scalability, we should parallelize collection itself.
This paper describes the implementation of a parallel mark-sweep GC on a large-scale
(up to 64 processors), multiprogrammed shared-memory multiprocessor and
presents the results of empirical studies of its performance. The algorithm is, at
least conceptually, very simple; when an allocation requests a collection, the application
program is stopped and all the processors are dedicated to collection. Despite
its simplicity, achieving scalability turned out to be a very challenging task. In the
empirical study, we found a number of factors that severely limit the scalability, some
of which appear only when the number of processors becomes large. We show how to
eliminate these factors and demonstrate the speed-up of the collection.
We implemented the collector by extending the Boehm-Demers-Weiser conservative
garbage collection library (Boehm GC [2, 3]) on two systems: a 64-processor Ultra
Enterprise 10000 and a 16-processor Origin 2000. The heart of the extension is dynamic
task redistribution through exchanging contents of the mark stack (i.e., data that are
live but yet to be examined by the collector). At present, we achieved 14-28-fold
speed-up on Ultra Enterprise 10000, and 3.7 to 6.3-fold speed-up on Origin 2000.
The rest of the paper is organized as follows. Chapter 2 compares our approach
with previous work. Chapter 3 briefly summarizes Boehm GC, on which our collector
is based. Chapter 4 describes our parallel marking algorithm and solutions for performance
limiting factors. Chapter 5 describes the experimental conditions. Chapter 6
shows experimental results, and we conclude in Chapter 7.
Chapter 2
Previous Work
Most previous published work on GCs for shared-memory machines has dealt with
concurrent GC [6, 10, 17], in which only one processor performs a collection at a time.
The focus of such work is not on the scalability on large-scale or medium-scale shared-memory
machines but on shortening pause time by overlapping GC and the application
by utilizing multiprocessors. When GC itself is not parallelized, the collector may fail
to finish a single collection cycle before the application exhausts the heap (Figure 2.1).
This will occur on large-scale machines, where the amount of live data will be large
and the (cumulative) speed of allocation will be correspondingly high.
We are therefore much more interested in "parallel" garbage collectors, in which a
single collection is performed cooperatively by all the processors. Several systems use
this type of collector [7, 16], and we believe there is much unpublished work too, but
there are relatively few published performance results. To our knowledge, the present
paper is the first published work that examines the scalability of parallel collectors on
real, large-scale, and multiprogrammed shared-memory machines. Most previous
publications have reported only preliminary measurements.
Uzuhara constructed a parallel mark-sweep collector on symmetric multiprocessors
[22]. When the amount of free space in the shared heap becomes smaller than a
threshold, some processors start a collection, while other processors continue application
execution. The collector processors cooperatively mark all reachable objects with
dynamic load balancing by using a global task pool. Then they sweep the heap and
join the application workers. This approach has the same advantage as concurrent
GC, and it can prevent a single collection cycle from becoming longer on large-scale
machines.
Figure 2.1: Difference between concurrent GC and our approach. If only one dedicated processor performs GC, a collection cycle becomes longer in proportion to the number of processors.
Ichiyoshi and Morita proposed a parallel copying GC for a shared heap [11]. It
assumes that the heap is divided into several local heaps and a single shared heap.
Each processor collects its local heap individually. Collection on the shared-heap is
done cooperatively but asynchronously. During a collection, live data in the shared-
heap (called the 'from-space' of the collection) are copied to another space called 'to-space'.
Each processor, on its own initiative, copies data that is reachable from its local heap to
to-space. Once a processor has copied data reachable from its local heap, it can resume
application on that processor, which works in the new shared-heap (i.e., to-space).
Our collector is much simpler than both of Uzuhara's collector and Ichiyoshi and
Morita's collector; it simply synchronizes all the processors at a collection and all the
processors are dedicated to the collection until all reachable objects are marked. Although
Ichiyoshi and Morita have not mentioned it explicitly, we believe that a potential
advantage of their method over ours is its lower susceptibility to load imbalance of
a collection. That is, the idle time that would appear in our collector is effectively
filled by the application. The performance measurement in Chapter 6 shows a good
speed-up up to our maximum configuration, 64 processors, and indicates that there is
no urgent need to consider using the application to fill the idle time. We prefer our
method because it does not interfere with SPMD-style applications, in which global
synchronizations are frequent. 1 Both Uzuhara's method and Ichiyoshi and Morita's
method may interact badly with such applications because they exhibit a very long
marking cycle, during which the applications cannot utilize all the processors. Taura
also reached a similar conclusion on distributed-memory machines [21].
Our collector algorithm is most similar to Imai and Tick's parallel copying collector
[12]. In their study, all processors perform copying tasks cooperatively and any memory
object in one shared heap can be copied by any processor. Dynamic load balancing
1 A global synchronization occurs even if the programming language does not provide explicit barrier
synchronization primitives. It implicitly occurs in many places, such as reduction and termination
detection.
is achieved by exchanging memory pages to be scanned in the to-space among proces-
sors. Speed-up is calculated by a simulation that assumes processors become idle only
because of load imbalance; the simulation overlooks other sources of performance-degrading
factors, such as spin time for lock acquisition. As we will show in Chapter 6,
such factors become quite significant, especially in large-scale and multiprogrammed
environments.
Chapter 3
Boehm-Demers-Weiser Conservative GC
Library
The Boehm-Demers-Weiser conservative GC library (Boehm GC) is a mark-sweep GC
library for C and C++. The interface to applications is very simple; it simply replaces
calls to malloc with calls to GC MALLOC. The collector automatically reclaims memory
no longer used by the application. Because of the lack of precise knowledge about types
of words in memory, a conservative GC is necessarily a mark-sweep collector, which
does not move data. Boehm GC supports parallel programs using Solaris threads.
The current focus seems to be supporting parallel programs with minimum implementation
effort; it serializes all allocation requests, and GC is not parallelized.
3.1 Sequential Mark-Sweep Algorithm
The mark-sweep collector's work is to find all garbage objects, which are unreachable
from the root set (machine registers, stacks and global variables) via any pointer path,
and to free those objects. To tell whether an object is live (reachable) or garbage,
each object has a mark bit, which shows 0 (= 'unmarked') before a collection cycle.
We mention how Boehm GC maintains the mark bits in section 3.2. A collection
cycle consists of two phases; in the mark phase, the collector traverses objects that are
reachable from the root set recursively, and sets (marks) their mark bits to 1 (= 'marked').
To mark objects recursively, Boehm GC uses a data structure called the mark stack, as
shown in section 3.3. In the sweep phase, the collector scans all mark bits and frees
objects whose mark bits are still 'unmarked'. The sweeping method heavily depends
on how the free objects are managed. We describe aspects relevant to the sweep phase
in section 3.4.
3.2 Heap Blocks and Mark Bitmaps
Boehm GC manages a heap in units of 4-KB blocks, called heap blocks. Objects
in a single heap block must have the same size and be word-aligned. For each block,
a separate header record (heap block header) is allocated that contains information about
the block, such as the size of the objects in it. Also kept in the header is a mark bitmap
for the objects in the block. A single bit is allocated for each word (32 bits in our
experimental environments); thus, a mark bitmap is 128 bytes long. The jth bit of the
ith byte in the mark bitmap describes the state of an object that begins at (BlockAddr
+ 4(8i + j)), where BlockAddr is the start address of the corresponding heap block.
Put differently, each word in a mark bitmap describes the states of 32 consecutive
words in the corresponding heap block, which may contain multiple small objects.
Therefore, in parallel GC algorithms, visiting and marking an object must explicitly
be done atomically. Otherwise, if two processors simultaneously mark objects that
share a common word in a mark bitmap, either of them may not be marked properly.
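For concreteness, the following sketch (not the actual Boehm GC source; header_of is an assumed lookup) shows how a mark bit is located under this layout.

    #include <stdint.h>
    #include <stddef.h>

    #define BLOCK_SIZE 4096
    #define WORD_SIZE  4

    /* Illustrative sketch of the bitmap geometry described above:
     * one mark bit per 32-bit word, 4096/4 = 1024 bits = 128 bytes. */
    typedef struct {
        unsigned char mark_bits[128];
        /* ... object size and other per-block information ... */
    } BlockHeader;

    /* assumed helper: maps a block address to its separately allocated header */
    extern BlockHeader *header_of(uintptr_t block_addr);

    int is_marked(uintptr_t obj)
    {
        uintptr_t block = obj & ~(uintptr_t)(BLOCK_SIZE - 1);
        size_t word = (obj - block) / WORD_SIZE;   /* word index 8*i + j */
        BlockHeader *h = header_of(block);
        return (h->mark_bits[word / 8] >> (word % 8)) & 1;
    }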
3.3 Mark Stack
Boehm GC maintains marking tasks to be performed with a vector called mark
stack. It keeps track of objects that have been marked but may directly point to an
unmarked object. Each entry is represented by two words:
- the beginning address of an object, and
- the size of the object.
Figure 3.1 shows the marking process in pseudocode; each iteration pops an entry
from the mark stack and scans the specified object, 1 possibly pushing new entries onto
the mark stack. A mark phase finishes when the mark stack becomes empty.
1 More precisely, when the specified object is very large (> 4 KB), the collector scans only the first 4 KB and keeps the rest in the stack.
push all roots (registers, stack, global variables) onto mark stack.
while (mark stack is not empty) {
    o = pop(mark stack);
    for (i = 0; i < size of o; i++) {
        if (o[i] is not a pointer) do nothing;
        else if (mark bit of o[i] == 'marked') do nothing;
        else {
            mark bit of o[i] = 'marked';
            push(o[i], mark stack);
        }
    }
}
Figure 3.1: The marking process with a mark stack.
3.4 Sweep
In the sweep phase, Boehm GC does not actually free each garbage object. Instead,
it distinguishes empty heap blocks from other heap blocks.
Boehm GC examines the mark bitmaps of all heap blocks in the heap. A heap block
that contains any marked object is linked to a list called the reclaim list, to prepare for
future allocation requests 2. Heap blocks that are found empty are linked to a list
called the heap block free list, in which heap blocks are sorted by their addresses, and
adjacent ones are coalesced to form a large contiguous block. The heap block free list is
examined when an allocation cannot be served from a reclaim list.
2 The system does not free garbage objects on nonempty heap blocks until the program requests
objects of the proper size (lazy sweeping). In order to find garbage objects on those heap blocks,
mark bitmaps are preserved until the next collection.
Chapter 4
Parallel GC Algorithm
Our collector supports parallel programs that consist of several UNIX processes. We
assume that all processes are forked at the initialization of a program and are not
added to the application dynamically. Interface to the application program is the
same as that of the original Boehm GC; it provides GC_MALLOC, which now returns a
pointer to shared memory (acquired by a mmap system call).
We could alternatively support Solaris threads. The choice is arbitrary and somewhat
historical; we simply thought having private global variables makes implementation
simpler. We do not claim one is better than the other.
4.1 Basic Algorithm
4.1.1 Parallel Marking
Each processor has its own local mark stack. When GC is invoked, all application
processes are suspended by sending signals to them. When all the signals have been
delivered, every processor starts marking from its local root, pushing objects onto its
local mark stack. When an object is marked, the corresponding word in a mark bitmap
is locked before the mark bit is read. The purpose of the lock is twofold: one is to
ensure that a live object is marked exactly once, and the other is to atomically set the
appropriate mark bit of the word. When all reachable objects are marked, the mark
phase is finished.
This naive parallel marking hardly results in any recognizable speed-up because of
the imbalance of marking tasks among processors.
Figure 4.1: In the simple algorithm, all nodes of a shared tree are marked by one processor.
Load imbalance is significant when
a large data structure is shared among processors through a small number of externally
visible objects. For example, a significant imbalance is observed when a large tree is
shared among processors only through a root object. In this case, once the root node
of the tree is marked by one processor, so are all the internal nodes (Figure 4.1).
To improve marking performance, our collector performs dynamic load balancing by
exchanging entries stored in mark stacks.
4.1.2 Dynamic Load Balancing of Marking
Besides a local mark stack, each processor maintains an additional data structure
named stealable mark queue, through which "tasks" (entries in mark stacks) are exchanged (Figure 4.2).
Figure 4.2: Dynamic load balancing method: tasks are exchanged through stealable mark queues.
During marking, each processor periodically checks its stealable
mark queue. If it is empty, the processor moves all the entries in the local mark stack
(except entries that point to the local root, which can be processed only by the local
processor) to the stealable mark queue. When a processor becomes idle (i.e., when
its mark stack becomes empty), it tries to obtain tasks from stealable mark queues.
The processor examines its own stealable mark queue first, and then those of other
processors, until it finds a non-empty queue. Once it finds one, it steals half of the
entries 1 in the queue and stores them into its mark stack. Because several processors
may become idle simultaneously, this test-and-steal operation must acquire a lock on
a queue. The mark phase is terminated when all the mark stacks and stealable mark
queues become empty. The termination is detected by using a global counter to maintain
the number of empty stacks and empty queues. The counter is updated whenever
a processor becomes idle or obtains tasks.
1 If the queue has n entries and n is an odd number, (n + 1)/2 entries are stolen.
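The steal operation under this protocol might look like the following sketch; the Entry, Queue and MarkStack types and the lock/queue helpers are assumptions, and the flag updates needed for termination detection (Section 4.2) are omitted.

    typedef struct { void *obj; long size; } Entry;   /* a mark-stack entry */
    typedef struct { Entry *buf; int count; int lock; } Queue;
    typedef struct { Entry *buf; int top; } MarkStack;

    extern void  lock_acquire(int *l);     /* assumed spin-lock helpers */
    extern void  lock_release(int *l);
    extern Entry dequeue(Queue *q);
    extern void  push_entry(MarkStack *s, Entry e);

    /* An idle processor locks a stealable mark queue and moves half of
     * its entries (rounded up) onto its own mark stack. */
    int try_steal(Queue *victim, MarkStack *my_stack)
    {
        int stolen = 0;
        lock_acquire(&victim->lock);
        int n = victim->count;
        if (n > 0) {
            stolen = (n + 1) / 2;                  /* take half, rounded up */
            for (int i = 0; i < stolen; i++)
                push_entry(my_stack, dequeue(victim));
        }
        lock_release(&victim->lock);
        return stolen;                             /* 0: the queue was empty */
    }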
4.1.3 Parallel Sweeping
In the parallel algorithm, all processors share a single heap block free list, while each
processor maintains a local reclaim list. In the sweep phase, each processor examines a
part of the heap and links empty heap blocks to the heap block free list and non-empty
ones to its local reclaim list. Since each processor has a local reclaim list, inserting
blocks to a reclaim list is straightforward. Inserting blocks to the heap block free
list is, however, far more difficult, because the heap block free list is shared, blocks
must sorted by their addresses, and adjacent blocks must be coalesced. To reduce the
contention and the overhead on the shared list, we make the unit of work distribution
in the sweep phase larger than a single heap block and perform tasks as locally as
possible; each processor acquires a large number of (64 in the current implementation)
contiguous heap blocks at a time and processes them locally. Empty blocks are locally
sorted and coalesced within the blocks acquired at a time and accumulated in a local
list called partial heap block free list. Each processor repeats this process until all
the blocks have been examined. Finally, the lists of empty blocks accumulated in
partial heap block free lists are chained together to form the global heap block free
list, possibly coalescing blocks at joints. When this sweep phase is finished, we restart
the application.
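The work distribution in this sweep phase can be sketched as follows; the shared cursor, the chunk size of 64 blocks, and the list helpers mirror the description above, but the code itself is illustrative rather than the actual implementation.

    #define CHUNK 64

    extern volatile long next_block;               /* shared cursor over the heap */
    extern long num_blocks;
    extern int  block_is_empty(long b);            /* assumed helpers */
    extern void add_to_partial_free_list(long b);  /* local, sorted, coalesced */
    extern void add_to_local_reclaim_list(long b);

    void parallel_sweep(void)
    {
        for (;;) {
            /* claim the next chunk of contiguous blocks atomically */
            long base = __sync_fetch_and_add(&next_block, CHUNK);
            if (base >= num_blocks)
                break;
            long end = base + CHUNK < num_blocks ? base + CHUNK : num_blocks;
            for (long b = base; b < end; b++) {
                if (block_is_empty(b))
                    add_to_partial_free_list(b);
                else
                    add_to_local_reclaim_list(b);
            }
        }
        /* afterwards, the partial free lists are chained into the global list */
    }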
4.2 Performance Limiting Factors and Solutions
The basic marking algorithm described in the previous section exhibits acceptable speed-up
on small-scale systems (e.g., approximately fourfold speed-up on eight processors).
As we will see in Chapter 6, however, several factors severely limit speed-up, and this
basic form never yields more than a 12-fold speed-up. Below we list these factors and
describe how we addressed them in turn.
Load imbalance by large objects: We often found that a large object became a
source of significant load imbalance. Recall that the smallest unit of task distribution
is a single entry in a stealable mark queue, which represents a single object
in memory. This is still too large! We often found that only some processors were
busy scanning large objects, while other processors were idle. This behavior was
most prominent when applications used many stacks or large arrays. In one of
our parallel applications, the input data, which is a single 800-KB array caused
significant load imbalance. In the basic algorithm, it was not unusual for some
processors to be idle during the entire second half of a mark phase.
We address this problem by splitting large objects (objects larger than 512 bytes)
into small (512-byte) pieces before they are pushed onto the mark stack. In the
experiments described later, we refer to this optimization as SLO (Split Large
Object).
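A sketch of this splitting step is shown below; push_range stands in for the actual mark-stack push and is an assumption of the sketch, as is the exact piece-alignment policy.

    #define PIECE 512

    extern void push_range(char *begin, long size);  /* assumed helper */

    /* SLO sketch: an object larger than 512 bytes is pushed as multiple
     * 512-byte pieces, so each piece can be stolen and scanned independently. */
    void push_object(char *obj, long size)
    {
        if (size <= PIECE) {
            push_range(obj, size);
            return;
        }
        for (long off = 0; off < size; off += PIECE) {
            long piece = size - off < PIECE ? size - off : PIECE;
            push_range(obj + off, piece);   /* independent unit of stealing */
        }
    }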
Delay in testing mark bitmap: We observed cases where processors consumed a
significant amount of time acquiring locks on mark bits. A simple way to guarantee
that a single object is marked only once is to lock the corresponding mark
bit (more precisely, the word that contains the mark bit) before reading it. However,
this may unnecessarily delay processors that read the mark bit of an object
just to learn that the object is already marked. To improve this sequence, we replaced
the "lock-and-test" operation with optimistic synchronization. We test
a mark bit first and quit if the bit is already set. Otherwise, we calculate the
new bitmap for the word and write the new bitmap to the original location if
the location still contains the originally read bitmap. This operation is done
atomically by the compare&swap instruction in the SPARC architecture or the load-linked and
store-conditional instructions in the MIPS architecture. We retry if the location has
been overwritten by another processor. These operations eliminate useless lock
acquisitions on mark bits that are already set. We refer to this optimization as
MOS (Marking with Optimistic Synchronization) in the experiments below.
Another advantage of this algorithm is that it is a non-blocking algorithm [8,
18, 19], and hence does not suffer from untimely preemption. A major problem
with the basic algorithm is, however, that locking a word in a bitmap every
time we check if an object is marked causes contention (even in the absence of
preemption). We confirmed that a "test-and-lock-and-test" sequence that checks
the mark bit before locking works equally well, though it is a blocking algorithm.
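For concreteness, the optimistic marking loop might look like the following sketch; it uses the GCC __sync compare-and-swap built-in as a stand-in for the SPARC and MIPS instructions mentioned above, and is not the actual implementation.

    #include <stdint.h>

    /* Returns 1 if this call marked the object, 0 if it was already marked. */
    int mark_optimistic(volatile uint32_t *bitmap_word, uint32_t bit_mask)
    {
        for (;;) {
            uint32_t old = *bitmap_word;
            if (old & bit_mask)
                return 0;                       /* already marked: no lock taken */
            uint32_t desired = old | bit_mask;  /* compute the updated word */
            if (__sync_bool_compare_and_swap(bitmap_word, old, desired))
                return 1;                       /* our write succeeded */
            /* another processor changed the word meanwhile: re-read and retry */
        }
    }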
Serialization in termination detection: When the number of processors becomes
large, we found that the GC speed suddenly dropped. It turned out that processors
spent a significant amount of time acquiring a lock on the global counter that
maintains the number of empty mark stacks and empty stealable mark queues.
We updated this counter each time a stack (queue) became empty or tasks were
thrown into an empty stack (queue). This serialized update operation on the
counter introduced a long critical path into the collector.
We implemented another termination detection method in which two flags are
maintained by each processor; one tells whether the mark stack of the processor
is currently empty and the other tells whether the stealable mark queue of the
processor is currently empty. Since each processor maintains its own flags at
locations different from those of the flags of other processors, setting flags and
clearing flags are done without locking.
Termination is detected by scanning through all the flags in turn. To guarantee
the atomicity of the detecting process, we maintain an additional global flag
detection-interrupted, which is set when a collector recovers from its idle state.
A detecting processor clears the detection-interrupted flag, scans through all
the flags until it finds any non-empty queue, and finally checks the detection-
interrupted flag again if all queues are empty. It retries if the process has been
interrupted by any processor. We must take care of the order of updating flags
lest termination be detected by mistake. For example, when processor A steals
all tasks of processor B, we need to change flags in the following order: (1) stack-
empty flag of A is cleared, (2) detection-interrupted flag is set, and (3) queue-
empty flag of B is set. We refer to this optimization as NSB (Non-Serializing
Barrier).
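A sketch of the resulting detection routine is given below; the flag arrays and num_pe are assumed globals, and the owner-side flag updates (with the ordering constraints described above) are omitted.

    #define MAX_PE 64

    extern volatile int stack_empty[MAX_PE];   /* set only by the owner PE */
    extern volatile int queue_empty[MAX_PE];
    extern volatile int detection_interrupted; /* set when a PE leaves idle */
    extern int num_pe;

    int termination_detected(void)
    {
        detection_interrupted = 0;
        for (int p = 0; p < num_pe; p++)
            if (!stack_empty[p] || !queue_empty[p])
                return 0;                 /* somebody still has work */
        /* every flag was empty; the scan is valid only if nobody woke up */
        return !detection_interrupted;
    }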
Chapter 5
Experimental Conditions
We have implemented the collector on two systems: the Ultra Enterprise 10000 and
the Origin 2000. The former has a uniform memory access (UMA) architecture and the
latter has a nonuniform memory access (NUMA) architecture. The implementation is
based on the source code of Boehm GC version 4.10. We used four applications written
in C++: BH (an N-body problem solver), CKY (a context-free grammar parser), Life
(a life game simulator) and RNA (a program to predict RNA secondary structure).
5.1 Ultra Enterprise 10000
Ultra Enterprise 10000 is a symmetric multiprocessor with sixty-four 250 MHz UltraSPARC
processors. All processors and memories are connected through a crossbar
interconnect whose bandwidth is 10.7 GB/s. The L2 cache block size is 64 bytes.
5.2 Origin 2000
Origin 2000 is a distributed shared memory machine. The machine we used in the
experiment has sixteen 195 MHz R10000 processors. That system consists of eight
modules, each of which has two processors and the memory module. The modules
are connected through a hypercube interconnect whose bandwidth is 2.5 GB/s. The
memory bandwidth of each module is 0.78 GB/s and the L2 cache block size is 128
bytes.
In the default configuration, each memory page (whose size is 16 KB) is placed on
the same node as the processor that accessed the page first. Therefore, processors
can have desired pages placed locally by touching the pages in the initialization phase of the
program. We used two physical memory allocation policies in the experiment:
Local to allocator (LA) Each heap block and its corresponding mark bitmap are local
to the processor that allocated the heap block first.
Round-robin (RR) The home node of a heap block is determined by its address
rather than by the allocator of the block. A heap block whose address is A is local to
processor P such that P = (A / BlockSize) mod #Processors.
The home of a mark bitmap is determined by the same rule.
5.3 Applications
We used the following four applications written in C++. BH and CKY are parallel
applications. We wrote the Enterprise versions of those applications in a parallel extension to
C++ [14]. This extension allows the programmer to create user-level threads dynamically.
The runtime implicitly uses the fork system call at the beginning of the program. Since
this extension does not currently work on Origin, we wrote those applications on Origin by
using the fork system call explicitly. Life and RNA are sequential applications; even these
sequential applications can utilize our parallel collection facility.
BH simulates the motion of N particles by using the Barnes-Hut algorithm [1]. At
each time step, BH makes a tree whose leaves correspond to particles and calculates
the acceleration, speed, and location of the particles by using the tree. In
the experiment, we simulate 10000 particles for 50 time steps.
CKY takes sentences written in natural language and the syntax rules of that language
as input, and outputs all possible parse trees for each sentence. CKY
calculates all nonterminal symbols for each substring of the input sentence in
bottom-up fashion. In the experiment, each of the given 256 sentences consists of 10 to
words.
Life solves "Conway's Game of Life". It simulates cells on a square board. Each
cell has one of two states, ON and OFF. The state of a cell is determined by the
states of the adjacent cells at the previous time step. The program takes a list that
contains the ON cells in an initial state. The number of initial ON cells is 5685 in
our experiment. We simulate them for 150 time steps.
RNA predicts the secondary structure of an RNA sequence. The input data is a set
of stack regions, and each stack region has its position and energy. A set of stack
regions is called feasible if every pair of its elements fulfills a certain condition.
The problem is to find all feasible subsets of the given stack regions whose total
energy is no smaller than a threshold. The number of input stack regions is 119 in
our experiment.
5.4 Evaluation Framework
Ideally, the speed-up of the collector should be measured by using various numbers
of processors and applying the algorithm to the same snapshot of the heap. It is difficult,
however, to reproduce the same snapshot multiple times because of the indeterminacy
of application programs. The amount of data is so large that we cannot simply dump
the entire image of the heap. Even if such dumping were feasible, it would still be
difficult to continue from a dumped image with a different number of processors.
Thus the only feasible approach is to formulate the amount of work needed to finish a
collection for a given heap snapshot and then calculate how fast the work is processed
at each occurrence of a collection.
A generally accepted estimation of the workload of marking for a given heap configuration
is the amount of live objects, or equivalently, the number of words that are
scanned by the collector. This, however, ignores the fact that the load on each word
differs depending on whether it is a pointer, and the density of pointers in live data
may differ from one collection to another. Given a word in the heap, Boehm GC first
performs a simple test that rules out most non-pointers and then examines the word
more elaborately.
To measure the speed-up more accurately, we define the workload W of a collection as
W = a1 x1 + a2 x2 + a3 x3 + a4 x4 + a5 x5,
where x1 is the number of marked objects, x2 the number of times to scan already
marked objects, x3 the number of times to scan non-pointers, x4 the number of empty
heap blocks, and x5 the number of non-empty blocks 1. Each xn is totaled over all
processors. The GC speed S is defined as S = W/T, where T is the elapsed time of the
collection. And the GC speed-up on N processors is the ratio of S on N processors
to S on a single processor. When we measure S on a single processor, we eliminate
the overhead for parallelization.
The constants an were determined through a preliminary experiment. To determine
a3, for example, we created a 1000-word object that contained only non-pointers and
measured the time to scan the object. We ran this measurement several times
and used the shortest time. It took 20 us to scan a 1000-word object on Enterprise
10000; that means 0.020 us per word. From this result, we let a3 = 0.020. The other
constants were determined similarly. The intention of this preliminary experiment is
to measure the time for the workload without any cache misses.
In the experiment, the constants were set at a1 = 0.16, a4 = 2.0, and a5 = 1.3 on Enterprise 10000, and at a1 = 0.13, a4 = 2.0, and a5 = 1.3 on Origin 2000.
1 The marking workload is derived from x1, x2, x3, and the sweeping workload from x4, x5.
Chapter 6
Experimental Results
6.1 Speed-up of GC
Figures 6.1-6.16 show the performance of GC using the four applications on the two systems.
We measured several versions of the collector. "Sequential" refers to the original Boehm
GC, and "Simple" refers to the algorithm in which each processor simply marks objects
that are reachable from the root of that processor, without any further task distribution.
"Basic" refers to the basic algorithm described in Section 4.1, and the following three
versions refer to ones that implement all but one of the optimizations described in
Section 4.2: "No-XXX" stands for a version that implements all the optimizations
but XXX. "Full" is the fully optimized version. We measured an additional version on
Origin 2000, "Full-LA". This is the same as "Full" but takes a different physical memory
allocation policy: "Full-LA" takes the "Local to Allocator" policy, while all other versions
take the "Round-robin" policy.
The applications were executed four times in each configuration and invoked collections
more than 40 times. The figures show the average performance of the invocations.
When we used all or almost all the processors on the machine, we occasionally observed
invocations that performed distinguishably worse than the usual ones. They
were typically times worse than the usual ones. The frequency of such unusually
bad invocations was about once in every five invocations when we used all processors.
We have not yet determined the reason for these invocations. It might be the effect of
other processes. For the purpose of this study, we exclude these cases.
Figures 6.1-6.4 and 6.8-6.11 compare three versions, namely, Simple, Basic, and
Full. The graphs show that Simple does not exhibit any recognizable speed-up in any
application. As Figures 6.1-6.4 show, Basic on Enterprise 10000 performs reasonably
until a certain point, but it does not scale any more beyond it. The exception is RNA,
where we do not see a difference between Basic and Full. The saturation point of
Basic depends on the application; Basic of CKY reaches the peak on 32 processors,
while that of BH reaches the saturation point on 8 processors. The peak of Life is
48 processors. Full achieved about 28-fold speed-up in BH and in CKY, and about
14-fold speed-up in Life and RNA on 64 processors.
On 16-processor Origin 2000, the difference between Basic and Full is little, except
in BH. Some performance problems in Basic, however, appear only when the
number of processors becomes large, as we have observed on Enterprise 10000; thus,
Full would be more significant on a larger system. Full achieved 3.7- to 6.3-fold speed-up
on 16 processors.
6.2 Effect of Each Optimization
Figures 6.5-6.7 show how each optimization affects scalability on Enterprise 10000.
Especially in BH and in CKY, removing any particular optimization yields a sizable
degradation in performance when we have a large number of processors. Without the
improved termination detection by the non-serializing barrier (NSB), neither BH nor
CKY achieves more than a 17-fold speed-up. Without NSB, Life does not scale on more
than 48 processors, too. Sensitivity to optimizations differs among the applications;
Splitting large objects (SLO) and marking with optimistic synchronization (MOS)
have significant impacts in BH, while they do not in other applications.
SLO is important when we have a large object in the application. In BH, we use
a single array named particles to hold all particle data, whose size is 800 KB
in our experiments. This large array became a bottleneck when we omitted the SLO
optimization. This phenomenon was also noted on Origin 2000, as Figure 6.12 indicates.
Generally, MOS has significant effects when there are objects with many incoming
pointers, because these objects cause many contentions between collectors that try to
visit them. The experiment revealed that the array particles was the source of the
problem again; in one collection cycle, we observed about 70,000 pointers
to this array. That caused significant contention. This large number of incoming pointers was produced
by the stacks of user threads. Because our BH implementation computes forces
on the particles in parallel, each thread has references to its responsible particles. Although
those references are directed to distinct addresses (for example, the ith thread has
references to particles[i]), all of them are regarded as pointers to the single object
particles. The MOS optimization effectively alleviates the contention in such cases and
improves the performance.
We observe a significant impact of the NSB optimization on GC speed in three applications,
but we do not see it in RNA even on 64 processors. Although the reason for
this difference is not well understood, in general, NSB is important when collectors
tend to become idle frequently. This is because collectors often update the idle counter
when we do not implement NSB. In RNA, the frequency of task shortage
may be low. We will investigate whether this hypothesis is the case in the future.
6.3 Effect of Physical Memory Allocation Policy
Figures 6.13-6.16 compare the two memory placement policies, 'Local to allocator (LA)'
and 'Round-robin (RR)', on Origin 2000, described in Section 5.2. Full adopts the RR
policy and Full-LA the LA policy. As we can easily see, collection speed with RR is significantly
faster than that with LA in three applications: BH, CKY and RNA. When we adopt
the LA policy, GC speed does not improve on more than eight processors.
While we have not fully analyzed this, we conjecture that it is mainly due to the
imbalance in the amount of allocated physical pages among nodes. With the LA policy,
accesses to objects and mark bitmaps in the mark phase contend at nodes that
have allocated many pages. Actually, BH has a significant memory imbalance because
only one processor constructs the tree of particles. And all objects in RNA are naturally
allocated by one processor because our RNA is a sequential program 1.
We will investigate why this is not the case in Life, which is also a sequential program.
6.4 Discussion on Optimized Performance
As we have seen in Section 6.1, the GC speed of the fully optimized version always gets
faster as the number of processors increases, in all applications. But the applications differ
considerably in GC speed-up; for instance, it is 28-fold in BH and CKY, while 14-fold in
Life and RNA on Enterprise 10000. In order to find the cause of this difference,
we examined how processors spend time during the mark phase 2. Figures 6.17-6.20
show the breakdowns. From these figures, we can say that the biggest problem in Life
is load imbalance, because processors spend a significant amount of time idle. A
performance improvement may be possible by refining the load balancing method.
On the other hand, we currently cannot pinpoint the reasons for the relatively bad
performance in RNA, where processors are busy during 90% of the mark phase.
2 In most collection cycles, the sweep phase is five to ten times shorter than the mark phase. We
therefore focus on the mark phase.
Sequential: Sequential code without overhead for parallelization.
Simple: Parallelized, but no load balancing is done.
Basic: Only load balancing is done.
No-SLO: All optimizations but SLO (splitting large objects) are done.
No-MOS: All optimizations but MOS (marking with optimistic synchronization) are done.
No-NSB: All optimizations but NSB (non-serializing barrier) are done.
Full: All optimizations are done.
Full-LA: Origin 2000 only. Same as Full, but the physical memory allocation policy is "Local to Allocator".
Table 6.1: Description of labels in the following graphs. Except for Full-LA, the physical memory allocation policy on Origin 2000 is "Round-robin".
Figure 6.1: Average GC speed-up in BH on Enterprise 10000.
Figure 6.2: Average GC speed-up in CKY on Enterprise 10000.
Figure 6.3: Average GC speed-up in Life on Enterprise 10000.
Figure 6.4: Average GC speed-up in RNA on Enterprise 10000.
Figure 6.5: Effect of each optimization in BH on Enterprise 10000.
Figure 6.6: Effect of each optimization in CKY on Enterprise 10000.
Figure 6.7: Effect of each optimization in Life on Enterprise 10000.
Figure 6.8: Average GC speed-up in BH on Origin 2000.
Figure 6.9: Average GC speed-up in CKY on Origin 2000.
Figure 6.10: Average GC speed-up in Life on Origin 2000.
Figure 6.11: Average GC speed-up in RNA on Origin 2000.
Figure 6.12: Effect of each optimization in BH on Origin 2000.
Figure 6.13: Effect of physical memory allocation policy in BH on Origin 2000.
Figure 6.14: Effect of physical memory allocation policy in CKY on Origin 2000.
Figure 6.15: Effect of physical memory allocation policy in Life on Origin 2000.
Figure 6.16: Effect of physical memory allocation policy in RNA on Origin 2000.
Figure 6.17: Breakdown of the mark phase in BH on Enterprise 10000. This shows time spent busy, waiting for locks, moving tasks, and idle.
Figure 6.18: Breakdown of the mark phase in CKY on Enterprise 10000.
Figure 6.19: Breakdown of the mark phase in Life on Enterprise 10000.
Figure 6.20: Breakdown of the mark phase in RNA on Enterprise 10000.
Chapter 7
Conclusion
We constructed a highly scalable parallel mark-sweep garbage collector for shared-memory
machines. Implementation and evaluation were done on two systems: Ultra
Enterprise 10000, a symmetric shared-memory machine with 64 processors, and Origin
2000, a distributed shared-memory machine with 16 processors. This collector
performs dynamic load balancing by exchanging objects in mark stacks.
Through the experiments on the large-scale machine, we found a number of factors
that severely limit the scalability, and presented the following solutions: (1) Because
the unit of load balancing was a single object, a large object that cannot be divided
degraded the utilization of processors. Splitting large objects into small parts when
they are pushed onto the mark stack enabled better load balancing. (2) We observed
that processors spent a significant time on lock acquisitions on mark bits in BH. The
useless lock acquisitions were eliminated by using optimistic synchronization instead
of a "lock-and-test" operation. (3) Especially on 32 or more processors, processors
wasted a significant amount of time because of the serializing operation used in the
termination detection with a global counter. We implemented a non-serializing method
using local flags without locking, and the long critical path was eliminated.
On Origin 2000, we must pay attention to physical page placement. With the default
policy, which places a physical page on the node that first touches it, the GC speed was
not scalable. We improved performance by distributing physical pages in a round-robin
fashion. We conjecture that this is because the default policy causes an imbalance
of access traffic between nodes; since some nodes have many more physical pages
allocated than other nodes, accesses to these highly loaded nodes tend to contend,
and hence the latency of such remote accesses accordingly increases. For now, we do not
have enough tools to conclude.
When using all these solutions, we achieved 14- to 28-fold speed-up on the 64-processor
Enterprise 10000, and 3.7- to 6.3-fold speed-up on the 16-processor Origin 2000.
Chapter 8
Future Work
We would like to improve the GC performance further. In Section 6.4, we have seen
that the collectors in some applications still spend a significant amount of time idle.
We will investigate how we can improve the load balancing method. Instead of using
buffers for communication (the stealable mark queues), stealing tasks directly from the victim's
mark stack may enable faster load distribution.
We have also noticed that we cannot explain the relatively bad performance of RNA
by load imbalance alone. This may be due to the number of cache misses, which
are included in 'Busy' in Figure 6.20. We can capture the number of cache misses
by using the performance counters with which recent processors are equipped. We can
use the R10000's counters through the /proc file system on Origin 2000. And we have
constructed a simple tool to use the UltraSPARC's counters on Enterprise 10000.
With these tools, we are planning to examine how often processors take cache misses.
In Section 6.3, we mentioned that we can obtain better performance with the RR
(Round-robin) physical memory allocation policy than with the LA (Local to allocator)
policy. So far, the focus of our discussion has been the speed of GC alone. But the matter
becomes more complicated when we take into account the locality of application programs.
The LA policy may be advantageous when a memory region tends to be accessed by its
allocator. The ideal situation would be that most accesses by the application are local,
while the collection task is balanced well.
--R
A hierarchical O(N log N) force-calculation algorithm
Space efficient conservative garbage collection.
Garbage collection in an uncooperative environment.
Concurrent garbage collection for C
A concurrent generational garbage collector for a multithreaded implementation of ML.
A language for concurrent symbolic computa- tion
A methodology for implementing highly concurrent data ob- jects
A concurrent copying garbage collector for languages that distinguish (Im)mutable data.
A shared-memory parallel extension of KLIC and its garbage collection
Evaluation of parallel copying garbage collection on a shared-memory multiprocessor
Garbage Collection
The SGI Origin: A ccNUMA highly scalable server.
Garbage collection in MultiScheme (preliminary version).
Concurrent replicating garbage collection.
Remarks on a methodology for implementing highly concurrent data objects.
"a methodology for implementing highly concurrent data objects"
An effective garbage collection strategy for parallel programming languages on large scale distributed-memory machines
Parallel garbage collection on shared-memory multiprocessors
Uniprocessor garbage collection techniques.
--TR
MULTILISP: a language for concurrent symbolic computation
Garbage collection in an uncooperative environment
Garbage collection in MultiScheme
Space efficient conservative garbage collection
A concurrent copying garbage collector for languages that distinguish (im)mutable data
A concurrent, generational garbage collector for a multithreaded implementation of ML
A methodology for implementing highly concurrent data objects
Concurrent replicating garbage collection
Remarks on A methodology for implementing highly concurrent data
Notes on "A methodology for implementing highly concurrent data objects"
Garbage collection
An effective garbage collection strategy for parallel programming languages on large scale distributed-memory machines
Lock-Free Garbage Collection for Multiprocessors
Evaluation of Parallel Copying Garbage Collection on a Shared-Memory Multiprocessor
Uniprocessor Garbage Collection Techniques
ICC++ - A C++ Dialect for High Performance Parallel Computing
--CTR
Guy E. Blelloch , Perry Cheng, On bounding time and space for multiprocessor garbage collection, ACM SIGPLAN Notices, v.34 n.5, p.104-117, May 1999
David Siegwart , Martin Hirzel, Improving locality with parallel hierarchical copying GC, Proceedings of the 2006 international symposium on Memory management, June 10-11, 2006, Ottawa, Ontario, Canada
Toshio Endo , Kenjiro Taura, Reducing pause time of conservative collectors, ACM SIGPLAN Notices, v.38 n.2 supplement, February
H. Gao , J. F. Groote , W. H. Hesselink, Lock-free parallel and concurrent garbage collection by mark&sweep, Science of Computer Programming, v.64 n.3, p.341-374, February, 2007
David Detlefs , Christine Flood , Steve Heller , Tony Printezis, Garbage-first garbage collection, Proceedings of the 4th international symposium on Memory management, October 24-25, 2004, Vancouver, BC, Canada
Yoav Ossia , Ori Ben-Yitzhak , Irit Goft , Elliot K. Kolodner , Victor Leikehman , Avi Owshanko, A parallel, incremental and concurrent GC for servers, ACM SIGPLAN Notices, v.37 n.5, May 2002
Katherine Barabash , Ori Ben-Yitzhak , Irit Goft , Elliot K. Kolodner , Victor Leikehman , Yoav Ossia , Avi Owshanko , Erez Petrank, A parallel, incremental, mostly concurrent garbage collector for servers, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.6, p.1097-1146, November 2005
Yossi Levanoni , Erez Petrank, An on-the-fly reference-counting garbage collector for java, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.1, p.1-69, January 2006
Yossi Levanoni , Erez Petrank, An on-the-fly reference counting garbage collector for Java, ACM SIGPLAN Notices, v.36 n.11, p.367-380, 11/01/2001
Guy E. Blelloch , Perry Cheng, On bounding time and space for multiprocessor garbage collection, ACM SIGPLAN Notices, v.39 n.4, April 2004 | scalability;shared-memory machine;parallel algorithm;dynamic load balancing;garbage collection |
509644 | Loop re-ordering and pre-fetching at run-time. | The order in which loop iterations are executed can have a large impact on the number of cache misses that an application takes. A new loop order that preserves the semantics of the old order but has better cache data re-use improves the performance of that application. Several compiler techniques exist to transform loops such that the order of iterations reduces cache misses. This paper introduces a run-time method to determine the order, based on a dependence-driven execution. In a dependence-driven execution, the execution traverses the iteration space by following the dependence arcs between the iterations. | Introduction
Despite rapid increases in CPU performance, the primary obstacles to achieving higher performance in current processor
organizations remain control and data hazards. An estimate [5] shows that the performance of single-chip microprocessors
is improving at a rate of 80% annually, while DRAM speeds are improving at a rate of only 5-10% in that same amount of
time [5] [8]. The growing inability of the memory systems to keep up with the processors increases the importance of cache
data re-use, to reduce traffic to main memory, and of pre-fetching mechanisms, to hide memory access latencies. These
technological trends pose a challenge to interesting scientific and engineering applications whose data requirements are much
larger than the processor's cache.
Because scientific and engineering applications spend most of their execution time in loops, most of the effort in locality
optimization has focused on restructuring loops. Changing the iteration order by restructuring loops can significantly
improve the performance of an application. Re-ordering the iterations of a loop is conventionally done at compile time by
applying transformations such as loop interchange, skewing and reversal. Unfortunately, compile-time transformations do not
apply to certain types of loops because these transformations must be provably correct without knowing the values of the
variables, forcing the compiler to make conservative assumptions.
In this paper, we present a hybrid compile-time/run-time method to re-order loop iterations using a dependence-driven
execution model that is loosely based on the concept of systolic arrays [9,11] and coarse grain dataflow [3]. In a
dependence-driven execution, the system enables a block of iterations when the dependence constraints on those iterations
are satisfied. The immediate execution of newly enabled iterations produces a depth-first traversal of the iteration space
which improves data re-use. We maintain symbolic data-dependence information based on array subscript expression found
in the body of the loop which is evaluated at run-time. This meta-computation on symbolic dependences allows us to avoid
an early commitment to any specific order, giving the system a greater flexibility, which in turn increases the class of loops
that can be optimized by re-ordering iterations. Furthermore, by maintaining dependence information during run-time, the
run-time system can pre-fetch the sinks of dependences to hide the latency of memory accesses.
Conventional wisdom suggests that determining the iteration order dynamically would add too much computational
overhead. However, there are other overheads in addition to computational ones, such as those overheads caused by control
and data hazards. On previous generations of computers with a more balanced memory system, the cost may indeed have
been unjustified. On contemporary processors, CPU cycles are relatively cheap in comparison to memory cycles, which can
be up to two orders of magnitude more expensive. This imbalance suggests that the computational overhead of logic to avoid cache
misses may not be significant if this logic can reduce traffic to memory, or hide the latency of memory operations. Elsewhere
[16], we discussed the parallelism and scalability of a dependence-driven execution on a multiprocessor. In this paper, we
evaluate the efficacy of run-time loop ordering to improve temporal locality in a contemporary uniprocessor.
Background and Related Work
Many important numerical applications in science and engineering consist of composite functions of the form

    $F(D) = (f_k \circ f_{k-1} \circ \cdots \circ f_1)^n(D)$    (1)

where the $f_i$'s are functions that are not necessarily distinct, D is some large data set, much greater than the processor's cache
size, and n denotes the number of times the function sequence is to be applied. Implemented in imperative languages, the
composite function would appear as a nested loop with each f i being expressed as a simple loop iterating over the data space
D. The semantics of loops in these languages orders the iterations lexicographically with respect to the induction variables,
forcing the computation to traverse the data space in a strict function-at-a-time order. This execution order leads to a poor
re-use of cache data: Because D does not fit entirely in the cache, the cache contains only the last c bytes of D upon
completion of some function $f_i$, forcing $f_{i+1}$ to re-load every byte of D into the cache. It may be possible and desirable to
execute the iterations in a different order to improve locality. How do we determine which are the desirable and legal orders?
How do we specify or express these orders for efficient execution?
Much of the work on locality optimization relies on compile-time transformations to re-order the iterations of loops.
Unfortunately, these transformations apply only to loops that are perfectly nested (that is, loops in which all assignment
statements occur only in the innermost loop) or loops that can be transformed into perfectly nested loops. Put differently,
compile-time transformations are applicable to the case where all the $f_i$'s are the same in equation 1 above. In this section,
we briefly review compile-time loop transformations. In the following section, we describe how loop re-ordering can be
extended to composite functions where not all the $f_i$'s are the same.
To discuss compile-time transformations, it is useful to define the notion of an iteration space. An iteration space is an
n-dimensional space of integers that models nested loops of depth n. A loop iteration is a point in the iteration space
described by a vector $I = (i_1, i_2, \ldots, i_n)$, where each $i_p$ is an index variable delimited by the iteration range for the corresponding
loop at depth p. A linear loop transformation is a transformation from the original iteration space to another iteration space
with some desired properties such as better locality or parallelism. A sequence of loop transformations can be modeled as a
product of non-singular matrices, each matrix effecting one transformation, such as skewing, loop interchange, reversal, etc.
Thus finding the possible and desirable iteration order can be formulated as a search for a non-singular matrix with some
objective function satisfying some set of constraints. These transformations are also called unimodular transformations
because they preserve the volume of the iteration space of integers.
A loop transformation is legal if the transformation preserves the dependence relations. If there is a dependence between two
points I and J in the iteration space, then the difference between vector J and vector I, J-I is called the dependence distance
vector. The set of distance vectors makes up the data dependences that determine the allowable re-ordering transformation.
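For example, if iteration $J = (t, i)$ reads a value written by iteration $I = (t-1, i-1)$, then the dependence distance vector is $J - I = (1, 1)$.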
Based on these dependences, optimizing compilers may make the following transformations to improve locality:
Loop Interchange: Loop Interchange [19,1] swaps an inner loop with an outer loop. Optimizing compilers will apply
this transformation to improve memory locality if the interchange reduces the array access stride.
Blocking (or Tiling): Blocking [20,12,17] takes advantage of applications with spatial locality by traversing a
rectangle of iteration space at a time. If most of the memory accesses in the application are limited to addresses within
the rectangle and this rectangle of data fits wholly in the cache, the processor will access the cache line multiple times
before it leaves the cache.
Skewing: Blocking may not be legal on some iteration spaces if the distance vectors contain negative distances. In
some cases, skewing [19] can be applied to enable blocking transformation. Skewing traverses the iteration space
diagonally in waves.
Figure 1: Skewing and Tiling Transformations on Hyperbolic 1D PDE.
From the transformed iteration space, the compiler generates code in the form of new loops. As an example, consider the
hyperbolic 1D PDE. Figure 1 shows the dependences and the iteration space prior to, and after, the skewing and blocking
transformation with a block size of two. While the generated code is more complex than the original, the new code has better
locality and parallelism. A survey of compiler transformations for high performance computing can be found in [4,18].
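To make this concrete, the following is a minimal sketch of a skewed and blocked sweep for a wavefront stencil of this kind; the array name, bounds, and three-point stencil are our own illustration rather than the exact code of figure 1.

#include <vector>

// Sketch: skewing plus blocking for a 1D wavefront stencil A[t][i].
// Skewing renames iteration (t, i) to (t, j) with j = i + t, which turns the
// dependence distances (1, -1) and (1, 1) into (1, 0) and (1, 2); once all
// distances are non-negative, blocking the skewed dimension is legal.
void pde_skewed_tiled(std::vector<std::vector<double>>& A, int T, int N) {
    const int B = 2;                               // block size of two
    for (int jb = 2; jb < N - 1 + T; jb += B)      // blocks of the skewed index
        for (int t = 1; t < T; ++t)
            for (int j = jb; j < jb + B; ++j) {
                int i = j - t;                     // recover the original index
                if (i >= 1 && i <= N - 2)          // guard the block edges
                    A[t][i] = 0.5 * (A[t - 1][i - 1] + A[t - 1][i + 1]);
            }
}

The guard conditional and the extra index arithmetic illustrate the kind of code complexity that skewing and blocking introduce, a point the discussion below returns to.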
Dependence-Driven Execution
Given the transformed iteration space, compilers must generate code that describes the traversal over the entire iteration
space. This early commitment to a specific order limits flexibility. More specifically, compile time transformations have the
following limitations:
Unimodular transformations do not apply to composite functions with multiple distinct functions, that is, they do not
apply to a large class of imperfectly nested loops.
Some dependences in loop iterations involve unknown user variables in the subscript expressions. For example,
consider the following loop:
for (I=0; I < N1; I++) {
    for (J=1; J < N2; J++) {
        /* body reconstructed for illustration: the original statement,
           lost in extraction, references K and L in its subscripts */
        A[I][J] = A[I+K][J+L];
    }
}
The compiler cannot apply unimodular transformations without knowing the values of K and L, or at the very least
knowing whether the values are negative or positive.
Because compilers must give a static specification of iteration order, the code generated for the transformed iteration
space can become complex, as in the example of skewing and blocking transformation in figure 1. The complex code
with many levels of nesting and conditionals causes control hazards which reduces instruction level parallelism on
contemporary processors. Furthermore, it is difficult for compilers to apply other optimizations on complex code. For
example, none of the compilers on various architectures with which we experimented were able to apply loop
unrolling to the code generated in figure 1.
Furthermore, compile-time linear loop transformations do not give us a general technique to automate pre-fetching
data to hide memory access latencies.
In this section, we describe the DUDE (Def-Use-Descriptor Environment) run-time system. DUDE is meant to be used either as
a target for optimizing compilers or as a set of library calls that programmers can use directly to optimize their code. The
basic model is loosely based on the underlying concept of systolic arrays. Like systolic arrays, computation in DUDE consists
of a large number of processing elements (cells) which are of the same type. However, for efficient computation on
commercial processors, the granularity of computation in DUDE is much coarser. In our implementation, these cells are
actually C++ objects that consist of an operation and a descriptor describing a region of data to which the operation is
applied. We term these objects Iterates and an array of these Iterates make up an IterateCollection. The procedures in the
cells of systolic arrays may consist of several alternative options. Similarly, operators in the Iterates of a IterateCollection
may be overloaded.
Like the cells of systolic arrays, Iterates are interconnected through links. But unlike the interconnection between cells of
systolic arrays which are physical, hardwired links, the links in DUDE are symbolic expression of indices from the index
space of IterateCollections. The expression for symbolic links, called dependence rule, is derived from array access patterns
in the statements of the original loop, and therefore it summarizes dependences in the iteration space. The symbolic
meta-computation on the dependence rule determines the path of execution through the iteration space.
Also like the computation in systolic arrays, data is processed and transferred from one element to another by pipelining.
Since there is only one physical processing element on a uniprocessor, there is no computational speedup due to pipelining.
However, because of the temporal locality that this model offers, we can expect a performance improvement even on a
uniprocessor. Unlike the computation in systolic arrays, the computations in DUDE are not synchronized by a global clock
(and in that sense, our model is closer to wavefront arrays [10]). The asynchronous computation, together with the pipelining
of the function applications, allows the system to apply multiple functions to the same block of data before that data block
leaves the processor's cache.
Figure 2: Dependence-driven Execution Model
Describing Loops in DUDE
A goal of the run-time system is to be able to optimize complex loops of the form shown in equation 1. To achieve these
goals, we have taken an object-oriented approach: loops and blocks of iterations are extensible first-class objects which
can be put together to describe complex loops. By putting together and specializing these objects, the user specializes the
system to create a "software systolic array" for the application at hand. This object-oriented model is based on AWESIME
[7] and the Chores [6] run-time systems. The following is a list of objects in DUDE:
Data Descriptor: Data Descriptors describe a subsection of the data space. For example, a matrix can be divided into
sub-matrices with each sub-matrix being defined by a data descriptor. The methods on this object, SX(), EX(), SY(),
EY(), etc., retrieve the corners of the sub-matrix.
Iterate: An Iterate is a tuple <data descriptor, operator>. The user specializes an Iterate by overloading the default
operator with an application specific operator consisting of statements found in the body of a simple loop. The system
applies the virtual operator to the data described by the descriptor.
IterateCollection: An IterateCollection, as the name implies, is an array of Iterates. An IterateCollection represents a
simple loop in a nested loop (or a simple function in a composite function) that performs an operation on the entire
data space. The dimensionality of an IterateCollection is normally the same as that of the data array on which it
operates.
LOOP: LOOP is a template structure used to describe a composite function by putting together one or more
IterateCollections. The user relies on the following methods provided by the LOOP object to glue together different
IterateCollections and begin the computation:
A registration method makes an IterateCollection the nth simple function in the composite function.
SetDependence() defines the symbolic link dep from IterateCollection IC1 to IC2. This symbolic
link is expressed in terms of a dependence rule with variables that range over the index space of the
IterateCollections.
Execute() executes the entire loop nest described by the loop descriptor.
Because loops are objects, they can be created and defined at run-time, giving the system the flexibility to describe and
compute complex loop nests. Figure 2 shows the basic model that the run-time system uses. Initially, the system pushes only
the unconstrained Iterates onto the system LIFO queue. This allows the scheduler to pop off an Iterate to perform the
operation with which that Iterate is associated. The completion of an Iterate can potentially enable other Iterates based on the
data dependences specified in SetDependence() and based on what other Iterates have completed. This creates a cycle shown
in figure 2 which the system repeats until the entire loop nest is completed.
Example: Red/Black SOR
We now describe a dependence-driven execution with respect to the example of Red/Black SOR, which has the form
$(Blk \circ Red)^N(D)$, where N is the number of time steps. Note that since the loops for the Red and Blk operations are not nested within
each other, they are not perfectly nested and hence unimodular transformation does not apply.
Figure 3: Multi-Loop Dependences on Red/Black
Figure 4: Red/Black SOR on DUDE
Figure 3 shows the original code and the inter-loop dependences. The Red and the Blk operations in the loop body simply
take the average of an element's neighboring points, creating the dependences shown in the figure. Figure 4 shows the
application as it would appear when written for DUDE. For this application, there are two Iterates, the RED and the BLK
with corresponding BLK::main and RED::main methods that overload the operator to specialize the Iterate for this
application. These Iterates compose the RedColl and BlkColl collections. Finally, these collections themselves are combined
in the loop descriptor to create a composite function that iterates up to 10 iterations.
The Execute() function of loop in figure 4 starts the system by pushing all of the initially unconstrained Iterates of the
RedColl collection onto the system LIFO queue. In a dependence-driven execution, the memory locality of the entire
execution of the nested loop is sensitive to the order in which the initially unconstrained Iterates are loaded. For applications
which have block memory access patterns such as this one, the system loads the Iterates in Morton order [15].
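The Morton order used for this initial loading can be sketched as follows; the function is our illustration of Z-order indexing [15], not code taken from DUDE.

#include <cstdint>

// Interleave the bits of the 2D block coordinates (x, y). Sorting blocks by
// the resulting key visits them in Morton (Z) order, keeping neighboring
// blocks close together in time and therefore in the cache.
uint32_t morton2(uint16_t x, uint16_t y) {
    uint32_t key = 0;
    for (int b = 0; b < 16; ++b) {
        key |= (uint32_t)((x >> b) & 1u) << (2 * b);      // even bits from x
        key |= (uint32_t)((y >> b) & 1u) << (2 * b + 1);  // odd bits from y
    }
    return key;
}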
Now the computation begins. After the initial Iterates have been loaded, the system scheduler pops off a (Red) Iterate from
the system LIFO queue and applies the main operator to the data described by the descriptor for that Iterate. When
completed, the system determines the list of sinks of the dependence arcs for that Iterate based on the dependence rule. For
each sink, the system decrements the counter in the destination Iterates, which at this point in the execution are Blk Iterates.
If the count is zero, the Iterate becomes unconstrained or enabled. The dependence satisfaction engine pushes these enabled
Iterates onto the system LIFO queue. Because of the LIFO order, the next time the scheduler pops off an Iterate from the
LIFO queue, it would be a Blk Iterate. Continuing, the completion of a Blk Iterate can further enable a Red Iterate from
second time step, and so forth. This describes a depth-first traversal of the iteration space since a Blk operation can begin
before all of the Red operations are completed. Note that using a FIFO queue would enforce a breadth-first traversal order of
the iteration space as would the order produced by the original source loop.
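A minimal sketch of this enable-and-push cycle follows; the struct layout and names are ours, not the actual DUDE internals.

#include <stack>
#include <vector>

// Each Iterate counts its unsatisfied incoming dependence arcs. Completing
// an Iterate decrements the counters of the sinks of its outgoing arcs and
// pushes any Iterate whose counter reaches zero; the LIFO discipline yields
// the depth-first traversal described above.
struct Iterate {
    int pending = 0;                 // unsatisfied incoming dependence arcs
    std::vector<Iterate*> sinks;     // sinks derived from the dependence rule
    virtual void main() = 0;         // overloaded, application-specific body
    virtual ~Iterate() {}
};

void run(std::stack<Iterate*>& lifo) {
    while (!lifo.empty()) {
        Iterate* it = lifo.top();
        lifo.pop();
        it->main();                  // apply the operator to the data region
        for (Iterate* sink : it->sinks)
            if (--sink->pending == 0)   // last arc satisfied
                lifo.push(sink);        // Iterate becomes enabled
    }
}

Substituting a FIFO queue for the stack in this sketch reproduces the breadth-first order of the original source loop noted above.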
Figure 5: Iteration order for Hyperbolic 1D PDE using DUDE
Having described the run-time system, we are now in a position to compare it with the compile-time transformations.
Because compile-time optimization cannot transform the loop structure for Red/Black SOR, we now revert back to the
example of Hyperbolic 1D PDE to compare the iteration order of a loop nest as it would run on DUDE with a compile-time
loop re-ordering.
Figure 5 shows the snapshot of the iteration order of the Hyperbolic 1D PDE as it runs on DUDE. The code shown below the
diagram is the body of the operator for the PDE Iterate. Note that this code is much simpler than the code required for
skewing/tiling shown in figure 1. Simpler code with less conditionals runs more efficiently on contemporary processors with
deep pipelines. It also enables the possibility of further optimizations. As shown in figure 1, there are really two orderings to
consider in a dependence-driven execution: intra-Iterate order (indicated by numbers) and inter-Iterate order (enforced by
arrows). While the inter-Iterate order is determined by the dependence rule, the order within an Iterate is exactly the same as
that in the original source loop. Note that the given dependence rule causes a diagonal traversal of the Iterates, much like the
skewing transformation.
Support for Automated Pre-fetches
So far we have discussed methods to reduce the number of cache misses. How do we hide memory access latency when
cache misses are unavoidable? In this section, we discuss how DUDE inserts pre-fetch instructions to mask the memory
accesses latencies with useful computation for cases when a cache miss is unavoidable. Figure 2 shows where the pre-fetch
logic fits into the dependence-driven model. Just before the system executes the currently ready Iterate C, the pre-fetch logic
tries to predict which Iterate N would execute next, based on C and the dependence rule. This prediction simply
simulates what the dependence satisfaction engine does to enable new Iterates with the following exception: instead of
pushing the newly generated Iterates N onto the LIFO queue, the pre-fetch logic invokes the pre-fetch command on the
region of data that the Iterate N is associated with. This causes the data needed by N to be delivered to the processor's cache while
the system is executing Iterate C, with the intention that when the system finally executes N, the data for N will be available
in the processor's cache.
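Extending the Iterate sketch above, the prediction and pre-fetch steps might look as follows; the prediction rule, the field names, and the use of a compiler builtin as a stand-in for a machine pre-fetch instruction are our assumptions.

#include <cstddef>

// Simulate dependence satisfaction without committing it: the first sink
// whose counter would drop to zero is the guess for the next Iterate N.
Iterate* predict_next(const Iterate* cur) {
    for (Iterate* sink : cur->sinks)
        if (sink->pending == 1)      // cur's completion would enable it
            return sink;
    return nullptr;
}

// Touch each cache line of the region the predicted Iterate will operate on,
// so its data arrives while the current Iterate is still executing.
void prefetch_region(const char* base, std::size_t bytes) {
    const std::size_t line = 64;     // cache line size, illustrative
    for (std::size_t off = 0; off < bytes; off += line)
        __builtin_prefetch(base + off);
}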
Experimental Results and Analysis
We measured the performance of six applications: Red/Black SOR, Odd-Even sort, Multi-grid, Levialdi's Component
labeling algorithm, Hyperbolic 1D PDE, and Vector chain addition. One would not ordinarily run some of these algorithms
on a uniprocessor since more efficient scalar algorithms exist. Our aim is to ultimately run these algorithms on a
multiprocessor, but by conducting these experiments on a uniprocessor, we isolate the benefits of increased parallelism in a
dependence-driven execution from the benefits of improved temporal locality.
All experiments were conducted on a single processor DEC 21164 running at 290 MHz with 16KB of first level cache and 4
MB of second level cache. The cache penalties for the two caches are 15 cycles and 60 cycles respectively. All programs
were compiled with DEC C++ (cxx) compiler with -O5 option.
To determine where the cycles (cache miss, branch mis-predict, useful computation, etc.) were spent, we used the Digital
Continuous Profiling Infrastructure (DCPI) [2] available on the Alpha platforms. DCPI runs in the background with low
overhead (slowdown of 1-3%) and unobtrusively generates profile data for the applications running on the machines by
sampling hardware performance counters available on the Alphas. The cycles are attributed to the following categories:
Computation: These are the cycles spent doing useful computation.
Static Stalls: These are stalls due to static resource conflicts among instructions, dependencies to previous
instructions, and conflicts in the functional units.
D-Cache Miss Stalls: These are dynamic stalls caused by D-cache misses.
Other Dynamic Stalls: Other sources of dynamic stalls are missed branch predictions, I-cache misses, ITB and DTB
misses.
Figure 6: Cycles Breakdown for Various Applications
Figure 6 shows a breakdown of where the cycles were spent for the six applications. Because some of the methods are not
relevant to certain applications, some graphs compare fewer methods than others. All measurements are averages of 15 runs
with negligible standard deviations.
Red/Black SOR
Figure 7: Analysis of SOR (2048x2048) Running on DUDE
This application has good spatial locality since each element only accesses its neighboring elements. It also has the potential
for temporal locality if we pipeline the iterations from different time steps. We compared using a dependence-driven
execution to three other methods: unoptimized, tiling by row, and tiling by block. We chose a matrix size of 2048x2048
(64-bit floats) to ensure that the matrix did not fit entirely into the processor's cache. For each method, we used the
optimal block size, which was 256x2048 for tiling by row, 256x256 for tiling by block, and 32x32 for DUDE. Since the
time steps in Red/Black SOR are normally controlled by a while loop, we use the following structure:

while (!done) {
    for (i = 0; i < 10; i++)
        /* execute one Red/Black time step (body lost in extraction) */ ;
    /* test for convergence and set done accordingly */
}
Figure 6 shows that the Red/Black SOR using the unoptimized method spends as much as 76% of its time on D-cache stalls.
This is not surprising given how expensive memory accesses are relative to CPU cycles. Since tiling by row creates the same
iteration order as the unoptimized case, there is little benefit in using tiling by row. Due to its access patterns (access of north,
south, west, and east neighbors), tiling by block does a little better because of spatial locality.
As shown in the figure, DUDE incurs the greatest overhead in terms of number of instructions executed. The run-time
dependence satisfaction is partly responsible for these overheads. Another source of overhead is the use of
smaller block sizes (32x32), which increases the number of loops and hence the number of instructions. Comparing these
overheads with the cycles spent in D-cache stalls, it is clear that these overheads are relatively insignificant. Nevertheless,
there is a tension between the overheads caused by smaller block sizes and the benefits of greater temporal locality; a smaller
block allows the algorithm to explore deeper into time steps improving temporal locality, but it also increases the total
overhead. The right hand side of figure 7 shows the effect of grain sizes on this application.
To further analyze the cache behavior of SOR under the various methods, we used ATOM [14] to instrument this application. The
left-hand side of the figure shows the total number of references, L1 cache misses, and L2 cache misses that we derived from
instrumenting the executables. As expected, more memory references are required for DUDE, but it also suffers the least
from L2 cache misses. The working set for this application is too large for the dependence-driven execution to fit entirely in
L1, causing a slight increase in L1 cache misses for DUDE over the tiling by block method.
Hyperbolic 1D PDE
To compare the run-time method with skewing and blocking compiler transformation, we also measured the breakdown in
cycles of the Hyperbolic 1D PDE, which is a wavefront computation with a perfectly nested loop. Figure 6 shows the
performance of the three methods. Both the static and run-time re-ordering optimizations significantly improve locality. The
static skewing transformation has the best locality, but introduces more overhead than the dependence-driven execution. The
control hazards introduced in the compiler generated code (right side of figure 1) increase the static stalls as shown in the
figure. Further analysis of the skewing code using DCPI also revealed that this method suffered from resource conflict stalls
due to all the integer arithmetic in the subscript expression required by the compiler-transformed code (see right side of
figure 1).
Component Labeling
Levialdi's algorithm [13] for component labeling is used in image processing to detect connected components of a picture. It
involves a series of phases, each phase consisting of changing a 1-pixel to a 0-pixel if its upper, left, and upper-left neighbors
are 0-pixels. A 0-pixel is changed to a 1-pixel if both its upper and left neighbors are 1-pixels. A comparison of the
performance of the various methods is shown in figure 6. Again, we see that the higher overhead for determining the iteration
order dynamically is small compared to the benefits of avoiding stalls.
Odd-Even Sort
Because the overheads in a dependence-driven execution are proportional to the number of dependence arcs emanating from
an Iterate, and because this application has only two arcs per Iterate, DUDE dramatically outperforms the unoptimized
method. We do not include the performance of tiling methods because tiling would not change the iteration order in this
one-dimensional problem. Compile-time skewing transformation does not apply here because there are two functions, the odd
and the even operations.
Multi-Grid
Multi-grid is another iterative PDE solver but it is an interesting one because it has five distinct operators (smooth even
elements, smooth odd elements, restrict, form, and prolong) and different dependence relations between these operations. The
dimension of the data space also changes depending on which level of the multi-grid pyramid you are in.
Vector Chain Additions
To measure the performance of DUDE when a purely static method can determine the same iteration order, we analyzed the
performance of simply adding 14 vectors each of length 1048576 double floats. In the unoptimized version, two vectors were
added in their entirety before adding the next vector. In the tiled versions, the vectors were broken into chunks of the size
which gave the best performance: elements 0 through 31 of all the 14 vectors were added before adding elements 32 through
63, and so forth. Finally, in DUDE, we used a chunk size of 512 elements with the Dependence Rule set accordingly.
While DUDE incurs slightly more overhead for determining the iteration order, it also benefits from data pre-fetches.
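The tiled traversal can be sketched as follows; the code is our illustration, with a chunk of 32 elements matching the 0 through 31 example above.

#include <algorithm>
#include <cstddef>
#include <vector>

// Chunked (tiled) vector chain addition: each chunk of the accumulator is
// carried through all of the vectors while it is still cache resident,
// instead of streaming each vector through memory in its entirety.
void chain_add(const std::vector<std::vector<double>>& v,
               std::vector<double>& sum) {
    const std::size_t C = 32;                            // chunk size
    for (std::size_t base = 0; base < sum.size(); base += C) {
        const std::size_t end = std::min(base + C, sum.size());
        for (const auto& vec : v)                        // all 14 vectors
            for (std::size_t i = base; i < end; ++i)
                sum[i] += vec[i];                        // chunk stays cached
    }
}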
Effect of Pre-fetching
To study how well the system pre-fetches data, we 1) inspected the footprint of the execution, and 2) compared the cycles
breakdown of the applications with and without pre-fetch instructions enabled. There are two pre-fetch instructions on the
Alphas: the fetch(M) instruction, which is documented but not implemented on the 21164, and the instruction to load (ldt or ldl) to
the zero register, which is implemented but not documented. Since we were running our experiments on the 21164, we used
the load to the zero register to generate our pre-fetches. By examining the footprint of an execution, showing what the system
was pre-fetching and what it was executing, we were able to verify that the system was indeed pre-fetching the next
Iterate while executing the current one. However, the performance result that we observed showed that there was little benefit
from pre-fetching for some of the applications at which we looked. The left side of figure 7 gives us a clue as to why pre-fetching
was not very effective on some applications. This figure shows that there were hardly any L2 misses, but a lot of L1 misses.
This implies that the working set for a dependence-driven execution was too large to fit in L1. Pre-fetching the next iterate
may help reduce L2 cache misses (which are not really the source of most stalls), but it does not help misses to L1 because of
the conflicts with the working set of the current Iterate. We can reduce the size of the working set by reducing granularity,
but this would increase the total overhead.
Figure 8: Effect of Pre-fetch on Odd-Even Sort and Vector Chain Addition
For applications with small working sets, we were able to get about a 7% performance improvement by pre-fetching as
shown in figure 8. The working set for Odd-Even sort consists of only the left and right neighbors on an one-dimensional
array while the working set of vector chain addition consist of only the vector containing the total sum and the current vector
that is being added. We expect that the efficacy of pre-fetching would be greater if L1 were large enough to fit the
working set. Pre-fetching at the granularity of dependence-driven execution is more suited for distributed shared memory
systems with larger caches and larger latencies. Nevertheless, our preliminary experiment indicates that pre-fetching can be
automated in a dependence-driven execution.
Conclusion
Compile-time optimizations have the advantage that compilers can apply the optimizations with little overhead (in most
cases) to the total execution time. Run-time optimizations have the advantage that there is more information available
(assuming information from compile-time is kept until run-time). This information includes the state of computation such as
values of the variables, dependences that are satisfied, and dependences that are yet to be satisfied. Having this information,
the run-time system can be more flexible. This flexibility comes at the cost of more instructions executed and more CPU cycles.
However, on modern commercial machines, processor cycles are cheap. If the additional cycles can be spent using the
memory system more effectively, then we can reduce overall execution time of an application. We have described a run-time
system that uses symbolic dependence information to determine loop-order during run-time. We have also shown that a
computation that is driven by dependences significantly reduces the number of cache miss stalls while adding relatively
insignificant overhead. Furthermore, when cache misses cannot be avoided, we have shown that the system can also be used
to pre-fetch data to hide memory access latencies. As memory accesses become more expensive, CPU speeds continue to
get faster, caches continue to get larger, and the mechanisms for pre-fetching become better supported, the
overhead of a dependence-driven execution becomes ever more justified.
--R
Automatic loop interchange.
Continuous profiling: Where have all the cycles gone?
Parallel processing with large-grain data flow techniques
Compiler transformations for high-performance computing
Keynote address.
Enhanced run-time support for shared memory parallel computing
A users guide to AWESIME: An object oriented parallel programming and simulation system.
Computer Architecture: a Quantitative Approach.
Systolic arrays (for VLSI).
Wavefront array processors.
VLSI Array Processors.
The cache performance and optimization of blocked algorithm.
On shrinking binary picture patterns.
The Design and Analysis of Spatial Data Structures.
Improving locality and parallelism in nested loops.
High Performance Compilers for Parallel Computing.
Optimizing supercompilers for supercomputers.
More iteration space tiling.
--TR
VLSI array processors
More iteration space tiling
Computer architecture: a quantitative approach
The design and analysis of spatial data structures
The cache performance and optimizations of blocked algorithms
Chores: enhanced run-time support for shared-memory parallel computing
Link-time optimization of address calculation on a 64-bit architecture
Continuous profiling
On shrinking binary picture patterns
Automatic loop interchange
Compiler Transformations for High-Performance Computing
Optimizing supercompilers for supercomputers
--CTR
Suvas Vajracharya , Dirk Grunwald, Dependence driven execution for multiprogrammed multiprocessor, Proceedings of the 12th international conference on Supercomputing, p.329-336, July 1998, Melbourne, Australia
Suvas Vajracharya , Steve Karmesin , Peter Beckman , James Crotinger , Allen Malony , Sameer Shende , Rod Oldehoeft , Stephen Smith, SMARTS: exploiting temporal locality and parallelism through vertical execution, Proceedings of the 13th international conference on Supercomputing, p.302-310, June 20-25, 1999, Rhodes, Greece | systolic arrays;coarse-grain dataflow;temporal locality;dependence-driven;data locality;run-time systems;loop transformations |
509647 | Performance characteristics of gang scheduling in multiprogrammed environments. | Gang scheduling provides both space-slicing and time-slicing of computer resources for parallel programs. Each thread of execution from a parallel job is concurrently scheduled on an independent processor in order to achieve an optimal level of program performance. Time-slicing of parallel jobs provides for better overall system responsiveness and utilization than otherwise possible. Lawrence Livermore National Laboratory has deployed three generations of its gang scheduler on a variety of computing platforms. Results indicate the potential benefits of this technology to parallel processing are no less significant than time-sharing was in the 1960's. | Introduction
Interest in parallel computers has been propelled by both the economics of commodity priced microprocessors and a growth
rate in computational requirements exceeding processor speed increases. The symmetric multiprocessor (SMP) and massively
parallel processor (MPP) architectures have proven quite popular, yet both suffer significant shortcomings when applied to
large scale problems in multiprogrammed environments. The problems must first be recast in a form supporting a high level
of parallelism. Then to achieve the inherent parallelism and performance, it is necessary to concurrently schedule CPU
resources to all threads and processes associated with each program. System throughput is frequently used as a metric of
success; however, ease of use, good interactivity, and "fair" distribution of resources are of substantial importance to
customers in a multiprogrammed environment. Efficiently harnessing the power of a multitude of processors while satisfying
customer requirements is a difficult proposition for schedulers.
Most MPP computers provide concurrent scheduling through space-slicing schemes. A program is allocated a collection of
processors and retains those processors until completion of the program. Scheduling is critical, yet each decision has an
unknown impact upon the future: should a job be scheduled at the risk of blocking larger jobs later or should processors be
left idle in anticipation of future arrivals? The lack of a time-slicing mechanism precludes good interactivity at high levels of
utilization. Gang scheduling solves this dilemma by combining concurrent resource scheduling, space-slicing, and
time-slicing. The impact of each scheduling decision is limited to a time-slice rather than the job's entire lifetime. Empirical
evidence from gang scheduling on a Cray T3D installed at Lawrence Livermore National Laboratory (LLNL) demonstrates
this additional flexibility can improve overall system utilization and responsiveness.
Most SMP computers provide both space-sharing and time-slicing, but schedule each process independently. While good
parallelism may be achieved this way on a lightly loaded system, that is a luxury rarely available. The purpose of gang
scheduling in this environment is to improve the throughput of parallel jobs by concurrent scheduling, without degrading
either overall system throughput or responsiveness. Moreover, this scheme can be extended to scheduling of parallel jobs
across a cluster of computers in order to address larger problems. Gang scheduling of DEC Alpha computers at LLNL is
explored in both stand-alone and cluster environments, and shown to fulfill these expectations.
Overview of Gang Scheduling
The term "gang scheduling" refers to all of a program's threads of execution being grouped into a gang and concurrently
scheduled on distinct processors. Furthermore, time-slicing is supported through the concurrent preemption and later
rescheduling of the gang [4]. These threads of execution are not necessarily POSIX threads, but components of a program
which can execute simultaneously. The threads may span multiple computers and/or UNIX processes. Communications
between threads may be performed through shared memory, message passing, and/or other means.
Concurrent scheduling of a job's threads has been shown to improve the efficiency of both the individual parallel jobs and the
system [3, 13]. The job's perspective is similar to that of a dedicated machine during the time-slices of its execution. Some
reduction in I/O bandwidth may be experienced due to interference from other jobs, but CPU and memory resources should
be dedicated. Job efficiency improvements result from a reduction in communications latency, promoting fine-grained
parallelism. System efficiency can be improved by reductions in context switching, virtual memory paging, and cache
refreshing.
The advantages of gang scheduling are similar to those of time-sharing in uniprocessor systems. The ability to preempt jobs
permits the scheduler to more efficiently utilize the system in several ways:
Long running jobs and those with high resource requirements can be executed without monopolizing resources
Interactive and other high priority jobs can be provided with near real-time response, even jobs with high resource
requirements during periods of high system utilization
Jobs with high processor requirements can be initiated in a timely fashion, without waiting for processors to be made
available in a piecemeal fashion as other jobs terminate
Low priority jobs can be executed, provided with otherwise unused resources, and preempted when higher priority
jobs become available
High system utilization can be sustained under a wide range of workloads
Job preemption does incur some additional overhead. The CPU resources sacrificed for context switching are slight and
explored later in the paper. The increase in storage requirements is possibly substantial. All running or preempted jobs must
have their storage requirements satisfied simultaneously. Preempted jobs must also vacate memory for other jobs, with the
memory contents written to disk. The acquisition of additional disks in order to increase utilization of CPU and memory
resources is likely to be cost-effective, but this need must be considered.
Other Approaches to Parallel Scheduling
Most MPP computers use the variable partitioning paradigm. In variable partitioning, the job specifies its processor count
requirement at submission time. Processors allocated to a job are retained until its termination. The inability to preempt a job
can prevent the timely allocation of resources to interactive or other high priority jobs. A shortage of jobs with small
processor counts can result in the incomplete allocation of processors. Conversely, the execution of a job with high processor
requirements can be delayed by fragmentation, which necessitates the accumulation of processors in a piecemeal fashion as
multiple jobs with smaller processor counts terminate. Variable partitioning can result in poor resource utilization due to
resource fragmentation [10, 18], processors left idle in anticipation of high priority job arrival [12], or processors left idle in
order to accommodate jobs with substantial resource requirements.
Another option is dynamic partitioning, in which the operating system determines the number of processors to be allocated to
each job. While dynamic partitioning does possess the allure of high utilization and interactivity [11, 15, 16], it can make
program development significantly more difficult. The program is required to operate efficiently without knowledge of
processor count until execution time. The variable processor count also causes execution time variability, which may be
unacceptable for workloads containing very long running jobs or even moderate size jobs with high priority.
MPP Scheduling
The variable partition paradigm prevalent on MPP architectures and its lack of job preemption makes responsiveness
particularly difficult to provide. In order to ensure responsive service, some MPP computers divide resources into "pools"
reserved for interactive jobs, batch jobs, or available to any job type. These pools can also be used to reserve portions of the
computer for specific customers on some systems. Since jobs can not normally span multiple pools, partitioning the computer
in this fashion reduces the maximum problem size which can be addressed while fragmentation reduces scheduling flexibility
and system utilization. The optimal configuration places all resources into a single pool available to any job type, assuming
that support for resource allocation and interactivity can be provided by other means.
SMP Scheduling
Space-sharing and time-sharing are the norm on SMP computers, providing both good interactivity and utilization. Most
computers schedule each process independently, which works well for a workload consisting of many independent
processes. However, the solution of large problems is dependent upon the use of parallel jobs, which suffer significant
inefficiencies without the benefit of concurrent scheduling [3, 13].
Parallel job development efforts at the National Energy Research Supercomputer Center (NERSC) illustrates difficulties in
parallel job scheduling [2]. In order to encourage parallel job development, NERSC provided dedicated time to parallel jobs
on a Cray C90 computer. Several of the parallel jobs running in dedicated mode were able to achieve a parallel performance
(CPU time/wall time) over 15.5 on this 16 CPU machine and approach 10 GFlops of the 16 GFlops
peak speed. While all batch jobs were suspended during the dedicated period, interactive jobs were initially permitted to
execute concurrently with the parallel job. Several instances were observed of a single compute-bound interactive program
reducing the parallel job's throughput and system utilization by almost 50 percent. Suspension of interactive jobs was found
to be necessary in order to achieve reasonable parallel job performance.
The benefits of SMP computers can be scaled for larger problems by clustering. Efficient execution of parallel jobs in this
environment requires coordination of scheduling across the entire cluster. If communications between the threads consist
exclusively of message passing, it is possible to base scheduling upon this message traffic and achieve a high level of
parallelism [13, 14]. The LLNL workload makes extensive use of shared memory for communications, preventing us from
pursuing this strategy.
LLNL Workload Characterization
LLNL customers have long relied upon interactive supercomputing for program development and rapid throughput of jobs
with short to moderate execution times. While some of this work can be performed on smaller computers, most customers
use the target platform for reasons of problem size, hardware compatibility, and software compatibility. LLNL began its
transition to parallel computing with the acquisition of a BBN TC2000 in 1989. Additional parallel computers at LLNL have
included the Cray C90, Cray T3D, Meiko CS-2, and IBM SP2. Many of our problems are of substantial size and have been
well parallelized.
The LLNL Cray T3D workload is typical of that on our other parallel computers. The model at LLNL has 256 processors,
each with 64 megabytes of DRAM. All processors are configured into a single pool available to any job. The LLNL Cray
T3D is configured to permit interactive execution of jobs up to 64 processors and 2 hours execution time. Interactive jobs
account for 67 percent of all jobs executed and consume 13 percent of all CPU resources delivered. The interactive workload
typically ranges from 64 to 256 processors (25 to 100 percent of the computer's processors) during working hours and drops
to a negligible level in the late night hours. Peak interactive workloads reach 320 processors for a 25 percent oversubscription
rate. Timely execution of interactive jobs is dependent upon the preemption of batch jobs and, in extreme cases, time-sharing
processors among interactive jobs.
Large jobs account for a high percentage of resources utilized on the Cray T3D as shown in Table 1. Memory requirements
are quite variable, but most jobs use between 42 and 56 megabytes per processor. Due to Cray T3D hardware constraints, a
contiguous block of processors with a specific shape must be made available to execute a job [1], making fragmentation a
particularly onerous problem. Gang scheduling permits us to preempt jobs as needed to execute large jobs in a timely fashion
with minimal overhead.
Table 1: CPU Utilization by Job Size on Cray T3D at LLNL (percent of CPU utilization by job size in CPUs)
Most programs use either PVM (Parallel Virtual Machine) or MPI (Message Passing Interface) libraries for communications
between the threads. Although the Cray T3D has a distributed memory architecture, it will support a shared memory
programming model permitting a job to read and write the local memory of another processor assigned to that job. This shared
memory programming model has considerably lower overhead than message passing and is widely used. A small number of
programs utilize a combination of both paradigms: shared memory within the Cray T3D and message passing to
communicate with threads or serial program components executing on a Cray YMP front-end. This multiple paradigm model
is expected to be common on our DEC Alpha cluster and IBM SP2 containing SMP nodes.
LLNL Gang Scheduler Design Strategy
The LLNL customers expect rapid response, even on a heavily utilized multiprogrammed computer; however the need for
rapid response is not uniform across the entire workload. In order to provide interactivity where required and minimize the
overhead of context switching, we divide our workload into six different classes. Each job class has significantly different
scheduling characteristics as described below:
Express jobs have been deemed by management to be mission critical and are given rapid response and optimal
throughput.
Interactive jobs require rapid response time and very good throughput during extended working hours. The jobs'
response time and throughput can be reduced at other hours for the sake of improved system utilization and throughput
of production jobs.
Debug jobs require rapid response time during extended working hours. The jobs' response time can be reduced at
other hours for the sake of improved system utilization. Debug jobs can not be preempted on the Cray T3D.
Production jobs do not require rapid response time, but should receive very good throughput at night and on
weekends.
Benchmark jobs do not require rapid response time, but can not be preempted.
Standby jobs have low priority and are suitable for absorbing otherwise idle compute resources.
The default classes are production for batch jobs, debug for totalview debugger initiated jobs, and interactive for other jobs
directly initiated from a user terminal. Jobs can be placed in the express class only by the system administrator. Benchmark
and standby classes can be specified by the user at job initiation time.
Each job class has a number of scheduling parameters, including relative priority and processor limit. There are also several
system-wide scheduling parameters such as aggregate processor limit for all gang scheduled jobs. The scheduling parameters
can be altered in real-time, which permits periodic reconfiguration. LLNL emphasizes interactivity during extended work hours
(7:30 AM to 10:00 PM) and throughput at other times. The gang scheduler daemon itself can also be updated at any time.
Upon receipt of the appropriate signal and completion of any critical tasks, the daemon writes its state to a file and initiates a
new daemon program.
Sockets are utilized for user communications, most of which occur at job initiation. Sockets are also used for job state change
requests and scheduling parameter changes, which are rare. Job and processor state information is written periodically to a
data file, which is globally readable. This file is read by the "gangster" application, which is the users' window into gang
scheduling status.
We have delegated the issue of "fair" resource distribution to other systems, including the Centralized User Bank (CUB) [8]
and Distributed Production Control System (DPCS) [17]. These systems provide resource allocation and accounting
capabilities to manage the entire computing environment with a single tool. Both systems exercise coarse grained control by
the scheduling of batch jobs and fine grained control by adjusting nice values of both interactive and batch processes. Our
gang scheduler is designed to recognize changes in nice value and schedule accordingly. Jobs of classes which can be
preempted will automatically be changed to standby class when at high nice value and returned to their requested class upon
reduction in nice value. This mechanism has proven to function very well in managing resource distribution while
minimizing interdependence of the systems.
Significant differences exist in the three LLNL gang scheduler implementations. These differences were largely the result of
architectural differences, but some were based upon experiences with previous implementations. Each implementation is
described below with results.
BBN TC2000
In order to pioneer parallel computing at LLNL, a BBN TC2000 with 126 processors was acquired in 1989. The TC2000 has
a shared memory architecture and originally supported space-sharing only.
The gang scheduler for the TC2000 reserves all resources at system startup and controls all resource scheduling from that
time [6, 7]. User programs require no code changes to communicate with the gang scheduler. However, the program must
load with a modified version of the mandatory parallel program initiation library. Rather than securing resources directly
from the operating system, this library secures resources from the gang scheduler daemon. The program may also increase or
decrease its processor count during execution by explicitly notifying the gang scheduler daemon. Otherwise, a job must
continue execution on the specific processors originally assigned to it. Time-sharing is performed by the use of sleep and
wake signals, issued concurrently to all threads of a job. This mechanism can provide for very rapid context switches, on the
order of milliseconds. This implementation did have a "fair share" mechanism with the objective of providing each user with
access to comparable resources. The gang scheduler would determine the resource allocation to be made for each ten second
time-slice with the objective of ensuring interactivity, equitable distribution of resources, and high utilization.
While the gang scheduler implemented for the TC2000 was able to provide good responsiveness and throughput, some
shortcomings should be noted. Despite the shared memory architecture of the TC2000, it was not possible to relocate threads
in order to better balance the continuously changing workload. The scheduler could react to workload changes only at
time-slice boundaries, which limited responsiveness. The user based fair share mechanism was abandoned in later
implementations due to the availability of an independent and more comprehensive resource allocation system.
Cray T3D
The Cray T3D is a massively parallel computer incorporating DEC Alpha 21064 microprocessors capable of 150 MFLOPS
peak performance. Each processor has its own local memory. The system is configured into nodes, consisting of two
processors with their local memory and a network interconnect. The nodes are connected by a bidirectional three-dimensional
torus communications network. There are also four synchronization circuits (barrier wires) connected to all processors with a
tree shaped interconnect [1].
Without getting into great detail, the T3D severely constrains processor and barrier wire assignments. Jobs must be allocated
a processor count which is a power of two, with a minimum of two processors (one node). A job can be built to run with
any valid processor count, but its processor count can not change after execution begins. The processors allocated to a job
must have a specific shape with specific dimensions for a given problem size. For example, an allocation of 32 processors
must be made with a contiguous block of eight processors in the X direction, two processors in the Y direction, and two
processors in the Z direction. Furthermore, the possible locations of the processor assignments are restricted. These very
specific shapes and locations for processor assignment are the result of the barrier wire structure. Jobs must be allocated one
of the four barrier wires when initiated. The barrier wire assignment to a job can not change if the job is relocated and, under
some circumstances, two jobs sharing a single barrier wire may not be located adjacent to each other.
Prior to the installation of a gang scheduler on our Cray T3D, we were forced to make several significant sacrifices in order
to satisfy our customers' need for interactivity [5, 9]. The execution time of batch jobs was restricted to ensure "reasonable"
responsiveness to jobs with large processor requirements and interactive jobs, although one may argue that delays on the
order of hours may not be reasonable. Most batch jobs were limited to four hours. One batch queue permitted execution times
up to 19 hours, but this queue was enabled during only brief periods on weekends and holidays. Since jobs could not be
preempted, our batch workload was dramatically reduced at 4:00 AM. As batch jobs completed, their released resources
might remain unused by any interactive job for many hours. At times of heavy interactive use, the initiation of an interactive
job had to wait for other jobs to terminate and release resources. The processor allocation restrictions also made for severe
fragmentation problems. While interactivity was generally good, the processor utilization rate of 33 percent was considered
unacceptable.
The Cray T3D gang scheduler implementation has several significant differences from that of the BBN TC2000. Since the
initiation of all parallel jobs on the T3D is conducted by a single application program, we placed a wrapper around this to
communicate with the gang scheduler daemon. No changes to the user application or scripts were required. The fixed
time-slice period was replaced with an event driven scheduler with the ability to react instantly to changes in workload. The
allocation of specific processors and barrier wires can dramatically affect performance on the T3D, so substantial effort was
placed in developing software to optimize these selections. Processor selection criterion includes best-fit packing, placement
of jobs with similar scheduling characteristics in close proximity, and minimizing contention of the barrier wire circuits.
Context switching logic also required substantial modification. The BBN TC2000 supported memory paging, while the Cray
T3D lacks support for paging. Cray T3D job preemption results in the entire memory image of the preempted job being
written to disk before the processors can be reassigned, which requires about one second per preempted processor. In the case
of a 64 processor job being context switched, about one minute is required to store the preempted job's context and another
minute to load the state of another job of similar size. The T3D gang scheduler calculates a value for each job, based on its
processor count, job type, and location (disk or memory), to make job preemption and initiation decisions. Additional parameters
associated with each job class and used in this calculation include maximum wait time, processor limit, and do-not-disturb
time multiplier. The minimum time-slice for a job is the product of the job's processor count and its class' do-not-disturb
time multiplier. While the minimum time-slice mechanism does reduce responsiveness, it prevents the costly thrashing of
jobs between memory and disk and is critical for maintaining a high utilization level.
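A sketch of this do-not-disturb arithmetic follows; the field names and the concrete numbers used below are our own illustration, not LLNL's production parameters.

// Per-class scheduling parameters described in the text.
struct JobClass {
    int processor_limit;       // maximum processors for the class
    int max_wait_seconds;      // longest a job of this class should wait
    double dnd_multiplier;     // do-not-disturb seconds per processor
};

struct Job {
    int processors;
    const JobClass* cls;
    bool in_memory;            // location: memory resident or paged to disk
};

// Minimum time-slice: processor count times the class multiplier, so a
// larger job, which is costlier to swap, is guaranteed a longer slice.
double min_time_slice(const Job& j) {
    return j.processors * j.cls->dnd_multiplier;
}

With a multiplier of four seconds per processor, for example, a 64 processor job would hold its processors for at least 256 seconds, comfortably amortizing the roughly two minutes of swap traffic quoted above.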
The Cray T3D gang scheduler has been in operation since March 1996. We were able to dramatically modify the batch
environment to fully subscribe the machine during the day and oversubscribe it at night by as much as 100 percent. The
normal mode of operation in the daytime is for interactive class jobs to preempt production class jobs shortly after initiation.
The interactive jobs then usually continue execution until completion without preemption. At night, the processors are
oversubscribed temporarily and only for the express purpose of executing larger jobs (128 or 256 processors). This strategy
results in only about eight percent of all jobs ever being preempted.
The batch queue for long running jobs has been substantially reconfigured: its time limit has been increased from 19 to 40
hours, its maximum processor allocation (for all running jobs in the queue) has increased from 64 to 128 processors, and the
queue is now enabled at all times. System utilization increased substantially, and weekly CPU utilization rates over 96 percent have been
sustained. Figure 1 shows monthly CPU utilization for a period of 15 months. Three different schedulers were utilized over
this period. UNICOS MAX is the native Cray T3D operating system from Cray Research. DJM or Distributed Job Manager
is a parallel job scheduler originally developed by the Minnesota Supercomputer Center and substantially modified by Cray
Research. The LLNL developed gang scheduler is also shown. The CPU utilization reported is that during which a CPU is
actually assigned to a program which is memory resident. CPU utilization is reduced by three things:
1. Context switch time: a CPU is unavailable while a program's image is being transferred between disk and memory
2. Sets of processors on which no job can fit (a packing problem)
3. Insufficient work, particularly on weekends
Figure 1: Cray T3D CPU Utilization
Benchmarks run against both UNICOS MAX and an early prototype of the LLNL gang scheduler showed a 21 percent
improvement in interactive job throughput with no reduction in aggregate throughput. The cost of moving jobs' state between
memory and disk to provide responsiveness was fully compensated for by more efficient packing of the processor torus. The
current implementation has additional performance enhancements and should show an aggregate throughput improvement of
a few percent.
Figure 2 shows a gangster display of typical Cray T3D daytime utilization. Note that only 12 of the 256 processors (six of
128 nodes) are not assigned to some job. A total of ten interactive and debug class jobs are running and using 148 of the
processors. Over half of the batch workload is currently paged out for these interactive jobs. The left side of the display shows the
contents of each node (two processors). Each letter indicates a job and a period indicates an unused node. The right side of
the display describes each job. The W field shows the barrier wire used. The MM:SS field shows the total execution time
accumulated. The ST field shows the job's state: i=swapping in, N=new job, not yet assigned nodes or barrier wire,
o=swapping out, O=swapped out, R=running, W=awaiting commencement of execution, assigned nodes and barrier wire.
The gang scheduler state information is written to disk at 15 second intervals. The gangster display is updated at the same
rate.
[Sample gangster output omitted. The left side shows the node map; the right side lists each job with its class, user, job ID, barrier wire, processor count, accumulated time, and state.]
Figure 2: Typical daytime gangster display on Cray T3D
While the utilization rate is quite satisfactory, responsiveness is also of great importance. Responsiveness can be quantified
by slowdown, the ratio of total time spent in the system to the run time. During the three week period of July 23 through
August 12, 2659 interactive jobs were executed using a total of 44.5 million CPU-seconds and 1328 batch jobs were executed
using a total of 289.5 million CPU-seconds. The slowdown of the aggregate interactive workload was 18%, which is viewed
quite favorably. Further investigation shows a great deal of variation in slowdown. Most longer running interactive jobs
enjoy slowdowns of only a few percent. Interactive jobs executed during the daytime typically begin execution within
seconds and are not preempted. Interactive jobs executed during late night and early morning hours experienced slowdowns
as high as 1371 (a one-second job delayed for about 23 minutes). However, the computer is configured for high utilization and
batch job execution during these hours, so high slowdowns are not unexpected.
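As a point of reference, slowdown as used here can be computed directly from a job's wait and run times; a minimal sketch:

    // Slowdown: total time in the system (wait + run) divided by run time.
    // Example: a one-second job that waits ~1370 seconds has slowdown ~1371,
    // i.e. the roughly 23-minute delay cited above.
    double slowdown(double wait_seconds, double run_seconds) {
        return (wait_seconds + run_seconds) / run_seconds;
    }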
DEC Alpha
The Digital Unix 4.0D operating system includes a "class scheduler", which is essentially a very fine grained fair share
scheduler. The class scheduler permits processes, process groups, or sessions to be grouped in a class. The class can then be
allocated some share of CPU resources. For example, the eight threads of a job could be placed into a single class and
allocated 80 percent of the CPU resources on a ten CPU computer. While the class scheduler does not explicitly reserve eight
CPUs for the eight threads of this parallel job, the net effect is very close. The operating system also recognizes advantage in
keeping a process on a particular CPU to avoid refreshing its cache. We have found the class scheduler actually delivers
the CPU resources as desired for gang scheduling, with minimal thread migration between processors. For a
job which fails to sustain its target number of runnable threads, the class scheduler will allocate the CPU resources to other
jobs in order to sustain high overall system utilization.
Since parallel jobs are not normally registered with the Digital UNIX operating system, the gang scheduler relies upon
explicit registration through the use of a library. While it is highly desirable that user code modification for gang scheduling be
avoided, that is impossible to achieve at this time. Embedding a few gang scheduler remote procedure calls directly in MPI
and PVM libraries would free many users from having to modify their programs. Presently, the gang scheduled application
must be modified with a few simple library calls, including:
Register job with gang scheduler: A global gang scheduler job ID is returned
Register resource requirements: CPU, memory, and disk space requirements are specified for the job on each
computer to be gang scheduled
Register the processes: Associate a specific process, process group, or session with the gang scheduler job ID
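A sketch of what such a registration interface might look like is given below. The call names and signatures are hypothetical, since the actual library header is not reproduced here; only the three registration steps above are taken from the text.

    extern "C" {
        // Hypothetical prototypes for the registration library described above.
        int gs_register_job(void);                       // returns a global job ID
        int gs_register_resources(int job_id, const char* host,
                                  int cpus, long memory_kb, long disk_kb);
        int gs_register_process(int job_id, int pid);    // process, group, or session
    }

    // Typical use after startup, e.g. from an MPI or PVM initialization path:
    //   int id = gs_register_job();
    //   gs_register_resources(id, "hostA", 8, 512 * 1024, 0);
    //   gs_register_process(id, getpid());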
Since it is necessary to coordinate activities across multiple computers, the DEC Alpha gang scheduler returned to the fixed
time-slice model used on the BBN TC2000. In order to manage these time-slices across multiple computers, the concept of
"tickets" was introduced. These tickets represent specific resource allocations at specific times and are issued only for jobs
spanning multiple computers. Jobs which execute only on a single computer are not pre-issued tickets, but are managed by
that computer's gang scheduler daemon which makes scheduling decisions at the start of each time-slice. This design permits
each computer to operate as independently as possible, while permitting the gang scheduling of jobs across the cluster as
needed. The tickets are managed by the gang scheduler daemon on each computer in the cluster and are associated with a job
for its lifetime. A job may be given additional tickets or have some revoked depending upon changes in overall system load.
The job is also permitted to alter its resource requirements during execution. A change in the number of CPUs required for a
job may result in revoked or additional tickets.
The gangster display program was re-written in Tcl/Tk in order to provide a richer environment. Detailed information on
system and job status now includes a history of CPU, real memory, and virtual memory use which is updated at time-slice
boundaries. This information can be helpful for system administrators and programmers tuning their systems. For example, if
a program's CPU allocation and CPU use are substantially different, the desired level of parallelism is not being achieved and
the matter should be investigated. A dramatic change in real memory use for a program during its execution may indicate
substantial paging, which also warrants investigation. Figures 3 and 4 show displays of system and program status.
Figure 3: Gangster display of DEC machine status
Figure 4: Gangster display of DEC job status
In order to ensure that good responsiveness is preserved for all jobs, this gang scheduler permits processors to be reserved for
non-gang scheduled jobs. This prevents gang scheduled jobs from reserving all processors on a computer for the entire
time-slice, which could prevent logins or other interactions for a significant period. The gang scheduler daemon also reacts to
changes in the overall workload, ensuring resources are fairly distributed to all active jobs.
Gangster also permits several system-wide operations on a gang scheduled job, including suspend, resume, kill, and change
job class. For example, the kill job operation will send kill signals to all processes registered to a gang scheduled job on all
computers running that job.
Conclusions
Gang scheduling has been shown to provide an attractive multiprogrammed environment for multiprocessor systems in
several ways:
Parallel jobs can be provided with access to all required resources simultaneously, providing the illusion of a dedicated
environment
Interactive and other high priority jobs can be provided with rapid response and excellent throughput
Jobs with large resource requirements can be initiated rapidly, without having to wait for multiple jobs to terminate
and release resources
A high level of utilization can be maintained under a wide range of workloads
Experience on the Cray T3D has been overwhelmingly positive. It can now sustain processor utilization rates over 95 percent
through the course of a week while providing the aggregate interactive workload with a slowdown of less than 20 percent.
Such performance gives this distributed memory MPP a range of capability spanning large-scale parallel applications to
general purpose interactive computing.
Experience on the DEC Alpha environment has also been very favorable during our testing period. A very rich and
interactive environment is available on these fast SMPs, while true supercomputing class problems can be addressed by
harnessing the power of a cluster for parallel jobs. Our plans call for bringing two 80 CPU DEC Alpha 8400 clusters under
the control of gang scheduling in the Fall of 1997.
--R
Cray Research Inc.
Dedicated Computing on a YMP/C916.
Effective Distributed Scheduling of Parallel Workloads.
A Survey of Scheduling in Multiprogrammed Parallel Systems.
Improved Utilization and Responsiveness with Gang Scheduling.
Timesharing massively parallel machines.
Centralized User Banking and User Administration on UNICOS.
Gang Scheduler - Timesharing the Cray T3D
A dynamic processor allocation policy for multiprogrammed shared-memory multiprocessors
Analysis of non-work-conserving processor partitioning policies
Dynamic Coscheduling on Workstation Clusters.
Comparing Gang Scheduling with Dynamic Space Sharing on Symmetric Multiprocessors Using Automatic Self-Allocating Threads (ASAT)
Process control and scheduling issues for multiprogrammed shared-memory multiprocessors
Distributed Production Control System.
A new graph approach to minimizing processor fragmentation in hypercube multiprocessors.
--TR
Process control and scheduling issues for multiprogrammed shared-memory multiprocessors
A two-dimensional buddy systems for dynamic resource allocation in a partitionable mesh connected system
A dynamic processor allocation policy for multiprogrammed shared-memory multiprocessors
Effective distributed scheduling of parallel workloads
A New Graph Approach to Minimizing Processor Fragmentation in Hypercube Multiprocessors
Comparing Gang Scheduling with Dynamic Space Sharing on Symmetric Multiprocessors Using Automatic Self-Allocating Threads (ASAT)
Analysis of Non-Work-Conserving Processor Partitioning Policies
Demand-Based Coscheduling of Parallel Jobs on Multiprogrammed Multiprocessors
Improved Utilization and Responsiveness with Gang Scheduling
--CTR
Adrian T. Wong , Leonid Oliker , William T. C. Kramer , Teresa L. Kaltz , David H. Bailey, ESP: a system utilization benchmark, Proceedings of the 2000 ACM/IEEE conference on Supercomputing (CDROM), p.15-es, November 04-10, 2000, Dallas, Texas, United States
Jung-Lok Yu , Jin-Soo Kim , Seung-Ryoul Maeng, A runtime resolution scheme for priority boost conflict in implicit coscheduling, The Journal of Supercomputing, v.40 n.1, p.1-28, April 2007
Atsushi Hori , Hiroshi Tezuka , Yutaka Ishikawa, Highly efficient gang scheduling implementation, Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), p.1-14, November 07-13, 1998, San Jose, CA
Shahaan Ayyub , David Abramson, GridRod: a dynamic runtime scheduler for grid workflows, Proceedings of the 21st annual international conference on Supercomputing, June 17-21, 2007, Seattle, Washington
Gyu Sang Choi , Jin-Ha Kim , Deniz Ersoz , Andy B. Yoo , Chita R. Das, A comprehensive performance and energy consumption analysis of scheduling alternatives in clusters, The Journal of Supercomputing, v.40 n.2, p.159-184, May 2007
Gyu Sang Choi , Jin-Ha Kim , Deniz Ersoz , Andy B. Yoo , Chita R. Das, Coscheduling in Clusters: Is It a Viable Alternative?, Proceedings of the 2004 ACM/IEEE conference on Supercomputing, p.16, November 06-12, 2004
Bin Lin , Peter A. Dinda, VSched: Mixing Batch And Interactive Virtual Machines Using Periodic Real-time Scheduling, Proceedings of the 2005 ACM/IEEE conference on Supercomputing, p.8, November 12-18, 2005 | multiprogramming;gang scheduling;time-slicing;parallel system;space-slicing;scheduling |
509649 | A common data management infrastructure for adaptive algorithms for PDE solutions. | This paper presents the design, development and application of a computational infrastructure to support the implementation of parallel adaptive algorithms for the solution of sets of partial differential equations. The infrastructure is separated into multiple layers of abstraction. This paper is primarily concerned with the two lowest layersof this infrastructure: a layer which defines and implements dynamic distributed arrays (DDA), and a layer in which several dynamic data and programming abstractions are implemented in terms of the DDAs. The currently implemented abstractions are those needed for formulation of hierarchical adaptive finite difference methods, hp-adaptive finite element methods, and fast multipole method for solution of linear systems. Implementation of sample applications based on each of these methods are described and implementation issues and performance measurements are presented. | Introduction
This paper describes the design and implementation of a common computational
infrastructure to support parallel adaptive solutions of partial differential equations. The
motivations for this research are:
1. Adaptive methods will be utilized for the solution of almost all very large-scale
scientific and engineering models. These adaptive methods will be executed on
large-scale heterogeneous parallel execution environments.
2. Effective application of these complex methods on scalable parallel architectures
will be possible only through the use of programming abstractions which lower the
complexity of application structures to a tractable level.
3. A common infrastructure for this family of algorithms will result in both enormous
savings in coding effort and a more effective infrastructure due to pooling and
focusing of effort.
The goal for this research is to reduce the intrinsic complexity of coding parallel adaptive
algorithms by providing an appropriate set of data structures and programming
abstractions. This infrastructure has been developed as a result of collaborative research
among computer scientists, computational scientists and application domain specialists
working on three different projects: a DARPA project for hp-adaptive computational
fluid dynamics and two NSF sponsored Grand Challenge projects, one on numerical
relativity and the other on composite materials.
1.1 Conceptual Framework
Figure 1.1: Hierarchical Problem Solving Environment for Parallel Adaptive Algorithms for the Solution of PDEs
Figure 1.1 is a schematic of our perception of the structure of a problem solving
environment (PSE) for parallel adaptive techniques for the solution of partial differential
equations. This paper is primarily concerned with the lowest two layers of this hierarchy
and how these layers can support implementation of higher levels of abstraction. The
bottom layer of the hierarchical PSE is a data-management layer. The layer implements a
Distributed Dynamic Array (DDA) which provides array access semantics to distributed
and dynamic data. The next layer is a programming abstractions layer which adds
application semantics to DDA objects. This layer implements data abstractions such as
grids, meshes and trees which underlie different solution methods. The design of the PSE
is based on a separation of concerns and the definition of hierarchical abstractions based
on the separation. Such a clean separation of concerns [1] is critical to the success of an
infrastructure that can provide a foundation for several different solution methods. In
particular the PSE presented in this paper supports finite difference methods based on
adaptive mesh refinement, hp-adaptive finite element methods, and adaptive fast
multipole methods.
1.2 Overview
This paper defines the common requirements of parallel adaptive finite difference and
finite element methods for solution of PDEs and fast multipole solution of linear systems,
and demonstrates that one data management system based on Distributed Dynamic
Arrays (DDA) can efficiently meet these common requirements. The paper then describes
the design concepts underlying DDAs and sketches implementations of parallel adaptive
finite difference, finite element, and multipole solutions, each using a DDA built on this
common conceptual basis as its data management system. Performance evaluations for
each method are also presented.
The primary distinctions between the DDA-based data management infrastructures and
other packages supporting adaptive methods are (1) the separation of data management
and solution method semantics and (2) the separation of addressing and storage semantics
in the DDA design. This separation of concerns enables the preservation of application
locality in multi-dimensional space when it is mapped to the distributed one-dimensional
space of the computer memory and the efficient implementation of dynamic behavior.
The data structure which has traditionally been used for implementation of these
problems has been the multi-dimensional array. Informally an array consists of: (1) an
index set (a lattice of points in an n-dimensional discrete space), (2) a mapping from the
n-dimensional index set to one dimensional storage, and (3) a mechanism for accessing
storage associated with indices.
A DDA is a generalization of the traditional array which targets the requirements of
adaptive algorithms. In contrast to regular arrays, the DDA utilizes a recursively defined
hierarchical index space where each index in the index space may be an index space. The
storage scheme then associates contiguous storage with spans of this index space. The
relationship between the application and the array is defined by deriving the index space
directly from the n-dimensional physical domain of the application.
2 Problem Description
Adaptive algorithms require definition of operators on complex dynamic data structures.
Two problems arise: (1) the volume and complexity of the bookkeeping code required to
construct and maintain these data structures overwhelms the actual computations and (2)
maintaining access locality under dynamic expansion and contraction of the data requires
complex copying operations if standard storage layouts are used. Implementation on
parallel and distributed execution environments adds the additional complexities of
partitioning, distribution and communication. Application domain scientists and
engineers are forced to create complex data management capabilities which are far
removed from the application domain. Further, standard parallel programming languages
do not provide explicit support for dynamic distributed data structures. Data management
requirements for the three different adaptive algorithms for PDEs are described below.
2.1 Adaptive Finite Difference Data Management Requirements
Finite difference methods approximate the solution of the PDE on a discretized grid
overlaid on the n-dimensional physical application domain. Adaptation increases the
resolution of the discretization in required regions by refining segments of the grid into
finer grids. The computational operations on the grid may include local stencil based
operations at all levels of resolution, transfer operations between levels of resolution and
global linear solves. Thus the requirement for the data-management system for adaptive
finite difference methods is seamless support of these operations across distribution and
refining and coarsening of the grid. Storage of the dynamic grid as an array where each
point of the grid is mapped to a point in a hierarchical index space is natural. Then the
refinements and coarsening of the grid become traversal of the hierarchy of the
hierarchical index space.
2.2 Adaptive Finite Element Data Management Requirements
The finite element method requires storage of the geometric information defining the
mesh of elements which spans the application domain. Elements of the linear system
arising from finite element solutions are generally computed on the fly so that they need
not be stored. HP-adaptive finite element methods adapt by partitioning elements into
smaller elements (h refinement) or by increasing the order of the polynomial
approximating the solution on the element (p refinement). Partitioning adds new elements
and changes relationships among elements. Changing the approximation function
enlarges the descriptions of the elements. The data-management requirements for
hp-adaptive finite elements thus include storage of dynamic numbers of elements of
dynamic sizes. These requirements can be met by mapping each element of the mesh to a
position in a hierarchical index space which is associated with a dynamic span of the
one-dimensional storage space. Partitioning an element replaces a position in the index
space by a local index space representing the expanded mesh.
2.3 Adaptive Fast Multipole Data Management Requirements
Fast multipole methods partition physical space into subdomains. The stored
representation of the subdomain includes a charge configuration of the subdomain and
various other descriptive data. Adaptation consists of selectively partitioning subdomains.
The elements are often generated on the fly so that the values of the forces and
potentials need not be stored. The requirement for data management capability is therefore
similar to that of adaptive finite element methods. A natural mapping is to associate each
subdomain with a point in a hierarchical index space.
2.4 Requirements Summary
It should be clear that an extended definition of an array where each element can itself be
an array and where the entity associated with each index position of the array can be an
object of arbitrary and variable size provides one natural representation of the data
management requirements for all of adaptive finite element, adaptive finite difference and
adaptive fast multipole solvers. The challenge is now to demonstrate an efficient
implementation of such an array and to define implementations of each method in terms
of this storage abstraction. The further and more difficult challenge is an efficient
parallel/distributed implementation of such an array.
3 Distributed Dynamic Data-Management
The distributed dynamic data-management layer of the PSE implements Distributed
Dynamic Arrays (DDAs). This layer provides pure array access semantics to dynamically
structured and physically distributed data. DDA objects encapsulate distribution, dynamic
load-balancing, communications, and consistency management, and have been extended
with visualization and analysis capabilities.
There are currently two different implementations of the data management layer, both
built on the same DDA conceptual framework: (1) the Hierarchical Dynamic Distributed
Array (HDDA) and (2) the Scalable Dynamic Distributed Array (SDDA). The HDDA is
a hierarchical array in that each element of the array can recursively be an array; it is a
dynamic array in that each array (at each level of the hierarchy) can expand and contract
at run-time. Instead of hierarchical arrays the SDDA implements a distributed dynamic
array of objects of arbitrary and heterogeneous types. This differentiation results from the
differences in data management requirements between structured and unstructured
meshes. For unstructured meshes it is more convenient to incorporate hierarchy
information into the programming abstraction layer which implements the unstructured
mesh. These two implementations were derived from a common base implementation and
will be re-integrated into a single implementation in the near future.
The arrays of objects defined in the lowest layer are specialized by the higher layers of
the PSE to implement application objects such as grids, meshes, and trees. A key feature
of a DDA based on the conceptual framework described below is its ability to extract
the data locality requirements from the application domain and maintain this locality
despite its distribution and dynamic structure. This is achieved through the application of
the principle of separation of concerns [1] to the DDA design. An overview of this design
is shown in Figure 3.1. Distributed dynamic arrays are defined in the following
subsection.
Figure 3.1: DDA Design - Separation of Concerns -> Hierarchical Abstractions
3.1 Distributed Dynamic Array Abstraction
The SDDA and HDDA are implementations of a distributed dynamic array. The
distributed dynamic array abstraction, presented in detail in [2], is summarized as
follows.
In general, an array is defined by a data set D, an index space I, and an injective function F : D -> I.
An array is dynamic if elements may be dynamically inserted into or removed from
its data set D.
An array is distributed if the elements of its data set D are distributed.
A data set D is simply a finite set of data objects. An index space I is a countable set of
indices with a well-defined linear ordering relation, for example the set of natural
numbers. Two critical points of this array abstraction are (1) that the cardinality of the
data set D is necessarily equal to or less than the cardinality of the index space I and
(2) that each element of the data set uniquely maps to an element of the index space.
3.2 Hierarchical Index Space and Space-Filling Curves
An application partitions its N-dimensional problem domain into a finite number of
points and/or regions. Each region can be associated with a unique coordinate in the
problem domain. Thus the "natural" indexing scheme for such an application is a
discretization of these coordinates. Such an index space can be defined as I = I_1 x I_2 x ... x I_N,
where each I_j corresponds to the discretization of a coordinate axis.
An index space may be hierarchical if the level of discretization is allowed to vary,
perhaps in correspondence with a hierarchical partitioning of the problem domain. The I_j
components of a hierarchical index space could be defined as (d, (i_1, i_2, ..., i_d)), where d is the
depth of the index.
Recall that an index space requires a well-defined linear ordering relation. Thus the
"natural" N dimensional hierarchical index space must effectively be mapped to a linear,
or one-dimensional, index space. An efficient family of such maps is defined by
space-filling curves (SFC) [3].
One such mapping is defined by the Hilbert space-filling curve, illustrated in Figure 3.2.
In this mapping a bounded domain is hierarchically partitioned into regions where the
regions are given a particular ordering. In theory the partitioning "depth" may be infinite
and so any finite set of points in the domain may be fully ordered. Thus an SFC mapping
defines a hierarchical index space, of theoretically infinite depth, for any application
domain which can be mapped into the SFC "bounding box".
Figure 3.2: Hilbert Space-Filling Curve
The "natural" index space of the SFC map efficiently defines a linear ordering for all
points and/or subregions of the application's problem domain. Given such a linear
ordering, these subregions can be distributed among processors by simply partitioning
the index space. This partitioning is easily and efficiently obtained by partitioning the
linearly ordered index space such that the computational load of each partition is roughly
equal.
Figure 3.3, Space-Filling Curve Partitioning, illustrates this process for an
irregularly partitioned two dimensional problem domain. Note in Figure 3.3 that the
locality preserving property of the Hilbert SFC map generates "well-connected"
subdomains.
Figure 3.3: Space-Filling Curve Partitioning
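The Hilbert map itself requires some care to implement; as an illustration of the same family of locality-preserving maps, the sketch below computes a two-dimensional Morton (bit-interleaving) key, a simpler space-filling-curve index than the Hilbert curve used here. Refining a cell appends further interleaved bits, which is one way the theoretically infinite-depth hierarchical index space described above can be realized.

    #include <cstdint>

    // A 2-D Morton (Z-order) key: nearby cells in (x, y) tend to receive
    // nearby one-dimensional indices, the property exploited above.
    uint64_t morton2d(uint32_t x, uint32_t y) {
        auto spread = [](uint64_t v) {        // insert a 0 bit between each bit
            v &= 0xFFFFFFFFull;
            v = (v | (v << 16)) & 0x0000FFFF0000FFFFull;
            v = (v | (v << 8))  & 0x00FF00FF00FF00FFull;
            v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0Full;
            v = (v | (v << 2))  & 0x3333333333333333ull;
            v = (v | (v << 1))  & 0x5555555555555555ull;
            return v;
        };
        return spread(x) | (spread(y) << 1);
    }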
3.3 DDA Implementation
An array implementation provides storage for objects of the data set D and access to
these objects through a converse function F^-1 : I -> D. The quality and efficiency of
an array implementation are determined by the correlation between storage locality and
index locality and by the expense of the converse function F^-1. For example, a
conventional one-dimensional FORTRAN array has both maximal quality and efficiency.
A DDA implements distributed dynamic arrays where storage for objects in the data set
D is distributed and dynamic, and the converse function F^-1 provides global access to
these objects. A DDA's storage structure and converse function consists of two
components: (1) local object storage and access and (2) object distribution. The HDDA
and SDDA implementations of a DDA use extendible hashing [4] & [5] and red-black
balanced binary trees [6] respectively for local object storage and access.
An application instructs the DDA as to how to distribute data by defining a partitioning
of index space I among processors. Each index in the index space is uniquely assigned to
a particular processor i -> P. The storage location of a particular data object is now
determined by its associated index d -> i -> P. Thus the storage location of any
dynamically created data object is well-defined.
Each DDA provides global access to distributed objects by transparently caching objects
between processors. For example, when an application applies a DDA's converse
function F^-1(i) = d, if the data object d is not present on the local processor the data
object is transparently copied from its owning processor into a cache on the local
processor.
3.4 Locality, Locality, Locality!
A DDA's object distribution preserves locality between object storage and the object's
global indices. Given the "natural" index space of the space-filling curve map and the
corresponding domain partitioning, a DDA's storage locality is well-correlated with
geometric locality, as illustrated in Figure 3.4.
Figure 3.4: Locality, Locality, Locality!
In Figure 3.4 an application assigns SFC indices to the data objects associated with each
subregion of the domain. A DDA stores objects within a span of indices in the local
memory of a specified processor. Thus geometrically local subregions have their
associated data objects stored on the same processor.
3.5 Application Programming Interface (API)
The DDA application programming interface consists of a small set of simple methods
which hide the complexity of the storage structure and make transparent any required
interprocessor communication. These methods include:
GET: d <- F^-1(i)
INSERT: D <- D + d_new
REMOVE: D <- D - d_old
REPARTITION: force a redistribution of objects
PUT/LOCK/UNLOCK: cache coherency controls
Implementation of these methods varies between the SDDA and HDDA; however, the
abstractions for these methods are common to both versions of the DDA.
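A single-process C++ sketch of this interface is shown below. The names and signatures are illustrative rather than the actual HDDA/SDDA declarations; std::map stands in for the extendible hash table or red-black tree, and the distribution, caching, and coherency paths are reduced to comments.

    #include <map>

    template <class Index, class Object>
    class DDA {
        std::map<Index, Object> local_;   // stand-in for hash table / red-black tree
    public:
        // GET: the converse function F^-1(i). In the distributed DDA a miss
        // here triggers a transparent fetch from the owning processor into a
        // local cache; this single-process sketch simply returns null.
        Object* get(const Index& i) {
            auto it = local_.find(i);
            return it == local_.end() ? nullptr : &it->second;
        }
        void insert(const Index& i, const Object& d) { local_[i] = d; }  // D <- D + d_new
        void remove(const Index& i) { local_.erase(i); }                 // D <- D - d_old
        // REPARTITION, PUT, LOCK and UNLOCK are omitted; they concern the
        // distribution and coherency of remotely cached copies.
    };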
3.6 DDA Performance
The DDA provides object management services for an application's distributed dynamic
data structures. This functionality introduces an additional overhead cost when accessing
local objects. For example, local object access could be accomplished directly via 'C'
pointers instead of DDA indices; however, note that the pointer to a data object may be
invalidated under data object redistribution.
This overhead cost is measured for the SDDA version of the DDA, which uses a
red-black balanced binary tree algorithm for local object management. The overhead cost
of the local get, local insert, and local remove methods is measured on an IBM RS6000.
The local get method retrieves a local data object associated with an input index value,
i.e. the converse function. The local insert method inserts a new data object associated
with its specified index (d_new, F(d_new)) into the local storage structure. The local remove
method removes a given data object from the local storage structure. The computational
time of each method is measured given the size of the existing data structure, as presented
in Figure 3.5, SDDA Overhead for Local Methods.
Figure 3.5: SDDA Overhead for Local Methods
The overhead cost of the local get method, as denoted by the middle line in Figure 3.5,
increases logarithmically from 2 microseconds for an empty SDDA to 8 microseconds for
an SDDA containing one million local data objects. This slow logarithmic growth is as
expected for a search into the SDDA's balanced binary tree. The local insert method
performs two operations: (1) search for the proper point in the data structure to insert the
object and (2) modification of the data structure for the new object. The cost of the insert
method is a uniform "delta" cost over the get operation. Thus the overhead cost of
modifying the data structure for the new data object is independent of the size of the SDDA.
Note that the overhead cost of the local remove method, as denoted in Figure 3.5 by the
lowest line, is also independent of the size of the SDDA.
4 Method Specific Data & Programming Abstractions
The next level of the PSE specializes DDA objects with method specific semantics to
create high-level programming abstractions which can be directly used to implement
parallel adaptive algorithms. The design of such abstractions for three different classes of
adaptive solution techniques for PDEs is described below: hierarchical dynamically adaptive
grids, hp-adaptive finite elements, and dynamic trees (the data abstractions
upon which the fast multipole methods are implemented).
4.1 Hierarchical Adaptive Mesh-Refinement
Problem Description
Figure 4.1: Adaptive Grid Hierarchy - 2D (Berger-Oliger AMR Scheme)
Dynamically adaptive numerical techniques for solving differential equations provide a
means for concentrating computational effort to appropriate regions in the computational
domain. In the case of hierarchical adaptive mesh refinement (AMR) methods, this is
achieved by tracking regions in the domain that require additional resolution and
dynamically overlaying finer grids over these regions. AMR-based techniques start with a
base coarse grid with minimum acceptable resolution that covers the entire computational
domain. As the solution progresses, regions in the domain requiring additional resolution
are tagged and finer grids are overlayed on the tagged regions of the coarse grid.
Refinement proceeds recursively so that regions on the finer grid requiring more
resolution are similarly tagged and even finer grids are overlayed on these regions. The
resulting grid structure is a dynamic adaptive grid hierarchy. The adaptive grid hierarchy
corresponding to the AMR formulation by Berger & Oliger [7] is shown in Figure 4.1.
Distributed Data-Structures for Hierarchical AMR
Two basic distributed
data-structures have been developed, using the fundamental abstractions provided by the
HDDA, to support adaptive finite-difference techniques based on hierarchical AMR: (1)
A Scalable Distributed Dynamic Grid (SDDG) which is a distributed and dynamic array,
and is used to implement a single component grid in the adaptive grid hierarchy; and (2)
A Distributed Adaptive Grid Hierarchy (DAGH) which is defined as a dynamic
collection of SDDGs and implements the entire adaptive grid hierarchy. The
SDDG/DAGH data-structure design is based on a linear representation of the
hierarchical, multi-dimensional grid structure. This representation is generated using
space-filling curves described in Section 3 and exploits the self-similar or recursive
nature of these mappings to represent a hierarchical DAGH structure and to maintain
locality across different levels of the hierarchy. Space-filling mapping functions are also
used to encode information about the original multi-dimensional space into each
space-filling index. Given an index, it is possible to obtain its position in the original
multi-dimensional space, the shape of the region in the multi-dimensional space
associated with the index, and the space-filling indices that are adjacent to it. A detailed
description of the design of these data-structures can be found in [8].
Figure 4.2: SDDG Representation - Figure 4.3: DAGH Composite Representation
SDDG Representation:
A multi-dimensional SDDG is represented as a one dimensional ordered list of SDDG
blocks. The list is obtained by first blocking the SDDG to achieve the required
granularity, and then ordering the SDDG blocks based on the selected space-filling curve.
The granularity of SDDG blocks is system dependent and attempts to balance the
computation-communication ratio for each block. Each block in the list is assigned a cost
corresponding to its computational load. Figure 4.2 illustrates this representation for a
2-dimensional SDDG.
Partitioning an SDDG across processing elements using this representation consists of
appropriately partitioning the SDDG block list so as to balance the total cost at each
processor. Since space-filling curve mappings preserve spatial locality, the resulting
distribution is comparable to traditional block distributions in terms of communication
overheads.
DAGH Representation:
The DAGH representation starts with a simple SDDG list corresponding to the base grid
of the grid hierarchy, and appropriately incorporates newly created SDDGs within this
list as the base grid gets refined. The resulting structure is a composite list of the entire
adaptive grid hierarchy. Incorporation of refined component grids into the base SDDG
list is achieved by exploiting the recursive nature of space-filling mappings: For each
refined region, the SDDG sub-list corresponding to the refined region is replaced by the
child grid's SDDG list. The costs associated with blocks of the new list are updated to
reflect combined computational loads of the parent and child. The DAGH representation
therefore is a composite ordered list of DAGH blocks where each DAGH block
represents a block of the entire grid hierarchy and may contain more than one grid level;
i.e. inter-level locality is maintained within each DAGH block. Figure 4.3 illustrates the
composite representation for a two dimensional grid hierarchy.
The AMR grid hierarchy can be partitioned across processors by appropriately
partitioning the linear DAGH representation. In particular, partitioning the composite list
to balance the cost associated to each processor results in a composite decomposition of
the hierarchy. The key feature of this decomposition is that it minimizes potentially
expensive inter-grid communications by maintaining inter-level locality in each partition.
Figure 4.4: SDDG/DAGH Storage
Data-structure storage is maintained by the HDDA described in Section 3. The overall
storage scheme is shown in Figure 4.4.
Programming Abstractions for Hierarchical AMR
Figure 4.5: Programming Abstraction for Parallel Adaptive Mesh-Refinement
We have developed three fundamental programming abstractions using the
data-structures described above that can be used to express parallel adaptive
computations based on adaptive mesh refinement (AMR) and multigrid techniques (see
Figure 4.5). Our objectives are twofold: first, to provide application developers with a set
of primitives that are intuitive for expressing the application, and second, to separate
data-management issues and implementations from application specific operations.
Grid Geometry Abstractions:
The purpose of the grid geometry abstractions is to provide an intuitive means for
identifying and addressing regions in the computational domain. These abstractions can
be used to direct computations to a particular region in the domain, to mask regions that
should not be included in a given operation, or to specify regions that need more
resolution or refinement. The grid geometry abstractions represent coordinates, bounding
boxes and doubly linked lists of bounding boxes.
Coordinates: The coordinate abstraction represents a point in the computational domain.
Operations defined on this class include indexing and arithmetic/logical manipulations.
These operations are independent of the dimensionality of the domain.
Bounding Boxes: Bounding boxes represent regions in the computational domain and are
comprised of a triplet: a pair of Coords defining the lower and upper bounds of the box
and a step array that defines the granularity of the discretization in each dimension. In
addition to regular indexing and arithmetic operations, scaling, translations, unions and
intersections are also defined on bounding boxes. Bounding boxes are the primary means
for specification of operations and storage of internal information (such as dependency
and communication information) within DAGH.
Bounding Boxes Lists: Lists of bounding boxes represent a collection of regions in the
computational domain. Such a list is typically used to specify regions that need
refinement during the regriding phase of an adaptive application. In addition to linked-list
addition, deletion and stepping operations, reduction operations such as intersection and
union are also defined on a BBoxList.
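As an illustration, a minimal two-dimensional sketch of the Coord and BBox abstractions might look as follows; the real classes are dimension-independent and also carry the step array described above.

    #include <algorithm>

    struct Coord { int x[2]; };       // a point in the (discretized) domain

    struct BBox {
        Coord lo, hi;                 // lower and upper bounds; step omitted
        bool empty() const { return lo.x[0] > hi.x[0] || lo.x[1] > hi.x[1]; }
    };

    // Intersection of two regions, used e.g. to locate the overlap between
    // a fine grid and the coarse region it refines.
    BBox intersect(const BBox& a, const BBox& b) {
        BBox r;
        for (int d = 0; d < 2; ++d) {
            r.lo.x[d] = std::max(a.lo.x[d], b.lo.x[d]);
            r.hi.x[d] = std::min(a.hi.x[d], b.hi.x[d]);
        }
        return r;
    }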
Grid Hierarchy Abstraction:
The grid hierarchy abstraction represents the distributed dynamic adaptive grid hierarchy
that underlies parallel adaptive applications based on adaptive mesh-refinement. This
abstraction enables a user to define, maintain and operate a grid hierarchy as a first-class
object. Grid hierarchy attributes include the geometry specifications of the domain such
as the structure of the base grid, its extents, boundary information, coordinate
information, and refinement information such as information about the nature of
refinement and the refinement factor to be used. When used in a parallel/distributed
environment, the grid hierarchy is partitioned and distributed across the processors and
serves as a template for all application variables or grid functions. The locality preserving
composite distribution [9] based on recursive Space-filling Curves [3] is used to partition
the dynamic grid hierarchy. Operations defined on the grid hierarchy include indexing of
individual component grids in the hierarchy, refinement, coarsening, recomposition of the
hierarchy after regriding, and querying of the structure of the hierarchy at any instant.
During regriding, the re-partitioning of the new grid structure, dynamic load-balancing,
and the required data-movement to initialize newly created grids, are performed
automatically and transparently.
Grid Function Abstraction:
Grid Functions represent application variables defined on the grid hierarchy. Each grid
function is associated with a grid hierarchy and uses the hierarchy as a template to define
its structure and distribution. Attributes of a grid function include type information, and
dependency information in terms of space and time stencil radii. In addition the user can
assign special (FORTRAN) routines to a grid function to handle operations such as
inter-grid transfers (prolongation and restriction), initialization, boundary updates, and
input/output. These functions are then called internally when operating on the distributed
grid function. In addition to standard arithmetic and logical manipulations, a number of
reduction operations such as Min/Max, Sum/Product, and Norms are also defined on grid
functions. GridFunction objects can be locally operated on as regular FORTRAN 90/77
arrays.
4.2 Definition of hp-Adaptive Finite Element Mesh
The hp-adaptive finite element mesh data structure consists of two layers of abstractions,
as illustrated in Figure 4.6. The first layer consists of the Domain and Node abstractions.
The second layer consists of mesh specific abstractions such as Vertex, Edge, and
Surface, which are specializations of the Node abstraction.
Figure 4.6: Layering of Mesh Abstraction
A mesh Domain is the finite element application's specialization of the SDDA. The
Domain uses the SDDA to store and distribute a dynamic set of mesh Nodes among
processors. The Domain provides the mapping from the N-dimensional finite element
domain to the one-dimensional index space required by a DDA.
A finite element mesh Node associates a set of finite element basis functions with a
particular location in the problem domain. Nodes also support inter-Node relationships,
which typically capture properties of inter-Node locality.
Specializations of the finite element mesh Node for a two-dimensional problem are
summarized in the following table and illustrated in Figure 4.7.
Mesh Object | Reference Location | Relationships
Vertex | vertex point | -
Edge | midside point | Vertex endpoints; Element "owners"; irregular edge constraints
Element | centroid point | Edge boundaries; Element refinement hierarchy
Figure 4.7: Mesh Object Relationships
Extended relationships between mesh Nodes are obtained through the minimal set of
relationships given above. For example:
Extended Relationship | Relationship "Path"
Element -> Vertex | Element <-> Edge -> Vertex
Element <-> Element | Element <-> Edge <-> Element (normal)
Element <-> Element | Element <-> Edge <-> Edge <-> Element (constrained)
Finite element h-adaptation consists of splitting elements into smaller elements, or
merging previously split elements into a single larger element. Finite element
p-adaptation involves increasing or decreasing the number of basis functions associated
with the elements. An application performs these hp-adaptations dynamically in response
to an error analysis of a finite element solution.
HP-adaptation results in the creation of new mesh Nodes and specification of new
relationships. Following an hp-adaptation the mesh partitioning may lead to
load imbalance, in which case the application may repartition the problem. A DDA significantly
simplifies such dynamic data structure updates and repartitioning operations while
ensuring data structure consistency throughout these operations.
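A simplified C++ rendering of these mesh abstractions is sketched below. The field names are illustrative; the essential point is that inter-Node relationships are held as SDDA indices rather than raw pointers, so they remain valid when objects are redistributed among processors.

    #include <cstdint>
    #include <vector>

    using SfcIndex = std::uint64_t;           // index into the SDDA

    struct Node  { SfcIndex self; };          // location plus basis functions
    struct Vertex : Node {};
    struct Edge   : Node {
        SfcIndex vertex[2];                   // endpoint vertices
        std::vector<SfcIndex> owners;         // elements sharing this edge
        std::vector<SfcIndex> constraints;    // irregular-edge constraints
    };
    struct Element : Node {
        std::vector<SfcIndex> edges;          // boundary edges
        std::vector<SfcIndex> children;       // h-refinement hierarchy
        int p_order = 1;                      // polynomial order (p-adaptation)
    };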
4.3 Adaptive Trees
An adaptive distributed tree requires two main pieces of information. First it needs a tree
data structure with methods for gets, puts, and pruning nodes of the tree. This
infrastructure requires pointers between nodes. Second an adaptive tree needs an
estimation of the cost associated with each node of the tree in order to determine if any
refinement will take place at that node. With these two abstractions, an algorithm can
utilize an adaptive tree in a computation. At this point, we are developing a distributed
fast multipole method based on balanced trees, with the goal of creating a mildly adaptive
tree in the near future.
Adaptive trees could be defined in either of the DDAs. The implementation described
here is done using the SDDA. All references in the tree are made through a generalization
of the pointer concept. These pointers are implemented as indices into the SDDA, and
access is controlled by accessing the SDDA data object with the appropriate action and
index. This control provides a uniform interface into a distributed data structure for each
processor. Thus, distributed adaptive trees are supported on the SDDA.
The actual contents of a node include the following items:
1. An index of each node derived from the geometric location of the node.
2. Pointers to a parent and to children nodes.
3. An array of coefficients used by the computation.
4. A list of pointers to other nodes with which the given node interacts.
All of this information is stored in a node, called a subdomain. The expected work for
each subdomain is derived from the amount of computation to be performed as specified
by the data. Adaptivity can be determined on the basis of the expected work of a given
node, relative to some threshold. In addition, since each node is registered in the SDDA,
we can also compute the total expected work per processor. By collecting the total
expected work per processor with the expected work per subdomain, a simple load
balance can be implemented by repartitioning the index space.
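Putting these items together, a node ("subdomain") stored in the SDDA might be sketched as follows, with every inter-node reference expressed as an index-space value, i.e. the generalized pointers described above (field names are illustrative):

    #include <cstdint>
    #include <vector>

    using SfcIndex = std::uint64_t;

    struct Subdomain {
        SfcIndex index;                        // derived from geometric location
        SfcIndex parent;                       // generalized pointer to parent
        std::vector<SfcIndex> children;        // empty at a leaf
        std::vector<double>   coeff;           // expansion coefficients
        std::vector<SfcIndex> interaction;     // nodes this node interacts with
        double expected_work;                  // drives refinement and load balance
    };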
5 Application Codes
There follow sketches of applications expressed in terms of the parallel adaptive
mesh refinement method, the parallel hp-adaptive method, and the parallel many-body
problem, each built on programming abstractions layered upon a DDA.
5.1 Numerical Relativity using Hierarchical AMR
A distributed and adaptive version of the H3expresso 3-D numerical relativity application
has been implemented using the data-management infrastructure presented in this
paper. H3expresso (developed at the National Center for Supercomputing Applications
(NCSA), University of Illinois at Urbana) is a "concentrated" version of the full H version 3.3
code that solves the general relativistic Einstein's Equations in a variety of physical
scenarios [10]. The original H3expresso code is non-adaptive and is implemented in
FORTRAN 90.
Representation Overheads
Figure 5.1: DAGH Overhead Evaluation
The overheads of the proposed DAGH/SDDG representation are evaluated by comparing
the performance of a hand-coded, unigrid, Fortran 90+MPI implementation of the
H3expresso application with a version built using the data-management infrastructure.
The hand-coded implementation was optimized to overlap the computations in the
interior of each grid partition with the communications on its boundary by storing the
boundary in separate arrays. Figure 5.1 plots the execution time for the two codes. The
DAGH implementation is faster for all numbers of processors.
Composite Partitioning Evaluation
The results presented below were obtained for a 3-D base grid of dimension 8 X 8 X 8
and 6 levels of refinement with a refinement factor of 2.
Figure 5.2: DAGH Distribution: Snap-shot I - Figure 5.3: DAGH Distribution: Snap-shot II
Figure 5.4: DAGH Distribution: Snap-shot III - Figure 5.5: DAGH Distribution: Snap-shot IV
Load Balance:
To evaluate the load distribution generated by the composite partitioning scheme we
consider snap-shots of the distributed grid hierarchy at arbitrary times during integration.
The normalized computational load at each processor for the different snap-shots is plotted
in Figures 5.2-5.5. Normalization is performed by dividing the computational load
actually assigned to a processor by the computational load that would have been assigned
to the processor to achieve a perfect load-balance. The latter value is computed as the
total computational load of the entire DAGH divided by the number of processors.
Any residual load imbalance in the partitions generated can be tuned by varying the
granularity of the SDDG/DAGH blocks. Smaller blocks can increase the regriding time
but will result in smaller load imbalance. Since AMR methods require re-distribution at
regular intervals, it is usually more critical to be able to perform the re-distribution
quickly than to optimize each distribution.
Communications:
Both prolongation and restriction inter-grid operations were performed locally on each
processor without any communication or synchronization.
Partitioning Overheads
Table 5.1: Dynamic Partitioning Overhead
Partitioning is performed initially on the base grid, and on the entire grid hierarchy after
every regrid. Regriding any level l comprises refining at level l and all levels finer than l;
generating and distributing the new grid hierarchy; and performing data transfers
required to initialize the new hierarchy. Table 5.1 compares the total time required for
regriding, i.e. for refinement, dynamic re-partitioning and load balancing, and
data-movement, to the time required for grid updates. The values listed are cumulative
times for 8 base grid time-steps with 7 regrid operations.
5.2 HP-Adaptive Finite Element Code
A parallel hp-adaptive finite element code for computational fluid dynamics is in
development. This hp-adaptive finite element computational fluid dynamics application
has two existing implementations: (1) a sequential FORTRAN code and (2) a parallel
FORTRAN code with a fully duplicated data structure. The hp-adaptive finite element
data structure has a complexity so great that it was not tractable to distribute the
FORTRAN data structure. As such the parallel FORTRAN implementation is not
scalable due to the memory consumed on each processor by duplicating the data
structure.
To make tractable the development of a fully distributed hp-adaptive finite element data
structure it was necessary to achieve a separation of concerns between the complexities
of the hp-adaptive finite element data structure and complexities of distributed dynamic
data structures in general. This separation of concerns in the development of a fully
distributed hp-adaptive finite element data structure provided the initial motivation for
developing the SDDA.
The organization of the new finite element application is illustrated in Figure 5.6. At the
core of the application architecture is the index space. The index space provides a very
compact and succinct specification of how to partition the application's problem among
processors. This same specification is used to both distribute the mesh structure through
the SDDA and to define a compatible distribution for the vectors and matrices formed by
the finite element method.
Figure 5.6: Finite Element Application Architecture
The finite element application uses a second parallel infrastructure which supports
distributed vectors and matrices, as denoted in the lower right corner of Figure 5.6. The
current release of this infrastructure is documented in [11]. Both DDA and linear algebra
infrastructures are based upon the common abstraction of an index space. This
commonality provides the finite element application with uniform abstraction for
specifying data distribution.
5.3 N-Body Problems
General Description
Figure 5.7: Data flow in the fast multipole algorithm
The N-body particle problems arising in various scientific disciplines appear to require an
O(N^2) computational method. However, once a threshold in the number of particles is
surpassed, approximating the interaction of particles with interactions between
sufficiently separated particle clusters allows the computational effort to be substantially
reduced. The best known of these fast summation approaches is the fast multipole method
[12], which, under certain assumptions, gives a method of O(N) complexity.
The fast multipole method is a typical divide and conquer algorithm. A cubic
computational domain is recursively subdivided into octants. At the finest level, the
influence of the particles within a cell onto sufficiently separated cells is subsumed into a
multipole series expansion. These multipole expansions are combined in the upper levels,
until the root of the oct-tree contains a multipole expansion. Then local series expansions
of the influence of sufficiently separated multipole expansions on cells are formed.
Finally, in a reverse traversal of the oct-tree the contributions of cells are distributed to
their children (cf. Figure 5.7). The algorithm relies on a scaling property, which allows
cells on the scale of children to be sufficiently separated when they were too close on the
scale of the current parents. At the finest level the influence of these sufficiently
separated cells is taken into account together with the interaction of the particles in the
remaining nearby cells. For a more mathematically oriented description of the shared
memory implementation with or without periodic boundary conditions we refer to
[13][14] and the references therein.
Figure 5.7 shows that the fast multipole algorithm is readily decomposed into three
principal stages:
1. populating the tree bottom-up with multipole expansions
2. converting the multipole expansions to local expansions
3. distributing local expansions top-down
These three stages have to be performed in sequential order, but it is easily possible to
parallelize each of the stages individually. For the second stage, the interaction set of a cell is defined as the set of cells which are sufficiently separated from the cell, but whose parents are not sufficiently separated from the cell's parent. The local expansion about the center of a cell is found by adding up all the influences from the cells of the interaction set. For each cell these operations are completely independent of each other. Notice, however, that the majority of the communication between different processors is incurred during this stage. Hence, an optimization of the communication
patterns during the second stage can account for large performance gains.
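The interaction-set definition can be made concrete for a uniform grid of cells. The sketch below assumes the usual FMM separation criterion, namely that two cells are sufficiently separated iff they are not adjacent; this criterion is an assumption here rather than a detail fixed by the text.

# Interaction set of a cell on a uniform 3D grid with integer cell
# coordinates, 'level_size' cells per side at the current level.

def adjacent(a, b):
    return max(abs(x - y) for x, y in zip(a, b)) <= 1

def parent(c):
    return tuple(x // 2 for x in c)

def children(c, level_size):
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                child = (2 * c[0] + dx, 2 * c[1] + dy, 2 * c[2] + dz)
                if all(0 <= v < level_size for v in child):
                    yield child

def interaction_set(cell, level_size):
    """Cells separated from `cell` whose parents are adjacent to
    (i.e., not separated from) the parent of `cell`."""
    result = []
    p = parent(cell)
    parent_size = level_size // 2
    for px in range(max(0, p[0] - 1), min(parent_size, p[0] + 2)):
        for py in range(max(0, p[1] - 1), min(parent_size, p[1] + 2)):
            for pz in range(max(0, p[2] - 1), min(parent_size, p[2] + 2)):
                for cand in children((px, py, pz), level_size):
                    if not adjacent(cell, cand):
                        result.append(cand)
    return result

# Away from the boundary each cell has at most 6^3 - 3^3 = 189 such cells.
print(len(interaction_set((4, 4, 4), level_size=16)))  # -> 189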
A more detailed analysis of the (non-adaptive) algorithm reveals that optimal performance should be attained when each leaf cell contains an optimal number of particles, thereby balancing the work between the direct calculations and the conversion of the multipole expansions to the local expansions in the second stage; a toy version of this balance is computed below.
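With a made-up per-pair direct cost and per-leaf expansion cost, the optimal leaf population sits where the two terms cross; the coefficients below are illustrative assumptions only.

# Toy cost model for the particles-per-leaf balance: direct near-field
# work grows quadratically with the leaf population, while the
# expansion (M2L) work per leaf is fixed by the chosen level.

def total_time(n_particles, particles_per_leaf,
               c_direct=1e-8, c_expand=5e-5):
    n_leaves = max(1, n_particles // particles_per_leaf)
    direct = n_leaves * c_direct * particles_per_leaf ** 2   # O(p^2) per leaf
    expand = n_leaves * c_expand                             # fixed per leaf
    return direct + expand

best = min(range(1, 512), key=lambda p: total_time(10 ** 6, p))
print(best)   # balance point: roughly where c_direct * p^2 == c_expand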
Distributed Fast Multipole Results
Figure 5.8: Total Execution Time on SP2 vs. Problem Size for Multiple Levels of Approximation
Next we describe preliminary performance results for the distributed fast multipole
method implemented on the HDDA. We verified the fast multipole method by computing
an exact answer for a fixed resolution of the computational domain. Test problems were
constructed by storing one to a few particles chosen randomly per leaf cell. This
comparison was repeated for several small problems until we were certain that the
algorithm behaved as expected. Performance measurements were taken from a 16 node
IBM SP2 parallel computer running AIX Version 4.1.
The total run times using 3, 4, and 5 levels of approximation on a variety of problem
sizes for 1, 2, 4, and 8 processors are presented in Figure 5.8. We also plot the expected
O(N) run time aligned with the 8 processor results. Each curve represents the total
execution time for a fixed level of spatial resolution while increasing the number of
particles in the computation. Two major components of each run time are an
approximation time and a direct calculation for local particles. The approximation time is
a function of the number of levels of resolution and is fixed for each curve. The direct
calculation time grows as O(N 2 ) within a given curve. The problem size for which these
two times are equal represents the optimal number of particles stored per subdomain.
This optimal problem size appears as a knee in the total execution time curves.
The curves for 8 processors align relatively well with the O(N) run time for all problem
sizes. Thus, the algorithm appears scalable for the problems considered. Furthermore,
these results show that 1 processor has the longest time and 8 processors have the shortest time, which indicates that some speedup is being attained. Ideally, one would expect the 8-processor runs to achieve a speedup of 8.
We have presented results for a fast multipole method which demonstrates the O(N)
computational complexity. These results exhibit both scalability and speedup for a small
number of processors. Our preliminary results focused on validity and accuracy. The next
step is to performance tune the algorithm.
6 Conclusion and Future Work
The significant conclusions demonstrated herein are:
1. That there is a common underlying computational infrastructure for a wide family of
parallel adaptive computation algorithms.
2. That a substantial decrease in implementation effort for these important algorithms can be attained, without sacrifice of performance, through use of this computational infrastructure.
There is much further research needed to complete development of a robust and
supportable computational infrastructure for adaptive algorithms. The existing versions
of DDA require extension and engineering. The programming abstractions for each
solution method require enriching. We hope to extend the programming abstraction layer to support other adaptive methods such as wavelet methods. Many more applications need to be carried out to define the requirements for the programming abstraction layer interfaces.
Acknowledgements
This research has been jointly sponsored by the Argonne National Laboratory Enrico
Fermi Scholarship awarded to Manish Parashar, by the Binary Black-Hole NSF Grand
Challenge (NSF ACS/PHY 9318152), by ARPA under contract DABT 63-92-C-0042,
and by the NSF National Grand Challenges program grant ECS-9422707. The authors
would also like to acknowledge the contributions of Jürgen Singer, Paul Walker and Joan
Masso to this work.
--R
System Engineering for High Performance Computing Software: The HDDA/DAGH Infrastructure for Implementation of Parallel Structured Adaptive Mesh Refinement
A Parallel Infrastructure for Scalable Adaptive Finite Element Methods and its Application to Least Squares C-infinity Collocation
Database System Concepts
Linear Hashing: a New Tool for File and Table Addressing
Adaptive Mesh-Refinement for Hyperbolic Partial Differential Equations
Distributed Dynamic Data-Structures for Parallel Adaptive Mesh-Refinement
On Partitioning Dynamic Adaptive Grid Hierarchies
Hyperbolic System for Numerical Relativity
Robert van de Geijn
The rapid evaluation of potential fields in particle systems
The Parallel Fast Multipole Method in Molecular Dynamics
Parallel Implementation of the Fast Multipole Method with Periodic Boundary Conditions
--TR
Algorithms
The parallel fast multipole method in molecular dynamics
Database System Concepts
On Partitioning Dynamic Adaptive Grid Hierarchies
--CTR
Yun He , Chris H. Q. Ding, Coupling Multicomponent Models with MPH on Distributed Memory Computer Architectures, International Journal of High Performance Computing Applications, v.19 n.3, p.329-340, August 2005
Faith E. Sevilgen , Srinivas Aluru, A unifying data structure for hierarchical methods, Proceedings of the 1999 ACM/IEEE conference on Supercomputing (CDROM), p.24-es, November 14-19, 1999, Portland, Oregon, United States
Sumir Chandra , Manish Parashar, Towards autonomic application-sensitive partitioning for SAMR applications, Journal of Parallel and Distributed Computing, v.65 n.4, p.519-531, April 2005
S. Chandra , X. Li , M. Parashar, Engineering an autonomic partitioning framework for Grid-based SAMR applications, High performance scientific and engineering computing: hardware/software support, Kluwer Academic Publishers, Norwell, MA, 2004
Karen Devine , Bruce Hendrickson , Erik Boman , Matthew St. John , Courtenay Vaughan, Design of dynamic load-balancing tools for parallel applications, Proceedings of the 14th international conference on Supercomputing, p.110-118, May 08-11, 2000, Santa Fe, New Mexico, United States
Andrew M. Wissink , Richard D. Hornung , Scott R. Kohn , Steve S. Smith , Noah Elliott, Large scale parallel structured AMR calculations using the SAMRAI framework, Proceedings of the 2001 ACM/IEEE conference on Supercomputing (CDROM), p.6-6, November 10-16, 2001, Denver, Colorado
J. M. Malard , R. D. Stewart, Distributed dynamic hash tables using IBM LAPI, Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p.1-11, November 16, 2002, Baltimore, Maryland
Valerio Pascucci , Randall J. Frank, Global static indexing for real-time exploration of very large regular grids, Proceedings of the 2001 ACM/IEEE conference on Supercomputing (CDROM), p.2-2, November 10-16, 2001, Denver, Colorado
James C. Browne , Madulika Yalamanchi , Kevin Kane , Karthikeyan Sankaralingam, General parallel computations on desktop grid and P2P systems, Proceedings of the 7th workshop on Workshop on languages, compilers, and run-time support for scalable systems, p.1-8, October 22-23, 2004, Houston, Texas
Johan Steensland , Sumir Chandra , Manish Parashar, An Application-Centric Characterization of Domain-Based SFC Partitioners for Parallel SAMR, IEEE Transactions on Parallel and Distributed Systems, v.13 n.12, p.1275-1289, December 2002 | fast multipole methods;problem solving environment;adaptive mesh-refinement;HP-adaptive finite elements;parallel adaptive algorithm;distributed dynamic data structures |
509912 | New results on monotone dualization and generating hypergraph transversals. | This paper considers the problem of dualizing a monotone CNF (equivalently, computing all minimal transversals of a hypergraph), whose associated decision problem is a prominent open problem in NP-completeness. We present a number of new polynomial time resp. output-polynomial time results for significant cases, which largely advance the tractability frontier and improve on previous results. Furthermore, we show that duality of two monotone CNFs can be disproved with limited nondeterminism (more precisely, in polynomial time with $O(\log^2 n)$ suitably guessed bits). This result sheds new light on the complexity of this important problem. | INTRODUCTION
* Part of the work was carried out while visiting TU Wien.
Recall that the prime CNF of a monotone Boolean function $f$ is the unique formula $\bigwedge_{c \in S} c$ in conjunctive normal form where $S$ is the set of all prime implicates of $f$, i.e., minimal clauses $c$ which are logical consequences of $f$. In this paper, we consider the following problem:
Problem Dualization
Input: The prime CNF $\varphi$ of a monotone Boolean function $f$.
Output: The prime CNF $\psi$ of its dual $g = f^d$.
It is well known that problem Dualization is equivalent to
the Transversal Computation problem, which requests
to compute the set of all minimal transversals (i.e., minimal
hitting sets) of a given hypergraph H, in other words, the
transversal hypergraph Tr(H) of H. Actually, these problems
can be viewed as the same problem, if the clauses in
a monotone CNF $\varphi$ are identified with the sets of variables they contain. Dualization is a search problem; the associated decision problem Dual is to decide whether two given monotone prime CNFs $\varphi$ and $\psi$ represent a pair $(f, g)$ of dual Boolean functions. Analogously, the decision problem Trans-Hyp associated with Transversal Computation is deciding, given hypergraphs H and G, whether G = Tr(H) (a brute-force rendering of these definitions is sketched below).
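For very small instances the correspondence is easy to check directly; the following brute-force enumeration of Tr(H), exponential in the number of vertices and purely illustrative, mirrors the definitions above.

# Brute-force computation of Tr(H) for tiny hypergraphs.
from itertools import combinations

def transversals(vertices, hyperedges):
    """All minimal hitting sets of `hyperedges` (each a frozenset)."""
    minimal = []
    for k in range(len(vertices) + 1):          # by increasing size
        for cand in combinations(sorted(vertices), k):
            s = set(cand)
            if all(s & e for e in hyperedges) and \
               not any(set(m) <= s for m in minimal):
                minimal.append(frozenset(s))
    return minimal

H = [frozenset({1, 2}), frozenset({2, 3})]
print(sorted(sorted(t) for t in transversals({1, 2, 3}, H)))
# -> [[1, 3], [2]]; these are the prime implicants of the dual of
# the monotone CNF (x1 v x2)(x2 v x3).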
Dualization and several problems which, like transversal computation, are known to be computationally equivalent to Dualization (see [13]) are of interest in various areas such
as database theory (e.g., [34, 43]), machine learning and
data mining (e.g., [4, 5, 10, 18]), game theory (e.g., [22, 38,
39]), artificial intelligence (e.g., [17, 24, 25, 40]), mathematical
programming (e.g., [3]), and distributed systems (e.g.,
[16, 23]) to mention a few.
While the output CNF $\psi$ can be exponential in the size of $\varphi$, it is currently not known whether $\psi$ can be computed in output-polynomial (or polynomial total) time, i.e., in time polynomial in the combined size of $\varphi$ and $\psi$. Any such algorithm
for Dualization (or Transversal Computation)
would significantly advance the state of the art of many
problems in the application areas. Similarly, the complexity
of Dual and Trans-Hyp has been open for more than 20 years now (cf. [2, 13, 26, 27, 29]).
Note that Dualization is solvable in polynomial total time on a class C of hypergraphs iff Dual is in PTIME for all pairs $(\varphi, \psi)$ with $\varphi \in$ C. Dual is known to be in co-NP, and the best currently known upper time-bound is $n^{o(\log n)}$ [15].
Determining the complexities of Dualization and Dual,
and of equivalent problems such as the transversal problems,
is a prominent open problem. This is witnessed by the fact
that these problems are cited in a rapidly growing body of
literature and have been referenced in various survey papers
and complexity theory retrospectives, e.g. [26, 30, 36].
Given the importance of monotone dualization and equivalent
problems for many application areas, and given the
long standing failure to settle the complexity of these prob-
lems, emphasis was put on finding tractable cases of Dual
and corresponding polynomial total-time cases of Dualiza-
tion. In fact, several relevant tractable classes were found
by various authors; see e.g. [6, 7, 8, 10, 12, 13, 31, 32, 35,
37] and references therein. Moreover, classes of formulas
were identified on which Dualization is not just polynomial
total-time, but where the conjuncts of the dual formula
can be enumerated with incremental polynomial delay, i.e.,
with delay polynomial in the size of the input plus the size
of all conjuncts so far computed, or even with polynomial
delay, i.e., with delay polynomial in the input size only.
Main Goal. The main goal of this paper is to present important
new polynomial total time cases of Dualization
and, correspondingly, PTIME solvable subclasses of Dual
which significantly improve previously considered classes.
Towards this aim, we first present a new algorithm Dualize
and prove its correctness. Dualize can be regarded as a
generalization of a related algorithm proposed by Johnson,
Yannakakis, and Papadimitriou [27]. As other dualization
algorithms, Dualize reduces the original problem by self-
reduction to smaller instances. However, the subdivision
into subproblems proceeds according to a particular order
which is induced by an arbitrary fixed ordering of the vari-
ables. This, in turn, allows us to derive some bounds on
intermediate computation steps which imply that Dualize,
when applied to a variety of input classes, outputs the conjuncts
of with polynomial delay or incremental polynomial
delay. In particular, we show positive results for the following
input classes:
. Degenerate CNFs. We generalize the notion of k-degenerate graphs [44] to hypergraphs and define k-degenerate monotone CNFs resp. hypergraphs. We prove that for any constant k, Dualize works with polynomial delay on k-degenerate inputs. Moreover, it works in output-polynomial time on O(log n)-degenerate CNFs.
. Read-k CNFs. A CNF is read-k, if each variable appears at most k times in it. We show that for read-k CNFs, problem Dualization is solvable with polynomial delay, if k is constant, and in total polynomial time, if $k = O(\log \|\varphi\|)$. Our result for constant k significantly improves upon the previous best known algorithm [10], which has a higher complexity bound, is not polynomial delay, and outputs the conjuncts of $\psi$ in no specific order. The result for $k = O(\log \|\varphi\|)$ is a non-trivial generalization of the result in [10], which was posed as an open problem [9].
. Acyclic CNFs. There are several notions of hypergraph resp. monotone CNF acyclicity [14], of which the most general and well-known is $\alpha$-acyclicity. As shown in [13], Dualization is polynomial total time for $\beta$-acyclic CNFs; $\beta$-acyclicity is the hereditary version of $\alpha$-acyclicity and far less general. A similar result for $\alpha$-acyclic prime CNFs was left open. We give a positive answer and show that for $\alpha$-acyclic prime $\varphi$, Dualization is solvable with polynomial delay.
. Formulas of Bounded Treewidth. The treewidth [41] of a graph expresses its degree of cyclicity. Treewidth is an extremely general notion, and bounded treewidth generalizes almost all other notions of near-acyclicity. Following [11], we define the treewidth of a hypergraph resp. monotone CNF $\varphi$ as the treewidth of its associated (bipartite) variable-clause incidence graph. We show that Dualization is solvable with polynomial delay (exponential in k) if the treewidth of $\varphi$ is bounded by a constant k, and in polynomial total time if the treewidth is $O(\log\log \|\varphi\|)$.
. Recursive Applications of Dualize and k-CNFs.
We show that if Dualize is applied recursively and
the recursion depth is bounded by a constant, then
Dualization is solved in polynomial total time. We
apply this to provide a simpler proof of the known
result [6, 13] that monotone k-CNFs (where each conjunct
contains at most k variables) can be dualized in
output-polynomial time.
After deriving the above results, we turn our attention (in
Section 5) to the fundamental computational nature of problems
Dual and Trans-Hyp in terms of complexity theory.
Complexity: Limited nondeterminism. In a landmark paper, Fredman and Khachiyan [15] proved that problem Dual can be solved in quasi-polynomial time. More precisely, they first gave an algorithm A solving the problem in $n^{O(\log^2 n)}$ time, and then a more complicated algorithm B whose runtime is bounded by $n^{4\chi(n)}$, where $\chi(n)$ is defined by $\chi(n)^{\chi(n)} = n$. As noted in [15], $\chi(n) \sim \log n / \log\log n = o(\log n)$; therefore, duality checking is feasible in $n^{o(\log n)}$ time. This is the best upper bound for problem Dual so far, and shows that the problem is most likely not NP-complete.
A natural question is whether Dual lies in some lower complexity
class based on other resources than just runtime. In
the present paper, we advance the complexity status of this
problem by showing that its complement is feasible with limited
nondeterminism, i.e., by a nondeterministic polynomial-time
algorithm that makes only a poly-logarithmic number
of guesses. For a survey on complexity classes with limited
nondeterminism, and for several references, see [19]. We
first show by a simple and self-contained proof that testing
non-duality is feasible in polynomial time with $O(\log^3 n)$ nondeterministic steps. We then observe that this can be improved to $O(\log^2 n)$ nondeterministic steps. This result is
surprising, because most researchers dealing with the complexity
of Dual and Trans-Hyp have so far believed that these problems are completely unrelated to limited nondeterminism.
We believe that the results presented in this paper are signif-
icant, and we are confident they will prove useful in various
contexts. First, we hope that the various polynomial/output-
polynomial cases of the problems which we identify will lead
to better and more general methods in various application
areas (as we show, e.g. in learning and data mining [10]), and
that based on the algorithm Dualize or some future modifi-
cations, further relevant tractable classes will be identified.
Second, we hope that our discovery on limited nondeterminism
provides a new momentum to complexity research
on Dual and Trans-Hyp, and will push it towards settling
these longstanding open problems.
2. PRELIMINARIES AND NOTATION
A Boolean function (in short, function) is a mapping $f: \{0,1\}^n \to \{0,1\}$, where $v \in \{0,1\}^n$ is called a Boolean vector (in short, vector). As usual, we write $g \le f$ if $f$ and $g$ satisfy $g(v) \le f(v)$ for all $v \in \{0,1\}^n$. A function $f$ is monotone (or positive), if $v \le w$ (i.e., $v_i \le w_i$ for all $i$) implies $f(v) \le f(w)$ for all $v, w \in \{0,1\}^n$. Boolean variables $x_1, x_2, \ldots, x_n$ and their complements $\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n$ are called literals. A clause (resp., term) is a disjunction (resp., conjunction) of literals containing at most one of $x_i$ and $\bar{x}_i$ for each variable. A clause $c$ (resp., term $t$) is an implicate (resp., implicant) of a function $f$, if $f \le c$ (resp., $t \le f$); moreover, it is prime, if there is no implicate $c' < c$ (resp., no implicant $t' > t$) of $f$, and monotone, if it consists of positive literals only. We denote by $PI(f)$ the set of all prime implicants of $f$.
A conjunctive normal form (CNF) (resp., disjunctive normal form, DNF) is a conjunction of clauses (resp., disjunction of terms); it is prime (resp., monotone), if all its members are prime (resp., monotone). For any CNF (resp., DNF) $\varphi$, we denote by $|\varphi|$ the number of clauses (resp., terms) in it. Furthermore, for any formula $\varphi$, we denote by $V(\varphi)$ the set of variables that occur in $\varphi$, and by $\|\varphi\|$ its length, i.e., the number of literals in it.
As is well known, a function $f$ is monotone iff it has a monotone CNF. Furthermore, all prime implicants and prime implicates of a monotone $f$ are monotone, and it has a unique prime CNF, given by the conjunction of all its prime implicates. For example, the monotone $f$ such that $f(v) = 1$ iff $v \in \{(0101), (0111), (1101), (1111)\}$ has the unique prime CNF $x_2 \wedge x_4$.
Recall that the dual of a function $f$, denoted $f^d$, is defined by $f^d(x) = \bar{f}(\bar{x})$, where $\bar{f}$ and $\bar{x}$ are the complements of $f$ and $x$, respectively. By definition, we have $(f^d)^d = f$. From De Morgan's law, we obtain a formula for $f^d$ from any one of $f$ by exchanging $\vee$ and $\wedge$ as well as the constants 0 and 1. For example, if $f$ is given by $x_1 \wedge (x_2 \vee x_3)$, then $f^d$ is represented by $x_1 \vee (x_2 \wedge x_3)$. For a monotone $f$, let $\psi = \bigwedge_j c_j$ be the prime CNF of $f^d$. Then, by De Morgan's law, $f$ has the (unique) prime DNF $\bigvee_j t_j$, where each term $t_j$ consists of the variables of the clause $c_j$. Thus, we will regard Dualization also as the problem of computing the prime DNF of $f$ from the prime CNF of $f$; the naive exponential procedure suggested by this view is sketched below.
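This view immediately yields the obvious exponential-time baseline for Dualization: distribute the CNF into a DNF and keep the minimal terms. A minimal sketch:

# Naive dualization baseline: distribute the monotone prime CNF into a
# DNF and keep only the minimal (prime) terms. Exponential in general;
# this is exactly the computation that the algorithms below avoid.
from itertools import product

def prime_dnf(cnf):
    """cnf: list of clauses, each a frozenset of variables."""
    terms = {frozenset(choice) for choice in product(*cnf)}  # distribute
    return [t for t in terms
            if not any(u < t for u in terms)]                # absorption

phi = [frozenset({'x1', 'x2'}), frozenset({'x2', 'x3'})]
print(sorted(sorted(t) for t in prime_dnf(phi)))
# -> [['x1', 'x3'], ['x2']]; by duality this is also the prime CNF of f^d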
3. ORDERED GENERATION OF TRANSVERSALS
In what follows, let $f$ be a monotone function and $\varphi$ its prime CNF, where we assume w.l.o.g. that all variables $x_j$ ($j = 1, \ldots, n$) appear in $\varphi$. Let $\varphi_i$ ($i = 0, 1, \ldots, n$) be the CNF obtained from $\varphi$ by fixing the variables $x_j$ with $j > i$ to 1. By definition, we have $\varphi_n = \varphi$.
Example 3.1 illustrates the CNFs $\varphi_i$ on a small prime CNF $\varphi$.
Similarly, for the prime DNF $\psi$ of $f$, we denote by $\psi_i$ the DNF obtained from $\psi$ by fixing the variables $x_j$ with $j > i$ to 1. Clearly, $\varphi_i$ and $\psi_i$ represent the same monotone function, denoted by $f_i$.
Proposition 3.1. Let $\varphi$ and $\psi$ be any CNF and DNF for $f$, respectively. Then, (a) $\psi_i$ represents $f_i$ and $|\psi_i| \le |\psi|$, and (b) $\varphi_i$ represents $f_i$ and $|\varphi_i| \le |\varphi|$.
Denote by $\Delta_i$ ($i = 1, \ldots, n$) the CNF consisting of all the clauses in $\varphi_i$ but not in $\varphi_{i-1}$.
Example 3.2. For the above example, $\Delta_i$ consists exactly of the clauses of $\varphi$ whose largest variable is $x_i$; note that $\varphi_i = \varphi_{i-1} \wedge \Delta_i$ for all $i$.
For a term $t$, let $\varphi_i[t]$ denote the CNF consisting of all the clauses $c$ such that $c$ contains no literal in $t_{i-1}$ and appears in $\Delta_i$.
Lemma 3.2. For any term $t \in PI(f_{i-1})$, let $g_{i,t}$ be the function represented by $\varphi_i[t]$. Then $|PI(g_{i,t})| \le |\psi_i| \le |\psi|$.
Proof. Let $s \in PI(g_{i,t})$. Then $t \wedge s$ is an implicant of $f_i$. Hence, some $t_s \in PI(f_i)$ exists such that $t \wedge s \le t_s$. Note that $V(t) \cap V(s) = \emptyset$, and we have $V(s) \subseteq V(t_s)$, since otherwise there would exist a clause $c$ in $\varphi_i[t]$ such that $V(c) \cap V(t_s) = \emptyset$. For any $s' \in PI(g_{i,t})$ with $s' \ne s$, let $t_s, t_{s'} \in PI(f_i)$ be such that $t \wedge s \le t_s$ and $t \wedge s' \le t_{s'}$, respectively. By the above discussion, we have $t_s \ne t_{s'}$; hence $s \mapsto t_s$ is injective. This completes the proof.
We now describe our algorithm Dualize for generating the set $PI(f)$. It is inspired by a similar graph algorithm of Johnson, Yannakakis, and Papadimitriou [27], and can be regarded as a generalization. Here, we say that term $s$ is smaller than term $t$ if, viewed as Boolean vectors, $s$ is lexicographically smaller than $t$.
Algorithm Dualize
Input: The prime CNF $\varphi$ of a monotone function $f$.
Output: The prime DNF of $f$, i.e., all prime implicants of function $f$.
Step 1:
Compute the smallest prime implicant $t_{\min}$ of $f$ and set $Q := \{t_{\min}\}$;
Step 2:
while $Q \ne \emptyset$ do
begin
Remove the smallest $t$ from $Q$ and output $t$;
for each $i$ with $x_i \in V(t)$ do
begin
Compute the prime DNF $\psi^{(t,i)}$ of the function represented by $\varphi_i[t]$;
for each term $t'$ in $\psi^{(t,i)}$ do
begin
if $t_{i-1} \wedge t'$ is a prime implicant of $f_i$ then
begin
Compute the smallest prime implicant $t^*$ of $f$ such that $(t^*)_i = t_{i-1} \wedge t'$, and insert $t^*$ into $Q$ if it is not yet present
end
end
end
end.
Theorem 3.3. Algorithm Dualize correctly outputs all prime implicants of $f$ in increasing order.
Proof. (Sketch) First note that the term $t^*$ inserted into $Q$ when $t$ is output is larger than $t$; indeed, $t'$ ($\ne 1$) and $t_{i-1}$ are disjoint and $V(t') \subseteq \{x_1, \ldots, x_{i-1}\}$. Hence, every term in $Q$ is larger than all terms already output, and the output sequence is increasing. We show by induction that, if $t$ is the smallest prime implicant of $f$ that was not output yet, then $t$ is already in $Q$. This clearly proves the result.
Clearly, the above statement is true if $t = t_{\min}$. Assume now that $t \ne t_{\min}$ is the smallest among the prime implicants not output yet. Let $i$ be the largest index such that $t_i$ is not a prime implicant of $f_i$. This $i$ is well-defined, since otherwise $t = t_{\min}$ must hold, a contradiction. Now we have (1) $i < n$ and (2) $x_{i+1} \in V(t)$, since $t_n = t$ is a prime implicant of $f_n$ ($= f$) and (2) follows from the maximality of $i$. Let $s \in PI(f_i)$ be such that $V(s) \subseteq V(t_i)$, and let $K = V(t_i) \setminus V(s)$. Then $K \ne \emptyset$ holds, and since $x_{i+1} \notin V(s)$, the term $t' = \bigwedge_{x_j \in K} x_j$ is a prime implicant of $\varphi_{i+1}[s]$. There exists $s' \in PI(f)$ such that $(s')_{i+1} = s \wedge x_{i+1}$, since $s \wedge x_{i+1} \in PI(f_{i+1})$. Note that $\varphi_{i+1}[s'] \ne 0$. Moreover, since $s'$ is smaller than $t$, by induction $s'$ has already been output. Therefore, $t'$ has been considered in the inner for-loop of the algorithm when $s'$ was output. Since $t_{i+1}$ is a prime implicant of $f_{i+1}$, the algorithm has added the smallest prime implicant $t^*$ of $f$ such that $(t^*)_{i+1} = t_{i+1}$. We finally claim that $t^* = t$. Otherwise, let $k$ be the first index in which $t^*$ and $t$ differ. Then $k > i + 1$, contradicting the maximality of $i$.
Let us consider the time complexity of algorithm Dualize. We store $Q$ as a binary tree, where each leaf represents a term $t$, and the left (resp., right) son of a node at depth $j - 1$ (the root has depth 0) encodes $x_j \in V(t)$ (resp., $x_j \notin V(t)$). In Step 1, we can compute $t_{\min}$ in $O(\|\varphi\|)$ time and initialize $Q$ in $O(n)$ time. As for Step 2, let $T_{(t,i)}$ be the time required to compute the prime DNF $\psi^{(t,i)}$ from $\varphi_i[t]$. By analyzing its substeps, we can see that each iteration of Step 2 requires $\sum_{x_i \in V(t)} (T_{(t,i)} + |\psi^{(t,i)}| \cdot O(\|\varphi\|))$ time; note that $t^*$ is the smallest prime implicant of the function obtained from $f$ by fixing the variables in $V(t_{i-1} \wedge t')$ to 1 and the remaining variables $x_j$ with $j \le i$ to 0. Thus, we have
Theorem 3.4. The output delay of Algorithm Dualize is bounded by
$\max_{t \in PI(f)} \sum_{x_i \in V(t)} \big( T_{(t,i)} + |\psi^{(t,i)}| \cdot O(\|\varphi\|) \big)$
time, and Dualize needs in total
$\sum_{t \in PI(f)} \sum_{x_i \in V(t)} \big( T_{(t,i)} + |\psi^{(t,i)}| \cdot O(\|\varphi\|) \big) \qquad (5)$
time.
If the $T_{(t,i)}$ are bounded by a polynomial in the input length, then Dualize becomes a polynomial delay algorithm. On the other hand, if they are bounded by a polynomial in the combined input and output length, then Dualize is a polynomial total time algorithm, since $|\psi^{(t,i)}| \le |\psi|$ holds for all $t \in PI(f)$ and $x_i \in V(t)$ by Lemma 3.2. Using results from [2], we can construct from Dualize an incremental polynomial time algorithm for Dualization, which however might not output $PI(f)$ in increasing order. Summarizing, we have the following corollary.
Corollary 3.5. Let $T = \max_{t,i} T_{(t,i)}$. If $T_{(t,i)}$ is bounded by
(i) a polynomial in $n$ and $\|\varphi\|$, then algorithm Dualize is an $O(n\|\varphi\|T)$ polynomial delay algorithm;
(ii) a polynomial in $n$, $\|\varphi\|$, and $|\psi|$, then algorithm Dualize is an $O(n|\psi|(T + |\psi|\|\varphi\|))$ polynomial total time algorithm; moreover, Dualization is solvable in incremental polynomial time.
In the next section, we identify sufficient conditions for the boundedness of $T_{(t,i)}$ and fruitfully apply them to solve open problems and improve previous results.
4. POLYNOMIAL CLASSES
4.1 Degenerate CNFs
We first consider the case of small $\varphi_i[t]$. Generalizing a notion for graphs (i.e., monotone 2-CNFs) [44], we call a monotone CNF $\varphi$ k-degenerate, if there exists a variable ordering $x_1, \ldots, x_n$ in which $|\Delta_i| \le k$ for all $i = 1, \ldots, n$. We call a variable ordering $x_1, \ldots, x_n$ smallest last, as in [44], if $x_i$ is chosen in the order $i = n, n-1, \ldots, 1$ such that $|\Delta_i|$ is smallest over all variables that were not chosen yet. Clearly, a smallest last ordering gives the least $k$ such that $\varphi$ is k-degenerate. Therefore, we can check for every integer $k \ge 1$ whether $\varphi$ is k-degenerate in $O(\|\varphi\|)$ time. If this holds, then we have $|\psi^{(t,i)}| \le n^k$ and $T_{(t,i)} = O(n^k \|\varphi\|)$, since we can apply the distributive law to $\varphi_i[t]$ and discard the resulting terms that are not prime implicants. Thus Theorem 3.4 implies the following.
Theorem 4.1. For k-degenerate CNFs $\varphi$, Dualization is solvable with $O(\|\varphi\| n^{k+1})$ polynomial delay if $k \ge 1$ is constant.
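The smallest-last ordering can be sketched directly from the definition. The quadratic-time version below is for illustration only and does not achieve the linear-time bound claimed above.

# Smallest-last ordering for a monotone CNF, computing the least k for
# which the CNF is k-degenerate (unoptimized quadratic-time sketch).

def smallest_last(variables, clauses):
    """clauses: list of frozensets of variables. Returns (order, k)."""
    remaining = set(variables)
    order = []                       # built back to front: x_n, ..., x_1
    k = 0
    while remaining:
        def delta(v):                # |Delta_i| if v were placed last
            return sum(1 for c in clauses
                       if v in c and c <= remaining)
        v = min(remaining, key=delta)
        k = max(k, delta(v))
        order.append(v)
        remaining.discard(v)
    order.reverse()                  # order[0] = x_1, ..., order[-1] = x_n
    return order, k

phi = [frozenset({'a', 'b'}), frozenset({'b', 'c'}), frozenset({'a', 'c'})]
print(smallest_last({'a', 'b', 'c'}, phi))   # a triangle is 2-degenerate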
Applying the result of [33] that any monotone CNF which has O(log n) many clauses is dualizable in incremental polynomial time, we obtain a polynomiality result also for non-constant degeneracy:
Theorem 4.2. For $O(\log \|\varphi\|)$-degenerate CNFs $\varphi$, problem Dualization is polynomial total time.
In the following, we discuss several natural subclasses of
degenerate CNFs.
4.1.1 Read-bounded CNFs
A monotone CNF $\varphi$ is called read-k, if each variable appears in $\varphi$ at most k times. Clearly, read-k CNFs are k-degenerate, and in fact $\varphi$ is read-k iff it is k-degenerate under every variable ordering. By applying Theorems 4.1 and 4.2, we obtain the following result.
Corollary 4.3. For read-k CNFs $\varphi$, problem Dualization is solvable
(i) with $O(\|\varphi\| n^{k+1})$ polynomial delay, if $k$ is constant;
(ii) in polynomial total time, if $k = O(\log \|\varphi\|)$.
Note that Corollary 4.3 (i) trivially implies that Dualization is solvable in $O(|\psi| n^{k+2})$ time for constant $k$, since $\|\varphi\| \le kn$. This improves upon the previous best known algorithm [10], which needs $O(|\psi| n^{k+3})$ time, is not polynomial delay, and outputs $PI(f)$ in no specific order. Corollary 4.3 (ii) is a non-trivial generalization of the result in [10], which was posed as an open problem [9].
4.1.2 Acyclic CNFs
Like in graphs, acyclicity is appealing in hypergraphs resp. monotone CNFs from a theoretical as well as a practical point of view. However, there are many notions of acyclicity for hypergraphs (cf. [14]), since different generalizations from graphs are possible. We refer to $\alpha$-, $\beta$-, $\gamma$-, and Berge-acyclicity as stated in [14], for which the following proper inclusion hierarchy is known:
Berge-acyclic $\subset$ $\gamma$-acyclic $\subset$ $\beta$-acyclic $\subset$ $\alpha$-acyclic.
The notion of $\alpha$-acyclicity came up in relational database theory. A monotone CNF $\varphi$ is $\alpha$-acyclic iff it is reducible to 0 (i.e., the empty clause) by the GYO-reduction [21, 45], i.e., repeated application of one of the two rules:
(1) If variable $x_i$ occurs in only one clause $c$, remove $x_i$ from clause $c$.
(2) If distinct clauses $c$ and $c'$ satisfy $V(c) \subseteq V(c')$, remove clause $c$ from $\varphi$.
Note that $\alpha$-acyclicity of a monotone CNF $\varphi$ can be checked, and a suitable GYO-reduction output, in $O(\|\varphi\|)$ time [42]; a direct rendering of the two rules is sketched below.
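The following is a direct, unoptimized implementation of the two rules, for illustration only; the linear-time algorithm of [42] does the same work with careful bookkeeping.

# GYO-reduction: a CNF is alpha-acyclic iff this loop empties every clause.

def gyo_reduce(clauses):
    """clauses: list of sets of variables. Returns True iff alpha-acyclic."""
    clauses = [set(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        # Rule 1: remove a variable occurring in only one clause.
        for c in clauses:
            for v in list(c):
                if sum(1 for d in clauses if v in d) == 1:
                    c.discard(v)
                    changed = True
        # Rule 2: remove a clause contained in another (distinct) clause.
        for i, c in enumerate(clauses):
            if any(j != i and c <= d for j, d in enumerate(clauses)):
                del clauses[i]
                changed = True
                break
    return all(not c for c in clauses)

acyclic = [{'a', 'b'}, {'b', 'c'}, {'b', 'c', 'd'}]
cyclic = [{'a', 'b'}, {'b', 'c'}, {'a', 'c'}]
print(gyo_reduce(acyclic), gyo_reduce(cyclic))   # -> True False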
A monotone CNF $\varphi$ is $\beta$-acyclic iff every CNF consisting of clauses in $\varphi$ is $\alpha$-acyclic. As shown in [13], the prime implicants of a monotone $f$ represented by a $\beta$-acyclic CNF $\varphi$ can be enumerated (and thus Dualization solved) in $p(\|\varphi\|) \cdot |\psi|$ time, where $p$ is a polynomial in $\|\varphi\|$. However, the time complexity of Dualization for the more general $\alpha$-acyclic prime CNFs was left as an open problem. We now show that it is solvable with polynomial delay.
Let $\varphi \ne 1$ be a prime CNF. Let $a = a_1, a_2, \ldots, a_q$ be a GYO-reduction for $\varphi$, where the $\ell$-th operation $a_\ell$ either removes a variable $x_i$ from a clause $c$, or removes a clause $c$ from $\varphi$. Consider the unique variable ordering $b_1, b_2, \ldots, b_n$ such that $b_i$ occurs after $b_j$ in $a$, for all $i < j$.
Example 4.1 exhibits an $\alpha$-acyclic prime CNF together with a GYO-reduction for it; from this sequence, one obtains a variable ordering under which, as easily checked, the CNF is 1-degenerate.
That $\varphi$ is 1-degenerate in this example is not accidental.
Lemma 4.4. Every $\alpha$-acyclic prime CNF is 1-degenerate.
Note that the converse is not true. Lemma 4.4 and Theorem 4.1 imply the following result.
Corollary 4.5. For $\alpha$-acyclic prime CNFs $\varphi$, problem Dualization is solvable with $O(\|\varphi\| n^2)$ delay.
Observe that for a prime $\alpha$-acyclic $\varphi$, we have $|\varphi| \le n$. Thus, if we slightly modify algorithm Dualize to check in advance which $\Delta_i$ are empty (which can be done in linear time in a preprocessing phase), so that such $i$ need not be considered in Step 2, then the resulting algorithm has $O(n\|\varphi\|)$ delay. Observe that the algorithm in [13] solves, minorly adapted for enumerative output, Dualization for $\beta$-acyclic CNFs with $O(n\|\varphi\|)$ delay. Thus, the above modification of Dualize is of the same order.
4.1.3 CNFs with bounded treewidth
A tree decomposition (of type I) of a monotone CNF $\varphi$ is a tree $T = (W, E)$ where each node $w \in W$ is labeled with a set $X(w) \subseteq V(\varphi)$ under the following conditions:
1. $\bigcup_{w \in W} X(w) = V(\varphi)$;
2. for every clause $c$ in $\varphi$, there exists some $w \in W$ such that $V(c) \subseteq X(w)$; and
3. for any variable $x_i$, the nodes $\{w \in W \mid x_i \in X(w)\}$ induce a (connected) subtree of $T$.
The width of $T$ is $\max_{w \in W} |X(w)| - 1$, and the treewidth of $\varphi$, denoted by $Tw_1(\varphi)$, is the minimum width over all its tree decompositions.
Note that the usual definition of treewidth for a graph [41] results in the case where $\varphi$ is a 2-CNF. Similarly to acyclicity, there are several notions of treewidth for hypergraphs resp. monotone CNFs. For example, a tree decomposition of type II of a CNF $\varphi = \bigwedge_{c \in C} c$ is defined as a type-I tree decomposition of its incidence 2-CNF (i.e., graph) $G(\varphi)$ [11, 20]. That is, for each clause $c \in \varphi$, we introduce a new variable $y_c$ and construct $G(\varphi) = \bigwedge_{c \in C} \bigwedge_{x_j \in V(c)} (x_j \vee y_c)$. Let $Tw_2(\varphi)$ denote the type-II treewidth of $\varphi$.
Proposition 4.6. For every monotone CNF $\varphi$, it holds that $Tw_2(\varphi) \le Tw_1(\varphi) + 2^{Tw_1(\varphi)+1}$.
Proof. Let $T = (W, E)$ with labeling $X$ be any tree decomposition of $\varphi$ having width $Tw_1(\varphi)$. Introduce for all $c \in \varphi$ new variables $y_c$, and add $y_c$ to every $X(w)$ such that $V(c) \subseteq X(w)$. Clearly, the result is a type-I tree decomposition of $G(\varphi)$, and thus a type-II tree decomposition of $\varphi$. Since at most $2^{|X(w)|}$ many $y_c$ are added to $X(w)$, and $|X(w)| \le Tw_1(\varphi) + 1$ for every $w \in W$, the result follows.
This means that if $Tw_1(\varphi)$ is bounded by some constant, then so is $Tw_2(\varphi)$. Moreover, $Tw_1(\varphi) \le k - 1$ implies that $\varphi$ is a k-CNF; we discuss k-CNFs in Section 4.2 and only consider $Tw_2(\varphi)$ here. We note that, as shown in the full paper, there is a family of prime CNFs $\varphi$ which have $Tw_2(\varphi)$ bounded by a constant $k$ but are neither $k'$-CNFs for any $k' < n$ nor read-$k'$ for any $k' < n - 1$, and a family of prime CNFs which are k-CNFs for constant $k$ (resp., $\alpha$-acyclic) but where $Tw_2(\varphi)$ is not bounded by any constant.
As we show now, bounded treewidth implies bounded degeneracy.
Lemma 4.7. Let $\varphi$ be any monotone CNF with $Tw_2(\varphi) \le k$. Then $\varphi$ is $2^k$-degenerate.
Proof. (Sketch) Let $T = (W, E)$ with labeling $X$ be a type-II tree decomposition of $\varphi$ of width $k$. From it, we reversely construct a variable ordering $a_1, \ldots, a_n$ on $V(\varphi)$ such that $|\Delta_i| \le 2^k$ for all $i$. Choose any leaf $w'$ of $T$, and let $p(w')$ be the node in $W$ adjacent to $w'$. If $X(w') \setminus X(p(w')) \subseteq \{y_c \mid c \in \varphi\}$, then remove $w'$ from $T$. On the other hand, if $X(w') \setminus X(p(w'))$ contains variables of $\varphi$, place these variables next (in any order) at the end of the ordering constructed so far, and then remove $w'$. We complete the ordering $a$ by repeating this process, and claim that it shows $|\Delta_i| \le 2^k$ for all $i$. Let $w'$ be chosen during this process, and assume that $a_i \in X(w') \setminus X(p(w'))$. Then, for each clause $c \in \Delta_i$, we must have either $y_c \in X(w')$ or $V(c) \subseteq X(w')$. Since $|X(w')| \le k + 1$, it follows that $|\Delta_i| \le 2^k$.
Corollary 4.8. For CNFs $\varphi$ with $Tw_2(\varphi) \le k$, Dualization is solvable
(i) with $O(\|\varphi\| n^{2^k+1})$ polynomial delay, if $k$ is constant;
(ii) in polynomial total time, if $k = O(\log\log \|\varphi\|)$.
4.2 Recursive application of algorithm Dualize
Algorithm Dualize computes in Step 2 the prime DNF $\psi^{(t,i)}$ of the function represented by $\varphi_i[t]$. Since $\varphi_i[t]$ is the prime CNF of some monotone function, we can recursively apply Dualize to $\varphi_i[t]$ for computing $\psi^{(t,i)}$. Let us call this variant R-Dualize. Then we have the following result.
Theorem 4.9. If its recursion depth is $d$, R-Dualize solves Dualization in $O(n^{d-1} |\psi|^{d-1} \|\varphi\|)$ time.
Proof. If $d = 1$, then $PI(f) = \{t_{\min}\}$ and $\varphi$ is a 1-CNF (i.e., each clause in $\varphi$ contains exactly one variable). Thus in this case, R-Dualize needs $O(n)$ time. Recall that algorithm Dualize needs, by (5), time $O(n|\psi|(T + |\psi| \cdot O(\|\varphi\|)))$. If $d = 2$, each $\psi^{(t,i)}$ is computed from a 1-CNF, and therefore R-Dualize needs time $O(n|\psi|\|\varphi\|)$. For $d \ge 3$, Corollary 3.5 (ii) implies that algorithm R-Dualize needs time $O(n^{d-1} |\psi|^{d-1} \|\varphi\|)$.
Recall that a CNF $\varphi$ is called a k-CNF if each clause in $\varphi$ has at most $k$ literals. Clearly, if we apply algorithm R-Dualize to a monotone k-CNF $\varphi$, the recursion depth of R-Dualize is at most $k$. Thus we obtain the following result; it re-establishes, with different means, the main positive result of [6, 13].
Corollary 4.10. Algorithm R-Dualize solves Dualization in time $O(n^{k-1} |\psi|^{k-1} \|\varphi\|)$, i.e., in polynomial total time for monotone k-CNFs $\varphi$ where $k$ is constant.
5. LIMITED NONDETERMINISM
In the previous section, we have discussed polynomial cases
of monotone dualization. In this section, we now turn to
the issue of the precise complexity of this problem. For this
purpose, we consider the decision problem Dual instead of
the search problem Dualization. It appears that problem
Dual can be solved with limited nondeterminism, i.e.,
with poly-log many guessed bits by a polynomial-time non-deterministic
Turing machine. This result might bring new
insight towards settling the complexity of the problem.
We adopt Kintala and Fischer's terminology [28] and write $g(n)$-P for the class of sets accepted by a nondeterministic Turing machine in polynomial time making at most $g(n)$ nondeterministic steps on every input of length $n$. For every $k \ge 1$, let $\beta_k$P $= (\log^k n)$-P. The $\beta$P Hierarchy consists of the classes $\beta_k$P, $k \ge 1$, and lies between P and NP. The $\beta_k$P classes appear to be rather robust; they are closed under polynomial time and logspace many-one reductions and have complete problems (cf. [19]). The complement class of $\beta_k$P is denoted by co-$\beta_k$P.
We start by recalling algorithm A of [15], reformulated for CNFs. In what follows, we view CNFs $\varphi$ also as sets of clauses, and clauses as sets of literals.
Algorithm A (reformulated for CNFs)
Input: Monotone CNFs $\varphi$ and $\psi$, representing monotone $f$, $g$ s.t. $V(c) \cap V(c') \ne \emptyset$ for all $c \in \varphi$, $c' \in \psi$.
Output: yes, if $f = g^d$; otherwise, a vector $w$ with $f(w) \ne g^d(w)$.
Step 1:
Delete all redundant (i.e., non-minimal) implicates from $\varphi$ and $\psi$.
Step 2:
Check that $V(\varphi) = V(\psi)$ and that $\sum_{c \in \varphi} 2^{-|c|} + \sum_{c \in \psi} 2^{-|c|} \ge 1$. If any of these conditions fails, $f \ne g^d$ and a witness $w$ is found in polynomial time (cf. [15]).
Step 3:
If $|\varphi| \cdot |\psi| \le 1$, test duality in $O(1)$ time.
Step 4:
If $|\varphi| \cdot |\psi| \ge 2$, find a variable $x_i$ that occurs in $\varphi$ or $\psi$ (w.l.o.g. in $\varphi$) with frequency $\ge 1/\log(|\varphi| \cdot |\psi|)$.
Let $\varphi_1 = \{c \in \varphi \mid x_i \notin c\}$ and $\varphi_0 = \{c \setminus \{x_i\} \mid c \in \varphi,\ x_i \in c\}$, and define $\psi_1$, $\psi_0$ analogously.
Call algorithm A on the two pairs of forms:
(A.1) $(\varphi_1, \psi_0 \wedge \psi_1)$ and (A.2) $(\psi_1, \varphi_0 \wedge \varphi_1)$.
If both calls return yes, then return yes (as $f = g^d$); otherwise we obtain $w$ such that $f(w) \ne g^d(w)$ in polynomial time (cf. [15]).
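A simplified sketch of this self-reduction follows; it keeps the decomposition of step 4 but omits the frequency threshold, the redundancy elimination of step 1, and the witness construction, and it falls back to brute force in the base cases.

# Simplified duality check by the step-4 self-reduction (illustrative).

def is_dual(phi, psi, variables):
    """phi, psi: monotone CNFs as lists of frozensets of variables."""
    if not variables or not phi or not psi or len(phi) * len(psi) <= 1:
        return brute_force_dual(phi, psi, variables)
    # pick a most frequent variable (the real algorithm insists on
    # frequency >= 1/log of the volume |phi||psi|)
    x = max(variables, key=lambda v: sum(v in c for c in phi + psi))
    phi1 = [c for c in phi if x not in c]       # CNF of f with x := 1
    phi0 = [c - {x} for c in phi if x in c]     # extra clauses for x := 0
    psi1 = [c for c in psi if x not in c]
    psi0 = [c - {x} for c in psi if x in c]
    rest = variables - {x}
    return (is_dual(phi1, psi0 + psi1, rest) and   # the (A.1) branch
            is_dual(psi1, phi0 + phi1, rest))      # the (A.2) branch

def brute_force_dual(phi, psi, variables):
    vs = sorted(variables)
    for bits in range(2 ** len(vs)):
        w = {v for i, v in enumerate(vs) if bits >> i & 1}
        f = all(c & w for c in phi)             # CNF value of f at w
        gd = any(t <= w for t in psi)           # g^d: psi's clauses as terms
        if f != gd:
            return False
    return True

phi = [frozenset({'x1', 'x2'}), frozenset({'x2', 'x3'})]
psi = [frozenset({'x2'}), frozenset({'x1', 'x3'})]
print(is_dual(phi, psi, {'x1', 'x2', 'x3'}))    # -> True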
Let $(\varphi^0, \psi^0)$ be the original input for A. For any pair $(\varphi, \psi)$ of CNFs, define its volume by $v = |\varphi| \cdot |\psi|$. As shown in [15], step 4 of algorithm A divides the current (sub)problem of volume $v$ by self-reduction into subproblems (A.1) and (A.2) of respective volumes (assuming that $x_i$ frequently occurs in $\varphi$):
$v(A.1) \le \Big(1 - \frac{1}{\log v}\Big) v, \qquad (6)$
$v(A.2) \le v - 1. \qquad (7)$
Let $T(\varphi^0, \psi^0)$ be the recursion tree generated by A on input $(\varphi^0, \psi^0)$, i.e., the root is labeled with $(\varphi^0, \psi^0)$. Any node $a$ labeled with $(\varphi, \psi)$ is a leaf, if A stops on input $(\varphi, \psi)$ during steps 1-3; otherwise, $a$ has a left child $a_l$ and a right child $a_r$ corresponding to (A.1) and (A.2), i.e., labeled $(\varphi_1, \psi_0 \wedge \psi_1)$ and $(\psi_1, \varphi_0 \wedge \varphi_1)$, respectively. That is, $a_l$ is the "high frequency move" by the splitting variable.
We observe that every node a in T is determined by a unique
path from the root to a in T and thus by a unique sequence
seq(a) of right or left moves starting from the root of T and
ending at a. The following key lemma bounds the number
of moves of each type for certain inputs.
Lemma 5.1. Suppose $|\varphi^0| \le |\psi^0|$. Then for any node $a$ in $T$, seq(a) contains $\le v$ right and $\le \log^2 v$ left moves, where $v = |\varphi^0| \cdot |\psi^0|$.
Proof. By (6) and (7), each move decreases the volume $v$ of a node label. Thus, the length of seq(a), and in particular the number of right moves, is bounded by $v$. To obtain the better bound for the left moves, we will use the following well-known inequality:
$\Big(1 - \frac{1}{m}\Big)^m \le 1/e, \quad \text{for } m \ge 1. \qquad (8)$
In fact, the sequence $(1 - 1/m)^m$ monotonically converges to $1/e$ from below. By inequality (6), the volume $v_a$ of the label of any node $a$ such that seq(a) contains $\log^2 v$ left moves is bounded as follows:
$v_a \le v \Big(1 - \frac{1}{\log v}\Big)^{\log^2 v}.$
Because of (8) it follows that:
$v_a \le v \cdot (1/e)^{\log v} < 1.$
Thus, $a$ must be a leaf in $T$. Hence for every $a$ in $T$, seq(a) contains at most $\log^2 v$ left moves.
Theorem 5.2. Problem Dual is in co-$\beta_3$P.
Proof. (Sketch) Instances such that either $c \cap c' = \emptyset$ for some $c \in \varphi$ and $c' \in \psi$, the sequence seq(a) is empty, or $|\varphi^0| > |\psi^0|$, are easily solved in deterministic polynomial time. In the remaining cases, if $f \ne g^d$, then there exists a leaf $a$ in $T$ labeled by a non-dual pair $(\varphi', \psi')$. If seq(a) is known, we can compute, by simulating A on the branch described by seq(a), the entire path from the root to $a$ with all labels, and check that $(\varphi', \psi')$ is non-dual in steps 2 and 3 of A in polynomial time.
We observe that, as noted in [15], the binary length of any standard encoding of the input $\varphi^0$, $\psi^0$ is polynomially related to $v = |\varphi^0| \cdot |\psi^0|$ if algorithm A reaches step 3. Thus, to prove the theorem, it is sufficient to show that seq(a) is obtainable in polynomial time from $O(\log^3 v)$ suitably guessed bits. To see this, let us represent every seq(a) as a sequence seq$'(a) = [\delta_0, \delta_1, \ldots, \delta_\ell]$, where $\delta_0$ is the number of leading right moves and $\delta_i$ is the number of consecutive right moves after the $i$-th left move in seq(a), for $i = 1, \ldots, \ell$. For example, if seq(a) = RRLRRRL, then seq$'(a) = [2, 3, 0]$. By Lemma 5.1, seq$'(a)$ has length at most $\log^2 v + 1$. Thus, seq$'(a)$ occupies only $O(\log^3 v)$ bits in binary; moreover, seq(a) is trivially computed from seq$'(a)$ in polynomial time.
Remark 5.1. It also follows that if $f \ne g^d$, a witness $w$ can be found in polynomial time within $O(\log^3 n)$ nondeterministic steps. In fact, the sequence seq(a) to a "failing" leaf labeled $(\varphi', \psi')$ describes a choice of values for all variables in $V(\varphi) \setminus V(\varphi')$. By completing it with values for $V(\varphi')$ which show non-duality of $(\varphi', \psi')$, we obtain in polynomial time a vector $w$ such that $f(w) \ne g^d(w)$.
The aim of the above proof was to show with very simple means that duality can be polynomially checked with limited nondeterminism. With a more involved proof, applied to the algorithm B of [15] (which runs in $n^{4\chi(n)+O(1)}$ and thus $n^{o(\log n)}$ time), we can prove the following sharper result.
Theorem 5.3. Deciding if monotone CNFs $\varphi$ and $\psi$ are non-dual is feasible in polynomial time with $O(\chi(n) \log n)$ nondeterministic steps. Thus, problem Dual is in co-$\beta_2$P.
While our independently developed methods are different
from those in [1], the previous result may also be obtained
from Beigel and Fu's Theorem 11 in [1]. They show how
to convert certain recursive algorithms that use disjunctive
self-reductions and have runtime bounded by f(n) into polynomial
algorithms using log f(n) nondeterministic steps (cf.
[1, Chapter 5]). However, this yields a somewhat more complicated
nondeterministic algorithm. In the full paper, we
also prove that algorithm B qualifies for this.
6. ACKNOWLEDGMENTS
This work was supported in part by the Austrian Science
Fund project Z29-INF, by TU Wien through a scientific
collaboration grant, and by the Scientific Grant in
Aid of the Ministry of Education, Science, Sports and Culture
of Japan. We would like to thank the reviewers for their
constructive comments on this paper.
7. REFERENCES
--R
Molecular computing
Complexity of identification and dualization of positive Boolean functions.
On generating all minimal integer solutions for a monotone system of linear inequalities.
On the complexity of generating maximal frequent and minimal infrequent sets.
Dual subimplicants of positive Boolean functions.
time recognition of 2-monotonic positive Boolean functions given by an oracle
Dualization of regular Boolean functions.
Private communication.
Conjunctive query containment revisited.
Exact transversal hypergraphs and application to Boolean
Identifying the minimal transversals of a hypergraph and related problems.
Degrees of acyclicity for hypergraphs and relational database schemes.
On the complexity of dualization of monotone disjunctive normal forms.
How to assign votes in a distributed system.
Incremental recompilation of knowledge.
Data mining
Limited nondeterminism.
Hypertree decompositions and tractable queries.
On the universal relation.
A theory of coteries: Mutual exclusion in distributed systems.
Translating between Horn representations and their characteristic models.
"G. Stampacchia"
On generating all maximal independent sets.
Refining nondeterminism in relativized polynomial-time bounded computations
Generating all maximal independent sets: NP-hardness and polynomial-time algorithms
Combinatorial optimization: Some problems and trends.
The maximum latency and identification of positive Boolean functions.
A fast and simple algorithm for identifying 2-monotonic positive Boolean functions
Generating all maximal independent sets of bounded-degree hypergraphs
A retrospective
An O(nm)-time algorithm for computing the dual of a regular Boolean function
Coherent Structures and Simple Games.
Every one a winner
A theory of diagnosis from first principles.
Graph minors II: Algorithmic aspects of tree-width
Simple linear time algorithms to test chordality of graphs
Minimal keys and antikeys.
Colouring, stable sets and perfect graphs.
An algorithm for tree-query membership of a distributed query
--TR
Simple linear-time algorithms to test chordality of graphs, test acyclicity of hypergraphs, and selectively reduce acyclic hypergraphs
How to assign votes in a distributed system
Design by exmple: An application of Armstrong relations
The minimal keys and antikeys
A theory of diagnosis from first principles
Dualization of regular Boolean functions
On generating all maximal independent sets
An O(<italic>nm</italic>)-time algorithm for computing the dual of a regular Boolean function
Exact transversal hypergraphs and application to Boolean μ-functions
Identifying the Minimal Transversals of a Hypergraph and Related Problems
Complexity of identification and dualization of positive Boolean functions
Colouring, stable sets and perfect graphs
Limited nondeterminism
On the complexity of dualization of monotone disjunctive normal forms
Polynomial-Time Recognition of 2-Monotonic Positive Boolean Functions Given by an Oracle
Data mining, hypergraph transversals, and machine learning (extended abstract)
The Maximum Latency and Identification of Positive Boolean Functions
Generating all maximal independent sets of bounded-degree hypergraphs
A fast and simple algorithm for identifying 2-monotonic positive Boolean functions
Hypertree decompositions and tractable queries
Degrees of acyclicity for hypergraphs and relational database schemes
Efficient Read-Restricted Monotone CNF/DNF Dualization by Learning with Membership Queries
Dual-Bounded Generating Problems
A Theory of Coteries
Conjunctive Query Containment Revisited
On Generating All Minimal Integer Solutions for a Monotone System of Linear Inequalities
NP-Completeness
On Horn Envelopes and Hypergraph Transversals
On the Complexity of Generating Maximal Frequent and Minimal Infrequent Sets
--CTR
Dimitris J. Kavvadias , Elias C. Stavropoulos, Monotone boolean dualization is in co-NP[log2n], Information Processing Letters, v.85 n.1, p.1-6, January
Thomas Eiter , Kazuhisa Makino, On computing all abductive explanations, Eighteenth national conference on Artificial intelligence, p.62-67, July 28-August 01, 2002, Edmonton, Alberta, Canada
Georg Gottlob , Reinhard Pichler , Fang Wei, Tractable database design through bounded treewidth, Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 26-28, 2006, Chicago, IL, USA
Leonid Khachiyan , Endre Boros , Khaled Elbassioni , Vladimir Gurvich, A global parallel algorithm for the hypergraph transversal problem, Information Processing Letters, v.101 n.4, p.148-155, February, 2007
Peter L. Hammer , Alexander Kogan , Bruno Simeone , Sndor Szedmk, Pareto-optimal patterns in logical analysis of data, Discrete Applied Mathematics, v.144 n.1-2, p.79-102, November 2004 | hypergraph acyclicity;dualization;treewidth;limited nondeterminism;output-polynomial algorithms;transversal computation;combinatorial enumeration |
510729 | Multicast Video-on-Demand services. | The server's storage I/O and network I/O bandwidths are the main bottleneck of VoD service. Multicast offers an efficient means of distributing a video program to multiple clients, thus greatly improving the VoD performance. However, there are many problems to overcome before development of multicast VoD systems. This paper critically evaluates and discusses the recent progress in developing multicast VoD systems. We first present the concept and architecture of multicast VoD, and then introduce the techniques used in multicast VoD systems. We also analyze and evaluate problems related to multicast VoD service. Finally, we present open issues on multicast VoD as possible future research directions. | INTRODUCTION
A typical Video-on-Demand (VoD) service allows remote
users to play back any one of a large collection of videos
at any time. Typically, these video files are stored in a
set of central video servers, and distributed through high-speed
communication networks to geographically-dispersed
clients. Upon receiving a client's service request, a server delivers
the video to the client as an isochronous video stream.
Each video stream can be viewed as a concatenation of a storage-I/O "pipe" and a network pipe. Thus, sufficient storage-I/O bandwidth must be available for continuous transfer of data from the storage system to the network interface card (NIC), which must, in turn, have enough bandwidth to forward data to clients. Thus, a video server has to reserve sufficient I/O and network bandwidths before accepting a client's request. We define a server channel as the server resource required to deliver a video stream while guaranteeing a client's continuous playback.
This work was partly done during Huadong Ma's visit to RTCL at the University of Michigan. The work was supported in part by the USA NSF under Grant EIA-9806280 and the Natural Science Foundation of China under Grant 69873006.
This type of VoD service has a wide spectrum of appli-
cations, such as home entertainment, digital video library,
movie-on-demand, distance learning, tele-shopping, news-
on-demand, and medical information service. In general,
the VoD service can be characterized as follows.
Long-lived session: a VoD system should support long-lived sessions; for example, a typical movie-on-demand service usually lasts 90-120 minutes.
High bandwidth requirements: for example, server storage
I/O and network bandwidth requirements are 1.5
Mbps (3-10 Mbps) for a MPEG-1 (MPEG-2) stream.
Support for VCR-like interactivity: a client requires the VoD system to offer VCR-like interactivity, such as the
ability to play, forward, reverse and pause. Other advanced
interactive features include the ability to skip
or select advertisements, investigate additional details
behind a news event (by hypermedia link), save the
program for a later reference, and browse, select and
purchase goods.
QoS-sensitive service: the QoS aspects that VoD consumers and service providers care about include service latency, defection rate, interactivity, playback effects of videos, etc.
A conventional TVoD system uses one dedicated channel for each service request, offering the client the best TVoD service. However, such a system incurs very high costs, especially in terms of storage-I/O and network bandwidths. Moreover, such a VoD service has poor scalability and low performance/cost efficiency. Although the conventional approach simplifies the implementation, not sharing channels for client requests will quickly exhaust the network and the server I/O bandwidth. In fact, the network-I/O bottleneck has been observed in many earlier systems, such as Time Warner Cable's Full Service Network Project in Orlando [68], and Microsoft's Tiger Video Fileserver [12]. In order to support a large population of clients, we therefore need new solutions that efficiently utilize the server and network resources.
Clearly, the popularity or access pattern of video objects plays an important role in determining the effectiveness of a video delivery technique. Because different videos are requested at different rates and at different times, videos are usually divided into hot (popular) and cold (less popular), and requests for the top 10-20 videos are known to constitute 60-80% of the total demand. So, it is crucial to improve the service efficiency of hot videos.
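This skew is commonly modeled by a Zipf distribution. The snippet below illustrates the demand share of the most popular videos; the parameter value 0.729 is one that has appeared in VoD studies and is an illustrative assumption here, not a claim of this paper.

# Zipf model of video popularity: the probability of requesting the
# i-th most popular of N videos is proportional to 1/i^theta.

def zipf_top_share(n_videos=100, top=20, theta=0.729):
    weights = [1.0 / (i ** theta) for i in range(1, n_videos + 1)]
    return sum(weights[:top]) / sum(weights)

print(f"{zipf_top_share():.0%} of requests go to the top 20 of 100 videos")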
Thus, requests by multiple clients for the same video arriving within a short time interval can be batched together and serviced using a single stream. This is referred to as batching. The multicast facility of modern communication networks [24, 25, 59] offers an efficient means of one-to-many¹ data transmission. The basic idea is to avoid transmitting the same packet more than once on each link of the network by having branch routers duplicate and then send the packet over multiple downstream branches. Multicast can significantly improve the VoD performance, because it
reduces the required network bandwidth greatly, thereby decreasing the overall network load;
alleviates the workload of the VoD server and improves the system throughput by batching requests;
offers excellent scalability which, in turn, enables servicing a large number of clients; and
provides excellent cost/performance benefits.
In spite of these advantages, multicast VoD (MVoD) introduces new and difficult challenges, as listed below, that may make the system more complex, and may even degrade a particular customer's QoS.
It is difficult to support VCR-like interactivity with multicast VoD service while improving service efficiency.
Batching makes the clients arriving at different times share a multicast stream, which may incur a long service latency (or waiting time), causing some clients to renege.
A single VoD stream from one server cannot support clients' heterogeneity due mainly to diverse customer premise equipments (CPEs).
A multicast session makes it difficult to manage the system protocol and diverse clients.
Multicast VoD introduces the complex legal issue of copyright protection.
A multicast VoD system must therefore overcome the above drawbacks without losing its advantages. This paper critically reviews the recent progress in multicast VoD (including general VoD techniques) and discusses open issues in multicast VoD.
The remainder of this paper is organized as follows. Section
2 introduces the concepts and architectures of a multi-cast
VoD system, and analyzes the problems in developing
it. Section 3 reviews the implementations of multicast VoD.
Section 4 discusses the issues related to multicast VoD ser-
vice. Finally, Section 5 summarizes the paper and discusses
open issues in implementing multicast VoD.
2. OVERVIEW OF MVOD SERVICE
¹Multicast also covers multipoint-to-multipoint communication, but for our purpose in this paper, it suffices to consider only one-to-many communication.
Classification | Features
No-VoD | similar to broadcast TV, in which the user is a passive participant and has no control over the session.
PPV | in which the user signs up and pays for specific programming, similar to existing CATV PPV services.
QVoD | in which users are grouped based on a threshold of interest. Users can perform rudimentary temporal control activities by switching to a different group.
NVoD | functions like forward and reverse are simulated by transitions in discrete time intervals. This capability can be provided by multiple channels with the same programming skewed in time.
TVoD | the user has complete control over the session presentation. The user has full-function VCR capabilities, including forward and reverse play, freeze, and random positioning.
Table 1: Classification of VoD systems
2.1 The taxonomy of VoD systems
A true VoD system allows a user to view any video, at any time, and in any interactive mode. Based on the amount of interactivity and the ability to control videos, VoD systems are classified as Broadcast (No-VoD), Pay-Per-View (PPV), Quasi Video-on-Demand (QVoD), Near Video-on-Demand (NVoD), and True Video-on-Demand (TVoD) [55], which are listed and compared in Table 1.
Obviously, TVoD is the ideal service. For TVoD service, the simplest scheme of scheduling server channels is to dedicate a channel to each client, but this requires too many channels to be affordable. Since a client may be willing to pay more for TVoD service than for non-TVoD service, sharing a channel among clients is a reasonable way to improve the VoD performance and lower clients' cost. In fact, multicast can support all types of VoD services while consuming far fewer resources.
2.2 VCR interactivity of VoD
Interactivity is an essential feature of VoD service. After
their admission, customers can have the following types
of interactions: Play/Resume, Stop/Pause/Abort, Fast Forward/Rewind, Fast Search/Reverse Search, and Slow Motion, as identified in [54].
A TVoD service may also provide the support for other
interactions such as Reverse and Slow Reverse, which correspond
to a presentation in the reverse direction, at normal
or slow speed. Usually, we don't consider them as part of
the usual interactive behavior of a customer.
We classify interactive operations into two types: (1) forward
interactions, such as Fast Forward and Fast Search;
(2) backward interactions, such as Rewind, Reverse Search,
Slow Motion, and Stop/Pause. This classification depends
on whether the playback rate after interactive operations is
faster than the normal playback or not. In order to understand
the limited support provided by default in multicast
VoD systems, one can identify two types of interactivity:
continuous or discontinuous interaction [7].
Figure 1: DAVIC reference model
Continuous in-
teractive functions allow a customer to fully control the duration
of all actions to support TVoD service, whereas discontinuous
interactive functions allow actions to be specified only for durations that are integer multiples of a predetermined time increment to support NVoD service. Note that
the size of discontinuity is a measure of the QoS experienced
by the customers from NVoD service.
From the implementation's perspective, we also categorize
interactions as interactions with picture or interactions without picture. Fast/Reverse Search and Slow Motion are typical interactions with picture, whereas Fast Forward and Rewind are typical interactions without picture. In general, it is easier to implement interactions without picture because they require fewer system resources.
2.3 The architecture of multicast VoD systems
2.3.1 The reference model of VoD systems
The Digital Audio-Visual Council (DAVIC), founded in 1994, is a non-profit organization which has charged itself with the task of promoting broadband digital services by the timely availability of internationally-agreed specifications of open interfaces and protocols that maximize interoperability across countries and applications or services. According to
the DAVIC reference model shown in Figure 1 [27], a VoD
system generally consists of the following entities:
Content Provider System (CPS), which owns and sells video programs to the service provider;
Service Provider System (SPS), a collection of system functions that accept, process and present information for delivery to a service consumer system;
Service Consumer System (SCS), which is responsible for the primary functions that allow a consumer to interact with the SPS and is implemented in a customer premise equipment (CPE);
CPS-SPS and SPS-SCS network providers.
A consumer generates a request for service to the provider, who obtains the necessary material from the program (content) provider and delivers it to the consumer using the network provider's facilities. The SPS acts as an agent for consumers and can access the various types of CPS. The network, CPS, and SPS can belong to the same organization, but they are generally different. DAVIC-based VoD systems have been developed, such as the one in [71], ARMIDA [52], the NIST VoD system [45], KYDONIA [19], and the Broadband Interactive VoD system at Beijing Telecommunications.

Figure 2: A multicast VoD system
The reference model is also suitable for specifying the architecture of MVoD systems. Consider the typical MVoD delivery system shown in Figure 2 [8, 35]. Consumers make program requests to the manager server (Service Provider). A request is received and queued by the manager server until the scheduler is ready to allocate a logical channel to deliver video streams from a video object storage to a group of consumers (a multicast group) across a high-speed network. The manager server organizes the media server and network resources to deliver a video stream into a channel. A channel can be either a unicast or a multicast channel. The media server receives consumer requests for video objects via the manager server, processes them, and determines when and on which channels to deliver the requested video objects to the consumers.
Each consumer accesses the system through a CPE which includes a set-top box (STB), a disk and a display monitor. A consumer is connected to the network via an STB, which selects one or more network channels to receive the requested video objects according to the server's instructions. The received video objects are either sent to the display monitor for immediate playback, or temporarily stored on the disk to be retrieved and played back later.
2.3.2 Hierarchical VoD systems
Large-scale VoD systems require the servers to be arranged as a distributed system in order to support a large number of concurrent streams. In a hierarchical system, an end-node server handles the requests from a particular area, and the next server up the hierarchy takes over requests that the end-node servers cannot handle. This architecture provides cost efficiency, reliability and scalability of servers. Generally, servers are arranged either tree-shaped [60] or graph-structured [76, 77], as shown in Figure 3. A graph-structured system often offers good QoS for handling requests, but the management of requests, videos and streams is complicated. A tree-shaped system can easily manage requests, videos and streams, but it offers poorer QoS than the former. In order to evaluate the effectiveness of distribution strategies in such a hierarchy, the authors of [39] investigated how to reduce storage and network costs while taking the customers' behaviors into account.
Although some hierarchical architectures were originally designed for unicast VoD services, they can also be used for multicast VoD to further improve the efficiency of service.
Figure 3: Hierarchical architecture of a VoD system

2.4 Problems with multicast VoD
Given below are the desired properties of a multicast VoD system.
Efficiency: The system should impose a minimal additional burden on the server and the network, and should sufficiently utilize critical resources on the server and the network.
Real-Time: The system should respond to the consumer requests and transmit the requested videos in real time.
Scalability: The system should scale well with the number of clients.
Interactivity: The system should provide the clients full control of the requested video by using VCR-like interactive functions.
Reliability: The system should be robust to failures in the server and the network, and easy to recover from failures. The transmission of messages and video streams should also be reliable.
Security: The system should provide efficient support for copyright protection in transmitting video streams to multiple clients.
Ability to deal with heterogeneity: The system should deal with heterogeneous networks and CPEs.
Fairness: The system should provide "fair" scheduling of videos with different popularities so as to treat all customers "fairly."
In order to meet the above requirements, we must solve the following key problems.
The first problem is how to deal with the coupling between system throughput and the batching interval. Increasing the batching interval can save server and network resources significantly, at the expense of increasing the chance of customers' reneging: consumers are likely to renege if they are forced to wait too long, whereas shortening their waiting time diminishes the benefits of multicast VoD. To make this tradeoff, we must shorten all requests' waiting time while enabling each multicast session to serve as many consumers as possible.
The second problem is how to support both scalability and interactivity. Support for full interactivity requires an "individualized" service for each customer by dedicating an interaction- (or I-) channel per consumer, which limits the scalability of multicast VoD. We need a fully-interactive on-demand service in multicast VoD systems that does not compromise system scalability and economic viability.
The third problem is how to guarantee customers' QoS with limited bandwidth. In multicast VoD, customers' QoS can be expressed in terms of the waiting time before receiving service (the service latency), the customers' defection rate due to long waits, the VCR action blocking probability, and the playback effect. However, since system resources are limited, we must strive to maximize their utilization.
Moreover, multicast VoD service generally favors popular videos, but how to serve requests for unpopular videos in a multicast VoD framework is also important to the fairness of service.
3. IMPLEMENTATION OF MVOD
3.1 Storage organization
There are two types of servers: manager and media servers. The manager server (Service Provider) is responsible for billing and connection management, while the media server, the focus of this section, handles real-time retrieval and delivery of video streams.
The main challenge in the design of a video server is how to utilize storage efficiently. When designing cost-effective video storage, one must consider issues such as the placement of data on disks, disk bandwidth, and disk-access QoS.
We consider the following main storage requirements.
The VoD server requires a large storage capacity. A 100-minute MPEG-2 video with a transfer rate of 4 Mbps requires approximately 3 GBytes of storage space (see the sketch after this list).
Video objects are difficult to handle due to their large volume/size and the stringent requirement of real-time continuous playback.
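The storage figure above follows from a one-line calculation; the sketch below (Python, with the numbers taken from the text) reproduces it.

```python
def video_storage_bytes(length_min: float, rate_mbps: float) -> float:
    """Storage needed for a constant-bit-rate video (ignoring container overhead)."""
    bits = length_min * 60 * rate_mbps * 1e6   # total bits at the given rate
    return bits / 8                            # convert bits to bytes

# A 100-minute MPEG-2 video at 4 Mbps:
print(video_storage_bytes(100, 4) / 1e9)      # -> 3.0 (about 3 GBytes)
```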
Most existing studies consider the use of multiple disks organized in the form of a disk farm or disk array. A video server typically uses a disk array for large video data. When designing such a disk-array based VoD server, we must deal with several constraints on resource allocation to provide scalability, versatility, and load-balancing. Scalability is defined as the ability to absorb significant workload fluctuations and overloads without affecting admission latency, while versatility is defined as the ability to reconfigure the VoD server with minimal disturbance to service availability. High-level versatility is also desirable for expandability, to ensure that new devices can be added easily. Each video can be stored on a single disk or striped over multiple disks.
There are two basic types of storage organization. The first type completely partitions the storage among different movie titles. Such a storage system is said to have a completely-partitioned (CP) organization, and may be found in small-scale VoD servers which store One Movie title Per Disk (OMPD). The second type completely shares the storage among different movie titles, and is said to have a completely-shared (CS) organization. VoD servers store movie titles using fine-grained striping (FGS) or coarse-grained striping (CGS) [65] of videos across disks in order to effectively utilize disk bandwidth. In FGS (similar to RAID-3), the stripe unit is relatively small and every retrieval involves all n disks, which behave like a single logical disk with bandwidth nB (B is the bandwidth of one disk). In CGS, each retrieval block consists of a large stripe unit which is read from only a single disk, but different disks can simultaneously serve independent requests. CGS with parity information maintained on one or several dedicated disks corresponds to RAID-5 [11, 66].
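As a minimal sketch of the two striping styles, the following shows which disks a single retrieval touches, assuming 8 disks and round-robin block placement; the function names are ours, not from [65].

```python
N_DISKS = 8

def fgs_retrieval(block: int) -> list[int]:
    """Fine-grained striping (RAID-3 style): every retrieval touches all
    disks, which then act as one logical disk with bandwidth n*B."""
    return list(range(N_DISKS))

def cgs_retrieval(block: int) -> list[int]:
    """Coarse-grained striping: each block is a large stripe unit read from
    a single disk; other disks can serve independent requests in parallel."""
    return [block % N_DISKS]

print(fgs_retrieval(5))  # [0, 1, ..., 7] -- all disks busy for one stream
print(cgs_retrieval(5))  # [5]            -- other disks free for other streams
```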
CP organizations typically trade availability (disks can fail or be brought off-line for update without affecting the entire service) for increased latency and costly, inefficient use of storage capacity. CS organizations ensure a very low latency and high storage utilization, but reconfigurations risk the availability of the entire VoD server. Studies in [3, 18] have shown that video striping improves disk utilization and load-balancing, and hence increases the number of concurrent streams. [3, 63] considered both CGS and FGS, and concluded that the former can support more concurrent video streams than the latter. This is because a disk has a relatively high latency for data access (10-20 ms), and a sufficient amount of video data must be transferred in each disk access in order to improve the utilization of the effective disk transfer bandwidth.

Maximum Queue Length First (MQLF) [21]: the video with the largest number of pending requests is served first; maximizes the server throughput but is unfair to unpopular videos.
First-Come-First-Served (FCFS): the oldest request (with the longest waiting time) is served next; fair, but yields a lower system throughput.
Maximum Factored Queue Length First (MFQLF) [5]: the pending batch with the largest size weighted by the factor (the associated access frequency)^(-1/2) is served next; achieves a throughput close to that of MQLF without compromising fairness.
Look-Ahead-Maximize-Batch (LAMB) [33]: a channel is allocated to a queue if and only if a head-of-the-line user is about to leave the system without being served; maximizes the number of admitted users in a certain time window but is unfair to some requests.
Group-Guaranteed Server Capacity (GGSC) [81]: server capacity is pre-assigned to groups of objects; meets a given performance objective for the specific group.

Table 2: Multicast batching policies
3.2 User-centered scheduling strategies
A conventional VoD system assumes a user-centered scheduling scheme [4, 83] in which a user eventually acquires some dedicated bandwidth. This can be achieved by providing (1) a sufficient bandwidth equal to an object's consumption rate multiplied by the number of users, or (2) less bandwidth, for which the users compete by negotiating with a scheduler. The consumption rate of a video object is the amount of bandwidth necessary to view it continuously. When a client makes a request to the server, the server sends the requested object to the client via a dedicated channel. This scheme incurs high system costs, especially in terms of server storage-I/O and network bandwidth. To maximally utilize these channels, researchers have proposed efficient scheduling techniques [20, 32, 43, 51, 61, 62, 64, 84]. These techniques are said to be "user-centered" because channels are allocated to users, not to data or objects. They simplify the implementation, but dedicating a stream to each viewer quickly exhausts the network-I/O bandwidth.
3.3 Data-centered scheduling strategies
To address the network-I/O bottleneck faced by user-centered scheduling, one can use data-centered scheduling, which dedicates channels to video objects instead of users. It allows users to share a server stream by batching their requests. That is, requests by multiple clients for the same video arriving within a short time interval can be batched together and served by a single stream.
The data-centered scheme has the potential to dramatically reduce the network and server bandwidth requirements. The data-centered multicast VoD service can be either client-initiated or server-initiated [35]. In the client-initiated service, channels are allocated among the users and the service is initiated by clients, so it is also known as a scheduled or client-pull service. In the server-initiated service, the server channels are dedicated to individual video objects, so it is also called a periodic broadcast or server-push service. Popular videos are broadcast periodically in this scheme, and a new request dynamically joins, with a small delay, the stream that is being broadcast. In practice, it is efficient to use hybrid batching that combines the above two schemes.
3.3.1 Client-initiated multicast schemes
With client-initiated multicast, when a server channel becomes available, the server selects a batch to multicast according to the scheduling policies in Table 2.
The equally-spaced batching mechanism has a fixed maximum service latency and supports NVoD interactivity, but its usually-large service latency may cause some clients to renege. In order to reduce the service latency, dynamic multicast has been proposed, where the multicast tree is expanded dynamically to accommodate new requests.
For example, Adaptive Piggybacking [38] allows clients arriving at different times to share a data stream by altering the playback rates of in-progress requests (for the same object), for the purpose of merging their respective video streams into a single stream that can serve the entire group of merged requests. This approach can lower the service latency compared to simple batching. But it is restricted in that the variation of the playback rate must stay within, say, 5% of the normal playback rate, or it will result in a perceivable deterioration of QoS. This limits the number of streams that can be merged.
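The merging limit follows from simple arithmetic: with the rate variation capped at about 5%, two streams close their gap at roughly 0.1 video-seconds per second of playback. A sketch with illustrative numbers of our own:

```python
def merge_time(gap_s: float, speedup: float = 1.05, slowdown: float = 0.95) -> float:
    """Seconds of real time needed before a sped-up trailing stream reaches
    the playback position of a slowed-down leading stream."""
    closing_rate = speedup - slowdown          # video-seconds gained per second
    return gap_s / closing_rate

# Streams 3 minutes apart need half an hour of altered-rate playback to merge:
print(merge_time(180) / 60)   # -> 30.0 minutes
```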
Chaining [77] is also a generalized dynamic multicast technique; it reduces the demand on the network-I/O bandwidth by caching data in the clients' local storage to facilitate future multicasts. Thus, data are actually pipelined through the client stations residing at the nodes of the respective chaining tree, and the server serves a "chain" of client stations using only a single data stream. The advantage of chaining is that not every request has to receive its data directly from the server: a large amount of video also becomes available from clients located throughout the network. This scheme scales well because each client station using the service also contributes its resources to the community. Hence, the larger the chaining trees, the more effectively the application can utilize the aggregate bandwidth.
The authors of [16] present stream tapping, which allows a client to greedily "tap" data from any stream on the VoD server that contains video data s/he can use. This is accomplished through the use of a small buffer on the CPE, and requires less than 20% of the disk bandwidth used by conventional systems for popular videos.
To eliminate the service latency, patching was introduced in [41]. The objective of patching is to substantially improve the number of requests each channel can serve per time unit, thereby sufficiently reducing the per-customer system cost. In the patching scheme, channels are often used to patch the missing portion of a service, i.e., to deliver a patching stream, rather than to multicast the video in its entirety. Given an existing multicast of a video, when to schedule another multicast for the same video is crucial. The time period after a multicast during which patching must be used is called the patching window [14]. Two simple approaches to setting the patching window are discussed in [41]. The first one uses the length of the video as the patching window. That is, no multicast is initiated as long as there is an in-progress multicast session for the video. This approach is called greedy patching because it tries to exploit an in-progress multicast as much as possible. However, over-greediness can actually reduce data sharing [41]. The second approach, called grace patching, uses a patching stream for the new client only if it has enough buffer space to absorb the skew. Hence, under grace patching, the patching window is determined by the client buffer size. Considering such factors as video length, client buffer size, and request rate, the authors of [15] generalized patching by determining the optimal patching window for each video. An improved form of patching, called transition patching [15], uses either a patching stream or a transition stream and improves performance without requiring any extra download bandwidth at the client site. Other optimal patching schemes are described in [30, 74]. In patching, a client might have to download data on both the regular multicast and the patching channel simultaneously. To implement patching, a client station needs three threads: two data loaders to download data from the two channels, and a video player to play back the video.
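The grace-patching rule can be phrased as a small decision procedure. The sketch below paraphrases the idea from [41], with hypothetical names and numbers of our own:

```python
def schedule_request(skew_s: float, client_buffer_s: float) -> str:
    """Grace patching: join the in-progress multicast and patch the missed
    prefix only if the client buffer can absorb the skew; otherwise start
    a fresh multicast of the whole video."""
    if skew_s <= client_buffer_s:
        return "patch"           # receive multicast + patching stream in parallel
    return "new multicast"

print(schedule_request(skew_s=120, client_buffer_s=300))  # -> patch
print(schedule_request(skew_s=600, client_buffer_s=300))  # -> new multicast
```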
The controlled CIWP (Client-Initiated-With-Prefetching) [35] is another multicast technique, similar to patching and tapping, for near-instantaneous VoD service. The novelty of the controlled CIWP is that it uses a threshold to control the frequency of multicasting a complete video stream. It uses simple FCFS channel scheduling so that a client can be informed immediately of when its request will begin service.
3.3.2 Server-initiated batching
In server-initiated batching, the bandwidth is dedicated to video objects rather than to users. Videos are decomposed into segments which are then broadcast periodically via dedicated channels; hence, this approach is also called periodic broadcast. Although the worst-case service latency experienced by any subscriber is guaranteed to be less than the interval of broadcasting the leading segment and is independent of the current number of pending requests, this strategy is more efficient for popular videos than for unpopular ones due to the fixed cost of the channels.
One of the earlier periodic broadcast schemes was Equally-spaced interval Broadcasting (EB) [21]. Since it broadcasts a given video at equally-spaced intervals, the service latency can only be improved linearly with the increase of the server bandwidth. The author of [10] also proposed staggered VoD, which broadcasts multiple copies of the same video at staggered times. To significantly reduce the service latency, Pyramid Broadcasting (PB) was introduced in [82]. In PB, each video file is partitioned into segments of geometrically-increasing sizes, and the server capacity is evenly divided into K logical channels. The i-th channel is used to broadcast the i-th segments of all videos sequentially. Since the first segments are very small, they can be broadcast more frequently through the first channel. This ensures a smaller waiting time for every video. A drawback of this scheme is that a large buffer (usually corresponding to more than 70% of the video) must be used at the receiving end, requiring disks for buffering. Furthermore, since a very high transmission rate is used for each video segment, an extremely high bandwidth is required to write data to the disk as quickly as the video is received. To address these issues, the authors of [4] proposed a technique called Permutation-based Pyramid Broadcasting (PPB). PPB is similar to PB except that each channel multiplexes its own segments (instead of transmitting them sequentially), and a new stream is started once every short period. This strategy allows PPB to reduce both the disk space and the I/O bandwidth requirements at the receivers. However, the required disk size is still large due to the exponential nature of the data fragmentation scheme. The sizes of successive segments increase exponentially, causing the size of the last segment to be very large (typically more than 50% of the video). Since the buffer sizes are determined by the largest segment, using the same data fragmentation scheme proposed for PB limits the savings achievable by PPB. In PPB, a client needs to tune in to different logical subchannels to collect its data for a given data fragment if the maximum savings in disk space is desired.
To reduce the disk costs on the client side, the authors of [42] introduced Skyscraper Broadcasting (SB), which uses a new data fragmentation technique and a different broadcasting strategy. In SB, K channels are assigned to each of the N most popular objects. Each of these K channels transports a specific segment of the video at the playback rate. The progression of relative segment sizes on the channels, {1,2,2,5,5,12,12,25,25,52,52,105,105,...}, is bounded by the width parameter W in order to limit the storage capacity required at the client end. SB allows for a simple and efficient implementation, and can achieve a low service latency while using only 20% of the buffer space required by PPB.
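The segment-size progression quoted above follows a simple recurrence; the sketch below generates it and applies the width cap W (the function name and parameter values are ours, and the recurrence is our reading of the series in [42]):

```python
def skyscraper_sizes(k: int, width: int) -> list[int]:
    """Relative segment sizes for K channels, capped by the width parameter W.
    The recurrence reproduces 1,2,2,5,5,12,12,25,25,52,52,105,105,..."""
    f = []
    for n in range(1, k + 1):
        if n == 1:
            x = 1
        elif n in (2, 3):
            x = 2
        elif n % 4 == 0:
            x = 2 * f[-1] + 1
        elif n % 4 == 2:
            x = 2 * f[-1] + 2
        else:                      # n % 4 in (1, 3): repeat the previous size
            x = f[-1]
        f.append(min(x, width))    # W bounds the client storage requirement
    return f

print(skyscraper_sizes(10, width=52))  # [1, 2, 2, 5, 5, 12, 12, 25, 25, 52]
```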
The authors of [34] provided a framework for broadcasting schemes, and designed a family of schemes for broadcasting popular videos, called Greedy Disk-conserving Broadcasting (GDB). They systematically analyzed the resource requirements, i.e., the number of server broadcast channels, the client storage space, and the client I/O bandwidth required by GDB. GDB exhibits a trade-off between any two of the three resources, and outperforms SB in the sense of reducing resource requirements.
The Dynamic Skyscraper Broadcasting (DSB) scheme of [28] dynamically schedules the objects that are broadcast on the skyscraper channels so as to provide all clients with a precise time at which their requested objects will be broadcast, or an upper bound on that time if the delay is small, while reaping the cost/performance benefits of skyscraper broadcasting.
The above broadcasting schemes generally assume that the client I/O bandwidth is limited to downloading data from only two channels.

Client-initiated (scheduled, client-pull): channels are allocated among the users; the multicast tree can be expanded dynamically to accommodate new requests so that the service latency is minimized (ideally zero). Typical methods: Adaptive Piggybacking [38], Patching [41], Chaining [77], Tapping [16], Controlled CIWP [35], etc.
Server-initiated (periodic broadcast, server-push): channels are dedicated to video objects; videos are divided into segments which are broadcast periodically via dedicated channels; the worst-case service latency experienced by any client is less than the interval of broadcasting the leading segment. Typical methods with only two download channels: EB [21], PB [82], PPB [4], SB [42], GDB [34], DSB [28], etc.; with more than two download channels: Harmonic broadcasting [47], Staircase scheme [48], FB [46, 80], etc.
Hybrid scheduling: the overall performance is improved by combining client-initiated and server-initiated strategies. Typical methods: Controlled multicast [35], Catching and Selective catching [36], etc.

Table 3: Summary of existing data-centered approaches

If the client can download data from
more than two channels, methods are available that can efficiently reduce the service latency with fewer broadcasting channels. For example, a broadcasting scheme based on the concept of the harmonic series is proposed in [47, 49]; the scheme does not require the bandwidth assigned to a video to equal a multiple of a channel's bandwidth. For a movie of length D minutes, if we want to reduce the viewer's waiting time to D/N minutes, we only need to allocate H(N) video channels to broadcast the movie periodically, where H(N) = 1 + 1/2 + ... + 1/N is the harmonic number of N.
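The channel cost of harmonic broadcasting is easy to evaluate numerically; for example, bounding the wait for a 120-minute movie by D/30 = 4 minutes costs roughly four channels' worth of bandwidth:

```python
def harmonic_channels(n: int) -> float:
    """H(N) = 1 + 1/2 + ... + 1/N: bandwidth (in units of the video
    consumption rate) needed to bound the wait by D/N under harmonic
    broadcasting [47]."""
    return sum(1.0 / i for i in range(1, n + 1))

# Waiting at most D/30 (4 minutes of a 120-minute movie) costs ~4 channels:
print(round(harmonic_channels(30), 2))   # -> 3.99
```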
The staircase scheme in [48] can reduce the storage and disk transfer-rate requirements at the client end. However, neither the staircase nor the harmonic scheme can serve bufferless users. In [46, 50], a scheme called Fast Broadcasting (FB) is proposed, which can further reduce the waiting time and the buffer requirement. Using FB, even if an STB does not have any buffer, its user can still view a movie insofar as a longer waiting time is acceptable. The authors of [80] proposed two enhancements to FB, showing how to dynamically change the number of channels assigned to a video and seamlessly perform this transition, and presenting a greedy scheme to assign a set of channels to a set of videos such that the average viewers' waiting time is minimized.
3.3.3 Hybrid multicast scheduling
All practical scheduling policies are guided by three primary objectives: minimize the reneging probability, minimize the average waiting time, and be fair. It was shown in [21, 22, 35, 36] that a hybrid of the above two techniques offers the best performance. For example, the Catching scheme proposed in [36] is a combination of periodic broadcast and client-initiated prefix retrieval of popular videos. There are many hybrid schemes that improve the overall performance of multicast VoD. Selective catching [36] further improves the overall performance by combining catching and controlled multicast to account for diverse user access patterns. Because most demands are for a few very popular movies, more channels are assigned to popular videos.
However, it is necessary (and important) to support unpopular videos as well. We assume that scheduled multicasts are used to handle less popular videos, while the server-initiated scheme is used for popular videos. In this approach, a fraction of the server channels are reserved and pre-allocated for periodically broadcasting popular videos. The remaining channels are used to serve the rest of the videos using some scheduled multicast. This hybrid of server-initiated and client-initiated schemes achieves better overall performance. The existing data-centered approaches are summarized in Table 3.
3.4 Multicast routing and protocols
There has been extensive research into multicast routing algorithms and protocols [17, 72, 85]. Multicast can be implemented on both LANs and WANs. Nodes connected to a LAN often communicate via a broadcast network, while nodes connected to a WAN communicate via switched networks. In a broadcast LAN, a transmission from any one node is received by all the other nodes on the network, so it is easy to implement multicast on a broadcast LAN. On the other hand, it is challenging, due mainly to the problem of scalability, to implement multicast on a switched network. Today's WANs are designed mainly to support unicast communication, but in the future, as multicast applications become more popular and widespread, there will be a pressing need to provide efficient multicast support on WANs. In fact, the multicast backbone (MBone) of the Internet is an attempt toward this goal.
For multicast video transmission, one of the key issues is QoS routing, which selects routes with sufficient resources to provide the requested QoS. For instance, the multicast VoD service requires its data throughput to be guaranteed at or above a certain rate. The goal of QoS routing is twofold: (1) meet the QoS requirements of every admitted connection, and (2) achieve global efficiency in resource utilization. In most cases, the problems of QoS routing are proven to be NP-complete [86]. Routing strategies can be classified as source routing, distributed routing or hierarchical routing. Some heuristic QoS routing algorithms have been proposed (see [17, 85] for an excellent survey of existing multicast QoS routing schemes).
In an effort to provide QoS for video transmission, a number of services have been defined in the Internet. The Resource Reservation Protocol (RSVP) has been developed to provide receiver-initiated fixed/shared resource reservation for unicast/multicast data flows [13] after finding a feasible path/tree that satisfies the QoS requirements. Furthermore, a protocol framework for supporting continuous media has been developed: RTP (Real-Time Protocol) [73] provides support for timing information, packet sequence numbers and option specification, without imposing any additional error control or sequencing mechanisms. Its companion control protocol, RTCP (Real-Time Control Protocol), can be used for gathering feedback from the receivers, again according to the application's needs.
3.5 The client-end system
Customer premise equipment (CPE) includes set-top boxes (STBs), disks, and display monitors, where a disk or RAM is used as a buffer. As an example, a disk space of 100 MB can cache about 10 minutes of MPEG-1 video; such disk space costs less than $10 today. The high cost of a VoD system is due mostly to the network costs. For instance, the cost of networking contributes more than 90% of the hardware cost of Time Warner's Full Service Network project.
From a software perspective, the client's STB generally contains a main control thread, video stream receiver threads, and a video player thread. A client is connected to the network via an STB. The main control thread processes the client's service request by sending a message indicating the desired video to the server. It then forks the video stream receiver threads to select one or more network channels and to receive and decompress video data according to the server's instructions. The received video data are either stored on the disk or sent to the display monitor for immediate playback. The display monitor can either retrieve stored data from the disk or receive data directly from a channel.
The CPE buffer plays the following important roles:
Supporting the VCR interactions of a customer [2, 7, 70]. The interaction protocols for multicast VoD are designed around the CPE buffer.
Providing instant access to the stored video program so as to minimize the service latency [36, 69]. Preloading and caching, based on the video stored in the CPE buffer, can reduce the service latency.
Reducing the bandwidth required to transmit the stored video [36, 69]. Because some video data reside in the CPE buffer, the overall bandwidth requirement of transmitting videos is reduced.
Eliminating the additional bandwidth required to guarantee jitter-free delivery of the compressed video stream.
These functions are discussed in detail in the following subsections.
3.6 Support for interactive functions
One of the important requirements is to offer VCR interactivity. In order to support customers' interactive behavior in multicast VoD service, efficient techniques have been proposed based on a combination of tuning and merging, as well as on using the CPE buffer and I-channels (see Table 4). The authors of [10] introduced tuning in staggered VoD, which broadcasts multiple copies of the same video at staggered times. Intelligently tuning to different broadcast channels is used to perform a user interaction. However, not all interactions can be achieved by jumping to different streams. Moreover, even if the system can emulate some interactions, it cannot guarantee the exact effect the user wants. Other solutions to VCR interactivity are proposed in [7] and [23], especially for handling a pause/resume request. Support for continuous service of pause operations was simulated for an NVoD server, but merge operations from I-channels to batching- (or B-) channels were either ignored [23] or did not guarantee continuity in video playout [7]. [7] proposed the use of the CPE buffer to provide limited interactive functions. In order to implement the interactivity of multicast VoD services, more efficient schemes have been proposed. For example, the SAM protocol [53] offers an efficient way to realize TVoD interactions, and all those introduced in Section 2 are provided by allocating I-channels as soon as a VCR action request is issued. When playout resumes, the VoD server attempts to merge the users back into a B-channel by using a dedicated synch buffer located at access nodes and partially shared by all the users. Should this attempt fail, a request for a new B-channel is initiated.
The drawback of the SAM protocol is that it requires an excessive number of I-channels, causing a high blocking rate of VCR interactions. The authors of [2] improved the SAM protocol by using the CPE buffer and active buffer management, so that more interactions can be supported without I-channel allocation. The BEP (Best-Effort Patching) scheme proposed in [56] presents an efficient approach to the implementation of continuous TVoD interactions. Compared to the other methods, BEP aims to offer zero-delay (or continuous) service for both request admission and VCR interaction, whereas the SAM protocol just supports continuous VCR interactions without considering service admission. Moreover, BEP uses a dynamic technique to merge interaction streams with a regular multicast stream. This technique significantly improves the efficiency of multicast TVoD for popular videos.
The authors of [70] proposed another scheme, called Single-Rate Multicast Double-Rate Unicast (SRMDRU), to minimize the system resources required for supporting full VCR functionality in a multicast VoD system. This scheme also supports TVoD service, so customers can be served as soon as their requests are received by the system. It forces customers in unicast streams (on the I-channel) to be served by multicast streams again after they resume from VCR operations. The idea is to double the transmission rate of the unicast stream so that the customer of normal playback can receive the frame synchronized with the transmitted frame of a multicast group.
4. ISSUES RELATED TO MVOD SERVICE
4.1 QoS of multicast VoD
The effectiveness of a video delivery technique must be evaluated in terms of both the server and network resources required for delivering a video object and the expected service latency experienced by the clients. Reducing the service latency is an important goal in designing effective scheduling strategies. The existing dynamic multicast and periodic broadcast schemes reviewed in Section 3.3 have been shown to achieve good performance.
Besides the dynamic scheduling schemes, other techniques, such as caching and preloading, have been proposed to reduce the service latency. In [29, 75], proxy servers are used to cache the initial segments of popular videos to improve the service latency. Because nearly all broadcast protocols assume that the CPE buffer is large enough to store up to 40 or 50% of each video (about 50 minutes of a typical movie), the partial preloading proposed in [69] uses this storage to preload anticipated customers' requests, say, the first 3 minutes of the top 16 to 20 videos. This provides instantaneous access to these videos and also reduces the bandwidth required
to broadcast them, as well as the extra bandwidth required to guarantee jitter-free delivery of the compressed video signal. It differs from proxy caching in that the preloaded portions of each video reside inside the CPE buffer rather than at a proxy server.

NVoD: interactive functions are simulated by transitions in discrete time intervals. Typical method: tuning [10].
Limited TVoD: continuous interaction times are limited by the available resources. Typical method: interactions supported by the CPE buffer [7].
TVoD: full control of the durations of all continuous interactions. Typical methods: SAM [53], Improved SAM [2], BEP [56], SRMDRU [70].

Table 4: Summary of interaction schemes for multicast VoD

Active transcoding: data streams are individually transformed according to the specification of each requesting receiver; imposes an administrative burden, and proxies are difficult to deploy.
Layered multicast: encodes source data as a series of layers and sends different layers to different multicast groups; requires a complex adaptation scheme at the client end.

Table 5: Solutions for handling client heterogeneity
The customers' defection rate is closely related to the service latency, and is inversely related to the server throughput, i.e., the average number of service requests granted per program. The shorter the service latency, the lower the defection rate and the higher the server throughput. Another important QoS parameter is the VCR action blocking probability. All the existing multicast TVoD protocols covered in Section 3.6 aim to reduce the blocking probability or the discontinuity of VCR interactions.
4.2 Client heterogeneity
As multicast networks expand, there will be various types of end devices receiving multicast VoD, ranging from simple palm-top personal digital assistants (PDAs) to powerful desktop PCs and HDTV receivers. Since there will be multiple VoD transmission rates and paths, the sender alone cannot meet the possibly conflicting demands of different receivers. Distributing a uniform representation of the video to all receivers could cause low-capacity regions of the network to suffer from congestion, and some receivers' QoS demands cannot be met even when there are sufficient network resources to provide better QoS for those receivers.
In the context of VoD, scalability also applies to the server's ability to support the data requirements of multiple terminal types. One way to solve this problem is proxy-based transcoding, where data streams are individually transformed according to the specification of each requesting receiver [31]. However, it typically imposes an administrative burden because it is not transparent to end users. Proxies are also difficult to deploy because a user behind a constrained network link might not have access to the optimal location for a proxy. There was a proposal to use active networks to solve this problem by offering a common platform for such services as part of the basic network service model [79], but many issues remain to be addressed before such an infrastructure can be deployed. Furthermore, transcoding proxies must be highly reliable and scalable, which can be very costly [31].
Another efficient solution to heterogeneity is the use of layered media formats. This scheme encodes source data as a series of layers, the lowest layer being called the base layer and the higher layers being called the enhancement layers. Layered encoding can be effectively combined with multicast transmission by sending different layers to different multicast groups. Consequently, a receiver using only the basic multicast service (i.e., joining and leaving multicast groups) can individually tailor its service to match its capabilities, independently of other receivers. This basic framework was later refined in a protocol architecture called Receiver-driven Layered Multicast (RLM) [58]. In RLM, a receiver searches for the optimal number of layers by experimentally joining and leaving multicast groups, much in the same way as a TCP source searches for the bottleneck transmission rate with the slow-start congestion avoidance algorithm [44]. The receiver adds layers until congestion occurs and then backs off to an operating point below this bottleneck.
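A schematic of the RLM probe loop, heavily simplified (the real protocol also uses join-timers and shared learning among receivers; the congestion test here is a stand-in):

```python
def rlm_receiver(max_layers: int, congested) -> int:
    """Receiver-driven layered multicast, schematically: keep joining the
    next enhancement layer until congestion is observed, then drop back
    one layer and settle below the bottleneck."""
    layers = 1                        # always subscribe to the base layer
    while layers < max_layers:
        layers += 1                   # join-experiment: add one layer
        if congested(layers):         # e.g., observed packet loss
            layers -= 1               # back off below the bottleneck
            break
    return layers

# Toy link that can sustain three layers:
print(rlm_receiver(8, congested=lambda n: n > 3))   # -> 3
```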
The two solutions to heterogeneity are summarized in Table 5.
4.3 Fairness of multicast VoD service
Fairness is one of the performance metrics of VoD service, meaning that every client request should be treated fairly, regardless of whether it is for a hot video or not. In [41], the unfairness of a multicast VoD system is expressed as a function of the defection rates, namely as sqrt( sum_{i=1}^{N} (d_i - d̄)^2 / (N - 1) ), where d_i denotes the defection rate for video i, d̄ is the mean defection rate, and N is the number of videos. Alternatively, this property can also be measured by video service latencies.
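Reading this measure as the sample standard deviation of the per-video defection rates, it can be computed as in the sketch below (the normalization by N - 1 is our assumption):

```python
from math import sqrt

def unfairness(defection_rates: list[float]) -> float:
    """Sample standard deviation of per-video defection rates; 0 means every
    video's requests defect equally often (perfect fairness)."""
    n = len(defection_rates)
    mean = sum(defection_rates) / n
    return sqrt(sum((d - mean) ** 2 for d in defection_rates) / (n - 1))

print(unfairness([0.05, 0.05, 0.05]))            # -> 0.0 (fair)
print(round(unfairness([0.01, 0.10, 0.40]), 3))  # cold videos defect more
```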
Fairness is mainly related to scheduling and resource allocation. When selecting a scheduling strategy, we make it as fair as possible. The fairness of certain batching schemes was surveyed in Section 3.3. However, some scheduling strategies, like the various periodic broadcasts, are only for popular videos, and the fairness then depends on the scheduling scheme used for cold videos and on the bandwidth allocation between hot and cold videos. Unfortunately, there have been only very few attempts to analyze the fairness of existing scheduling schemes [41]. How to assure the fairness of practical scheduling schemes, particularly hybrid schemes, and how to make the optimal bandwidth allocation are open issues.
4.4 Customer behavior
Understanding customer behavior is necessary to efficiently design a multicast VoD system and to apply different strategies to different videos at different times. Modeling customers' behavior includes the video selection distribution, variations of video popularity, and the user interaction model.
4.4.1 Video selection distribution
For short-term considerations, most researchers assume that the popularity of videos follows the Zipf distribution [87], that is, the probability of choosing the i-th video is i^(-(1-z)) / sum_{j=1}^{N} j^(-(1-z)), where N is the total number of videos in the system, and z is called the skew factor. Typically, researchers assume that the skew factor is set to 0.271 [5, 21]. This number is obtained from the analysis of a user access pattern from a video rental store [5]: most of the demand (80%) is for a few (10 to 20) very popular videos.
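A short sketch of this skewed-Zipf model; with z = 0.271 the head of the distribution captures a disproportionate share of the demand:

```python
def zipf_popularity(n_videos: int, z: float = 0.271) -> list[float]:
    """Probability of choosing video i (i = 1 is the most popular) under
    the skewed Zipf model used in the VoD literature [5, 21]."""
    weights = [i ** -(1.0 - z) for i in range(1, n_videos + 1)]
    total = sum(weights)
    return [w / total for w in weights]

p = zipf_popularity(100)
print(round(sum(p[:20]), 2))   # demand share of the 20 most popular videos
```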
4.4.2 Time-variation of video popularity
In a real VoD system, request arrivals are usually nonstationary. Variations in the request rate can be observed on a daily basis, between "prime time" (e.g., 7 p.m.-10 p.m.) and "off-hours" (e.g., early morning). On a larger time scale (e.g., one week or month), movie popularities may change due to new releases or a loss of customers' interest in current titles. At the same time, different types of customers (e.g., children and adults) have different prime times.
In [1], a time distribution model is expressed as a sinusoid of the form λ_m(t) = p_m (λ_0 + A sin(2πt/T)), where λ_0 is the daily average arrival rate, A (> 0) is the amplitude, T is the period (a 24-hour period), and p_m is the popularity of movie title m. More general models of nonstationarity have been proposed in [8, 39] for the long-term popularity of movies. We call these time-dependent changes of movie popularity the life-cycle of the movie. The authors of [39] observed that the long-term behavior of a movie follows an exponential curve plus a random effect. [8] also assumed that variations in workload are exponential functions with different average inter-arrival times.
4.4.3 Interaction model
Several interaction models have been proposed in [2, 26, 54]. In [26], the behavior of each user is modeled by a two-state Markov process with states PB (playback) and FF/Rew. The times spent by the user in the PB and FF/Rew modes are exponentially distributed. The two-state model in [54] assumes that the user activity is in either the normal or the interaction state. But these two models are too simple to represent the details of user interactions. To be realistic, a model should capture three specific parameters because of their potentially significant impact on the performance of the VoD server: (1) the frequency of requests for VCR actions; (2) the duration of VCR actions; and (3) the bias of the interaction behavior. Considering these parameters, the authors of [2] proposed a VCR interaction model. In this model, each state in a set of states corresponding to the different VCR actions is assigned a sojourn duration and probabilities of transitioning to neighboring states. The initial state is Play, and the system randomly transits to other interactive states
or stays in the Play state according to the behavior distribution.

Figure 4: VCR interaction model

The user resides at each interaction state for an
exponentially-distributed period of time.
As shown in Figure 4, transition probabilities P_i are assigned to the set of states corresponding to the different VCR actions. It is important to notice that the above-mentioned parameters are captured by this representation of viewers' activity. Finding representative values for the P_i is still an open issue. For tractability, customers are classified as Very Interactive (VI) or Not Very Interactive (NVI); their interactions can be simulated by taking different parameter values [2].
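A toy simulation of such an interaction model is sketched below; the states loosely follow Figure 4, but the transition probabilities and mean sojourn times are made up, since representative values are, as noted, still open:

```python
import random

# Hypothetical transition probabilities out of Play and mean sojourn times (s):
TRANSITIONS = {"Play": [("Play", 0.90), ("FastForward", 0.04),
                        ("Pause", 0.04), ("RewindSearch", 0.02)]}
MEAN_SOJOURN = {"Play": 300, "FastForward": 15, "Pause": 60, "RewindSearch": 20}

def simulate(steps: int, rng: random.Random) -> list[tuple[str, float]]:
    """Trace of (state, exponentially distributed sojourn time) pairs,
    always returning to Play after an interaction, as in Figure 4."""
    trace, state = [], "Play"
    for _ in range(steps):
        trace.append((state, rng.expovariate(1.0 / MEAN_SOJOURN[state])))
        if state == "Play":
            states, probs = zip(*TRANSITIONS["Play"])
            state = rng.choices(states, probs)[0]
        else:
            state = "Play"          # interactions resume normal playback
    return trace

print(simulate(5, random.Random(1)))
```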
4.5 Evaluation of multicast VoD systems
By batching multiple requests and serving them with a single multicast, the system's capacity for handling a large number of requests can be greatly improved at the expense of an increased admission delay. For zero-delay admission and VCR interactions, more channels are required. It is important to evaluate multicast VoD in terms of throughput, resource requirements, and efficiency. These evaluation results will influence pricing, system management and resource sharing.
4.5.1 The service throughput
For NVoD service, we need to evaluate the throughput of multicast VoD. Common assumptions include Poisson arrivals of clients' requests, and customers' willingness to wait a fixed amount of time, say 5 minutes, before withdrawing their requests. The authors of [1] modeled the customers' patience with an exponential distribution, i.e., a customer agrees to wait for t units of time or more with probability e^(-μt), where 1/μ is the average time customers agree to wait. In general, the patience rate μ can be assumed to be independent of the videos requested. Based on this assumption, the authors of [1] converted the problem of calculating the number of customers waiting between two consecutive services into a transient analysis of the M/M/∞ "self-service" queueing system with arrival rate λ and self-service times following a negative exponential distribution with rate μ. They then derived the server throughput and the average loss rate for each movie. In [78], the user wait-tolerant behavior under several batching schemes, such as FCFS (first-come-first-served), MQL (maximum queue length), Max-Batch and Min-Idle, is also investigated; the problem of maximizing the system throughput is formally discussed and the functional equation defining the optimal scheme is derived.
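Under these assumptions, the expected number of customers still waiting T time units after the previous multicast has a simple closed form, (λ/μ)(1 - e^(-μT)); the sketch below evaluates it (a standard transient result for this model, derived by us rather than taken from [1]):

```python
from math import exp

def expected_batch(lam: float, mu: float, t: float) -> float:
    """Mean number of still-waiting customers T time units after the last
    multicast: Poisson arrivals at rate lam, each reneging after an
    exponential patience time with rate mu (mean patience 1/mu)."""
    return (lam / mu) * (1.0 - exp(-mu * t))

# 2 requests/min, 5-minute mean patience, multicasts every 10 minutes:
print(round(expected_batch(lam=2.0, mu=1 / 5, t=10.0), 2))   # -> 8.65
```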
For TVoD service in multicast VoD, there is, to the best of our knowledge, no literature on evaluating the service throughput. It is still an open issue.
4.5.2 Bandwidth requirements of TVoD service
Some multicast VoD protocols can support TVoD service with zero-delay admission if enough channels are used. How to evaluate the channel requirements depends on the underlying multicast scheduling scheme. For example, [14] presents the optimal patching performance, and [15] addresses the bandwidth requirement of transition patching. [57] proposes a method for analyzing user interactivity and evaluating the number of channels required for multicast TVoD service. The results determine the relationships among the clients' behaviors, the system resources, and the TVoD service protocol. However, rigorously analyzing the channel requirements of TVoD protocols supporting zero-delay admission and full VCR interactions is still an open issue.
4.5.3 Bandwidth cost vs. QoS
Although most server-initiated multicast VoD schemes do not provide TVoD service, they strive to make a better tradeoff between the channel cost and the service latency. For example, most periodic broadcasting schemes try to reduce the service latency and improve the throughput at low channel costs. The performance of the related periodic broadcasting schemes is discussed in [4, 34, 42, 83].
4.6 Other issues
Besides the issues mentioned above, there are other important issues in multicast VoD service, such as:
Copyright protection: A typical solution to video copyright protection is encrypted point-to-point delivery, ensuring that only paying customers receive the service, but this is inefficient for multicast VoD. In [40] a simple scheme is proposed that provides similar protection for the content and can be used efficiently with multicast and caching. In that scheme, the major part of the video is intentionally corrupted and can be distributed via multicast connections, while the part necessary for reconstructing the original is delivered to each receiver individually. However, efficient copyright protection for multicast VoD service is still an open issue.
Video replacement: The maintenance of storage is a main server-management task in general VoD systems. Due to the limited storage capacity, it is necessary to replace old, unpopular videos with new popular videos. How to select videos for replacement depends mainly on the hit ratio of the videos. Popularity-based assignment and BSR (Bandwidth-to-Space Ratio)-based assignment are considered the important policies. Moreover, the replacement operation for one video should not affect the service of other videos residing on the same server. This problem is related to storage organization; [3] discussed the effect of video partitioning strategies.
5. CONCLUSION AND FUTURE DIRECTIONS
As VoD service becomes popular and widely deployed, consumers will likely suffer from network and server overloads. Network and server-I/O bandwidths have been recognized as a serious bottleneck in today's VoD systems. Multicast is a good remedy for this problem; it alleviates server bottlenecks and reduces network traffic, thereby improving system throughput and server response time. We discussed the state-of-the-art designs and solutions to the problems associated with multicast VoD. We also illustrated the benefits of multicast VoD and feasible approaches to implementing it. Although multicast VoD can yield significant performance improvements, there are still several open issues that warrant further research, including:
Effective scheduling and routing schemes, and VCR-type interactions for multicast TVoD. Ad hoc schemes may achieve a better trade-off between QoS and system costs while preserving scalability and interactivity.
Efficient active CPE buffer management. It is highly dependent on customers' behavior, and utilizing the CPE buffer will significantly improve QoS.
Fairness of VoD service. There is a tradeoff between fairness and throughput for practical scheduling schemes, particularly for hybrid schemes. We need to achieve the optimal bandwidth allocation without loss of fairness.
Knowledge of realistic customer behavior, which is essential to the design of multicast VoD protocols and resource allocation. Other issues related to customer behavior are throughput, scaling, and scheduling.
An efficient theoretical framework for evaluating the performance of multicast VoD service, especially for modeling and analyzing multicast TVoD service supporting both zero-latency admission and full VCR interactivity.
Developing standard protocols for multicast VoD for practical applications. The existing protocols for multicast routing, scheduling and VCR interactions can be viewed as a basis for achieving this goal. The DAVIC protocol also provides a general reference framework, but protocols for multicast TVoD still need to be developed further.
6.
--R
Human Behaviour and the Principle of Least Effort
--TR
Congestion avoidance and control
Multicast routing in datagram internetworks and extended LANs
Scheduling policies for an on-demand video server with batching
Providing VCR capabilities in large-scale video servers
Evaluating video layout strategies for a high-performance storage server
Choosing the best storage system for video service
Channel allocation under batching and VCR control in video-on-demand systems
Reducing I/O demand in video-on-demand storage servers
The SPIFFI scalable video-on-demand system
Dynamic batching policies for an on-demand video server
Adaptive piggybacking
On optimal piggyback merging policies for video-on-demand systems
Fault-tolerant architectures for continuous media servers
Metropolitan area video-on-demand service using pyramid broadcasting
Adapting to network and client variability via on-demand dynamic distillation
Receiver-driven layered multicast
Group-guaranteed channel capacity in multimedia storage servers
Skyscraper broadcasting
Long-term movie popularity models in video-on-demand systems
Scheduling video programs in near video-on-demand systems
The multimedia multicasting problem
Protecting VoD the easier way
Patching
Exploring wait tolerance in effective batching for video-on-demand scheduling
Zero-delay broadcasting protocols for video-on-demand
Optimal and efficient merging schedules for video-on-demand servers
Catching and selective catching
An efficient bandwidth-sharing technique for true video on demand systems
ARMIDA: A VoD Application Implemented in Java
Prospects for Interactive Video-on-Demand
The Split and Merge Protocol for Interactive Video-on-Demand
Design of Multimedia Storage Systems for On-Demand Playback
A Redundant Hierarchical Structure for a Distributed Continuous Media Server
A Low-Cost Storage Server for Movie on Demand Databases
The KYDONIA Multimedia Information Server
A New Scheduling Scheme for Multicast True VoD Service
Dynamic Skyscraper Broadcasts for Video-on-Demand
Fast broadcasting for hot video access
Supplying Instantaneous Video-on-Demand Services Using Controlled Multicast
Earthworm
Long Term Resource Allocation in Video Delivery Systems
Chaining
Video-on-Demand Server Efficiency through Stream Tapping
Providing Unrestricted VCR Functions in Multicast Video-on-Demand Servers
Demand Paging for Video-on-Demand Servers
--CTR
Jian-Guang Lou , Hua Cai , Jiang Li, Interactive multiview video delivery based on IP multicast, Advances in Multimedia, v.2007 n.1, p.13-13, January 2007
Leonardo Bidese de Pinho , Claudio Luis de Amorim, Assessing the efficiency of stream reuse techniques in P2P video-on-demand systems, Journal of Network and Computer Applications, v.29 n.1, p.25-45, January 2006
Azzedine Boukerche , Richard W. Nelem Pazzi, Scheduling and buffering mechanisms for remote rendering streaming in virtual walkthrough class of applications, Proceedings of the 2nd ACM international workshop on Wireless multimedia networking and performance modeling, October 06-06, 2006, Terromolinos, Spain
Ma Huadong , Kang G. Shin, Hybrid broadcast for the video-on-demand service, Journal of Computer Science and Technology, v.17 n.4, p.397-410, July 2002
Huadong Ma , G. Kang Shin , Weibiao Wu, Best-Effort Patching for Multicast True VoD Service, Multimedia Tools and Applications, v.26 n.1, p.101-122, May 2005
Bharadwaj Veeravalli , Long Chen , Hun Yen Kwoon , Goh Kar Whee , See Ying Lai , Lim Peng Hian , Ho Chin Chow, Design, analysis, and implementation of an agent driven pull-based distributed video-on-demand system, Multimedia Tools and Applications, v.28 n.1, p.89-118, January 2006
Carlo K. da S. Rodrigues , Rosa M. M. Leão, Bandwidth usage distribution of multimedia servers using Patching, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.3, p.569-587, February, 2007 | Video-on-Demand VoD;VCR-like interactivity;scheduling;multicast;Quality-of-Service QoS |
510775 | On securely scheduling a meeting. | When people want to schedule a meeting, their agendas must be compared to find a time suitable for all participants. At the same time, people want to keep their agendas private. This paper presents several approaches which intend to solve this contradiction. A custom-made protocol for secure meeting scheduling and a protocol based on secure distributed computing are discussed. The security properties and complexity of these protocols are compared. A trade-off between trust and bandwidth requirements is shown to be possible by implementing the protocols using mobile agents. | INTRODUCTION
When negotiating meetings, the participants look up, communicate
and process information about each other's agendas trying to nd a
moment when they are all free to attend the meeting. Due to the private
nature of a person's schedule, as little as possible should be revealed to
any other party during that negotiation. Ideally, only the result of the
negotiation should be known to the participants (and to the participants
only), and any other information about the users' agendas should remain
secret.
An easy solution for scheduling a meeting is to broadcast the schedules
to all participants, but this totally neglects the privacy of the partici-
pants' agendas. Another solution is to send all schedules to a trusted
third party, but finding one such single third party trusted by every
participant will be very difficult in practice.
Some existing meeting scheduling applications, like for example "Yahoo!
Calendar", define access levels for viewing and modifying agenda
entries, and define user groups to which these access levels are assigned.
This is only necessary because the comparison between schedules must
be done by the users themselves. Our approaches eliminate the need
for managing access control, as they are not based on users directly
accessing each other's agenda.
This paper presents more secure solutions. Their goal is for participants
to be able to negotiate a meeting whereby parties have no direct
access to each other's agenda, whereby parties do not rely on another
party for telling the final result, and whereby no information about the
agendas is revealed, but the final result, i.e., the particular time the
meeting can be scheduled, or the fact that the meeting cannot be scheduled.
This paper builds on the work done in [6] and [3] and shows the trade-offs
that can be made in security, level of trust, and efficiency, when
choosing a particular negotiation protocol and a specific implementation
approach.
The paper is organized as follows. Section 2 presents a custom-made
negotiation protocol. Section 3 presents an alternative approach based
on secure distributed computing. Both approaches are analyzed from
a security and complexity point of view. Section 4 discusses the use of
mobile agents for secure meeting scheduling, and presents the "agenTa"
prototype implementation. We conclude in Sect. 5.
2. USING A CUSTOM-MADE
NEGOTIATION PROTOCOL
2.1. AGENDA REPRESENTATION
There exists a representation which reduces the problem of deciding
if the meeting can be scheduled at a certain moment to a logical AND
operation.
As shown in Fig. 1, an agenda will be represented as a bit string
in the following way: for each time slot in the schedule, there is one
bit indicating whether the negotiator can (1) or cannot (0) attend a
Figure 1 Conversion from agenda to representation
meeting of the specified length which would start at that time. The
finer the granularity and the longer the negotiation window, the more
bits there will be in the representation.
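For concreteness, a minimal Java sketch of this conversion (a hypothetical helper, not taken from the paper; all names are ours):

// Derives the bit-string representation of an agenda. Time is measured
// in slots of the chosen granularity within the negotiation window;
// busy[t] is true iff the user is busy during slot t.
final class AgendaEncoder {
    static boolean[] encode(boolean[] busy, int meetingSlots) {
        int starts = busy.length - meetingSlots + 1; // possible starting slots
        boolean[] canStart = new boolean[starts];
        for (int s = 0; s < starts; s++) {
            boolean free = true;
            for (int t = s; t < s + meetingSlots; t++)
                free &= !busy[t];       // every covered slot must be free
            canStart[s] = free;         // bit is 1 iff a meeting can start at s
        }
        return canStart;
    }
}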
2.2. SCHEDULING MODEL
In our model, a meeting scheduling starts with an invitation phase.
The initiator broadcasts to the invitees a set of negotiation parameters
such as meeting length, negotiation window (limited time span in which
to attempt the meeting scheduling) and a complete list of invitees. Each
invitee broadcasts to all others a reply indicating whether it will accept
or decline the negotiation invitation. Because broadcasts are used, no
invitee can be misled as to the set of negotiators it will encounter in
the second phase.
In the second phase, called negotiation, the negotiators try the time
slots one by one and attempt to schedule the meeting. For each time
slot the negotiation takes place according to the protocol outlined be-
low. If the meeting was successfully scheduled the negotiators move on
to the third phase, otherwise the next time slot is tried. After independently
arriving to a result concerning a certain time slot, each participant
broadcasts the result to the others and checks whether all results coin-
cide. This allows for detection of partial failures and attacks which try
to mislead a subset of the negotiators.
In the third phase either the common result is presented to the users,
or the users are informed that no meeting can take place. If there is a
common result, users might confirm their commitment to the scheduled
time on a separate channel (e-mail, telephone), independently of the
scheduling process.
2.3. SCHEDULING A MEETING
For the purpose of this subsection we will refer to the representation
of an agenda according to the description in the previous subsection as
"schedule."
Instead of comparing schedules, the negotiation should be based on
comparing protected forms of the schedules. The schedules are protected
in a way which still allows scheduling to be performed by broadcasting
the protected forms to all negotiators and letting them process the data
without fear of the unprotected form to be revealed.
The binary XOR operation between the schedule and a mask is a
transformation which still allows scheduling to be performed in the sense
that the (in)equality of two or more bits is preserved when they are all
XORed with the same mask.
If all negotiators know the mask, they are able to retrieve the original
schedules easily, by unmasking the broadcasted data. The solution is to
let the mask be a shared secret, that is, all negotiators will contribute
when building it, but it will not be revealed to any of them.
The negotiation protocol then goes as follows:
In step one of the negotiation protocol, each negotiator chooses a
random mask, and XORs it with its schedule. This random mask
is actually a partial mask. The shared secret will be the XOR
of all partial masks, and is called global mask. Even if only one
negotiator keeps its partial mask secret, the others cannot find the
global mask solely using their partial masks.
In step two of the protocol, all schedules visit all negotiators exactly
one time. At each visit they are masked with the partial
mask of that particular negotiator. In the end, all original schedules
are thus masked with the global mask, without the need for
the negotiators to disclose their partial mask. Since the schedule is
first masked with its owner's partial mask it remains secret during
its visits.
A negotiator must be unable to identify a protected schedule as
representing its own schedule: otherwise performing XOR between
the original and the protected schedule reveals the global mask,
allowing the negotiator to retrieve all original schedules. Therefore
during the trip to all negotiators, the schedule must be forwarded
randomly between the negotiators in order to make it impossible
to trace. The schedule must have attached a list of negotiators
it hasn't visited yet, decremented at each forwarding, in order to
prevent multiple maskings with the same partial mask.
Note that for countering attempts to trace a schedule by attackers
who have a global view on the network, all communications should
be encrypted.
In step three, all protected schedules are broadcasted. Each negotiator
looks independently for a time slot when all protected schedules
have the same value. That implies that the original schedules
are identical, too, for that time slot but does not provide any clue
whether the negotiators are free or busy for that time slot. The
clue is provided by each negotiator's schedule for that time slot. If
the negotiator is free then, it means all negotiators are free then
and the meeting can be scheduled. For time slots when some are
busy and some are free, it is not possible to figure out who are the
busy ones and who are the free ones.
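The algebra behind these three steps can be checked with a short, centralized Java simulation (a stand-in for the distributed protocol: random routing and the broadcasts are abstracted away, and in the real protocol fresh masks are used for every bit):

import java.security.SecureRandom;

// Centralized simulation of one time slot of the masking protocol.
final class OneSlotNegotiation {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        int n = 5;
        boolean[] schedule = {true, true, false, true, true}; // free (1) / busy (0)
        boolean[] partialMask = new boolean[n];
        for (int i = 0; i < n; i++) partialMask[i] = rnd.nextBoolean();

        // Step two: every schedule bit ends up XORed with ALL partial masks.
        boolean[] masked = new boolean[n];
        for (int i = 0; i < n; i++) {
            masked[i] = schedule[i];
            for (int j = 0; j < n; j++) masked[i] ^= partialMask[j];
        }

        // Step three: all protected bits equal <=> all original bits equal.
        boolean allEqual = true;
        for (int i = 1; i < n; i++) allEqual &= (masked[i] == masked[0]);

        // A negotiator who is free and sees all protected bits equal knows
        // that everyone is free, so the meeting can be scheduled.
        System.out.println("meeting possible: " + (allEqual && schedule[0]));
    }
}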
Note that our scheduling protocol does not specify any form of negotiator
authentication. This is however needed for linking the protocol
messages to their originators. Depending on the meeting application,
the desired form of authentication can be added to the protocol.
Figure 2 shows the negotiation protocol as performed by three parties.
For easy understanding of the protocol the schedules in the simulation
are following the same route and the maskings appear to be performed
simultaneously by the three negotiators. In reality the process is asynchronous
(some negotiators may be idle while others are masking) and
routing is random (in the end some negotiators may have nothing to
broadcast while others may broadcast several protected schedules). Another
difference is that in reality only one bit is processed at a time
(otherwise an attack is possible, see following section). If the meeting
can be scheduled in the corresponding time slot the protocol stops, otherwise
the next time slot is processed.
2.4. SECURITY ANALYSIS
Our custom-made protocol does not require one single entity to be
trusted. It however does not completely protect the privacy of the par-
ticipants' agenda, as attacks by both passive and active adversaries are
possible.
Bad slots. There may be time slots for which all users are busy
and therefore all protected slots will be equal. By checking against
the original schedule each negotiator will avoid scheduling a meeting in
that slot but it will also know everybody else's schedule for that slot
(i.e., everybody is busy). Because they constitute an infringement on all
users' privacy we call these slots bad slots.
Figure 2 Simulation for three negotiators
Entropy attack. The reason for performing the negotiation one slot
at a time is to prevent the following attack. If the negotiation is done
on sequences of slots, when all the broadcasted masked schedules are
received, it still is possible for a party to recognize its original schedule.
It can be done by testing all the masks which transform the original
schedule into one of the protected forms. The correct global mask can be
recognized by the fact that by unmasking the other protected schedules
with it, bit strings are obtained which have the entropy expected from
a schedule.
Negotiating one bit at a time, with fresh partial masks for each bit
and stopping when a meeting is scheduled counters this attack because
each mask bit and schedule bit have maximal entropy.
Number of parties. When only two parties are negotiating, each
can deduce the schedule of the other based on their own schedule and the
comparison between the protected forms of the schedules. Besides that,
the global mask is straightforward to find because the original schedule
can be linked to its protected form. Also when only three or four parties
are negotiating it is sometimes possible to find out the global mask
by tracing back schedules. For ve or more participants the ability to
trace a schedule along its route decreases as the number of participants
increases.
Dummy negotiators could be introduced to artificially increase the
number of parties, and thus to alleviate this problem. In a broadcasting
communication environment encrypted dummy messages could also be
sent to make the real schedules untraceable.
Rogue negotiators. Active adversaries could attack the protocol in
various ways.
A simple denial of service attack can be mounted by negotiating based
on a fully busy schedule instead of declining the invitation. Since the protocol
relies on the negotiators consistently using their partial mask, the
protocol has unpredictable outcomes if a negotiator randomly changes
its partial mask during the negotiation of a time slot.
Goal-oriented misbehavior is also possible. A negotiator can wait to
be the last to broadcast the protected schedule(s) it has. This way it
is able to detect rst when a meeting could take place. In that case it
can broadcast a false protected form, preventing the meeting from being
scheduled. It knows everybody else's schedule for that time slot, while
the others do not.
2.5. COMPLEXITY
For analyzing the complexity of the scheduling we count the messages
that are sent between the negotiators. In a distributed environment it is
expected that sending messages will be much more resource consuming
than masking or a comparison between bits. Since much of the processing
is done in parallel, bandwidth is more important. Remember that
the negotiation protocol is performed bit by bit.
Note that for n negotiators a broadcast is of complexity n - 1. When
an all-to-all broadcast is needed it has complexity n(n - 1).
The scheduling starts with a simple broadcast of the invitation: C1 =
n - 1. All negotiators (except for the initiator) must announce their
position towards the invitation. These broadcasts add complexity C2 =
(n - 1)(n - 1). For getting masked, one bit must visit all negotiators
and then be broadcasted: 2(n - 1). This happens to each negotiator's
bit in a round: C3 = 2n(n - 1). If the number of bits in a schedule is
l, after at most l rounds the protocol will end. In the check phase of
the scheduling, all negotiators broadcast their result or the fact that no
meeting could be scheduled to all others: C4 = n(n - 1). Note that only
positive results (i.e., a meeting is possible) are broadcasted. If the result
is negative, the agents automatically go to the next bit. If the result is
still negative after the last bit, it was not possible to schedule a meeting.
Therefore at most C = C1 + C2 + l C3 + C4 = (n - 1) + (n - 1)^2 +
2ln(n - 1) + n(n - 1) messages are sent. For example, for a scheduling
window of 3 eight-hour working days, granularity 1 hour (l = 24) and
n = 5 participants, this amounts to at most 1000 messages; for n = 10
participants in the same conditions, there will be up to 4500 messages
sent.
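A few lines of Java reproduce these worst-case counts (using the formula as reconstructed above; the class is ours):

// Worst-case message count of the custom-made negotiation protocol.
final class CustomProtocolCost {
    static int messages(int n, int l) {
        return (n - 1)             // invitation broadcast
             + (n - 1) * (n - 1)   // replies to the invitation
             + l * 2 * n * (n - 1) // masking trip + broadcast, per bit
             + n * (n - 1);        // all-to-all result check
    }
    public static void main(String[] args) {
        System.out.println(messages(5, 24));  // prints 1000
        System.out.println(messages(10, 24)); // prints 4500
    }
}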
3. USING SECURE DISTRIBUTED
COMPUTING
3.1. THE PROBLEM OF SECURE
DISTRIBUTED COMPUTING
Usually, the problem of Secure Distributed Computing (SDC) is stated
as follows. Let f be a publicly known function taking n inputs, and suppose
there are n different parties, each holding their own private input x_i
(1 <= i <= n). The n parties want to compute the value f(x_1, ..., x_n)
without leaking any information about their private inputs to the other
parties (except of course the information about x_i that is implicitly
present in the function result).
Distributed Computing problem, the function f is usually encoded as
a boolean circuit, and therefore Secure Distributed Computing is also
often referred to as secure circuit evaluation.
Over the past two decades, a fairly large variety of solutions (other
than the trivial one using a trusted third party) to the problem has
been proposed. An overview is given by Franklin [4] and more recently
by Cramer [2].
3.2. HOW TO PERFORM GENERAL SDC
The core problem of SDC is that we want to perform computations
on hidden data (using encryption, secret sharing or other techniques)
without revealing the data. One class of techniques to compute with
encrypted data is based on homomorphic probabilistic encryption. An
encryption technique is probabilistic if the same cleartext can encrypt to
many different ciphertexts under the same encryption key. To work with
encrypted bits, probabilistic encryption is essential, otherwise only two
ciphertexts (the encryption of a zero and the encryption of a one) would
be possible, and cryptanalysis would be fairly simple. An encryption
technique is homomorphic if it satisfies at least one equation of the form
E(x op y) = E(x) op' E(y), for some operations op and op'. A
homomorphic encryption scheme allows operations to be performed on
encrypted data, and hence is suitable for secure circuit evaluation.
In [5], Franklin and Haber present a protocol that evaluates a boolean
circuit on data encrypted with such a homomorphic probabilistic encryption
scheme. In order to support any number of participants, they use a
group oriented encryption scheme, i.e., an encryption scheme that allows
anyone to encrypt, but that needs the cooperation of all participants to
decrypt. In the group oriented encryption scheme used by Franklin and
Haber, a bit b is encrypted for a group of participants S = {1, ..., n} as
E(b) = [g^r mod N, (-1)^b * (g^(K_1 + ... + K_n))^r mod N],
where N = pq, p and q are two primes such that p ≡ q ≡ 3 mod 4, and r is
chosen at random from Z_N. The public key is given by [N, g, g^(K_1) mod N,
..., g^(K_n) mod N], while K_i is the private key of the ith participant. This scheme has some
additional properties that are used in the protocol:
XOR-Homomorphic. Anyone can compute a joint encryption of
the XOR of two jointly encrypted bits. Indeed, if E(b_1) = [x_1, y_1]
and E(b_2) = [x_2, y_2], then [x_1 x_2 mod N, y_1 y_2 mod N] is a joint
encryption of b_1 XOR b_2.
Blindable. Given an encrypted bit, anyone can create a random
ciphertext that decrypts to the same bit. Indeed, if E(b) = [x, y]
and r is chosen at random from Z_N, then
[x * g^r mod N, y * (g^(K_1 + ... + K_n))^r mod N]
is a joint encryption of the same bit.
Witnessable. Any participant can withdraw from a joint encryption
by providing the other participants with a single value. Indeed, if
E(b) = [x, y], it is easy to compute the witness D_i = x^(K_i) mod N;
once all witnesses are known, y * (D_1 * ... * D_n)^(-1) mod N = (-1)^b
reveals the bit.
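A toy Java rendering of the scheme and of witness-based decryption may clarify the mechanics; the parameters are tiny and insecure, chosen only so the example runs, and all names are ours:

import java.math.BigInteger;
import java.security.SecureRandom;

// Toy joint bit encryption E(b) = [g^r, (-1)^b * h^r] mod N,
// where h = g^(K_1 + ... + K_n). For illustration only.
final class JointEncryption {
    static final BigInteger N = BigInteger.valueOf(11L * 23L); // p = 11, q = 23, both 3 mod 4
    static final BigInteger g = BigInteger.valueOf(2);
    static final SecureRandom rnd = new SecureRandom();

    static BigInteger[] encrypt(int bit, BigInteger h) {
        BigInteger r = new BigInteger(16, rnd).mod(N);
        BigInteger sign = (bit == 0) ? BigInteger.ONE : N.subtract(BigInteger.ONE); // -1 mod N
        return new BigInteger[]{ g.modPow(r, N), sign.multiply(h.modPow(r, N)).mod(N) };
    }

    // XOR-homomorphism: componentwise multiplication of ciphertexts.
    static BigInteger[] xor(BigInteger[] c1, BigInteger[] c2) {
        return new BigInteger[]{ c1[0].multiply(c2[0]).mod(N),
                                 c1[1].multiply(c2[1]).mod(N) };
    }

    public static void main(String[] args) {
        BigInteger[] keys = { BigInteger.valueOf(7), BigInteger.valueOf(13) }; // private K_i
        BigInteger h = g.modPow(keys[0].add(keys[1]), N);                      // joint public key

        BigInteger[] c = xor(encrypt(1, h), encrypt(1, h)); // encrypts 1 XOR 1 = 0

        // Decryption via witnesses D_i = x^(K_i): y / (D_1 * D_2) = (-1)^b.
        BigInteger d = c[0].modPow(keys[0], N).multiply(c[0].modPow(keys[1], N)).mod(N);
        BigInteger plain = c[1].multiply(d.modInverse(N)).mod(N);
        System.out.println(plain.equals(BigInteger.ONE) ? 0 : 1); // prints 0
    }
}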
First of all, the participants must agree on a value for N and g, choose
a secret key K i and broadcast g K i mod N to form the public key. To
start the actual protocol, each participant broadcasts a joint encryption
of his own input bits. To evaluate an XOR-gate, everyone simply applies
the XOR-homomorphism. The encrypted output of a NOT-gate can be
found by applying the XOR-homomorphism with a default encryption
of a one, e.g. [1, -1 mod N].
The encryption scheme is not AND-homomorphic, so the evaluation
of an AND-gate will be more troublesome. Suppose the encrypted input
bits for the AND-gate are E(u) and E(v). To compute a joint
encryption E(w) of w = u AND v, proceed as follows:
1 Each participant i chooses random bits b_i and c_i and broadcasts
E(b_i) and E(c_i).
2 Each participant repeatedly applies the XOR-homomorphism to
compute E(x) = E(u XOR b_1 XOR ... XOR b_n) and
E(y) = E(v XOR c_1 XOR ... XOR c_n). Each participant broadcasts
decryption witnesses for these two ciphertexts.
3 Everyone can now decrypt E(x) and E(y). Writing b = b_1 XOR ...
XOR b_n and c = c_1 XOR ... XOR c_n, and repeatedly applying the
fact that (a XOR b) AND c = (a AND c) XOR (b AND c), one can prove
that w = (x XOR b) AND (y XOR c) = (x AND y) XOR (x AND c) XOR
(b AND y) XOR (b AND c).
Each participant i is able to compute a joint encryption of
w_i = (x AND c_i) XOR (b_i AND y) XOR (b_i AND c_1) XOR ... XOR
(b_i AND c_n): he knows b_i and c_i (he chose them himself) and he
received encryptions E(b_j) and E(c_j) from the other participants,
so he can compute E(b_i AND c_j) as follows: if b_i = 0, a default
encryption for a zero will do, e.g. [1, 1]. Otherwise, if b_i = 1, then
E(c_j) is a valid substitution for E(b_i AND c_j). E(x AND c_i) and
E(b_i AND y) can be computed in an analogous way. He uses the
XOR-homomorphism to combine all these terms, blinds the result
and broadcasts this as E(w_i).
4 Each participant combines E(w_1), ..., E(w_n) and a default
encryption of x AND y, using the XOR-homomorphism, to form E(w).
When all gates in the circuit have been evaluated, every participant
has a joint encryption of the output bits. Finally, the participants broadcast
decryption witnesses for the output bits to reveal them.
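The share identity underlying steps 3 and 4 can be sanity-checked in plaintext, ignoring the encryption layer (an illustrative test of ours, not part of the protocol):

import java.security.SecureRandom;

// Checks: with x = u XOR b, y = v XOR c, b = XOR_i b_i, c = XOR_i c_i,
// u AND v = (x AND y) XOR w_1 XOR ... XOR w_n, where
// w_i = (x AND c_i) XOR (b_i AND y) XOR XOR_j (b_i AND c_j).
final class AndGateIdentity {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        int n = 4;
        for (int trial = 0; trial < 1000; trial++) {
            boolean u = rnd.nextBoolean(), v = rnd.nextBoolean();
            boolean[] b = new boolean[n], c = new boolean[n];
            boolean bAll = false, cAll = false;
            for (int i = 0; i < n; i++) {
                b[i] = rnd.nextBoolean(); c[i] = rnd.nextBoolean();
                bAll ^= b[i]; cAll ^= c[i];
            }
            boolean x = u ^ bAll, y = v ^ cAll; // the publicly decrypted values
            boolean w = x && y;                 // start with the public term
            for (int i = 0; i < n; i++) {
                boolean wi = (x && c[i]) ^ (b[i] && y);
                for (int j = 0; j < n; j++) wi ^= (b[i] && c[j]);
                w ^= wi;                        // combine the (blinded) shares
            }
            if (w != (u && v)) throw new AssertionError("identity violated");
        }
        System.out.println("identity holds on all trials");
    }
}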
3.3. SECURE MEETING SCHEDULING
USING SDC
We already showed how to reduce the problem of scheduling a meeting
for n secret agendas to a series of logical AND operations on n secret
bits. For every time slot in the schedule, each negotiator has one secret
input bit: a one if he is available to start the meeting at that time, a zero
if he isn't. Because the Secure Distributed Computing protocol we just
discussed can only handle binary gates, we implement the n-ary AND
operation as a log2(n)-depth tree of binary AND-gates. The output bit
of the circuit indicates if this slot is an appropriate starting time for the
meeting (1) or not (0).
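A sketch of this reduction, with the secure evaluation of a binary AND gate (described above) abstracted to a plain boolean function:

// n-ary AND for one time slot, reduced to a tree of binary AND gates of
// depth ceil(log2(n)). andGate stands for the secure gate evaluation;
// here it is ordinary boolean AND.
final class AndTree {
    static boolean andGate(boolean a, boolean b) { return a && b; }

    static boolean slotFree(boolean[] bits, int lo, int hi) { // range [lo, hi)
        if (hi - lo == 1) return bits[lo];
        int mid = (lo + hi) / 2;
        return andGate(slotFree(bits, lo, mid), slotFree(bits, mid, hi));
    }
}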
3.4. SECURITY ANALYSIS
Franklin and Haber show that their protocol is provably secure against
passive adversaries (i.e., adversaries who follow the rules of the protocol,
but who try to learn as much information from the communication as
possible), given that ElGamal encryption with a composite modulus is
secure. This means that under the assumption of passive adversaries,
complete privacy of all agendas is guaranteed (except of course for the
fact that everybody is available at the time the meeting is scheduled).
However, the proof Franklin and Haber give uses a more complicated
encryption scheme and they mention the one we used here as an alter-
native. To the best of our knowledge, the security of this encryption
scheme is still an open problem.
The protocol is not provably secure against active adversaries (who
can deviate from the protocol). For example, a malicious participant can
flip the output of an AND gate by XORing his share E(w_i) with the encryption
of a one. For this particular application however, the most obvious
attacks don't seem to give rise to substantial information leaks. The
SDC protocol presented by Chaum, Damgard and van de Graaf in [1]
provides provable security against active adversaries at the cost of higher
bandwidth requirements.
3.5. COMPLEXITY
Let's have a closer look at the message complexity of this protocol.
The same public and private keys can be used for every evaluation. This
means that the initiator's invitation message can contain N and g (C1 =
n - 1 messages), while g^(K_i) can be wrapped together with the message
that announces each participant's position towards the invitation (C2 =
(n - 1)(n - 1) messages).
The evaluation of a single AND gate consists of four phases, of which
the first three need an all-to-all broadcast (consuming n(n - 1) messages
each), while the last one doesn't need any communication. Since the
AND gates within one level of the tree can be evaluated in parallel, the
evaluation of the entire circuit takes C3 = 3 ceil(log2(n)) n(n - 1)
messages. The broadcast of the encrypted input bits of the circuit and the
broadcast of decryption witnesses for the output bit both take another
n(n - 1) messages.
If l slot evaluations are needed before a suitable meeting time is found,
the total message complexity is given by C = C1 + C2 + l (C3 + 2n(n - 1)) =
(n - 1) + (n - 1)^2 + l (3 ceil(log2(n)) + 2) n(n - 1).
If we consider the same example as we did
in the previous section (l = 24), this amounts to 5300 messages for 5
participants and 30330 messages for 10 participants.
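Again the bound is easy to reproduce (using the formula as reconstructed above):

// Worst-case message count of the SDC-based scheduling.
final class SdcProtocolCost {
    static int messages(int n, int l) {
        int depth = (int) Math.ceil(Math.log(n) / Math.log(2));
        int perSlot = (3 * depth + 2) * n * (n - 1); // 3 broadcasts per tree level,
                                                     // plus inputs and output witnesses
        return (n - 1) + (n - 1) * (n - 1) + l * perSlot;
    }
    public static void main(String[] args) {
        System.out.println(messages(5, 24));  // prints 5300
        System.out.println(messages(10, 24)); // prints 30330
    }
}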
Before comparing this result to that of the custom-made protocol in
the previous section, we should notice that only the number of messages
is taken into account, not their size. As |N| should be about 1024 bits
to be secure, the messages in the SDC protocol will be larger than the
messages in the custom-made protocol. However, since the maximum
message length for 10 participants is only 2.5 KB (which easily fits into
a single IP packet), we considered the number of transmitted messages
more relevant than the number of bits that are strictly needed.
It should also be noted that we do not take into account computation
or memory overhead for the protocols. The amount of computation and
storage needed for the SDC protocol is considerably higher than for the
custom-made protocol.
4. USING MOBILE AGENTS
In this section, it will be shown how mobile agents can be used to
reduce the communication overhead of the two solutions for the agenda
scheduling problem. The basic idea is to use mobility to bring agents of
the participants closer together. Of course, a mobile agent needs to trust
his execution platform but we will show that the trust requirements are
less strong than for a classical trusted third party (TTP) solution for
the meeting scheduling problem.
To compare the trust requirements of the dierent approaches, we
use the following simple trust model. We say a participant trusts an
execution site if it believes that: (1) the execution site will correctly
execute any code sent to it by the participant; (2) the execution site will
correctly (i.e., as expected by the participant) handle any data sent to it
by the participant. It also implies that the execution site will maintain
the privacy of the data or the code if this is expected by the participant.
If p trusts E, we denote this as shown in Fig. 3.
Figure 3 Notation for "p trusts E"
To compare bandwidth requirements (for communication overhead),
we make the following simple distinction. High bandwidth is required
to execute one of the discussed protocols. Low bandwidth suffices to
transmit data or agent code. Also intermittent connections (e.g. for devices
that are sometimes disconnected from the network) are considered
low bandwidth. We assume low bandwidth communication is available
between any two parties. If high bandwidth communication is possible
we denote this as shown in Fig. 4.
Figure 4 Notation for high bandwidth connection between E_i and E_j
Based on these simple models of communication and trust, we compare
three options for implementing secure meeting scheduling.
4.1. A TRUSTED THIRD PARTY
The first, perhaps most straightforward option, is to use a globally
trusted third party. Every participant sends its agenda to the TTP who
will compute an appropriate meeting time and disseminate the result to
the participants. Of course, data must be sent to the TTP, through an
authenticated and safe channel. This can be accomplished via conventional
cryptographic techniques.
It is clear that this approach has a very low communication overhead:
the data is only sent once to the TTP; later, every participant receives
the result of the computation. However, every participant should unconditionally
trust the TTP. For the case of 4 participants, the situation
is as shown in Fig. 5.
Figure 5 Situation with 4 participants and a TTP.
It is not clear whether n distrustful participants will easily agree on
one single trustworthy third party. This requirement of one single globally
trusted execution site is the main disadvantage of this approach.
4.2. CRYPTOGRAPHIC SECURE MEETING SCHEDULING
The second option is the use of cryptographic techniques (as discussed
in previous sections) that make the use of a TTP superfluous.
The trust requirements are really minimal: every participant only
trusts its own execution site.
Although this option is very attractive, it should be clear from the previous
sections that the communication overhead might be too high to be
practically useful in a general networked environment. High bandwidth
is required between all of the participants. For the case of 4 participants,
the situation can be summarized as shown in Fig. 6.
Figure 6 Situation with 4 participants without a TTP.
4.3. USING MOBILE AGENTS
Finally, a third solution tries to combine the two previous options:
the communication overhead is remedied by introducing semi-trusted
execution sites and mobile agents.
In this approach, every participant p_i sends its representative, agent
a_i, to a trusted execution site E_j. The agent contains a copy of the
agenda and is capable of running a secure meeting scheduling protocol.
It is allowed that different participants send their agents to different
sites, the only restriction being that the sites should be located closely
to each other, i.e., should have high bandwidth communication between
them.
The amount of long distance communication is moderate: every participant
sends its agent to a remote site, and receives the result from
its agent. The agents use a cryptographic protocol, which unfortunately
involves a high communication overhead. However, since the agents are
executing on sites that are near each other, the overhead of the protocol
is acceptable. For a situation with 4 participants, we could have the
situation as depicted in Fig. 7.
Figure 7 Situation with 4 participants using mobile agents
No high bandwidth communication between the participants is necessary,
and there is no longer a need for one single trusted execution site.
4.4. CURRENT IMPLEMENTATION WITH AGLETS
"agenTa" is the name of our prototype implementation of a secure
meeting scheduling system. Currently it uses the custom-made protocol
described in this paper.
We have used the Aglets SDK 1.1 beta 3 [7], a mobile agents system
development kit which was released to the open source community by
its creator, IBM. The SDK contains an agent server, the API needed to
write agents in Java (called aglets), examples and documentation. The
prototype implementation of agenTa has around 3500 lines of Java code.
For the inter-agent communication KQML (Knowledge Query and
Manipulation Language) was chosen. KQML was developed at the University
of Maryland Baltimore County, and enhanced with security capabilities
in [8].
In our implementation, each user's scheduling application is modular,
the user interface, the agenda management and the negotiation being
performed by distinct intercommunicating aglets. Only the negotiator
aglets of all users take advantage of their mobility to gather on a host
where they carry out the negotiation protocol by local communication.
There are no language limitations for implementing the custom-made
protocol. Communication relies on transmitting character strings. There-
fore, agents implemented with other agent platforms and in other programming
languages can take part in the negotiation, provided the platforms
can interoperate.
5. CONCLUSION
This paper has shown that there exist several techniques for secure
meeting scheduling. Moreover, a trade-off can be made between the level
of security that can be obtained, the degree of trust that is required, and
the amount of overhead that is caused by the protocol.
When a TTP is used, a meeting can be scheduled very efficiently. The
custom-made protocol has more overhead, but does not require trust in
a third party. An SDC protocol is more secure than our custom-made
protocol, but it is also much less efficient.
Using mobile agents when implementing any protocol can improve the
efficiency, while still avoiding the need for one single trusted entity.
Acknowledgements
This work was supported in part by the FWO-Vlaanderen project
G.0358.99 and by the Concerted Research Action (GOA) Mesto-666.
Joris Claessens is funded by a research grant of the Flemish Institute
for the Promotion of Industrial Scientific and Technological Research
(IWT). Gregory Neven is an FWO-Vlaanderen Aspirant, and Frank
Piessens is an FWO-Vlaanderen Postdoctoral fellow.
--R
James May
--TR
Complexity and security of distributed protocols
Semi-trusted Hosts and Mobile Agents
Multiparty Computations Ensuring Privacy of Each Party''s Input and Correctness of the Result
Introduction to Secure Computation
Secure Meeting Scheduling with
--CTR
Marius Calin Silaghi, Meeting Scheduling Guaranteeing n/2-Privacy and Resistant to Statistical Analysis (Applicable to any DisCSP), Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.711-715, September 20-24, 2004 | meeting scheduling;secure distributed computation;mobile agents |
510800 | Extended description techniques for security engineering. | There is a strong demand for techniques to aid development and modelling of security critical systems. Based on general security evaluation criteria, we show how to extend the system structure diagrams of the CASE tool AutoFocus (which are related to UML-RT collaboration diagrams) to allow modelling of security critical systems, in particular concerning components and channels. Both high-level and low-level models of systems are supported, and the notion of security patterns is introduced to provide generic solutions for security requirements. We explain our approach on the example of an electronic purse card system. | INTRODUCTION
In developing distributed systems-in particular applications that communicate
over open networks like the Internet-security is an extremely important
issue. Many customers are reluctant to take part in electronic business, as confirmed
by recent attacks on well-known portal sites or cases of credit card fraud
via the Internet. To overcome their reluctance and make E-Commerce and
mobile systems a success, these systems need to become considerably more
trustworthy.
To solve this problem, on the one hand there are highly sophisticated collections
of evaluation criteria that security critical systems have to meet, like
the ITSEC security evaluation criteria (ITSEC, 1990) or their recent successor,
the Common Criteria (CC) (Common Criteria, 1999). The Common Criteria
describe security related functionality to be included into a system, like
authentication, secrecy or auditing, and evaluation assurance levels (EALs) for its
development. (This work was supported by the German Ministry of Economics
within the FairPay project.) The strictest level is EAL7 (Common Criteria, 1999, part 3, p.
66), where a formal representation of the high-level design is required.
On the other hand, research has produced many formal methods to describe
and verify properties of security critical systems, ranging from protocol modelling
and verification (Burrows et al., 1989; Lowe, 1996; Paulson, 1998; Thayer
et al., 1998) to models for access control, like the Bell-LaPadula model (Bell
and LaPadula, 1973) or the notion of non-interference (Goguen and Meseguer,
1998).
Such formal methods however are rarely used in practice, because they require
expert knowledge and are costly and time-consuming. Therefore, an integrated
software development process for security critical systems is needed,
supported by CASE tools and using graphical description techniques. This reduces
cost, as security problems are discovered early in the development process
when it is still inexpensive to deal with them, and proof of meeting evaluation
criteria is a byproduct of software development. In addition, systems developed
along a certain integrated "security engineering process" will be much
more trustworthy.
In this paper, we describe a first step towards using extended description
techniques for security modelling. As a basis for our work, we use the AutoFocus
description techniques. The AutoFocus (Huber et al., 1998b; Slo-
tosch, 1998; Broy and Slotosch, 1999) system structure diagrams are related
to UML-RT collaboration diagrams and describe a system as a set of communicating
components. The corresponding CASE tool developed at Munich
University of Technology supports user-friendly graphical system design and
incorporates simulation, code and test case generation and formal verification
of correctness. The main advantage of the use of AutoFocus over a more
general description technique as UML is its simplicity. Besides, there exists
a clear semantics for the description techniques (for general UML description
techniques, defining a formal semantics is still subject of ongoing research). As
a start, in our work we focus on security properties of communication channels.
We show how certain important security properties of communication channels,
such as secrecy and authenticity, can be modelled at different abstraction levels
of the system design and explain our ideas on the transition between these
levels, using generic security patterns. We give definitions of the meanings
of our extended description techniques, based on the AutoFocus semantics.
See also (Jürjens, 2001) for first work on integrating access control models into
UML description techniques, and (Lotz, 2000) for formal definitions of security
properties using the Focus method.
This paper is structured as follows. In Section 2, we give a short introduction
to AutoFocus. In Section 3, we present the extensions of AutoFocus
system structure diagrams to model security properties of channels. The usage
of these techniques is demonstrated in Section 4, with the help of an example
model of an electronic purse system. We conclude in Section 5 with a summary
and indicate further work.
2. AutoFocus
AutoFocus/Quest (Huber et al., 1998a; Slotosch, 1998; Philipps and Slo-
tosch, 1999) is a CASE tool recently developed at Munich University of Technology
with the goal to combine user-friendly graphical system design and
support of simulation, code generation and formal verification of correctness.
AutoFocus supports system specification in a hierarchical, view-oriented
way, an approach that is well established and facilitates its use in an industrial
environment. However, it is also based on the well-founded formal background
Focus (Broy et al., 1992), and a fairly elementary underlying concept: communicating
extended Mealy machines.
System specifications in AutoFocus make use of the following views:
System Structure Diagrams (SSDs) are similar to collaboration diagrams
and describe structure and interfaces of a system. In the SSD
view, the system consists of a number of communicating components,
which have input and output ports to allow for sending and receiving
messages of a particular data type. The ports can be connected via chan-
nels, making it possible for the components to exchange data. SSDs can
be hierarchical, i.e. a component belonging to an SSD can have a sub-structure
that is defined by an SSD itself. Besides, the components in an
SSD can be associated with local variables.
Data Type Definitions (DTDs) define the data types used in the model,
with the functional language Quest (Philipps and Slotosch, 1999). In
addition to basic types as integer, user-defined hierarchical data types
are offered that are very similar to those used in functional programming
languages like Gofer (Jones, 1993) or Haskell (Thompson, 1999).
State Transition Diagrams (STDs) represent extended finite automata
and are used to describe the behaviour of a component in an SSD. Transitions
consist of input patterns on the channels, preconditions, output
patterns, and actions setting local variables when the transition is exe-
cuted. As the main focus of this paper is extending SSDs, we will not
describe STDs in detail at this place.
Extended Event Traces (EETs) finally make it possible to describe
exemplary system runs, similar to MSCs (ITU, 1996).
The Quest extensions (Slotosch, 1998) to AutoFocus offer various connections
of AutoFocus to programming languages and formal verification
tools, such as Java code generation, model checking using SMV, bounded model
checking and test case generation (Wimmel et al., 2000).
3. EXTENDING SYSTEM STRUCTURE DIAGRAMS
The main difference between security critical systems and traditional systems
is the consideration of attacks. A potential third party could try to overhear and
manipulate security critical data. To decrease the risk of attacks to a minimum,
special security functionalities are used. For example, encryption is a common
principle to inhibit overhearing of the communication between two agents.
It is state of the art to use graphical description techniques for system spec-
ification. For the specification of security critical systems, we need special
description techniques to deal with the particularities of those systems. We
extend the AutoFocus description techniques to fulfill the needs of security
engineering. In this paper the extensions of the AutoFocus SSDs, mentioned
in Section 2, are described. The extensions of the structure diagrams allow the
definition of security requirements. Furthermore it is possible to specify the
usage of security functionality to fulfill the defined requirements.
We use special tags for the security extensions to the SSDs. These security
tags are assigned to particular diagram parts and have a defined semantics. The
following sections describe the different security extensions made to the SSDs.
3.1. SECURITY CRITICAL SYSTEM PARTS
To model and evaluate security aspects of distributed systems, it is always
necessary to define its security critical parts. The identification of security
critical parts should be done very early within the system development process.
This task is typically part of the analysis phase. In the Common Criteria,
the security critical parts of a system together form the Target Of Evaluation
(TOE). The following definition will make our notion of security criticality
more precise.
Definition 1 (Security Critical System Parts). By security critical parts
of a distributed system we mean parts that deal with data or information
that has to be protected against unauthorized operations (e.g. disclosure,
manipulation, prevention of access etc.). In particular, security critical
system parts are connected with security
requirements such as secrecy, authentication or auditing, and according
to required strictness of evaluation might be subject to formal modelling.
We want to make the distinction visible within the graphical system descrip-
tion. Therefore we annotate security critical system parts with the security tag
-critical-. Both components and channels can be tagged. To mark non critical
system parts, -noncritical- is used. A system part without a -critical- or
Figure 1 Critical System Parts
-noncritical- tag is non critical by default. Figure 1 shows an SSD. It consists
of two security critical components and one security critical channel.
3.2. PUBLIC CHANNELS
The special aspect of security critical systems is the possibility of attacks.
Hostile subjects can manipulate the system by overhearing and manipulating
the communication between the subsystems. To distinguish private communication
channels of a system from public channels, we use the two security
tags -private- and -public-. A -private- channel is a channel a hostile party
has no access to. The hostile subject can neither overhear nor manipulate the
communication that takes place via the channel. Vice versa a hostile party can
overhear and manipulate all communications of a -public- channel. If neither
-private- nor -public- are used, we assume by default that the communication
channel is not publicly accessible.
Definition 2 (Private Channel). A -private- channel has the same semantics
as a normal channel within AutoFocus, i.e. it is a dedicated connection from
one component to another.
Definition 3 (Public Channel). The semantics of a -public- channel, without
the secrecy and authenticity properties introduced in Section 3.6, is defined by the SSDs
shown in Figure 2. Using a -public- channel (Figure 2(a)) is an abbreviation
for having an intruder included in the model that has access to the channel
(Figure 2(b)). The behaviour of the intruder is defined by the threat model-for
example, the intruder usually can overhear, intercept or redirect messages. It
is possible to model this behaviour in a flexible way using AutoFocus
State Transition Diagrams (STDs) (Wimmel and Wisspeintner, 2000).
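As an illustration, the following hypothetical Java fragment gives an intruder that nondeterministically forwards, drops or replays every message on a -public- channel; it stands in for the STD that would define this behaviour in the AutoFocus model, and all names are ours:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative threat model for a -public- channel.
final class Intruder {
    enum Action { FORWARD, DROP, REPLAY }
    private final List<String> overheard = new ArrayList<>();
    private final Random rnd = new Random();

    List<String> onMessage(String msg) {
        overheard.add(msg);                  // the intruder always overhears
        List<String> delivered = new ArrayList<>();
        switch (Action.values()[rnd.nextInt(3)]) {
            case FORWARD: delivered.add(msg); break;
            case DROP:    break;             // message intercepted
            case REPLAY:  delivered.add(msg); delivered.add(msg); break;
        }
        return delivered;
    }
}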
The identification of private and public channels should be done during the
analysis phase of the system development process, right after the identification
of the security critical parts. Every security critical channel must be analyzed
with regard to accessibility-for the other channels, this is optional. The result
of this analysis is documented by using the tags for private and public channels.
Figure 2 Semantics of a Public Channel: (a) -public- Channel; (b) Defined as an Intruder between the Two Agents
3.3. REPLACEABLE COMPONENTS
Conventional system structure diagrams always show a system in normal
operation, e.g. an ordinary electronic purse card component with the specified
functionality communicating with the point-of-sale component. In addition to
manipulation of the communication link (man-in-the-middle attack), another
attack scenario is imaginable: the attacker could try to communicate with the
rest of the system in place of the purse card. In this case, there is no ordinary
purse card in the system, but a faked one (that in particular usually does not
contain private keys of ordinary cards, except if they leaked).
We mark components that can be replaced by faked ones by the attacker with
the -replace- tag, and components that can not be replaced with the -nonre-
place- tag. If neither -replace- nor -nonreplace- is used for a component, the
component is non replaceable by default.
Definition 4 (Replaceable Component). Figure 3 shows the semantics of a
-replace- component. Using a replaceable component (Figure 3(a)) is an abbreviation
for specifying two different system scenarios. The first scenario describes
the structure of the system with the specified component A (Figure 3(b)).
In the second scenario (Figure 3(c)) the attacker exchanges the component by
another component A'. A' has the same component signature as A but has an
arbitrary behaviour that can be defined by the threat model.
In the development process, replaceable components should be identified
during the analysis phase together with the identification of private and public
Figure 3 Semantics of a Replaceable Component: (a) -replace- Component; (b) Scenario 1: Component not Exchanged; (c) Scenario 2: Component Exchanged
channels. It is only necessary to analyze security critical components with
regard to replaceability.
3.4. ENCAPSULATED COMPONENTS
An encapsulated component is a component that only consists of not publicly
accessible subcomponents. In this way an attacker has no possibility to
manipulate or exchange the subsystems of this component. Furthermore, the
communication within the component cannot be overheard. The security tag
-node- is used to mark a component as an encapsulated one. The identification
of encapsulated components is done together with the identification of private
and public channels and replaceable components.
Definition 5 (Consistency Condition of -node-). A -node- component only
consists of -private- channels and -nonreplace- components.
One example for an encapsulated component is an automated teller machine
(ATM). An ATM is encapsulated in a way that unauthorized persons are not
able to manipulate system parts. Overhearing the internal communication is
also not possible.
3.5. ACTOR COMPONENTS
Most systems interact with their system environments. It is often desired to
illustrate the system environment in the graphical system design. Components
that are not part of the system are called actors. We point out actors by using
the tag -actor-. A typical example for an actor is a system user. The system
user interacts with the system without being part of it.
Actor components can never be marked with the -critical- tag. An actor is not
part of the system and therefore there is no need to analyze the actor itself with
respect to security aspects. But an actor can interact with our system in a way
that affects our security requirements. To visualize these critical interactions,
channels between actors and the system components can be annotated with the
-critical- tag.
3.6. SECRECY AND AUTHENTICITY
The most important security properties of communication channels-in addition
to integrity, availability and non-repudiation-are authenticity and secrecy.
For this purpose, we will introduce tags -secret- and -auth- for channels in
the SSDs. The security properties of channels are identified in the high-level
design phase, taking place after the activities of the analysis phase. It is only
necessary to specify security properties for security critical, public channels.
There are many possible definitions for authenticity and secrecy in the security
literature (see (Gollmann, 1996)). In the following, we give a definition
based on our model.
During the high-level design phase, we assume that the defined requirements
of secrecy and authenticity are fulfilled automatically, if the corresponding tags
appear on the channels. Consequently these requirements restrict the possibilities
of an attacker. In the low-level design, the validity of these requirements
has to be ensured by proper mechanisms.
Definition 6 (Secret Channel in High-Level Design). A message sent on a
-secret- channel can only be read by its specified destination component.
Therefore, we can assume in high-level design that a -secret- and -public-
channel can not be read by the intruder. But the intruder could write something
on it.
Figure 4 shows the semantics of a secret and public channel.
Definition 7 (Authentic Channel in High-Level Design). A message received
on an -auth- channel can only come from its specified source
component. Therefore, an -auth- and -public- channel can not be written
Figure 4 Semantics of a Secret and Public Channel: (a) -secret- and -public- Channel; (b) Intruder can Write Data. The "S" circle represents a switch component that distributes all incoming data to all outgoing channels.
by the intruder. But the intruder could possibly read data from it. Figure 5
illustrates the semantics of an authentic channel.
There are some relations between our notion of security critical and secrecy
and authenticity. A security critical channel refers to data that should be
protected against attackers (see Definition 1). Secrecy and authenticity are
security properties. These security properties define concrete requirements
concerning the protection of data against attackers.
Definition 8 (Consistency Condition of -secret- and -auth-). If a channel
is marked to be security critical and the communication is visible for an at-
tacker, the data sent via the channel must be protected in a suitable manner.
In this case, during the high-level design phase the protection of data must
be ensured by a security property. A -critical- and -public- channel must be
-secret- or -auth- or both.
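Consistency conditions like this one lend themselves to mechanical checking. A minimal Java sketch of Definition 8 over a hypothetical tag model (the names are ours, not part of the AutoFocus tool):

import java.util.EnumSet;
import java.util.Set;

enum Tag { CRITICAL, PUBLIC, PRIVATE, SECRET, AUTH }

final class ChannelCheck {
    // Definition 8: a -critical- and -public- channel must be
    // -secret- or -auth- (or both).
    static boolean consistent(Set<Tag> t) {
        if (t.contains(Tag.CRITICAL) && t.contains(Tag.PUBLIC))
            return t.contains(Tag.SECRET) || t.contains(Tag.AUTH);
        return true;
    }
    public static void main(String[] args) {
        System.out.println(consistent(EnumSet.of(Tag.CRITICAL, Tag.PUBLIC)));           // false
        System.out.println(consistent(EnumSet.of(Tag.CRITICAL, Tag.PUBLIC, Tag.AUTH))); // true
    }
}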
3.7. INTEGRITY
We assume that -secret- or -auth- channels also provide message integrity,
i.e. a message received on an -auth- channel is guaranteed not to have been
modified. In future, the integrity property could also be modelled separately.
Figure 5 Semantics of an Authentic and Public Channel: (a) -auth- and -public- Channel; (b) Intruder can Read Data
3.8. CHANNEL REQUIREMENTS IN LOW-LEVEL DESIGN
In the low-level design phase, taking place after the high-level design phase,
the system specification is refined. In this phase, security functionalities can be
used to ensure the security properties and requirements.
We can define the usage of symmetric encryption, asymmetric encryption
and corresponding signatures to realize the requirements of secrecy and
authentication. The security tag -sym- marks a channel. It defines that the
realization of the channel must use a symmetric encryption algorithm to ensure
the secrecy requirements. Furthermore the -asym- tag is used to define the
usage of an asymmetric encryption algorithm.
It is also possible to specify the encryption algorithm that should be used
to guarantee secrecy. The security tag -encalg [Parameters]- can
be used together with a channel to specify the usage of a specific encryption
algorithm. We can specify additional parameters of the algorithm. For example
it is possible to define the key length of the encryption keys. Authenticity can
be realized by using specific authentication protocols. The tag -authprotocol
[Parameters]- defines a specific authentication protocol to be used.
By choosing a specific encryption algorithm for realizing secrecy, we perform
a refinement step. Special encryption drivers are introduced to perform the
encryption and decryption tasks. The data that is sent between the two encryption
drivers is encrypted. To realize a specific authentication protocol, special
protocol drivers are introduced. Furthermore, additional channels between the
protocol drivers are needed to allow bidirectional communication.
After these refinements, the statements on access of the intruder to the channel
in Definition 6 and Definition 7 change for -public- channels: the intruder now
does have read and write access to the channel. The encryption mechanism on
the channel must ensure that the intruder cannot manipulate the channel in an
improper way, so that the security requirements stated by -secret- and -auth-
are still fulfilled.
Definition 9 (Secret and Authentic Channels in Low-Level Design).
In low-level design, -public- channels implementing secret or authentic
communication have to be modelled by including appropriate security
functionality.
A convenient way to do this is using security patterns. Security patterns are
generic solutions for common security problems. Figure 6 shows such a pattern
for the simple case of encryption (guaranteeing secrecy). The communicating
components now include protocol drivers for encryption and decryption of
the messages, so the original channel is replaced by an encrypted one. This
encryption pattern could be extended by including protocol drivers for key
agreement, a public key server or a whole public key infrastructure. Other
security patterns, for instance providing auditing functionality, are possible.
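As an illustration of the pattern, the following Java sketch plays both drivers of Figure 6 in a single process, using AES-GCM from the standard library. Key agreement is assumed to have happened beforehand (e.g. via a key server), and all names are ours:

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Encrypt/Decrypt driver pair turning an open channel into one that
// carries only ciphertext; GCM additionally detects tampering.
final class EncryptionPattern {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();  // shared by both drivers

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);  // fresh IV per message

        // Encrypt driver at the sending component.
        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] onTheWire = enc.doFinal("decreaseBal(10)".getBytes(StandardCharsets.UTF_8));

        // The intruder sees (and may try to alter) onTheWire only.

        // Decrypt driver at the receiving component.
        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(dec.doFinal(onTheWire), StandardCharsets.UTF_8));
    }
}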
4. EXAMPLE-ELECTRONIC PURSE
TRANSACTION SYSTEM
In the previous section, we have introduced extensions for SSDs to deal with
security requirements. Now, we use the extended SSDs to model an example
system from the area of E-Commerce, an electronic purse card system. The
given specification conforms with the Common Electronic Purse Specification
(CEPS), an international standard for an electronic purse environment (CEP-
SCO, 2000).
Figure 7 shows the complete system structure of the electronic purse system, consisting of several sub-components.
The system user possesses an electronic purse card. The card has some
amount of money stored on it and the user can pay with the card at a dealer
using a point-of-sale (POS) terminal. The electronic purse card is involved in
several security requirements. For example, together with other components
the electronic purse card must ensure that an attacker can not steal any money
from the whole transaction system. Therefore the card is a security critical component
(-critical-). Furthermore, the purse card should be realized by a single
Figure 6 The Encryption Security Pattern: (a) -public- Channel using a Specific Encryption Algorithm; (b) Intruder can Read and Write Encrypted Data
integrated chip. It is an encapsulated component (-node-) and consequently
we assume an attacker has no possibility to overhear or manipulate the communication
within the electronic purse card. An attacker could try to produce
a faked card and he could replace the valid card by the faked one. The security
tag -replace- of the component ElectronicPurse visualizes this possibility.
The electronic purse card can communicate with the other components over
its interface. It offers ports to read the balance, decrease the balance and set
the balance to a specific value. The getBal operation of the card is used by the
POS, a card loading terminal at the bank of the card owner (IssuerBank) and by
the CardReader component. The operation of reading the balance is not security
critical because everybody who possesses an electronic purse is allowed to read
the stored balance of the card. Consequently the channels getBal and returnBal
do not have the -critical- tag. But an intruder could build a special adapter
to overhear all communication between the electronic purse card and the other
components. To express this possibility the communication channels getBal,
returnBal, decreaseBal and setBal are marked with the -public- tag.
If the user wants to load money onto his card, he must go to his bank
and use a special card terminal. The channel setBal is used to
transfer money from his bank account to the card. This channel is security
critical, because it affects the security requirement about an attacker stealing
Figure 7 Electronic Purse Card System (components: User, CardReader, ElectronicPurse, POS, IssuerBank, AcquirerBank; channels: pay, loadCard(int), getBal, returnBal(int), decreaseBal(int))
money. We use the property of authentication (-auth-) to ensure that the card
and the card terminal are valid.
The card reader is a simple device for the user to check the amount of money
that is stored on the card. An attacker can exchange the card reader by another
component (-replace-). On the other hand the card reader component is encapsulated
(-node-). No security requirements are defined for this component
and therefore the component is not security critical.
The POS component is a card terminal located at a dealer. The user can insert
the electronic purse card into the terminal in order to pay for goods. The POS is
directly involved in the money transaction process. Therefore this component
is security critical. The POS is encapsulated to prevent manipulations within the
unit. But a malicious dealer has the possibility to exchange the POS terminal
by a faked unit.
The POS component can instruct the electronic purse card to decrease its
balance by a given amount. The operation decreaseBal is part of the money
transaction process and the communication to perform the operation is security
critical. To comply with our security requirement that an intruder cannot steal
any money, we must ensure that only a valid POS is able to decrease the amount
of money on a money card. The POS must authenticate itself in a way that the
card is sure to communicate with a valid POS. This fact is visualized in the SSD
by the -auth- tag annotated to the decreaseBal channel.
The dealer can submit the earned money to his bank (AcquirerBank) using
the cashCredit channel. This channel is security critical. The communication
medium is a standard telephone line and the potential attacker has possibilities
to overhear and manipulate the communication. This fact is expressed using
the -public- tag on this channel. To ensure that an attacker can not overhear or
manipulate the transferred data, the -secret- and -auth- tags are used.
The channel cashCredit between the two banks is used to transfer money
from one bank to the other. The communication takes place via an open network
(-public- channel). We must ensure secrecy and authenticity for this channel
to protect the transaction data. Both bank components are security critical, but
we do not see a risk that a potential attacker can act as a faked bank component.
Thus the -replace- tag is not used for either bank.
Finally let us have a look at the User. The user is not part of our system.
Therefore the -actor- tag is used. The user can initiate the paying process
at a POS. The initiation of the paying process is not security critical (it just
corresponds to inserting the card, whereas the amount of money to be withdrawn
is negotiated outside of the system). The critical part of the paying process takes
place between the money card and the POS. Furthermore, the user can load the
card with some amount of money. During this action, an amount of money
is transferred from his bank account onto the card. This operation is security
critical because an attacker could try to transfer money from a foreign account
to his own electronic purse card. Thus we need some kind of authentication
to perform this operation, e.g. the user must enter a PIN code before the money
transaction is performed.
5. CONCLUSIONS AND FURTHER WORK
This work is only the beginning of an effort to extend graphical description
techniques for distributed systems with security aspects to support methodical
development of security critical systems. We used the CASE tool AutoFocus,
the description techniques of which are related to UML-RT, for its simplicity
and clear semantics and the possibility to give our security extensions a
straightforward and unambiguous meaning.
We showed how to extend AutoFocus system structure diagrams by security
tags, both for high-level and low-level design. The transition from high-level
to low-level design is aided by the possibility to use security patterns. The
description techniques were illustrated with the help of an example from the
field of E-Commerce, an electronic purse card system.
We focused on the consideration of channels and system structure. In the
future, additional security properties such as integrity and availability are to be
included. The specification of channels and components in low-level design
needs to be detailed, using classifications as pointed out in (Eckert, 1998). Be-
sides, it seems very promising to further examine security patterns providing
generic architectures for specific security functionality and evaluate their use
within the development process. The refinement of security requirements and
security functionalities together with its influence on correctness verification is
also part of our research activities.
Also, state transition diagrams (STDs) specifying the behaviour of a component
can be extended in a similar way with security properties. For this
purpose, a natural approach is to classify the data received and sent on the ports and
to use models such as Bell-LaPadula or non-interference, similar to what is done
in (Jürjens, 2001) for Statechart diagrams. When the behaviour of components
is specified, formal proofs can be carried out (by hand or automatically via
model checking) that the specified security properties are fulfilled.
EETs (extended event traces) can also be enriched by cryptographic primitives
and security properties, and thus be used to specify and verify security
functionality of a component. Examining software development of security
critical systems with the help of AutoFocus EETs (using protocols from the
CEPS purse card system as a case study) is the subject of ongoing work.
Acknowledgments
Helpful comments and encouragement from Oscar Slotosch are gratefully acknowledged.
--R
Secure computer systems: Mathematical foundations and model.
The Design of Distributed Systems-An Introduction to FOCUS
Enriching the Software Development Process by Formal Methods.
A logic of authentication.
Common criteria for information technology security evaluation version 2.1.
Security Policy and Security Models.
What do We Mean by Entity Authentication?
Towards Development of Secure Systems using UML.
Formally Defining Security Properties with Relations on Streams.
Breaking and fixing the Needham-Schroeder Public-Key Protocol using FDR
The inductive approach to verifying cryptographic protocols
The Quest for Correct Systems: Model Checking of Diagrams and Datatypes.
Quest: Overview over the Project.
Strand Spaces: Why is a security protocol correct?
Haskell: The Craft of Functional Programming.
Specification Based Test Sequence Generation with Propositional Logic.
The Needham-Schroeder Protocol- an AutoFocus Case Study
--TR
The inductive approach to verifying cryptographic protocols
The Haskell
Towards Development of Secure Systems Using UMLsec
Breaking and Fixing the Needham-Schroeder Public-Key Protocol Using FDR
Enriching the Software Development Process by Formal Methods
QUEST
The Quest for Correct Systems
Traffic Lights - An AutoFocus Case Study
Tool Supported Specification and Simulation of Distributed Systems
What do we mean by entity authentication?
--CTR
Monika Vetterling , Guido Wimmel , Alexander Wisspeintner, Secure systems development based on the common criteria: the PalME project, ACM SIGSOFT Software Engineering Notes, v.27 n.6, November 2002
Monika Vetterling , Guido Wimmel , Alexander Wisspeintner, Secure systems development based on the common criteria: the PalME project, Proceedings of the 10th ACM SIGSOFT symposium on Foundations of software engineering, November 18-22, 2002, Charleston, South Carolina, USA
Jan Jürjens, Modelling audit security for Smart-Card payment schemes with UML-SEC, Proceedings of the 16th international conference on Information security: Trusted information: the new decade challenge, June 11-13, 2001, Paris, France
Martin Deubler , Johannes Grünbauer , Jan Jürjens , Guido Wimmel, Sound development of secure service-based systems, Proceedings of the 2nd international conference on Service oriented computing, November 15-19, 2004, New York, NY, USA
Folker den Braber , Theo Dimitrakos , Bjørn Axel Gran , Mass Soldal Lund , Ketil Stølen , Jan Øyvind Aagedal, The CORAS methodology: model-based risk assessment using UML and UP, UML and the unified process, Idea Group Publishing, Hershey, PA, | security patterns;UML-RT;security properties;formal methods;security engineering;autofocus;requirements engineering;graphical description techniques;design patterns;CASE;software engineering |
510866 | Write barrier removal by static analysis. | We present a set of static analyses for removing write barriers in programs that use generational garbage collection. To our knowledge, these are the first analyses for this purpose. Our Intraprocedural analysis uses a flow-sensitive pointer analysis to locate variables that must point to the most recently allocated object, then eliminates write barriers on stores to objects accessed via one of these variables. The Callee Type Extension incorporates information about the types of objects allocated in invoked methods, while the Caller Context Extension incorporates information about the most recently allocated object at call sites that invoke the currently analyzed method. Results from our implemented system show that our Full Interprocedural analysis, which incorporates both extensions, can eliminate the majority of the write barriers in most of the programs in our benchmark set, producing modest performance improvements of up to 7% of the overall execution time. Moreover, by dynamically instrumenting the executable, we are able to show that for all but two of our nine benchmark programs, our analysis is close to optimal in the sense that it eliminates the write barriers for almost all store instructions observed not to create a reference from an older object to a younger object. | INTRODUCTION
Generational garbage collectors have become the memory
management alternative of choice for many safe languages.
The basic idea behind generational collection is to segregate
objects into different generations based on their age.

[Footnote: This research was supported in part by an NSF Fellowship,
DARPA Contract F33615-00-C-1692, NSF Grant CCR00-86154, and NSF
Grant CCR00-63513.]

Generations containing recently allocated objects are typically
collected more frequently than older generations; as young
objects age by surviving collections, the collector promotes
them into older generations. Generational collectors therefore
work well for programs that allocate many short-lived
objects and some long-lived objects: promoting long-lived
objects into older generations enables the garbage collector
to quickly scan the objects in younger generations.
Before it scans a generation, the collector must locate all references
into that generation from older generations. Write
barriers are the standard way to locate these references: at
every instruction that stores a heap reference into an object,
the compiler inserts code that updates an intergenerational
reference data structure. This data structure enables the
garbage collector to find all references from objects in older
generations to objects in younger generations and use these
references as roots during the collections of younger gen-
erations. The write barrier overhead has traditionally been
accepted as part of the cost of using a generational collector.
This paper presents a set of new program analyses that enables
the compiler to statically eliminate write barriers for
instructions that never create a reference from an object in
an older generation to an object in a younger generation.
The basic idea is to use pointer analysis to locate store instructions
that always write the most recently allocated ob-
ject. Because this object is the youngest object, such a store
instruction will never create a reference from an older object
to a younger object. The write barrier for this instruction is
therefore superfluous and the transformation eliminates it.¹
We have implemented several analyses that use this basic
approach to write barrier elimination:
Intraprocedural Analysis: This analysis analyzes
each method separately from all other methods. It
uses a flow-sensitive, intraprocedural pointer analysis
to find variables that must refer to the most recently
allocated object. At method entry, the analysis conservatively
assumes that no variable points to the most
recently allocated object. After each method invocation
site, the analysis also conservatively assumes that
no variable refers to the most recently allocated object.

[Footnote 1: This analysis assumes the most recently allocated
object is always allocated in the youngest generation. In some
cases it may be desirable to allocate large objects in older
generations. A straightforward extension of our analysis would
statically identify objects that might be allocated in older
generations and suppress write barrier elimination for stores
that write these objects.]
Callee Type Extension: This extension augments
the Intraprocedural analysis with information from invoked
methods. It finds variables that refer to the object
most recently allocated within the currently analyzed
method (the method-youngest object). It also
tracks the types of objects allocated by each invoked
method. For each program point, it extracts a pair
⟨V, T⟩, where V is the set of variables that refer to the
method-youngest object and T is a set of the types of
objects potentially allocated by methods invoked since
the method-youngest object was allocated. If a store
instruction writes a reference to an object o of class C
into the method-youngest object, and C is not a supertype
of any type in T, the transformation can eliminate
the write barrier: the method-youngest object
is younger than the object o.
Caller Context Extension: This extension augments
the Intraprocedural analysis with information about
the points-to information at call sites that may invoke
the currently analyzed method. If the receiver object
of the currently analyzed method is the most recently
allocated object at all possible call sites, the algorithm
can assume that the this variable refers to the most
recently allocated object at the entry point of the currently
analyzed method.
Full Interprocedural Analysis: This analysis combines the Callee
Type Extension and the Caller Context Extension to
obtain an analysis that uses both type information
from callees and points-to information from callers.
Our experimental results show that, for our set of benchmark
programs, the Full Interprocedural analysis is often
able to eliminate a substantial number of write barriers, producing
modest overall performance improvements of up to
a 7% reduction in the total execution time. Moreover, by
instrumenting the benchmarks to dynamically observe the
age of the source and target objects at each store instruction,
we are able to show that in all but two of our nine bench-
marks, the analysis is able to eliminate the write barriers
at virtually all of the store instructions that do not create
a reference from an older object to a younger object during
the execution on the default input from the benchmark
suite. In other words, the analysis is basically optimal for
these benchmarks. Finally, this optimality requires information
from both the calling context and the called methods.
Neither the Callee Type Extension nor the Caller Context
Extension by itself is able to eliminate a significant number
of write barriers.
This paper provides the following contributions:
Write Barrier Removal: It identifies write barrier
removal as an effective means of improving the performance
of programs that use generational garbage
collection.
Analysis Algorithms: It presents several new static
analysis algorithms that enable the compiler to automatically
remove unnecessary write barriers. To the
class TreeNode {
    TreeNode left;
    TreeNode right;
    Integer depth;

    static public void main(String[] arg) {
        TreeNode t = buildTree(10);   // tree depth chosen arbitrarily for illustration
    }

    void linkDepth(int d) {
        depth = new Integer(d);
    }

    void linkTree(TreeNode l, TreeNode r, int d) {
1:      left = l;
        linkDepth(d);
2:      right = r;
    }

    static TreeNode buildTree(int d) {
        if (d <= 0) return null;
        TreeNode l = buildTree(d - 1);
        TreeNode r = buildTree(d - 1);
        TreeNode t = new TreeNode();
        t.linkTree(l, r, d);
        return t;
    }
}

Figure 1: Binary Tree Example
best of our knowledge, these are the first algorithms
to use program analysis to eliminate write barriers.
Experimental Results: It presents a complete set of
experimental results that characterize the effectiveness of
the analyses on a set of benchmark programs. These
results show that the Full Interprocedural analysis is
able to remove the majority of the write barriers for
most of the programs in our benchmark suite, producing
modest performance benefits of up to a 7% reduction
in the total execution time.
The remainder of this paper is structured as follows. Section
2 presents an example that illustrates how the algorithm
works and how it can be used to remove unnecessary write
barriers. Section 3 presents the analysis algorithms. We
discuss experimental results in Section 4, related work in
Section 5, and conclude in Section 6.
2. AN EXAMPLE
Figure 1 presents a binary tree construction example. In
addition to the left and right fields, which implement
the tree structure, each tree node also has a depth field
that refers to an Integer object containing the depth of
the subtree rooted at that node. In this example, the main
method invokes the buildTree method, which calls itself
recursively to create the left and right subtrees before creating
the root TreeNode. The linkTree method links the left
and right subtrees into the current node, and invokes
the linkDepth method to allocate the Integer object that
holds the depth and link this new object into the tree.
We focus on the two store instructions generated from lines
1 and 2 in Figure 1; these store instructions link the left and
right subtrees into the receiver of the linkTree method. In
the absence of any information about the relative ages of
the three objects involved (the left tree node, the right tree
node, and the receiver), the implementation must conservatively
generate write barriers at each store operation. But
in this particular program, these write barriers are superfluous:
the receiver object is always younger than the left
and right tree nodes. This program is an example of a common
pattern in many object-oriented programs in which the
program allocates a new object, then immediately invokes
a method to initialize the object. Write barriers are often
unnecessary for these assignments because the object being
initialized is often the most recently allocated object.²
In our example, the analysis allows the compiler to omit
the unnecessary write barriers as follows. The analysis first
determines that, at all call sites that invoke the linkTree
method, the receiver object of linkTree is the most recently
allocated object. It then analyzes the linkTree method with
this information. Since no allocations occur between the entry
point of the linkTree method and store instruction at
line 1, the receiver object remains the most recently allocated
object, so the write barrier at this store instruction
can be safely removed.
In between lines 1 and 2, the linkTree method invokes the
linkDepth method, which allocates a new Integer object
to hold the depth. After the call to linkDepth, the receiver
object is no longer the most recently allocated object. But
during the analysis of the linkTree method, the algorithm
tracks the types of the objects that each invoked method
may create. At line 2, the analysis records the fact that
the receiver referred to the most recently allocated object
when the linkTree method was invoked, that the linkTree
method itself has allocated no new objects so far, and that
the linkDepth method called by the linkTree method allocates
only Integer objects. The store instruction from line
2 creates a reference from the receiver object to a TreeNode
object. Because TreeNode is not a superclass of Integer,
the referred TreeNode object must have existed when the
linkTree method started its execution. Because the receiver
was the most recently allocated object at that point,
the store instruction at line 2 creates a reference to an object
that is at least as old as the receiver. The write barrier at
line 2 is therefore superfluous and can be safely removed.
3. THE ANALYSIS
Our analysis has the following structure: it consists of a
purely intraprocedural framework, and two interprocedural
extensions. The first extension, which we call the Callee
Type Extension, incorporates information about called meth-
ods. The second extension, which we call the Caller Context
Extension, incorporates information about the calling
context. With these two extensions, which can be applied
separately or in combination, we have a set of four analyses,
which are given in Figure 2.
[Footnote 2: Note that even for the common case of constructors that
initialize a recently allocated object, the receiver of the constructor
may not be the most recently allocated object:
object allocation and initialization are separate operations
in Java bytecode, and other object allocations may occur
between when an object is allocated and when it is initialized.]
                       With Callee        With Caller
                       Type Extension     Context Extension
Intraprocedural        No                 No
Callee Only            Yes                No
Caller Only            No                 Yes
Full Interprocedural   Yes                Yes

Figure 2: The Four Analyses
The remainder of this section is structured as follows. We
present the analysis features in Section 3.1 and the program
representation in Section 3.2. In Section 3.3 we present the
Intraprocedural analysis. We present the Callee Only analysis
in Section 3.4, and the Caller Only analysis in Section 3.5.
In Section 3.6, we present the Full Interprocedural analysis.
Finally, in Section 3.7, we describe how the analysis results
are used to remove unnecessary write barriers.
3.1 Analysis features
Our analyses are flow-sensitive, forward data flow analyses
that compute must points-to information at each program
point. The precise nature of the computed data flow facts
depends on the analysis. In general, the analyses work with
a set of variables V that must point to the object most
recently allocated by the current method, and optionally a
set of types T of objects allocated by invoked methods.
3.2 Program Representation
In the rest of this paper, we use v, v0, v1, ... to denote
local variables, m, m0, m1, ... to denote methods, and C, C0,
C1, ... to denote types. The statements that are relevant to
our analyses are as follows: the object allocation statement
"v = new C", the move statement "v1 = v2", and the call
statement "v0 = CALL m(v1, ..., vj)". In the given form,
the first parameter to the call, v1, points to the receiver
object if the method m is an instance method.³
We assume that a preceding stage of the compiler has constructed
a control flow graph for each method and a call
graph for the entire program. We use entry_m to denote the
entry point of the method m. For each statement st in the
program, pred(st) is the set of predecessors of st in the
control flow graph. We use •st to denote the program point
immediately before st, and st• to denote the program point
immediately after st. For each such program point p (of
the form •st or st•), we denote A(p) to be the information
computed by the analysis for that program point. We use
Callers(m) to denote the set of call sites that may invoke
the method m.
3.3 The Intraprocedural Analysis
The simplest of our set of analyses is the Intraprocedural
analysis. It is a flow-sensitive, forward data flow analysis that
generates, for each program point, the set of variables that
must point to the most recently allocated object, known as
the m-object. We call a variable that points to the m-object
an m-variable.
[Footnote 3: In Java, an instance method is the same as a non-static
method.]

st = "v = new C"                      {v}
st = "v0 = CALL m(v1, ...)"           ∅
st = "v1 = v2", where v2 ∈ V          V ∪ {v1}
any other assignment to v             V \ {v}
other statements                      V

Figure 3: Transfer Functions for the Intraprocedural Analysis

The property lattice is P(Var) (the powerset of the set of
variables Var) with normal set inclusion as the ordering relation,
where Var is the set of all program variables. The
operator used to combine data flow facts at control-flow
merge points is the usual set intersection operator: ⊓ = ∩.
Figure 3 presents the transfer functions for the analysis. In
the case of an allocation statement "v = new C", the new
object clearly becomes the most recently allocated object.
Since v is the only variable pointing to this newly-allocated
object, the transfer function returns the singleton {v}. For
a call statement "v0 = CALL m(v1, ...)", the transfer
function returns ∅, since in the absence of any interprocedural
information, the analysis must conservatively assume
that the called method may allocate any number or type of
objects. For a move statement "v1 = v2" where the source of
the move, v2, is an m-variable, the destination of the move,
v1, becomes an m-variable. The transfer function therefore
returns the union of the current set of m-variables with the
singleton {v1}. For a move statement where the source of the
move is not an m-variable, or for any other type of assignment
(i.e., a load from a field or a static field), the destination
of the move may not be an m-variable after the move.
The transfer function therefore returns the current set of
m-variables less the destination variable. Other statements
leave the set of m-variables unchanged.
The analysis result satisfies the following equations:

    A(•st) = ∅                                    if st = entry_m
    A(•st) = ⊓ { A(st'•) | st' ∈ pred(st) }       otherwise
    A(st•) = f_st(A(•st)),

where f_st is the transfer function for st given in Figure 3.
The first equation states that the analysis result at the program
point immediately before st is ∅ if st is the entry
point of the method; otherwise, the result is the meet of
the analysis results for the program points immediately after
the predecessors of st. As we want to compute the set
of variables that definitely point to the most recently allocated
object, we use the meet operator (set intersection).
The second equation states that the analysis result at the
program point immediately after st is obtained from applying
the transfer function for st to the analysis result at the
program point immediately before st.
The analysis starts with the set of m-variables initialized
to the empty set for the entry point of the method and to the
full set of variables Var (the top element of our property
lattice) for all the other program points, and uses an iterative
algorithm to compute the greatest fixed point of the
aforementioned equations under subset inclusion.
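To make this iteration concrete, the following Java sketch shows one possible implementation; the Stmt representation, its Kind tags, and the control flow accessors are simplifications assumed for this illustration rather than the intermediate representation actually used in our compiler.

    import java.util.*;

    class IntraproceduralAnalysis {
        enum Kind { NEW, CALL, MOVE, OTHER_ASSIGN, OTHER }

        // A hypothetical, simplified statement representation.
        static class Stmt {
            Kind kind; String def; String src;       // def: assigned variable, src: move source
            List<Stmt> preds = new ArrayList<>();    // control flow predecessors
            List<Stmt> succs = new ArrayList<>();    // control flow successors
            Set<String> in, out;                     // A(•st) and A(st•)
        }

        // The transfer functions of Figure 3.
        static Set<String> transfer(Stmt st, Set<String> V) {
            Set<String> out = new HashSet<>(V);
            switch (st.kind) {
                case NEW:          out.clear(); out.add(st.def); break;  // v = new C
                case CALL:         out.clear(); break;                   // callee may allocate
                case MOVE:         if (V.contains(st.src)) out.add(st.def);
                                   else out.remove(st.def);
                                   break;                                // v1 = v2
                case OTHER_ASSIGN: out.remove(st.def); break;            // load, etc.
                default:           break;                                // unchanged
            }
            return out;
        }

        // Worklist iteration computing the greatest fixed point.
        static void analyze(List<Stmt> stmts, Stmt entry, Set<String> allVars) {
            for (Stmt st : stmts) {                  // initialize to the top element Var
                st.in = new HashSet<>(allVars);
                st.out = new HashSet<>(allVars);
            }
            entry.in = new HashSet<>();              // A(•entry) = empty set
            Deque<Stmt> work = new ArrayDeque<>(stmts);
            while (!work.isEmpty()) {
                Stmt st = work.poll();
                if (st != entry) {                   // meet: intersect predecessor results
                    Set<String> in = new HashSet<>(allVars);
                    for (Stmt p : st.preds) in.retainAll(p.out);
                    st.in = in;
                }
                Set<String> out = transfer(st, st.in);
                if (!out.equals(st.out)) {           // re-examine successors on change
                    st.out = out;
                    work.addAll(st.succs);
                }
            }
        }
    }

Because the transfer functions are monotone and the lattice is finite, the worklist loop terminates with the greatest fixed point described above.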
3.4 The Callee Only Analysis
The Callee Type Extension builds upon the framework of
the Intraprocedural analysis, and extends it by using information
about the types of objects allocated by invoked
methods.
This extension stems from the following observation. The
Intraprocedural analysis loses all information at call sites because
it must conservatively assume that the invoked method
may allocate any number or type of objects. The Callee
Type Extension allows us to retain information across a call
by computing summary information about the types of the
objects that the invoked methods may allocate.
To do so, the Callee Type Extension relaxes the notion of
the m-object. In the Intraprocedural analysis, the m-object
is simply the most recently allocated object. In the Callee
Type Extension, the m-object is the object most recently allocated
by any statement in the currently analyzed method.
The analysis then computes, for each program point, a tuple
⟨V, T⟩ containing a variable set V and a type set T.
The variable set V contains the variables that point to the
m-object (the m-variables), and the type set T contains the
types of objects that may have been allocated by methods
invoked since the allocation of the m-object.
The property lattice is now P(Var) × P(Types),
where Var is the set of all program variables and Types is the
set of all types used by the program. The ordering relation
on this lattice is ⟨V1, T1⟩ ⊑ ⟨V2, T2⟩ iff V1 ⊆ V2 and T1 ⊇ T2,
and the corresponding meet operator is
⟨V1, T1⟩ ⊓ ⟨V2, T2⟩ = ⟨V1 ∩ V2, T1 ∪ T2⟩.
The top element is ⟨Var, ∅⟩. This lattice is in fact
the cartesian product of the lattices (P(Var), ⊆)
and (P(Types), ⊇). These two lattices have
different ordering relations because their elements have different
meanings: V carries must information, while T carries
may information.
Figure 4 presents the transfer functions for the Callee Only
analysis. Except for call statements, the transfer functions
treat the variable set component of the tuple in the same
way as in the Intraprocedural analysis. For call statements
of unanalyzable methods (for example, native methods), the
transfer function produces the (very) conservative approximation
⟨∅, ∅⟩. For other call statements, the transfer function
returns the variable set unchanged, but adds to the type
set the types of objects that may be allocated during the call.
Due to dynamic dispatch, the method invoked at st may be
one of a set of methods, which we obtain from the call graph
using the auxiliary function Callees(st). To determine the
types of objects allocated by any particular method, we use
another auxiliary function Allocated Types. The set of
types that may be allocated during the call at st is simply
the union of the result of the Allocated Types function
applied to each component of the set Callees(st). The
only other transfer function that modifies the type set is the
allocation statement, which returns ∅ as the second component
of the tuple.

st = "v = new C"                              ⟨{v}, ∅⟩
st = "v0 = CALL m(v1, ...)", m unanalyzable   ⟨∅, ∅⟩
st = "v0 = CALL m(v1, ...)"                   ⟨V, T ∪ ⋃_{m ∈ Callees(st)} Allocated_Types(m)⟩
st = "v1 = v2", where v2 ∈ V                  ⟨V ∪ {v1}, T⟩
any other assignment to v                     ⟨V \ {v}, T⟩
other statements                              ⟨V, T⟩

Figure 4: Transfer Functions for the Callee Only Analysis
The Callees function can be obtained directly from the
program call graph, while the Allocated Types function
can be efficiently computed using a simple flow-insensitive
analysis that determines the least fixed point for the equation
given in Figure 5.
The analysis solves the data flow equations in Figure 4 using
a standard work list algorithm. It starts with the entry point
of the method initialized to ⟨∅, ∅⟩ and all other program
points initialized to the top element ⟨Var, ∅⟩. It computes
the greatest fixed point of the equations as the solution.
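A minimal sketch of the Allocated_Types least fixed point computation follows; the Method and CallSite interfaces are assumed stand-ins for the compiler's call graph data structures, not part of the paper.

    import java.util.*;

    // Hypothetical call graph interfaces for this sketch.
    interface Method { Set<String> directlyAllocatedClasses(); List<CallSite> callSites(); }
    interface CallSite { List<Method> callees(); }

    class AllocatedTypes {
        static Map<Method, Set<String>> compute(Collection<Method> methods) {
            Map<Method, Set<String>> types = new HashMap<>();
            for (Method m : methods)                  // seed with local "new C" statements
                types.put(m, new HashSet<>(m.directlyAllocatedClasses()));
            boolean changed = true;
            while (changed) {                         // iterate to the least fixed point
                changed = false;
                for (Method m : methods)
                    for (CallSite st : m.callSites())
                        for (Method callee : st.callees())   // Callees(st)
                            if (types.get(m).addAll(types.get(callee)))
                                changed = true;
            }
            return types;
        }
    }

Since each type set can only grow and the set of types is finite, the iteration terminates with the least fixed point of the equation in Figure 5.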
3.5 The Caller Only Analysis
The Caller Context Extension stems from the observation
that the Intraprocedural analysis has no information about
the m-object at the entry point of the method. The Caller
Context Extension augments this analysis to determine if
the m-object is always the receiver of the currently analyzed
method. If so, it analyzes the method with the this variable
as an element of the set of variables V that must point to
the m-object at the entry point of the method.
With the Caller Context Extension, the property lattice,
associated ordering relation, and meet operator are the same
as for the Intraprocedural analysis. Figure 6 presents the
additional data flow equation that defines the data flow result
at the entry point of each method. The equation basically
states that if the receiver object of the method is the m-
object at all call sites that may invoke the method, then
the this variable refers to the m-object at the start of the
method. Note that because class (static) methods have no
receiver, V is always ∅ at the start of these methods. It is
straightforward to extend this treatment to handle call sites
in which an m-object is passed as a parameter other than
the receiver.
Within strongly-connected components of the call graph, the
analysis uses a fixed point algorithm to compute the greatest
fixed point of the combined interprocedural and intraprocedural
equations. It initializes the analysis with {this} at
each method entry point, Var at all other program points
within the strongly-connected component, then iterates to
a xed point. Between strongly-connected components, the
algorithm simply propagates the caller context information
in a top-down fashion, with each strongly-connected component
analyzed before any of the components that contain
methods that it may invoke.
3.6 The Full Interprocedural Analysis
The Full Interprocedural analysis combines the Callee Type
Extension and Caller Context Extension. The transfer functions
are the same as for the Callee Only analysis, given in
Figure 4. Likewise, the property lattice, associated ordering
relation and meet operator are the same as for the Callee
Only analysis. The analysis result at the entry point of the
method, however, is subject to the equation given in Figure
7.
With this extension, the analysis will recognize that it can
use ⟨{this}, ∅⟩ as the analysis result at the entry point
entry_m of a method m if, at all call sites that may invoke
m, the receiver object of the method is the m-object and the
type set is ∅. Note that if we expand our definition of the
safe method, we can additionally propagate type set information
from the calling context into the called method.
Like the algorithm from the Caller Only analysis, the algorithm
for the Full Interprocedural analysis uses a fixed
point algorithm within strongly-connected components and
propagates caller context information in a top-down fashion
between components. It initializes the analysis algorithm to
compute the greatest fixed point of the data flow equations.
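The overall driver can be organized as in the following sketch; CallGraph, sccsInTopDownOrder, seedEntryFromCallers and solveMethod are hypothetical names introduced only to illustrate the iteration order (Method is as in the sketch of Section 3.4).

    import java.util.List;

    // Hypothetical call graph interface for this sketch.
    interface CallGraph { List<List<Method>> sccsInTopDownOrder(); }

    class InterproceduralDriver {
        // Process call graph SCCs top-down: callers before callees, so caller
        // context information is available when a component is analyzed.
        static void analyzeProgram(CallGraph cg) {
            for (List<Method> scc : cg.sccsInTopDownOrder()) {
                boolean changed = true;
                while (changed) {                  // fixed point within one SCC
                    changed = false;
                    for (Method m : scc) {
                        seedEntryFromCallers(m);   // the equation of Figure 7
                        changed |= solveMethod(m); // transfer functions of Figure 4
                    }
                }
            }
        }
        static void seedEntryFromCallers(Method m) { /* per-method seeding, omitted */ }
        static boolean solveMethod(Method m) { /* intraprocedural solver, omitted */ return false; }
    }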
3.7 How to Use the Analysis Results
It is easy to see how the results of the Intraprocedural analysis
can be used to remove unnecessary write barriers. Since
an m-variable must point to the most recently allocated ob-
ject, the write barrier can be removed for any store to an
object pointed to by an m-variable, since the reference created
must point from a younger object to an older one. The
results of the Caller Only analysis are used in the same way.
It is less obvious how the analysis results are used when the
Callee Type Extension is applied, since the results now include
a type set in addition to the variable set. Consider
a store of the form "v1.f = v2" and the analysis result ⟨V, T⟩
computed for the program point immediately before
the store. If v1 ∈ V, then v1 must point to the m-object.
Any object allocated more recently than the m-object must
have type C such that C ∈ T. If the actual (i.e., dynamic)
type of the object pointed to by v2 is not included in T,
then the object that v2 points to must be older than the
object that v1 points to. The write barrier associated with
Allocated_Types(m) = { C | a statement "v = new C" occurs in m }
                     ∪ ⋃ { Allocated_Types(m_j) | st_i in m is a CALL, m_j ∈ Callees(st_i) }

Figure 5: Equation for the Allocated_Types Function

A(entry_m) = {this}   if m is an instance method and, for all
                      st ∈ Callers(m) of the form "v0 = CALL m(v1, ...)", v1 ∈ A(•st)
           = ∅        otherwise

Figure 6: Equation for the Entry Point of a Method m for the Caller Only Analysis
the store can therefore be removed if v1 ∈ V, and if the
type of v2 is not an ancestor of any type in T. Note that
requiring only that the static type of v2 not be in T is not sufficient, since the static type of
v2 may be different from its dynamic type. The analysis
results are used in this way whenever the Callee Type Extension
is applied (i.e., for both the Callee Only and the Full
Interprocedural analyses).
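As a concrete illustration, the elimination test described above can be phrased as the following sketch; the ClassHierarchy interface is an assumption of the sketch, standing in for whatever subtype query the compiler provides.

    import java.util.Set;

    // Hypothetical helper exposing the program's subtype relation.
    interface ClassHierarchy { boolean isSupertypeOf(String sup, String sub); }

    class BarrierElimination {
        // Elimination test for a store "v1.f = v2": V and T are the analysis
        // results immediately before the store; staticTypeOfV2 is the declared
        // type of v2.
        static boolean canRemoveBarrier(String v1, String staticTypeOfV2,
                                        Set<String> V, Set<String> T,
                                        ClassHierarchy h) {
            if (!V.contains(v1)) return false;            // v1 must point to the m-object
            for (String t : T)                            // no object allocated since the
                if (h.isSupertypeOf(staticTypeOfV2, t))   // m-object may be referenced by v2
                    return false;
            return true;                                  // the barrier is superfluous
        }
    }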
4. EXPERIMENTAL RESULTS
We next present experimental results that characterize the
effectiveness of our optimization. In general, the Full Interprocedural
analysis is able to remove the majority of the
write barriers for most of our applications. For applications
that execute many write barriers per second, this optimization
can deliver modest performance benefits of up to 7% of
the overall execution time. There is synergistic interaction
between the Callee Type Extension and the Caller Context
Extension; in general, the analysis must use both extensions
to remove a significant number of write barriers.
4.1 Methodology
We implemented all four of our write barrier elimination
analyses in the MIT Flex compiler system, an ahead-of-time
compiler for Java programs written in Java. This system,
including our implemented analyses, is available under the
GNU GPL at www.flexc.lcs.mit.edu. The Flex runtime uses
a copying generational collector with two generations, the
nursery and the tenured generation. It uses remembered
sets to track pointers from the tenured generation into the
nursery [18, 1]. Our remembered set implementation uses a
statically allocated array to store the addresses of the created
references. Each write barrier therefore executes a store
into the next free element of the array and increments the
pointer to that element. By manually tuning the size of the
array to the characteristics of our applications, we are able
to eliminate the array overflow check that would otherwise
be necessary for this implementation.⁴
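The following sketch mirrors the remembered set store just described; the buffer size and the Object-based slot representation are assumptions for illustration (the actual runtime records the address of the updated reference rather than the object itself).

    // A sketch of the remembered set write barrier described above.
    final class RememberedSet {
        static final int SIZE = 1 << 20;        // manually tuned to the applications
        static final Object[] slots = new Object[SIZE];
        static int next = 0;

        // Executed at each store "obj.f = val" whose barrier was not eliminated:
        // one array store and one increment, with no overflow check.
        static void writeBarrier(Object updatedObject) {
            slots[next++] = updatedObject;
        }
    }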
[Footnote 4: Our write barriers are therefore somewhat more efficient
than they would be in a general system designed to execute
arbitrary programs with no a-priori information about the
behavior of the program.]

We present results for our analysis running on the Java version
of the Olden Benchmarks [6, 5]. This benchmark set
contains the following applications:
bh: An implementation of the Barnes-Hut N-body
solver [2].
bisort: An implementation of bitonic sort [4].
em3d: Models the propagation of electromagnetic waves
through objects in three dimensions [8].
health: Simulates the health-care system in Colombia
[15].
mst: Computes the minimum spanning tree of a graph
using Bentley's algorithm [3].
perimeter: Computes the total perimeter of a region
in a binary image represented by a quadtree [17].
power: Maximizes the economic e-ciency of a community
of power consumers [16].
treeadd: Sums the values of the nodes in a binary
tree using a recursive depth-rst traversal.
tsp: Solves the traveling salesman problem [14].
voronoi: Computes a Voronoi diagram for a random
set of points [9].
We do not include results for tsp because it uses a nonde-
terministic, probabilistic algorithm, causing the number of
write barriers executed to be vastly different in each run of
the same executable. In addition, for three of the benchmarks
(bh, power, and treeadd) we modied the benchmarks
to construct the MathVector, Leaf, and TreeNode
data structures, respectively, in a bottom-up instead of a
top-down manner.
We present results for the following compiler options:
Baseline: No optimization, all writes to the heap have
associated write barriers.
A(entry_m) = ⟨{this}, ∅⟩   if m is an instance method and, for all
                           st ∈ Callers(m) of the form "v0 = CALL m(v1, ...)",
                           v1 ∈ V and T = ∅, where ⟨V, T⟩ = A(•st)
           = ⟨∅, ∅⟩        otherwise

Figure 7: Equation for the Entry Point of a Method m for the Full Interprocedural Analysis
Intraprocedural: The Intraprocedural analysis described
in Section 3.3.
Callee Only: The analysis described in Section 3.4,
which uses information about the types of objects allocated
in invoked methods.
Caller Only: The analysis described in Section 3.5,
which uses information about the contexts in which
the method is invoked. Specically, the analysis determines
if the receiver of the analyzed method is always
the most recently allocated object and, if so, exploits
this fact in the analysis of the method.
Full Interprocedural: The analysis described in Section
3.6, which uses both information about the types
of objects allocated in invoked methods and the contexts
in which the analyzed method is invoked.
The Caller Only and Full Interprocedural analyses view dynamically
dispatched calls as not analyzable. The transfer
functions for these call sites conservatively set the analysis
information to ⟨∅, ∅⟩. As explained below in Section 4.4,
including the allocation information from these call sites significantly
increases the analysis times but provides no corresponding
increase in the number of eliminated write barriers.
For each application and each of the analyses, we used the
MIT Flex compiler to generate two executables: an instrumented
executable that counts the number of executed write
barriers, and an uninstrumented executable without these
counts. For all versions except the Baseline version, the compiler
uses the analysis results to eliminate unnecessary write
barriers. We then ran these executables on a 900MHz Intel
Pentium-III CPU with 512MB of memory running RedHat
Linux 6.2. We used the default input parameters for the
Java version of the Olden benchmark set for each application
(given in Figure 13).
4.2 Eliminated Write Barriers
Figure 8 presents the percentage of write barriers that the
different analyses eliminated. There is a bar for each version
of each application; this bar plots (1 - W/W_B) × 100%,
where W is the number of write barriers dynamically executed
in the corresponding version of the program and W_B is
the number of write barriers executed in the Baseline version
of the program. For bh, health, perimeter, and treeadd,
the Full Interprocedural analysis eliminated over 80% of the
write barriers. It eliminated less than 20% only for bisort
and em3d. Note the synergistic interaction that occurs when
exploiting information from both the called methods and
the calling context. For all applications except health, the
Caller Only and Callee Only versions of the analysis are able
to eliminate very few write barriers. But when combined,
as in the Full Interprocedural analysis, in many cases the
analysis is able to eliminate the vast majority of the write
barriers.
Figure 8: Percentage Decrease in Write Barriers Executed (bar chart over the benchmarks bh, bisort, em3d, health, mst, perimeter, power, treeadd and voronoi for the Intraprocedural, Callee Only, Caller Only and Full Interprocedural analyses; y-axis: percentage decrease in write barriers executed, from 0% to 100%)
To evaluate the optimality of our analysis, we used the MIT
Flex compiler system to produce a version of each application
in which each write instruction is instrumented to
determine if, during the current execution of the program,
that write instruction ever creates a reference from an older
object to a younger object. If the instruction ever creates
such a reference, the write barrier is definitely necessary, and
cannot be removed by any age-based algorithm whose goal
is to eliminate write barriers associated with instructions
that always create references from younger objects to older
objects. There are two possibilities if the store instruction
never creates a reference from an older object to a younger
object: 1) Regardless of the input, the store instruction will
never create a reference from an older object to a younger
object. In this case, the write barrier can be statically removed.
2) Even though the store instruction did not create
a reference from an older object to a younger object in the
current execution, it may do so in other executions for other
inputs. In this case, the write barrier cannot be statically
removed.
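One possible form of this instrumentation is sketched below; the per-object birth counter and the per-site identifiers are assumptions of the sketch, not a description of the actual instrumented executables produced by our compiler.

    import java.util.*;

    // A sketch of age instrumentation for classifying store instructions.
    class AgeInstrumentation {
        static long clock = 0;
        static final Map<Object, Long> birth = new IdentityHashMap<>();
        static final Set<Integer> unremovable = new HashSet<>();   // site ids

        static void onAllocate(Object o) { birth.put(o, clock++); }

        // Invoked at each instrumented store "target.f = value".
        static void onStore(int siteId, Object target, Object value) {
            if (target != null && value != null
                    && birth.get(target) < birth.get(value))  // older -> younger
                unremovable.add(siteId);   // this barrier cannot be removed
        }
    }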
Figure 9 presents the results of these experiments. We present
one bar for each application and divide each bar into three
categories:
Unremovable Write Barriers: The percentage of
executed write barriers from instructions that create a
reference from an older object to a younger object.
Removed Write Barriers: The percentage of executed
write barriers that the Full Interprocedural anal-0.10.30.50.70.9bh bisort em3d health mst
perimeter power treeadd voronoi
Proportion
of
DynamicWrite
Barriers
Unremovable Removed Potentially Removable
Figure
9: Write Barrier Characterization
ysis eliminates.
Potentially Removable: The rest of the write barri-
ers, i.e., the percentage of executed write barriers that
the Full Interprocedural analysis failed to eliminate,
but are from instructions that never create a reference
from an older object to a younger object when run on
our input set.
These results show that for all but two of our applications,
our analysis is almost optimal in the sense that it managed
to eliminate almost all of the write barriers that can be eliminated
by any age-based write barrier elimination scheme.
4.3 Execution Times
We ran each version of each application (without instrumen-
tation) four times, measuring the execution time of each
run. The times were reproducible; see Figure 15 for the
raw execution time data and the standard deviations. Figure
10 presents the mean execution time for each version of
each application, with this execution time normalized to the
mean execution time of the Baseline version. In general, the
benefits are rather modest, with the optimization producing
overall performance improvements of up to 7%. Six of the
applications obtain no significant benefit from the optimiza-
tion, even though the analysis managed to remove the vast
majority of the write barriers in some of these applications.
Figure 11 presents the write barrier densities for the different
versions of the different applications. The write barrier
density is simply the number of write barriers executed per
second, i.e., the number of executed write barriers divided by
the execution time of the program. These numbers clearly
show that to obtain significant benefits from write barrier
elimination, two things must occur: 1) The Baseline version
of the application must have a high write barrier density, and
2) The analysis must eliminate most of the write barriers.
4.4 Analysis Times
Figure 12 presents the analysis times for the different applications
and analyses. We include the Full Dynamic Interprocedural
analysis in this table; this version of the
analysis includes callee allocated type information for call
sites that (because of dynamic dispatch) have multiple potentially
invoked methods. As the times indicate, including
the dynamically dispatched call sites significantly increases
the analysis times. Including these sites does not significantly
improve the ability of the compiler to eliminate write
barriers, however, since the Full Interprocedural analysis is
already nearly optimal for seven out of nine of our benchmark
programs.

Figure 10: Normalized Execution Times for Benchmark Programs (bar chart: for each benchmark, the execution times of the Intraprocedural, Callee Only, Caller Only and Full Interprocedural versions, normalized to the Baseline version)

Benchmark     Write Barrier Density (write barriers/s)
bisort        4769518
em3d          773375
health        624960
mst           1031059
perimeter     2053484
power         3286
treeadd       955755

Figure 11: Write Barrier Densities of the Baseline Version of the Benchmark Programs
4.5 Discussion
The experimental results show that, for many of our benchmark
programs, our analysis is able to remove a substantial
number of the write barriers. The performance improvement
from removing these write barriers depends on the inherent
write barrier density of the application: the larger the
write barrier density, the larger the performance improve-
ment. While the performance impact of the optimization
will clearly vary based on the performance characteristics
of the particular execution platform, the optimization produces
modest performance increases on our platform.
By instrumenting the application to find store instructions
Figure 12: Analysis Times for Different Analysis Versions (analysis time of each of the nine benchmarks under the Intraprocedural, Callee Only, Caller Only, Full Interprocedural, and Full Dynamic Interprocedural analyses)
that create a reference from an older object to a younger
object, we are able to obtain a conservative upper bound
for the number of write barriers that any age-based write
barrier elimination algorithm would be able to eliminate.
Our results show that in all but two cases, our algorithm
achieves this upper bound.
We anticipate that future analyses and transformations will
focus on changing the object allocation order to expose additional
opportunities to eliminate write barriers. In general,
this may be a non-trivial task to automate, since it may involve
hoisting allocations up several levels in the call graph
and even restructuring the application to change the allocation
strategy for an entire data structure.
5. RELATED WORK
There is a vast body of literature on different approaches to
write barriers for generational garbage collection. Comparisons
of some of these techniques can be found in [19, 12, 13].
Several researchers have investigated implementation techniques
for efficient write barriers [7, 10, 11]; the goal is to
reduce the write barrier overhead. We view our techniques
as orthogonal and complementary: the goal of our analyses
is not to reduce the time required to execute a write barrier,
but to find superfluous write barriers and simply remove
them from the program. To the best of our knowledge, our
algorithms are the first to use program analysis to remove
these unnecessary write barriers.
6. CONCLUSION
Write barrier overhead has traditionally been an unavoidable
price that one pays to use generational garbage collec-
tion. But as the results in this paper show, it is possible to
develop a relatively simple interprocedural algorithm that
can, in many cases, eliminate most of the write barriers in
the program. The key ideas are to use an intraprocedural
must points-to analysis to find variables that point to the
most recently allocated object, then extend the analysis with
information about the types of objects allocated in invoked
methods and information about the must points-to relationships
in calling contexts. Incorporating these two kinds of
information produces an algorithm that can often effectively
eliminate virtually all of the unnecessary write barriers.
Benchmark    Input Parameters Used
bh
bisort       250000 numbers
em3d         2000 nodes, out-degree 100
health       5 levels, 500 time steps
mst          1024 vertices
perimeter
power        10000 customers
treeadd      20 levels
voronoi      20000 points

Figure 13: Input Parameters Used on the Java Version of the Olden Benchmarks
7. ACKNOWLEDGEMENTS
C. Scott Ananian implemented the Flex compiler infrastructure
on which the analyses were built. Many thanks
to Alexandru Salcianu for his help in formalizing the analyses.
8. REFERENCES
--R
Simple generational collection and fast allocation.
A hierarchical O(N log N) force calculation algorithm.
A parallel algorithm for constructing minimum spanning trees.
Adaptive bitonic sorting: An optimal parallel algorithm for shared-memory machines
Data flow analysis for software prefetching linked data structures in Java.
Software caching and computation migration in Olden.
The Design and Implementation of the Self Compiler
Parallel programming in Split-C
Remembered sets can also play cards.
Garbage Collection Algorithms for Automatic Dynamic Memory Management.
Probabilistic analysis of partitioning algorithms for the traveling-salesman problem in the plane
A performance study of Time Warp.
Decentralized optimal power pricing: the development of a parallel program.
Computing perimeters of regions in images represented by quadtrees.
Generational scavenging: A non-disruptive high performance storage reclamation algorithm
Barrier methods for garbage collection.
--TR
Adaptive bitonic sorting: an optimal parallel algorithm for shared-memory machines
Simple generational garbage collection and fast allocation
A comparative performance evaluation of write barrier implementation
The design and implementation of the self compiler, an optimizing compiler for object-oriented programming languages
Decentralized optimal power pricing
Parallel programming in Split-C
Software caching and computation migration in Olden
Garbage collection
Data Flow Analysis for Software Prefetching Linked Data Structures in Java
Generation Scavenging | program analysis;generational garbage collection;write barriers;pointer analysis |
510867 | Efficient global register allocation for minimizing energy consumption. | Data referencing during program execution can be a significant source of energy consumption especially for data-intensive programs. In this paper, we propose an approach to minimize such energy consumption by allocating data to proper registers and memory. Through careful analysis of boundary conditions between consecutive blocks, our approach efficiently handles various control structures including branches, merges and loops, and achieves the allocation results benefiting the whole program. The computational cost for solving the energy minimization allocation problem is rather low comparing with known approaches while the quality of the results are very encouraging. | Introduction
Today's high demand of portable electronic products makes low energy consumption as important
as high speed and small area in computer system design. Even for non-portable high performance
systems, lower power consumption design helps to decrease packaging and cooling cost and increase
the reliability of the systems [25]. A lot of research has been done in improving the power
consumption of various components in computer systems [8, 13, 19, 20, 25]. In particular, it is
well recognized that access to different levels of storage components, such as register, cache, main
memory and magnetic disk, differs dramatically in speed and power consumption [8, 17]. Hence
allocation of variables in a program to different storage elements plays an important role toward
achieving high performance and low energy consumption [13]. With today's IC technology and
computer architecture, memory read/write operations take much more time and consume much
more power than register read/write operations [3, 8]. For some data-intensive applications,
energy consumption due to memory access can be more than 50% [16]. In this paper, we focus on
the problem of allocating variables in a program between registers and main memory to achieve
both short execution time and low energy consumption.
Register allocation for achieving the optimal execution time of a program on a given system
is a well known problem and has been studied extensively. In general, existing results can be
divided into two categories, local register allocation [5, 15, 18] and global register allocation
[2, 4, 11, 14, 22, 23, 27, 28, 29]. A local register allocator considers the code in one basic block
(a sequence of code without any branch) at a time and finds the best allocation of the variables
in that basic block to achieve the minimum execution time. A global allocator deals with code
containing branches, loops, and procedure calls. The global register allocation problems are in
general NP-hard [22] and can be modeled as a graph-coloring problem [1, 4, 6, 7, 11]. Integer
linear programming approaches and heuristics have been proposed to solve the global register
allocation problems [22, 14].
However, optimal-time register allocations do not necessarily consume the least amount of
energy [8, 19, 20]. Some researchers have recognized this challenge and have proposed interesting
approaches to deal with the optimal-energy register allocation problem [9, 13, 25]. Chang and
Pedram [9] gave an approach to solve the register assignment problem in the behavioral synthesis
process. They formulated the register assignment problem for minimum energy consumption
as a minimum cost clique covering problem and used a max-cost flow
algorithm to solve it. But
memory allocation is not considered in [9]. Gebotys [13] modeled the problem of simultaneous
memory partitioning and register allocation as a network flow problem, and an efficient network
flow algorithm can be applied to find the best allocation of variables in a basic block to registers
and memory. However, the work in [13] is only applicable to basic blocks.
In this paper, we extend the work in [13] by investigating the problem of register and memory
allocation for low energy beyond basic blocks. We consider programs that may contain any
combination of branches, merges, and loops. The basis of our approach is to perform the allocation
block by block. By carefully analyzing boundary conditions between consecutive blocks, we are
able to find the best allocation for each block based on the allocation results from previous blocks.
In this way, we maintain the global information and make allocation decisions to benefit the whole
program, not just each basic block. Our approach can be applied to different energy and variable
access models. The time complexity of our algorithm is O(b · l(n, k)), where b is the total number
of blocks and l(n, k) is the time complexity for register allocation in a single block that has n
variables and k available registers. The function l(n, k) is either O(n log n), O(kn log n), or O(kn²),
depending on the different energy and access models used.
The rest of the paper is organized as follows. Section 2 reviews the energy models and known
approaches for local register allocation for low energy. We also give an overview of how our
allocation approach works. Section 3 gives our approach to do the allocation for the branch
control structure. Section 4 presents our algorithm for low energy register allocation for the
merge structure in a control flow graph. In section 5, we discuss how to treat loops to get better
register allocation results. Section 6 summarizes the experimental results and concludes with
some discussions.
2. Overview
Allocating variables to different levels of storage components is usually done by a compiler. Many
factors can affect the allocation results. There are a number of techniques that can be applied to
reduce the number of cycles needed to execute a program, including instruction reordering and
loop unrolling. In this paper, we assume that such optimization techniques have been applied to
the program and a satisfactory schedule of the program is already given.
2.1 Energy models and known approaches
Let V = {v_1, v_2, ..., v_n} be the set of variables in the program under consideration. Given a
schedule, the lifetime of a variable v in a basic block is well defined and is denoted by its starting
time, t_s(v), and finishing time, t_f(v). We use V_R (resp., V_M) to represent the set of variables
assigned to the k available registers (resp., memory). Furthermore, let e^M_rv (resp., e^M_wv) be the
energy consumed by reading (resp., writing) variable v from (resp., to) memory and e^R_rv (resp.,
e^R_wv) be the energy consumed by reading (resp., writing) variable v from (resp., to) registers. If v
is not specified, e^M_r (resp., e^M_w) represents the energy consumed by one read (resp., write) access to
the memory and e^R_r (resp., e^R_w) represents the energy consumed by one read (resp., write) access
to registers. Denote the total energy consumed by a given program as E. Then the objective of
the register allocation problem is to map each variable v ∈ V to register files or memory locations
in order to achieve the least energy consumption due to data accesses, that is, to minimize

    E = Σ_{v ∈ V_R} (e^R_rv + e^R_wv) + Σ_{v ∈ V_M} (e^M_rv + e^M_wv)        (1)
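For illustration, the objective of equation (1) can be evaluated for a candidate allocation as in the following sketch; the Var and EnergyTable types are assumptions introduced here, not part of the paper.

    import java.util.Set;

    // Hypothetical types for this sketch.
    interface Var {}
    interface EnergyTable {
        double regRead(Var v);  double regWrite(Var v);   // e^R_rv, e^R_wv
        double memRead(Var v);  double memWrite(Var v);   // e^M_rv, e^M_wv
    }

    class EnergyObjective {
        // Evaluates E of equation (1) for a candidate allocation (VR, VM).
        static double totalEnergy(Set<Var> VR, Set<Var> VM, EnergyTable e) {
            double E = 0.0;
            for (Var v : VR) E += e.regRead(v) + e.regWrite(v);
            for (Var v : VM) E += e.memRead(v) + e.memWrite(v);
            return E;
        }
    }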
Different energy models and different data access models have been proposed in the literature
[5, 8, 9, 13, 20]. Depending on the particular models used, the calculation of energy consumption
varies. One of the energy models is the static energy model SE, which assumes that references
to the same storage component consume the same amount of energy [8]. This model is like the
reference-time model, where accesses to the same storage component take the same amount of
time, no matter what the value of the referenced data is. In [9, 20], the authors proposed a
more sophisticated energy model, called the activity based energy model AE, capturing the actual
data configuration in a storage. Under this model, the energy consumed by a program is related to
the switched capacitance of successive accesses of data variables which share the same locations.
The switched capacitance varies for different access sequences. In particular, the product of the
Hamming distance, the number of bits in which two data items differ [10], or other measurements [9], and
the total capacitance of a storage is used to represent the switched capacitance for two consecutive
references.
Whether a data variable is read only once (SR) or multiple times (MR) after it is defined
(written) also has an effect on the overall energy estimation. Carlisle and Lloyd [5] presented
a greedy algorithm for putting the maximum number of variables into registers to minimize the
execution time of a basic block under the single read model. Their approach also gives an optimal
energy consumption for the static energy model. It takes O(n log n) time and is quite easy to
implement.
For the multiple-read case, the algorithm proposed in [13] uses a graph which can have O(n²)
edges in the worst case. A better graph model is given in [5], which has only O(n) edges. Applying
the minimum cost network flow algorithm, the optimal-time solution, which is also the optimal-energy
solution, can be obtained in O(kn log n) time [5] rather than O(kn²) as in [13].
As one can see, different models result in different computational overhead in computing
energy consumption. For example, the AE-MR model is the most comprehensive and also the most
computationally expensive one. Depending on the availability of the models, the architectural
features of a system, and the computational overhead that one is willing to incur, any of the
above models may be used in energy consumption estimation. Therefore, we consider both
energy models and both variable access models in this paper.
For the activity based energy model, we will only consider it for register file accesses to simplify
the problem formulation. (Adopting the activity based model for memory is a simple extension
to the algorithm discussed later in this paper.) Under this model, the total energy consumption
of a program can be calculated as

E = Σ_{v∈V_M} (e^M_rv + e^M_wv) + Σ_{v_i→v_j} H(v_i, v_j) C^R_rw V_R^2,   (2)

where H(v_i, v_j) is the Hamming distance between v_i and v_j, v_i→v_j designates that v_j is accessed
immediately after v_i and that v_i and v_j share the same register, C^R_rw is the average switched
capacitance for a register file reference, and V_R is the operational voltage of the register file.
The objective for optimal energy allocation is to minimize the objective function (2). For both
the single read and multiple read models, Gebotys [13] formulated the problem as a minimum
cost network flow problem with a solution of O(kn^2) complexity. The cost for each edge in
the network flow problem is also different for the different read models.
In some designs, it is desirable to allow a variable to be assigned to registers in one time interval
and switched to memory in another time interval, in order to minimize the total number of
accesses to memory. This is called split lifetime [11]. If the split lifetime model is used, the graph
model will need to accommodate all cases where a split occurs. Hence, O(n^2) graph edges will be
needed, and the algorithm in [5] has an O(kn^2) complexity, the same as that in [13].
The results we discussed above only apply to variables in a basic block, i.e., a piece of straight-line
code. But real application programs may contain control structures, such as branches,
merges and loops. A merge is the case where one block has more than one parent block. This
type of control flow occurs when an instruction has more than one entry point (due to branching,
looping, or subroutine calls). To consider register allocation in a program with these control
structures, one must deal with those variables whose lifetimes extend beyond one basic block. A
straightforward way of applying the existing approaches to such programs would be to enumerate
all possible execution paths and treat each path as a piece of straight-line code. Two problems
arise with this approach. First, the number of paths increases exponentially in the worst case
as the number of basic blocks increases. Secondly, each variable may have different lifetimes
in different paths. Hence such an approach would be computationally costly, and it would be difficult
to obtain an optimal allocation. On the other hand, finding an optimal allocation for all the variables
in a general program is known to be computationally intractable (an NP-hard problem).
2.2 Overview of our approach
In the following, we outline our heuristic approach, which can find superior solutions for the
minimum energy register allocation problem. To avoid the difficulties encountered by the known
approaches discussed above, we use an iterative approach to handle programs containing non-straight-line
code. Such a program is modeled by a directed graph G in which each node represents
a basic block. Given the nature of a program, the corresponding graph always has a node whose
in-degree is zero. Furthermore, for each loop, the edges that branch backwards can be identified. A
topological order for the nodes in G can thus be established after removing those edges branching
backwards, as sketched below. To ease our explanation, we generalize the concept of parent and child used in trees.
A parent (resp., child) of a block B is a block whose corresponding node in G is an immediate
ancestor (resp., descendant) of the node corresponding to B. Hence, a block may have multiple
parent blocks (in the case of a merge) and it may be a parent block to itself (in the case of a
loop).
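A minimal sketch of this ordering step, assuming the program graph is given as a dictionary of successor lists and that the backward-branching edges have already been identified:

    from collections import deque

    def topological_order(succ, back_edges):
        # succ: {block: [child blocks]}; back_edges: set of (u, v) loop-back edges.
        fwd = {u: [v for v in vs if (u, v) not in back_edges] for u, vs in succ.items()}
        indeg = {u: 0 for u in fwd}
        for u, vs in fwd.items():
            for v in vs:
                indeg[v] += 1
        queue = deque(u for u, d in indeg.items() if d == 0)  # entry block(s)
        order = []
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in fwd[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
        return order

    print(topological_order({"B1": ["B2"], "B2": ["B2", "B3"], "B3": []},
                            {("B2", "B2")}))  # ['B1', 'B2', 'B3']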
We solve the register allocation problem for each basic block in the topological order. The
allocation of variables within a block is based on the results from its parent blocks. Any assignment
of a variable that extends into the current block must consider the effect of its initial assignment. Furthermore,
we do not allow the allocation result in a block to affect the result in its parent blocks. By
allowing the allocation results to propagate "downwards" without back-tracking, we eliminate
the excessive computational cost yet obtain superior allocations. A main challenge is how to
handle those variables whose lifetimes extend beyond a basic block such that the overall energy
consumption of the program is minimized. We have made several observations to help deal with
such variables. The key idea is to set up appropriate boundary conditions between parent and
child basic blocks and use these conditions to guide the register allocation procedure within the
child basic blocks.
We first describe the graph model used in [13], and then introduce extensions to handle
variables beyond basic blocks. Consider a basic block B in G. A directed graph G = (N, A),
which is a generalization of an interval graph, is associated with B. For each data variable v
whose lifetime overlaps with the execution duration of B, two nodes, n_s(v) and n_f(v), are
introduced, which correspond to the starting time t_s(v) (when v is first written) and the finishing
time t_f(v) (when v is last read), respectively. Note that v can be either a variable referenced in
B or a variable which will be referenced by B's descendants.
Figure 1: The graph G for a basic block without considering other blocks in the program
Several different types of arcs are used in G. First, each pair of n_s(v) and n_f(v) is connected
by an arc a(n_s(v), n_f(v)). To define the rest of the arcs, we introduce the concept of a critical
set. A critical set, C_i, is a set of variables with overlapping lifetimes to one another such that
the number of variables in the set is greater than k, the number of available registers. For each
pair of C_i and C_{i+1} (the index ordering is based on scanning the variables from the beginning to
the end of block B), let D_i be the set of variables whose lifetimes are in between the minimum
finishing time of all variables in C_i and the maximum starting time of all variables in C_{i+1}. Note
that D_0 contains the variables whose lifetimes are in between the starting time of B and the
maximum starting time of all variables in C_1, and that D_g is the set of variables whose lifetimes
are in between the minimum finishing time of all variables in C_g (the last critical set in B) and
the finishing time of B. Now, an arc is introduced from n_f(u) for each u ∈ (C_i ∪ D_i) to n_s(v) for
each v ∈ (D_i ∪ C_{i+1}). Intuitively, these arcs represent allowable register
sharing among subsequent data references.
Similar to [13], a source node, S, and a finish node, F, are introduced at the beginning and
end of B, respectively. Arcs are used to connect S to n_s(v) for each v ∈ (D_0 ∪ C_1) and to connect
n_f(u) for each u ∈ (C_g ∪ D_g) to F. The nodes S and F can be considered as the registers available
at the beginning and end of block B. An example graph is shown in Figure 1, where the solid lines
correspond to the arcs associated with variables in B and the dashed lines are the arcs introduced
based on the C_i's and D_i's. In this graph, variables b, c and d form the first critical set, C_1, while
variables a and b form the set D_0. A sketch of how the critical sets can be computed from the lifetimes follows.
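The following sketch computes the critical sets by sweeping the block and collecting every maximal set of more than k pairwise-overlapping lifetimes; the interval representation and the sweep over endpoint values are simplifications of ours. The D_i sets can be derived analogously from the gaps between consecutive critical sets.

    def critical_sets(lifetimes, k):
        # lifetimes: {var: (t_start, t_finish)}; returns the critical sets, i.e.,
        # groups of pairwise-overlapping variables of size greater than k.
        points = sorted({t for ts in lifetimes.values() for t in ts})
        sets, last = [], None
        for t in points:
            live = {v for v, (s, f) in lifetimes.items() if s <= t <= f}
            if len(live) > k and live != last:
                sets.append(live)
                last = live
        return sets

    lt = {"a": (0, 2), "b": (1, 5), "c": (3, 6), "d": (4, 8)}
    print(critical_sets(lt, 2))  # [{'b', 'c', 'd'}], overlapping around t = 4..5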
To handle the allocation beyond a block boundary, the graph needs to be modified. Substantial
modifications to the existing approaches as well as new ideas are proposed here in order to
efficiently handle the branch, merge, and loop structures.
Figure 2: Four different allocation results for a cross-boundary variable
3 Register Allocation for the Branch Structure
In this section, we discuss in detail our register allocation algorithms for the branch structure
under the two different energy models.
3.1 Beyond boundary allocation for the static energy model
We first consider the case in which only a single read is allowed, and then extend the result to the
multiple read case.
For a variable v, if it is defined in a block B' and is also read in another block B, the lifetime
of v crosses the boundary of the two blocks. The block B' is the parent block, B_p, of the block B.
We say v is in both B_p and B. When our algorithm computes the allocation for B_p, it performs
the computation as if v did not cross the boundary. When we come to the block B, we will make
the best allocation decision for v in B based on the allocation result of v in B_p.
There are in total four different combinations of allocation results for v in the two consecutive
blocks: (1) v in register in both B_p and B (R→R), (2) v in register in B_p but in memory in B
(R→M), (3) v in memory in B_p but in register in B (M→R), and (4) v in memory in both B_p and
B (M→M), as shown in Figure 2. In Figure 2, solid line segments mean the variable is in a register,
while dashed segments mean the variable is in memory. (We will use this convention throughout
the paper unless stated otherwise explicitly.)
We will analyze the energy consumption for all four cases, R→R, R→M, M→R, and
M→M. For the R→R case, the energy consumed by v in block B_p is e^R_w when it is defined, and in
block B it is e^R_r when it is read from the same register; in total, the energy consumed by v in the
two consecutive blocks is e^R_w + e^R_r. By applying the same analysis to the other three cases,
we obtain the amount of energy consumed by a cross-boundary variable in the different blocks, as
shown in the columns R→R, R→M, M→R, and M→M in Table 1.
For a local variable of B, which is written and read only in block B, the energy consumed by
it in B is e^R_w + e^R_r or e^M_w + e^M_r if it is assigned to a register (Local R) or a memory (Local M)
location. It consumes no energy in B_p since it does not exist in B_p. The energy consumed by a
local variable in B is shown in the last two columns (Local R and Local M) in Table 1.

Table 1: Energy consumption in different blocks under different allocation results

Block | R→R   | R→M           | M→R                   | M→M   | Local R       | Local M
B_p   | e^R_w | e^R_w         | e^M_w                 | e^M_w | --            | --
B     | e^R_r | e^M_w + e^M_r | e^M_r + e^R_w + e^R_r | e^M_r | e^R_w + e^R_r | e^M_w + e^M_r
From Table 1, it is easy to see that, based on the allocation result for B_p, the best choice for
a global variable in B should be:

M→M: If v is in memory in B_p, it should stay in memory in B to achieve the lowest energy.
Comparing the two columns corresponding to the situation when v is in memory in B_p,
M→R and M→M, it is clear that if v is in memory in B_p, assigning it to a register will cost
more energy than assigning it to memory in B.
R→Local: If v is in register in B_p, it should be treated as a brand new local variable in block
B to achieve the optimal energy. The energy data for blocks B_p and B in the columns
for R→R and Local R are the same, while the data differ only by e^R_w in the columns for
R→M and Local M, in which the total amount of energy is much larger than the difference.
So a simple way is to just treat this kind of global variable as a brand new local variable
whose allocation is to be determined in block B. If it turns out to be assigned to a register
in B too, it should stay in the same register as it used in B_p.
Treating v, which extends beyond the boundary and is assigned to a register in B_p, as a brand
new variable in B is clearly an approximation. Another way is to assign the actual energy
saving, i.e., the energy saved by assigning a variable to a register compared with assigning it to
memory, as the weight of the variable, construct a network flow instance, and then use a minimum-cost
network flow algorithm to solve the problem as described in [5] for the weighted case.
A variable, v, can also be defined in B, not read in the child of B, but read in the grandchild
or even grand-grandchild of B. The analysis for this kind of variable is the same as the above
analysis for a variable that is defined in B_p and used in B, and the same conclusion is drawn for the
allocation in the children blocks of B. The rule-based choice can be sketched as follows.
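A small sketch of the two rules (M→M and R→Local) as they would drive the per-block pass; the data structures are hypothetical stand-ins for the allocator's bookkeeping:

    def classify_cross_boundary(vars_in_B, parent_alloc):
        # parent_alloc: {var: 'R' or 'M'} from the parent block's allocation.
        # Returns the variables forced to memory and those treated as fresh locals.
        force_memory, treat_as_local = [], []
        for v in vars_in_B:
            if parent_alloc.get(v) == 'M':
                force_memory.append(v)       # M -> M rule
            else:
                treat_as_local.append(v)     # R -> Local rule (or a truly local v)
        return force_memory, treat_as_local

    print(classify_cross_boundary(['x', 'y', 'z'], {'x': 'M', 'y': 'R'}))
    # (['x'], ['y', 'z'])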
With these rules, we can execute the local register allocation algorithm on each of the blocks
in a program. For the simple case, the time complexity of the allocation for the whole program
is O(bn log n), where b is the number of basic blocks and n is the largest number of local variables
in a basic block. For the more complex case, the time complexity is O(bkn log n). The algorithm
is shown in Algorithm 1.
As we discussed earlier in Section 2, a variable in a program may be read more than once
after it is written. In this case, we can use the same graph model as in [5] but modify the weights
Algorithm 1 Algorithm for low energy register allocation in the static energy model
Input: A scheduled program with blocks, P = {B_1, B_2, ..., B_m}.
Output: The register assignment for every variable v in program P.
Definitions:
B_i(v): allocation result for variable v in block B_i. B_i(v) is 1 if v is in register, 0 if v is in
memory.
weight(v): the weight of variable v.

apply the algorithm for the unweighted case in [5] to B_1
for i := 2 to m do
  if using the simple way for approximation then
    for all variables, v, live in B_i's parent B_p and in B_i do
      if B_p(v) = 0 then
        assign v to memory in B_i            {M→M rule}
      else
        v is treated the same as other local variables   {R→Local rule}
      end if
    end for
    apply the algorithm for the unweighted case in [5] to B_i
  else
    for all variables, v, live in B_p and B_i and read in B_i do
      if B_p(v) = 1 then
        weight(v) := h_v (e^M_r − e^R_r)
      else
        weight(v) := h_v (e^M_r − e^R_r) − (e^M_r + e^R_w)
      end if
    end for
    for all variables, v, live in B_p and B_i but not read in B_i do
      weight(v) := 0
    end for
    for all variables written and read in B_i do
      weight(v) := h_v (e^M_r − e^R_r) + (e^M_w − e^R_w)
    end for
    for all variables written but not read in B_i do
      weight(v) := e^M_w − e^R_w
    end for
    apply the algorithm for the weighted case in [5] to B_i
  end if
end for
Table 2: Weight for global variables and local variables in B

Block B   | Global v in Register in B_p | Global v in Memory in B_p             | Local v
weight(v) | h_v (e^M_r − e^R_r)         | h_v (e^M_r − e^R_r) − (e^M_r + e^R_w) | h_v (e^M_r − e^R_r) + (e^M_w − e^R_w)
associated with certain graph edges. Specifically, we assign the energy saved by assigning a
variable to a register as the weight for this variable. For the current block, B, the weight of a
variable defined in B is h_v (e^M_r − e^R_r) + (e^M_w − e^R_w), where h_v is the total number of read accesses to
v as defined in Section 2. If a variable is live in the parent block, B_p, and is assigned to a register
in B_p, its weight in the current block B is h_v (e^M_r − e^R_r). Otherwise, if it
is allocated to memory in B_p, the weight is h_v (e^M_r − e^R_r) − (e^M_r + e^R_w). Note that the weight is
the energy saving when v is allocated to a register in the current block B. Since the total energy
of accessing all the variables from memory is fixed for a basic block, the more energy saved by
allocating variables to registers, the less energy the block consumes. The weight assignment
is summarized in Table 2.
By applying the network flow approach in [5] for weighted intervals on each basic block one
after another, we can get a low energy allocation solution for the whole program in O(bkn log n)
time. A small sketch of the weight computation is given below.
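A sketch of the weight computation, following the reconstruction of Table 2 above; the energy constants in the example call are illustrative placeholders:

    def weight(h_v, parent_alloc, e_mr, e_rr, e_mw, e_rw):
        # Energy saved by keeping v in a register in the current block B;
        # h_v is the number of read accesses to v (Table 2, as reconstructed).
        base = h_v * (e_mr - e_rr)
        if parent_alloc == 'R':        # global v, in register in B_p
            return base
        if parent_alloc == 'M':        # global v, in memory in B_p: pay one load
            return base - (e_mr + e_rw)
        return base + (e_mw - e_rw)    # local v, written (defined) in B

    print(weight(3, 'local', e_mr=10, e_rr=1, e_mw=12, e_rw=1))  # 3*9 + 11 = 38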
If splitting is allowed, the graph model in [5], which only allows a register to be transferred from
one variable to another at the end of a lifetime, is no longer sufficient. In this case, we can use the graph model (to
be described in the next section) and the network flow approach presented in [13]. That is, we
assign weights as described in Table 2 and use a minimum-cost network flow algorithm to solve
the problem in O(bkn^2) time.
3.2 Register allocation for branches for the activity-based energy model
To handle the allocation beyond a block boundary, our approach is to decide the register allocation
for block B based on the allocation result from B's parent block, B_p. Depending on which variable
is assigned to which register prior to entering B, the amount of energy consumed by each
variable in B can be different. Therefore, simply using a single source S to represent the registers
available at the beginning of B is no longer sufficient. We generalize the construction of graph G
for B, where B has both a parent and at least one child, as follows. (The graphs for the root and
leaf blocks are simply special cases of the graph we discuss.)
For those variables that are referenced only in B, we introduce arcs and nodes associated with
them in the same way as we discussed above. The start and finish nodes, S and F, for B are
also maintained. Let the starting and finishing times of block B be t_s(B) and t_f(B), respectively.
For each variable v in B whose starting (resp., finishing) time is earlier (resp., later) than the
starting (resp., finishing) time of B, we still use two end nodes n_s(v) and n_f(v) in G for the
associated arc a(v) (which means that v is considered by all the graphs corresponding to the
blocks with which the lifetime of v overlaps). The arcs between these nodes and S or F are
defined in the same ways as discussed in the previous paragraphs.
Furthermore, we introduce a register set, V_RB, which contains the variables residing in registers
at the completion of the allocation process for block B. Note that V_RB becomes uniquely defined
after the allocation process of B is completed, and that the size of V_RB, |V_RB|, is always less than
or equal to k, the number of available registers. We expand G by adding a node n_p(v) for each
v ∈ V_RB_p, where B_p is the parent block of B. It is not difficult to see that the variables in V_RB_p
are the only variables which have a chance to pass on their register locations to variables in B
directly. Now, we insert an arc from each n_p(u) to n_s(v) for each v ∈ (D_0 ∪ C_1) (where D_0 and
C_1 are as defined in the previous paragraphs). Comparing our generalized graph with
the original graph, one can see that at most k additional nodes and k · |D_0 ∪ C_1| additional arcs
are used in the generalized graph. Figure 3 shows an example graph G for the current block B
assuming that there are three available registers.

Figure 3: The graph G for block B based on the allocation of its parent block B_p.
Sometimes, a program may read a variable, v, more than once. In this case, we introduce an
additional node n_{r_i}(v) for each read of v except the last read. Additional arcs are also introduced
to model possible register sharing between variables. Due to the page limit, we omit the discussion
of this part.
Given the directed graph G for B, we are ready to construct the network flow problem associated
with G. Let x(n_f(u), n_s(v)) and c(n_f(u), n_s(v)) be the amount of flow and the cost of one
unit of flow on arc a(n_f(u), n_s(v)), respectively. Denote the total amount of energy consumed by
B as E. The objective function of our network flow problem can be written as:

E = Σ_{v∈B} h_v e^M_r + Σ_{v∈B} e^M_w + Σ_{v∈B | t_s(v)<t_s(B)} e^M_w − Σ_{a(p,q)∈A} c(p,q) x(p,q),   (3)

where A is the set of arcs in G. In (3), the first three terms are the amount of energy consumed
by B if all the variables are assigned to memory, and the last term represents the energy saved by
allocating certain variables to registers. The values of x(p,q) are unknown and to be determined.
If the x value of the arc pair corresponding to a variable v is one, then v will be assigned to a register. The
values of c(p,q) depend on the types of arcs associated with them, and can be categorized
into the following cases.
1) For an arc from a node of type n_f to another node of type n_s, i.e. a(n_f(u), n_s(v)), the cost
associated with the arc, c(n_f(u), n_s(v)), is computed by

c(n_f(u), n_s(v)) = e^M_r + e^M_w − H(u, v) C^R_rw V_R^2,   (4)

where u, v ∈ N and N is the set of nodes in G. This is the amount of energy saved by reading u from a register
and writing v to the same register.
2) For an arc from a node of type n_p to another node of type n_s, i.e. a(n_p(u), n_s(v)), the cost
associated with the arc, c(n_p(u), n_s(v)), is defined differently. There are a total of 7 cases to be
considered.

2.1) If u is not in B (i.e., u's lifetime does not overlap with that of B), and v is written in B, the
cost c(n_p(u), n_s(v)) is computed by

c(n_p(u), n_s(v)) = e^M_w − H(u, v) C^R_rw V_R^2.   (5)

2.2) If u is not in B, and v has been assigned to a register during the allocation process of B_p,

c(n_p(u), n_s(v)) = e^M_w + e^M_r − H(u, v) C^R_rw V_R^2.   (6)

2.3) If u is not in B, and v has been assigned to memory during the allocation process of B_p,

c(n_p(u), n_s(v)) = e^M_r − H(u, v) C^R_rw V_R^2.   (7)

2.4) If u is in B, and v is written in B, the cost c(n_p(u), n_s(v)) is the same as defined in (6) for
Case 2.2.

2.5) If u is in B, and v has been assigned to a register during the allocation process of B_p,

c(n_p(u), n_s(v)) = e^M_w + e^M_r − H(u, v) C^R_rw V_R^2.   (8)

2.6) If u is in B, and v has been assigned to memory during the allocation process of B_p,

c(n_p(u), n_s(v)) = e^M_r − H(u, v) C^R_rw V_R^2.   (9)

2.7) If u and v represent the same variable, the cost c(n_p(u), n_s(v)) is simply assigned to zero.
3) For an arc from the start node S to another node of type n_s, i.e. a(S, n_s(v)), we
need to have three different cost functions.

3.1) If v is written in B,

c(S, n_s(v)) = e^M_w − H(0, v) C^R_rw V_R^2,   (10)

where H(0, v) is the average Hamming distance between 0 and a variable v, and is normally
assumed to be 0.5.

3.2) If v has been assigned to a register during the allocation process of B_p,

c(S, n_s(v)) = e^M_w + e^M_r − H(0, v) C^R_rw V_R^2.   (11)

3.3) If v has been assigned to memory during the allocation process of B_p,

c(S, n_s(v)) = e^M_r − H(0, v) C^R_rw V_R^2.   (12)
4) For an arc from a node of type n_f to the finish node, F, i.e. a(n_f(v), F) for v ∈ D_g ∪ C_g, we
need to have two different cost functions.

4.1) If v is read in B,

c(n_f(v), F) = e^M_r.   (13)

4.2) If v is not read in B, the cost c(n_f(v), F) is simply assigned to zero.
5) For an arc from a node of type n_s to another node of type n_f, which is the arc corresponding
to the same variable, the cost associated with the arc is assigned to zero.

Using the above equations, the objective function for the network flow problem is uniquely
defined. The constraints for the network flow problem are defined based on the number of registers
available to the arcs: flow is conserved at every node other than S and F, each arc has unit
capacity, and a total flow of k units is sent from S to F (one unit per available register).
Applying a network flow algorithm such as the one in [21] to our network flow problem instance,
we can obtain the value of each x(p,q) in O(kn^2) time for the block B, where k is the number of
available registers and n is the number of variables whose lifetimes overlap with the lifetime of B.
If the resulting x value of the arc associated with a variable in B is one, the variable is assigned
to the appropriate register based on the flow information. The above formulation can then be
applied to each basic block in the program tree in the depth-first search order as we discussed at
the beginning of this section. A toy instance of this flow computation is sketched below.
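As an illustration of this step, the following toy instance solves a tiny version of the flow problem with the networkx library's min_cost_flow routine. The node names and savings are invented for the example; savings are negated because the solver minimizes cost while we want to maximize saved energy.

    import networkx as nx

    G = nx.DiGraph()
    k = 1  # one register: S supplies k units of flow, F consumes them
    G.add_edge("S", "ns_a", capacity=1, weight=-5)     # saving 5 if a gets the register
    G.add_edge("ns_a", "nf_a", capacity=1, weight=0)   # a's lifetime arc
    G.add_edge("nf_a", "ns_b", capacity=1, weight=-4)  # a passes its register to b
    G.add_edge("ns_b", "nf_b", capacity=1, weight=0)   # b's lifetime arc
    G.add_edge("nf_b", "F", capacity=1, weight=0)
    G.nodes["S"]["demand"] = -k
    G.nodes["F"]["demand"] = k

    flow = nx.min_cost_flow(G)
    print(flow["S"]["ns_a"], flow["nf_a"]["ns_b"])  # 1 1: both variables in the register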
Here we should point out that our method can also be used if one wishes to explore program
execution paths to determine the register allocation. For example, we can associate each execution
path of a program with a priority. One way to assign priorities is based on the execution frequency
of each path (obtained from profiling). Another way is based on the timing requirements. To solve
the register allocation problem, one can start with the highest priority path, P_1, and proceed to
the lower priority ones sequentially. For the first path, form a super block for the path and use the
local register allocation algorithm in [13] to find the best allocation for this single path. For the
next path, remove the basic blocks on this path whose register assignments have been determined.
Then form a super block B' for the rest of the basic blocks in this path. To construct the graph
for this super block B', we need to consider all those B_p's that have a child in B' and introduce
the necessary n_p nodes to complete the graph. The network flow problem for the super block B'
can use the same formulations as we discussed above. The process is repeated for all subsequent
paths. By applying our algorithm, the variable allocation decisions made for the higher priority
paths will not be changed by the allocation processes of the lower priority paths. Rather, the
results from the higher priority paths are used to ensure that good allocations are found for the
lower priority paths based on these results. Hence, we have effectively eliminated the conflicting
assignments that can arise when solving the allocation problem for each path separately.
4 Register Allocation for Blocks with Merging Parent Blocks
In this section we present our approach to handle the merge case. The fact that the allocation
results from different parent blocks may be different makes the merge case more difficult to handle
than the branch case. If a variable, v, is alive in B but is not defined (written) in B, v must
be alive when leaving every one of B's parents. Since the allocation results for v in different parent
blocks may be different, we cannot simply do the allocation for v in B based on the allocation
result from one particular parent block. Hence our register allocation approach for programs with
branches, where each basic block has only one parent block, is not adequate for a block with more
than one block merging into it. To handle the control structure of merge blocks, we devise a
method to capture the non-unique assignments of variables to registers. Based on this approach,
we formulate the problem of register allocation for both the static energy model and the activity
based energy model as an instance of the network flow problem.
Let the parent blocks of B be B_p1, ..., B_pm, where m is the total number of blocks that
merge into B and m > 1. Each parent block, B_pi, is associated with a probability, P(B_pi),
indicating how frequently the block is executed. (Such probabilities can be obtained from profiling
information.) After the allocation for a parent block is finished, the allocation decision for a
variable in B_pi which goes beyond the boundary of B_pi and B is fixed. We define the probability of
v allocated to a register (resp., memory) in B_pi as P(v, R, B_pi) (resp., P(v, M, B_pi)), and
let P(v, R_j, B_pi) be the probability of v being assigned to the register R_j in block B_pi. We also
define P(v, R) = Σ_i P(B_pi) P(v, R, B_pi) (resp., P(v, M) = Σ_i P(B_pi) P(v, M, B_pi)) as the total probability of v allocated to a
register (resp., memory) in all the parent blocks, and P(v, R_j) = Σ_i P(B_pi) P(v, R_j, B_pi) as the total probability of v
allocated to the register R_j in all the parent blocks. The following relations hold for the variables
defined above, where k is the total number of registers:

P(v, R) + P(v, M) = 1   and   P(v, R) = Σ_{j=1}^{k} P(v, R_j).

A small sketch of this aggregation is given below.
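The following sketch aggregates the per-parent allocation results into the total probabilities just defined; the input encoding of allocation results is an assumption of ours:

    def total_probs(parents):
        # parents: list of (P(B_pi), {var: ('R', j) or ('M', None)}) pairs, one
        # per parent block. Returns the aggregated P(v, R_j) / P(v, M) values.
        agg = {}
        for p_block, alloc in parents:
            for v, (kind, j) in alloc.items():
                key = (v, kind, j)
                agg[key] = agg.get(key, 0.0) + p_block
        return agg

    parents = [(0.7, {"v": ("R", 1)}), (0.3, {"v": ("M", None)})]
    print(total_probs(parents))
    # {('v', 'R', 1): 0.7, ('v', 'M', None): 0.3}  -> P(v,R) + P(v,M) = 1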
We modify the flow graph to be used in the merge case as follows. The source node, S, and the
finish node, F, are kept the same. The nodes used to represent a variable, v, are still the same:
n_s(v) for the starting node, and n_f(v) for the finishing node. Because the register assignments
in different parent blocks may be different, a cross boundary variable may have different allocation
assignments in different parent blocks. To reflect this fact, a critical set, C_i, is defined as a set of
variables with overlapping lifetimes to one another such that the number of variables written in
B in the set is greater than the number k of available registers. Furthermore, we introduce a new
set of register nodes, n_r(i), i = 1, ..., k, which correspond to the registers. In addition to all
the arcs defined in Section 2, we add new arcs from the source node, S, to each register node, n_r(i).
A complete bipartite graph is formed between the register nodes and the starting nodes for all
variables in the first critical set C_1 or D_0. An example flow graph for a block B with multiple
parent blocks is shown in Figure 4, assuming that there are two registers, R_1 and R_2, available
in the system.
In minimizing the energy consumed by a single block with multiple blocks merging into it,
one should minimize the overall energy consumption under all execution scenarios for this block.
Thus, the objective function for the flow problem needs to take into account the probabilities of
executing the different parent blocks. For the two energy models, the static energy model and the activity
based energy model, the objective function and the costs associated with arcs are slightly different.
To reduce repetition, we will integrate the presentation of the formulations for both models and
point out the differences wherever necessary. The objective function can be written as:
E = Σ_{v∈B} h_v e^M_r + Σ_{v∈B} e^M_w + Σ_{v∈B | t_s(v)<t_s(B)} P(v, R) e^M_w − Σ_{a(p,q)∈A} c(p,q) x(p,q)   (19)

Figure 4: The graph G for a basic block with more than one parent block
The first three terms of the objective function (19) are the amount of energy
consumed by block B if all the variables are assigned to memory, while the last term represents
the energy saved by allocating certain variables to registers. The difference between the two models is due
to the fact that in the activity based energy model, the energy consumption of register references
is computed by the switched capacitance between two consecutive accesses and is captured by the
costs, c(p,q), associated with the corresponding arcs in the last term of (19). The values of x(p,q) are
unknown and to be determined. If the x(p,q) value of the arc pair that corresponds to a variable v is one, then v
will be assigned to a register. The values of c(p,q) depend on the types of arcs associated
with them, and can be categorized into the following cases.
1) For an arc from a node of type n_f to another node of type n_s, i.e., a(n_f(u), n_s(v)), the cost
associated with the arc, c(n_f(u), n_s(v)), is computed differently for the two energy
models. For the static energy model,

c(n_f(u), n_s(v)) = (e^M_r − e^R_r) + (e^M_w − e^R_w),   (20)

while for the activity based energy model,

c(n_f(u), n_s(v)) = e^M_r + e^M_w − H(u, v) C^R_rw V_R^2,   (21)

where u, v ∈ N and N is the set of nodes in G. This is the amount of energy saved by reading u from a register
and writing v to the same register.
2) For an arc from a node of type n_r(i) to another node of type n_s, i.e., a(n_r(i), n_s(v)), the cost
associated with the arc, c(n_r(i), n_s(v)), can take one of the following values.

2.1) If v is written in B, for the static energy model,

c(n_r(i), n_s(v)) = e^M_w − e^R_w,   (22)

while for the activity based energy model,

c(n_r(i), n_s(v)) = e^M_w − Σ_u P(u, R_i) H(u, v) C^R_rw V_R^2.   (23)

For the static energy model, the energy saving is the energy of a memory write, e^M_w, minus
the energy of a register write, e^R_w. On the other hand, for the activity based energy model,
the energy for writing v into a register is determined by the Hamming distance between v and
the variable u that occupied R_i at the entry of block B. Since the probability of variable
u occupying R_i is P(u, R_i), the expected Hamming distance is the summation of
the Hamming distances between v and all the u that have a non-zero probability of occupying
R_i in one of the parent blocks, weighted by these probabilities. The energy of writing the variables in R_i back to memory when
R_i is assigned to v is already included in the third term of the objective function (19).
2.2) If v is not written in B, for the static energy model,

c(n_r(i), n_s(v)) = (P(v, R) e^M_w + e^M_r) − P(v, M)(e^M_r + e^R_w) − (P(v, R) − P(v, R_i))(e^R_r + e^R_w),   (24)

where the first term is the energy consumption if v is assigned to a memory location, the
second is the energy of assigning v to R_i in B if v is in memory in B's parent blocks, and
the last term is the energy of assigning v to R_i in B if v is already in a register other than
R_i in B's parent blocks.

For the activity based energy model,

c(n_r(i), n_s(v)) = (P(v, R) e^M_w + e^M_r) − P(v, M) e^M_r − Σ_{u≠v} P(u, R_i) H(u, v) C^R_rw V_R^2,   (25)

where the last term is the energy consumed by assigning v to R_i, which is determined by
the Hamming distances between v and the variables that occupy R_i in any of B's parent blocks.
3) For an arc from the source node S to a node of type n_r(i), i.e., a(S, n_r(i)), the cost
c(S, n_r(i)) is simply zero.
4) For an arc from a node of type n_f to the finish node, F, i.e., a(n_f(v), F) for v ∈ D_g ∪ C_g, we
need to have two different cost functions.

4.1) If v is read in B, for the static energy model,

c(n_f(v), F) = e^M_r − e^R_r,   (26)

while for the activity based energy model,

c(n_f(v), F) = e^M_r.   (27)

4.2) If v is not read in B, the cost c(n_f(v), F) is simply assigned to zero in both the static and
the activity based energy model.
5) For an arc from a node of type n_s to another node of type n_f, which is the arc corresponding
to the same variable, the cost associated with the arc is assigned to zero.

Using the above c(p,q) definitions, the objective function for the network flow problem is
uniquely defined. The constraints for the network flow problem are defined based on the number
of registers available to the different types of arcs: flow is conserved at every node other than S
and F, each arc has unit capacity, and a total flow of k units is sent from S to F.
Applying a network flow algorithm [21] to our network flow problem instance, the value of
each x(p,q) can be obtained in O(kn^2) time for the block B, where k is the number of available
registers and n is the number of variables whose lifetimes overlap with the lifetime of B. If
the resulting x value of the arc associated with a variable is one, the variable is assigned to the
appropriate register based on the flow information. The above formulation can then be applied
to each basic block in the program control flow graph in the topological order.
5 Register Allocation for Loops
In this section, we extend our approach to handle programs with loops. A loop can be considered
as a control structure with more than one parent block merging into it. However, it presents
some unique challenges since one of its parent blocks is the loop itself.

One way to solve the allocation problem for loops is to modify the flow problem formulation
presented in the last section. Consider a loop with one basic block as shown in Figure 5, where
the dashed lines represent those variables that are referenced in more than one iteration of the
loop execution. We refer to these variables as loop variables and the rest as non-loop variables.
The graph for modeling the allocation problem of a loop block can be built following the same
guidelines used for the merge case. The graph for the loop block in Figure 5 is shown in Figure 6.
(Note that the existence of loop variables does not change the graph since only register nodes
are needed to model parent block assignments.) Similar to the merge case, we associate certain
probabilities with the variables at the boundary of the loop block and its parent blocks to capture
potentially different assignments dictated by different parent blocks. For the example in Figure 5,
we assume that the loop has one parent block besides B_pl, which corresponds to the
loop itself. It may seem that one can now simply use the flow problem formulation for the merge
case to find the allocation for a loop. However, the difficulty is that the probabilities associated
with each loop variable (e.g., P(e, R, B_pl) and P(f, R, B_pl)) are dependent on the allocation result of the loop
block, which is yet to be determined.
Figure 5: A loop with one basic block, where e and f are loop variables, and a, b, c and d are
non-loop variables.
The optimization problem can still be solved, but it is no longer a simple flow problem. In
general, assume that B_pl is the loop body itself and that the probability that this block loops back to
itself is P(B_pl). (P(B_pl) can be obtained from profiling information.) The probability associated
with a loop variable v can be expressed as follows. If v is assigned to register R_i upon exiting the
loop, i.e., x(n_f(v), F) = 1, then P(v, R_i, B_pl) = 1 and P(v, M, B_pl) = 0. If v
is assigned to memory upon exiting the loop, i.e., x(n_f(v), F) = 0,
then P(v, M, B_pl) = 1. Consequently, the loop-variable probabilities in (19) can be
expressed in terms of the unknown flow values x(n_f(v), F).
It follows that the flow problem in (19) can be transformed into an integer quadratic programming
problem. Integer quadratic programming problems are generally NP-hard and may be solved by
the branch-and-bound method [26].

Figure 6: Flow graph for the loop in Figure 5.
We propose an efficient heuristic approach, which generally produces superior allocation results.
Recall that our register allocation algorithm determines the variable allocation of a block
based on the allocation results of its parent blocks. Consider a loop variable: it may have some
initial assignment dictated by the allocation of its parent blocks (other than the loop block itself),
another assignment before its first reference in the loop block, and yet another assignment
upon leaving the block. To ease our explanation, we refer to these assignments as pre-assignment,
in-assignment, and post-assignment, respectively. In Figure 7(a), we show a possible assignment
result for the block in Figure 5, where for variable e, its pre-assignment is a register, its
in-assignment is memory, and its post-assignment is a register. The basis for our approach is
the following observations. In general, a loop body is executed frequently, i.e., P(B_pl) is relatively
close to one. If a loop variable's pre-assignment is different from its in-assignment, only
a single switch from memory to register or vice versa is required. However, if a loop variable's
in-assignment is different from its post-assignment, as many switches as the number of
loop iterations would be needed. Hence, reducing the number of loop variables with different
in-assignments and post-assignments tends to result in better allocations. The overall phase structure of the resulting heuristic is sketched below.
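The overall control of the three phases can be sketched as follows, where solve() stands in for the min-cost flow solver of Section 4 extended with the phase-specific costs and probabilities described next; the interface is hypothetical:

    def allocate_loop_block(solve, loop_vars):
        # solve(in_fixed, post_fixed) -> (assignment, energy); a hypothetical
        # wrapper around the flow solver, where assignment maps each variable
        # to its (in_assignment, post_assignment) pair.
        a0, e0 = solve(None, None)                      # phase 0: ignore loop-back edges
        if all(a0[v][0] == a0[v][1] for v in loop_vars):
            return a0                                   # optimal (Theorem 1 below)
        a1, e1 = solve({v: a0[v][0] for v in loop_vars}, None)   # phase 1: fix in-assignments
        a2, e2 = solve(None, {v: a0[v][1] for v in loop_vars})   # phase 2: fix post-assignments
        return a1 if e1 <= e2 else a2                   # keep the better of the two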
Our algorithm employs an iterative approach. In the first allocation attempt, we ignore
the fact that the loop body is one of its own parents and compute the allocation by simply using the
results from the other parent blocks. That is, the values of P(v, R, B_pi) and P(v, M, B_pi) in the
flow formulation in (19) are assumed to be zero if B_pi is the loop body itself and v is a loop
variable. We refer to this as the initialization phase (Φ_0).
The algorithm proceeds as follows. If there is no loop variable whose in-assignment is different
from its post-assignment, the optimal allocation is found and the algorithm exits. Otherwise,
we perform two more allocation phases: the second phase (Φ_1) and the third phase (Φ_2). In the
second phase, we let the in-assignment of each loop variable be the value obtained from Φ_0
and solve the resultant allocation problem. Since the values of P(v, R, B_pl) and P(v, M, B_pl)
are unknown if v is a loop variable, it may seem that we still need to solve an integer quadratic
programming problem. However, by applying some simple manipulations, we can avoid this difficulty.
In particular, we modify the value of the cost c(n_f(v), F) for each loop variable v. Instead of always
setting it to zero as in the merge case, it is defined according to the in-assignment of the same
loop variable as follows:

c(n_f(v), F) = 0 if v's in-assignment is register,
c(n_f(v), F) = e^M_w − e^R_r otherwise.   (30)

The values of P(v, R, B_pl) and P(v, M, B_pl) for each loop variable v are still assumed to be
zero. The modified flow problem correctly captures the overall energy consumption when the
in-assignment of each loop variable is set to that obtained from Φ_0. The solution to the modified
flow problem leads to an improved allocation result.
In the third phase (Φ_2), we fix the post-assignment of each loop variable to the value obtained
in Φ_0. Then, P(v, R, B_pl) and P(v, M, B_pl) can be computed by

P(v, R_i, B_pl) = 1 if v's post-assignment is register R_i,
P(v, M, B_pl) = 1 if v's post-assignment is memory.

By substituting the above probability values in (19), we again reduce the allocation problem to
a min-cost flow problem. Solving this flow problem will result in an allocation that improves the
result obtained from Φ_0. The better one of Φ_1 and Φ_2 is kept as the best allocation result found
so far. Further improvement could be obtained if more iterations were carried out by treating the
current best allocation as the one obtained from Φ_0 and following the same procedure as used in
Φ_1 and Φ_2.
In the following, we show that the optimal allocation for a loop block is obtained if our
algorithm exits after the initialization phase Φ_0. Furthermore, we show that if the second and
third phases are needed, their results always improve that of Φ_0.

Lemma 1 Let E be the energy consumption of loop block B_pl and E' be the energy consumption
of B_pl assuming that the loop variables do not loop back. Then, min E ≥ min E'.

Proof: E' can be computed by setting P(v, R_i, B_pl) and P(v, M, B_pl) in (19)
to zero for each loop variable v. Then, E can be expressed as

E = E' + Σ_{v∈V_L} P(B_pl) [ x(n_r(i), n_s(v)) (1 − x(n_f(v), F)) (e^R_r + e^M_w) + (1 − x(n_r(i), n_s(v))) x(n_f(v), F) (e^M_r + e^R_w) ],   (31)

where V_L is the set of loop variables. It is not difficult to verify that E ≥ E' for any combination
of the x(n_r(i), n_s(v)) and x(n_f(v), F) values. Hence, min E ≥ min E'. □
Based on Lemma 1, we can prove the following theorem.

Theorem 1 If the allocation result from the initialization phase (Φ_0) makes the in-assignment of
each loop variable the same as its post-assignment, then min E = E|_Φ0.

Proof: Consider a loop variable v in B_pl. If its in-assignment and post-assignment are both
memory, we have x(n_r(i), n_s(v)) = 0 and x(n_f(v), F) = 0, so v's contribution to E in
(31) reduces to zero. Similarly, if v's in-assignment and post-assignment are both register R_i, we
have x(n_r(i), n_s(v)) = 1 and x(n_f(v), F) = 1, and
v's contribution to E in (31) becomes zero. When more than one loop variable exists, the same
conclusion can be obtained. Therefore, if the in-assignment of each loop variable is the same as
its post-assignment, we have E|_Φ0 = E'|_Φ0, where E'|_Φ0 is the total energy consumption of B_pl using
the same allocation as that obtained from Φ_0. Since Φ_0 minimizes E' and min E ≥ min E' by Lemma 1, we obtain
min E = E|_Φ0. □
According to Theorem 1, if the in-assignment of each loop variable is the same as its post-assignment,
the allocation result from Φ_0 is the optimal one for the loop block, and no more
allocation phases are needed.

When the in-assignment of some loop variable differs from its post-assignment, the result
from Φ_0 is no longer optimal. The reason is that the problem formulation used in Φ_0 fails to
account for the extra energy incurred by the loop variable switching its assignment at the loop
boundary. Our second and third phases aim at reducing such energy. In the second phase Φ_1, the
in-assignment of each loop variable is assumed to be the same as that obtained from Φ_0, and the
best allocation based on this assumption is identified. Let E|_Φ0 be the energy consumption of B_pl
when the allocation result from Φ_0 is used, and E|_Φ1 be the energy consumption of B_pl when only
the in-assignments of loop variables obtained from Φ_0 are fixed. After Φ_1, we obtain an allocation
result having min{E|_Φ1} as its energy consumption. Note that E|_Φ0 is simply one of the possible
values of E|_Φ1. Hence, min{E|_Φ1} ≤ E|_Φ0. That is, Φ_1 always improves the result from Φ_0. In
the third phase Φ_2, we fix the post-assignments of loop variables to the values obtained from Φ_0
and find the corresponding best allocation. Similar to the statement made for Φ_1, the allocation
result from Φ_2 also improves that of Φ_0.
If more allocation phases are carried out based on the better allocation from Φ_1 and Φ_2, further
improvement may be obtained. Our approach is essentially a hill-climbing technique and could
settle on a suboptimal solution. However, observe that our incremental improvement is based on
the optimal allocation for the loop body itself and that usually only a small subset of
the variables in a loop are loop variables. Therefore, our approach tends to have a good chance
of obtaining the optimal allocation. Our experimental results also support this observation.
We use the example in Figure 5 to illustrate how our algorithm works for a single-block loop.
Figure 7(a) depicts the allocation result after the initialization phase. Since the in-assignment and
post-assignment of the loop variables e and f are different, the second and third phases are needed.
In the second phase, the cost c(n_{f,2}(e), F) of the flow graph in Figure 6 is set to e^M_w − e^R_r, as the
in-assignment of e is memory, while the cost c(n_{f,2}(f), F) is set to zero, as the in-assignment of f is
register. The allocation result after the second phase is shown in Figure 7(b). In the third phase,
we let P(e, R_1, B_pl) = 1 since the post-assignment of e
is register R_1, and P(f, M, B_pl) = 1 since the post-assignment of f is memory.
The allocation result after the third phase is the same as that after Φ_1. Since the in-assignment
of e (resp., f) is now the same as its post-assignment in the allocation result, our process stops. It is
obvious that the allocation result is optimal for this block.
Our algorithm can be readily extended to handle the case where there is more than one block
inside a loop body, i.e., the control structure inside a loop body contains branches and loops. For
branches and merges, each phase in our algorithm needs to solve the allocation problem for each
block in the loop body following the process discussed in Sections 3 and 4. In Φ_1 (resp., Φ_2), the
in-assignments (resp., post-assignments) of loop variables obtained from Φ_0 are used to compute
the necessary cost or probability values. For loops inside a loop, the sequence of the allocation
is determined by the nesting levels of the loops. Since inner loops are executed more frequently than
outer loops, the inner-most loop is allocated first and the result is used by the outer loops.
Figure 7: The allocation results of different allocation phases for a loop. The solid lines correspond
to variables in registers, while the dashed lines correspond to variables in memory.
6 Experimental Results and Discussions
Graph-coloring based techniques such as the ones in [6] are most often used for solving the
register allocation problem. We compare our algorithm with a graph-coloring approach. We have
implemented a graph-coloring register allocator and our block-by-block register allocator in the
Tiger compiler presented in [1]. The modularity of the Tiger compiler makes it easy to keep the other
phases of the compiler fixed while only the register allocation algorithm is changed. The procedures for
finding the basic blocks, building the trace, and computing the lifetime of each variable are kept
the same in both allocators. In the implementation of the graph-coloring allocator, an interference
graph is built for each program. In applying the graph-coloring approach to color the graph, if
there is no node with degree less than the number of available registers in the graph, the node with
the highest degree is spilled; this is repeated until all the nodes are colored or spilled. In the implementation
of our algorithm, a network flow graph is built for each block. Then our algorithm for the different
control structures is applied to each basic block in the topological order to find the best allocation.
We compare the allocation results of the two different approaches for several real programs
containing complex control structures. The numbers of memory references produced by the two allocators
are summarized in Table 3. The improvements in the energy consumption and in the number of
memory references obtained by using our algorithm instead of the graph-coloring approach are summarized in
Table 4. According to data in [3, 17, 12], it is reasonable to assume that the ratio of the average
energy for a memory reference over the average energy for a register reference, e^M/e^R, is in the range
of 10 to 30. In Table 5, we summarize the sizes of the allocation problems that the two approaches need
to solve. As stated at the beginning of the paper, the complexity of our algorithm is O(b · l(n, k)),
where b is the total number of blocks and l(n, k) is the time complexity for solving the register
allocation problem for a single block that has n variables and k available registers. The function
l(n, k) is either O(n log n) for the single-read static-energy model, O(kn log n) for the multiple-read
static-energy model, or O(kn^2) for the activity based energy model [5, 13].
Table 3: The number of memory references allocated by the two approaches.

                                            Our approach                Graph-coloring
Example    k    # of register   # of data   # of spilled  # of memory   # of spilled  # of memory
Programs        candidates      references  candidates    references    candidates    references
branch     14   28              67
Table 4: The improvement of results by our algorithm over graph-coloring

            Energy improvement (%)                         Improvement of # of
Programs    e^M/e^R = 10   e^M/e^R = 20   e^M/e^R = 30     memory references (%)
factorial   72.8           80.7           89.6             29.8
branch      68.2           77.0           87.4             23.9
Table 5: The size of problems solved by the two approaches.

            Interference graph         Network flow
Benchmarks  # of nodes   # of edges    # of blocks b   max # of variables n
branch      28           293           4               26
The experimental results show that our algorithm can achieve more than 60% improvement
in the energy consumption due to the data references of a program over a graph-coloring approach.
At the same time, we also save more than 20% of the memory references, which also saves
execution time due to the data references of a program. Furthermore, our approach is simple to use
and its running time is polynomial.
There are some other improved graph-coloring based algorithms, such as the one in [11].
The register allocation results of those improved algorithms are better than those of the simple graph-coloring
approach implemented here. But again, those algorithms were all proposed to optimize
the execution time, not the energy consumption, of a program.
The algorithm proposed in this paper does not take into account any look-ahead information
while doing register allocation. It also does not allow any backtracking. In some cases, our
approach may produce suboptimal allocation results. One simple extension of the algorithm
would be to change the cost c(n_f(v), F) associated with the arc a(n_f(v), F) for a cross boundary
variable v according to the possible allocation results of v in the child blocks.

In this paper, we proposed a new algorithm to deal with those variables whose lifetimes
extend beyond basic blocks for low energy register allocation. Our algorithm has much lower time
and space complexity than other approaches for register allocation with variables extending
beyond basic blocks. The experimental results obtained with our algorithm so far are very promising.
More experiments with larger code sizes are being conducted. We are also investigating how
to deal with procedure calls.
Acknowledgment
This research was supported in part by the National Science Foundation under Grants CCR-9623585
and MIP-9701416, and by an External Research Program Grant from Hewlett-Packard
Laboratories, Bristol, England.
--R
Modern Compiler Implementation in C
Digital Circuits with Microprocessor Applications
Computer Architecture: A Quantitative Approach
Networks and Matroids
Integer and
--TR
Combinatorial optimization: algorithms and complexity
On the Minimization of Loads/Stores in Local Register Allocation
The priority-based coloring approach to register allocation
Register allocation via hierarchical graph coloring
A global, dynamic register allocation and binding for a data path synthesis system
Allocation algorithms based on path analysis
On the k-coloring of intervals
Register allocation and binding for low power
Power minimization in IC design
Demand-driven register allocation
Low energy memory and register allocation using network flow
Optimal and near-optimal global register allocations using 0-1 integer programming
The design and implementation of RAP
Computer architecture (2nd ed.)
Global register allocation for minimizing energy consumption
Modern Compiler Implementation in C
Digital Circuits with Microprocessor Applications
Global Register Allocation Based on Graph Fusion
Register allocation & spilling via graph coloring | low energy;register allocation
510962 | How to Choose Secret Parameters for RSA-Type Cryptosystems over Elliptic Curves. | Recently, and contrary to the common belief, Rivest and Silverman argued that the use of strong primes is unnecessary in the RSA cryptosystem. This paper analyzes how valid this assertion is for RSA-type cryptosystems over elliptic curves. The analysis is more difficult because the underlying groups are not always cyclic. Previous papers suggested the use of strong primes in order to prevent factoring attacks and cycling attacks. In this paper, we only focus on cycling attacks because for both RSA and its elliptic curve-based analogues, the length of the RSA-modulus n is typically the same. Therefore, a factoring attack will succeed with equal probability against all RSA-type cryptosystems. We also prove that cycling attacks reduce to find fixed points, and derive a factorization algorithm which (most probably) completely breaks RSA-type systems over elliptic curves if a fixed point is found. | Introduction
The theory of elliptic curves has been extensively studied for the last 90 years. In
1985, Koblitz and Miller independently suggested their use in cryptography [9, 19].
After this breakthrough, elliptic curve-based analogues of the RSA cryptosystem were
proposed [10, 4].
RSA-type systems belong to the family of public-key cryptosystems. A public-key
cryptosystem is a pair of a public encryption function f_K and a secret decryption
function f_K^{-1}, indexed by a key K and representing a permutation on a finite set
M of messages. The particularity of such systems is that given the encryption
function f_K, it is computationally infeasible to recover f_K^{-1}. Moreover, it might
be suitable that the encryption function does not leave any message unchanged, i.e.
given a message m ∈ M, we want that f_K(m) ≠ m. This is known as the message-concealing
problem [3]. Simmons and Norris [29] exploited this feature for possibly
recovering a plaintext from the only public information. Their attack, the
so-called cycling attack, relies on the cycle detection of the ciphertext. This was later
generalized by Williams and Schmid [31] (see also [7, 1]).
There are basically two ways to compromise the security of cryptosystems. The
first one is to find protocol failures [20] and the other one is to directly attack the
underpinning crypto-algorithm. The cycling attack and its generalizations fall into
the second category. So, it is important to carefully analyze the significance of
this attack. For RSA, Rivest and Silverman [25] (see also [16]) concluded that the
chance that a cycling attack will succeed is negligible, whatever the form of the
public modulus n. For elliptic curve-based systems, the analysis is more difficult
because the underlying group is not always cyclic. We will actually give some results
valid for groups of any rank, but we will mainly dwell on the security of KMOV
and Demytko's system.
The paper is organized as follows. In Section 2, we review KMOV and Demytko's
system. We extend the message-concealing problem to elliptic curves in Section 3.
Then, we show how this enables one to mount a cycling attack on KMOV and
Demytko's system in Section 4. We explain how the secret factors can be recovered
thanks to the cycling attack in Section 5. Finally, in Section 6, we give some concluding
remarks in order to help the programmer to implement "secure" RSA-type
cryptosystems.
2. Elliptic curves
Let n = pq be the product of two large primes p and q, and let a, b be two integers
such that gcd(4a^3 + 27b^2, n) = 1. An elliptic curve E_n(a, b) over the ring Z_n is the
set of points (x, y) ∈ Z_n × Z_n satisfying the Weierstrass equation

y^2 ≡ x^3 + ax + b (mod n),

together with a single element O_n called the point at infinity.
Let E_p(a, b) be an elliptic curve defined over the prime field F_p. It is well known
that the chord-and-tangent rule [17, § 2.2] makes E_p(a, b) into an Abelian group.
Algebraically, we have:

(i) O_p is the identity element, i.e. P + O_p = O_p + P = P for all P ∈ E_p(a, b).

(ii) The inverse of P = (x_1, y_1) is −P = (x_1, −y_1).

(iii) Let P = (x_1, y_1) and Q = (x_2, y_2) with P ≠ −Q. Then P + Q = (x_3, y_3) with
x_3 = λ^2 − x_1 − x_2 and y_3 = λ(x_1 − x_3) − y_1, where
λ = (y_2 − y_1)/(x_2 − x_1) if P ≠ Q, and λ = (3x_1^2 + a)/(2y_1) otherwise.
The points of E_n(a, b) unfortunately do not form an Abelian group. But writing
Ẽ_n(a, b) for the group given by the direct product E_p(a, b) × E_q(a, b), and noting that
E_n(a, b) ⊆ Ẽ_n(a, b), we can "add" points of E_n(a, b) by the chord-and-tangent
rule. For large p and q, the resulting point will be a point of E_n(a, b) with high
probability [10]. A small sketch of this modular chord-and-tangent arithmetic is given below.
probability [10].
It is useful to introduce some notations. Let
is defined, [k]P will denote P+P+ \Delta \Delta b). The x-coordinate
of P will be denoted by x(P). Moreover, since p 2 (the y-coordinate of P) is not
required to compute the x-coordinate of [k]P, we will write [k] x p 1 for x([k]P).
We can now define an analogue of RSA. The public encryption key e is chosen
relatively prime to
and the secret decryption key d is chosen according to ed j 1 (mod Nn ). To
encrypt a point P 2 En (a; b), one computes the ciphertext [e]P. Then, the
authorized receiver recovers P by computing his secret key d.
The only problem is to imbed messages as points on a given elliptic curve without
the knowledge of the secret factors p and q. A first solution was proposed by
Koyama, Maurer, Okamoto and Vanstone [10]. Another one was later proposed by
2.1. KMOV
KMOV cryptosystem uses a family of supersingular elliptic curves of the form
The main property of this system is that if p and q are both congruent to 2 mod 3,
then whatever the value of parameter b. Therefore, to encrypt
a message chosen according to
and the ciphertext is given by [e]M over the curve En (0; b). The plaintext is
then recovered by
Another possibility is to work with elliptic curves of the form En (a;
and q both congruent to 3 mod 4. The first system based on En (0; b) with p; q j 2
(mod will be referred as Type 1 scheme, and the second one based on En (a;
with systems were extended by
Kuwakado and Koyama to form-free primes [12].
2.2. Demytko's system
Demytko's system uses fixed parameters a and b. It has the particularity to only
make use of the x-coordinate of points of elliptic curves. It relies on the fact that
if a number x is not the x-coordinate of a point on an elliptic curve E p (a; b), then
it will be the x-coordinate of a point of the twisted curve defined as the set
of points (x; y) satisfying
is a fixed quadratic non-residue modulo p, together with the
point at infinity. So, Nn is given by
message m is encrypted as [e] x m. Then, m is recovered from the ciphertext
c by
For efficiency purposes, the original scheme (see [4]) was presented with message-
dependent decryption keys. The length of the decryption key is divided by a factor
of 2, on average. However, in the sequel, we will use the message-independent
description because this simplifies the analysis, and because we are not concerned
with efficiency issues.
3. Concealing-message problem
In [3], Blakley and Borosh showed that there are always at least 9 messages that are
unconcealable (i.e. the ciphertext of a message is exactly the same as the cleartext)
for any RSA cryptosystem. Though this problem is well-known for RSA, nothing
appears in the literature about its elliptic curve-based analogues. Since unconcealed
messages must be avoided, effective criteria are needed for evaluating the concealing
power of these latter systems.
Before analyzing the number of unconcealed messages for elliptic curve-based
systems, we will first give some general group-theoretic results.
Lemma 1 Let G be an Abelian (multiplicatively written) finite group of order #G.
Consider the map π_k: G → G, x ↦ x^k. Then π_k permutes the elements of G if
and only if gcd(k, #G) = 1.
Theorem 1 Let G be an Abelian (multiplicatively written) finite group of rank r
whose generators are g_1, ..., g_r, of respective orders n_1, ..., n_r. If π_k: G → G,
x ↦ x^k permutes the elements of G, then π_k has exactly
Fix(G; k) = ∏_{i=1}^{r} gcd(k - 1, n_i)
fixed points.
Proof: Write G = ⟨g_1⟩ ⊕ ... ⊕ ⟨g_r⟩, so that every x ∈ G can be written as
x = ∏_{i=1}^{r} g_i^{x_i} with 0 ≤ x_i < n_i. So, x^k = x if and only if
(k - 1)x_i ≡ 0 (mod n_i) for all i ∈ {1, ..., r}.
Each equation has gcd(k - 1, n_i) solutions. There are thus ∏_{i=1}^{r} gcd(k - 1, n_i)
fixed points by the permutation map π_k.
Let p and q be distinct primes and let n = pq. By an unconcealed message on
RSA, we mean a message m ∈ Z_n so that m^e ≡ m (mod n) for a fixed integer e
with gcd(e, φ(n)) = 1.¹ The latter condition ensures that
the exponentiation by e is a permutation map, or equivalently that RSA encryption
is a permutation of Z_n.
Corollary 1 Let n = pq be the RSA-modulus and let e be the RSA-encryption
key. Then, the number of unconcealed messages for RSA is given by
Fix(Z_n; e) = (gcd(e - 1, p - 1) + 1)(gcd(e - 1, q - 1) + 1).
Proof: Since Z_p* is a cyclic group of order p - 1, and since 0 is always a solution
to x^e ≡ x (mod p), Theorem 1 tells us that there are (gcd(e - 1, p - 1) + 1) fixed
points in Z_p. Moreover, since Z_n ≅ Z_p ⊕ Z_q by Chinese remaindering, the proof
is complete.
Note that since p, q and e are odd integers, there are at least 9 unconcealed
messages for the original RSA system. If we exclude encrypting 0 and ±1 (which are
always unconcealable messages), there are at least 6 unconcealed messages.
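This count is easy to confirm exhaustively on toy parameters; the following check (ours) compares a brute-force count against the corollary.

from math import gcd

p, q, e = 11, 13, 7
n = p * q
brute = sum(1 for m in range(n) if pow(m, e, n) == m)
formula = (gcd(e - 1, p - 1) + 1) * (gcd(e - 1, q - 1) + 1)
assert brute == formula == 21  # 21 unconcealed messages out of n = 143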
An elliptic curve E_p(a,b) over the prime field F_p is an Abelian group of rank 1
or 2 and of type (n_1, n_2), where n_2 | n_1 and n_2 | p - 1 [27, Theorem 2.12]. Therefore, we can
apply Theorem 1 with r ≤ 2. If we call x-fixed point a point P ∈ E_p(a,b)
such that x([k]P) = x(P) for a given integer k, Theorem 1 becomes:
Theorem 2 Let E_p(a,b) be an elliptic curve over the prime field F_p. If
gcd(k, #E_p(a,b)) = 1, then π_k: P ↦ [k]P permutes the points of E_p(a,b) and has exactly
Fix(E_p(a,b); k) = gcd(k - 1, n_1) gcd(k - 1, n_2)   (10)
fixed points. Furthermore, π_k has exactly
Fix_x(E_p(a,b); k) = gcd(k-1, n_1) gcd(k-1, n_2) + gcd(k+1, n_1) gcd(k+1, n_2) - (τ_2 + 1)   (11)
x-fixed points, where τ_2 is the number of points of order 2.
Proof: The first part follows immediately from Theorem 1. For the second part,
let P ∈ E_p(a,b). P is an x-fixed point if and only if x([k]P) = x(P), i.e.
[k]P = ±P. Letting A = {P : [k-1]P = O_p} and B = {P : [k+1]P = O_p}, we have
Fix_x(E_p(a,b); k) = #(A ∪ B) = #A + #B - #(A ∩ B),
where, by Theorem 1, #A = gcd(k-1, n_1) gcd(k-1, n_2) and #B = gcd(k+1, n_1) gcd(k+1, n_2).
Since A ∩ B = {P : [2]P = O_p}, i.e. O_p and the points of order 2 are counted twice,
we obtain Eq. (11).
The KMOV Type 1 scheme is based on elliptic curves of the form E_p(0,b) with p ≡ 2
(mod 3). The underlying group is isomorphic to the cyclic group Z_{p+1}. The
Type 2 scheme uses curves of the form E_p(a,0) with p ≡ 3 (mod 4). In
that case, the underlying group is also isomorphic to Z_{p+1} if a is a quadratic residue
modulo p; and it is isomorphic to Z_{(p+1)/2} ⊕ Z_2 otherwise. From Eq. (10), for an odd
k, we see that, for a given KMOV elliptic curve E_p(a,b), there are at least 2
fixed points if E_p(a,b) is cyclic and at least 4 fixed points otherwise. These points
correspond to the point at infinity together with the points of order 2. Noting that
the encryption key e is always odd for KMOV, and since the point at infinity is not
used to represent messages, there are at least 1, 3 or 9 unconcealed messages on a
given KMOV elliptic curve E_n(a,b). Consequently, the probability that a random
message is unconcealed is at least 1/n. This has to be compared with 6/n for
the original RSA.
Demytko's encryption works in a group of the form G_p^{(i)} × G_q^{(j)} with
i, j ∈ {1, 2}, where G_p^{(1)} = E_p(a,b) and
G_p^{(2)} = Ē_p(a,b).
Writing G_p^{(i)} ≅ Z_{n_1} ⊕ Z_{n_2}, we define Fix_x(G_p^{(i)}; k) according to Eq. (11),
and similarly for G_q^{(j)}. Demytko's system only makes use of the x-coordinate. So,
since the point at infinity is never used for encryption, Theorem 2 indicates that
there are
(Fix_x(G_p^{(i)}; e) - 1)(Fix_x(G_q^{(j)}; e) - 1)
unconcealed messages on G_p^{(i)} × G_q^{(j)}.² This
number may be equal to 0, and we cannot give general information on the minimal
number of unconcealed messages in Demytko's system.
For efficiency purposes, the public encryption key e is usually relatively small
(for example, e = 3 or e = 2^16 + 1 are common choices). In all systems, the
number of unconcealed messages depends on expressions of the form
gcd(e ± 1, n_i). Therefore, the maximal number of unconcealed messages
is mainly bounded by (e + 1)^2. So, if the encryption key is equal to 2^16 + 1,
then the probability that a message is unconcealed is at most ~10^{-144} for a 512-
bit RSA-modulus and ~10^{-299} for a 1024-bit RSA-modulus. Even if the number
of unconcealed messages is small, we will see in the next section how this can be
turned into an active attack.
4. Cycling attack
4.1. Previous results on RSA
Let c = m^e mod n be the ciphertext corresponding to message m, where (e, n) is
the public key. If we find an integer k that satisfies the equation
c^{e^k} ≡ c (mod n),
then we can obviously recover the plaintext m by computing m = c^{e^{k-1}} mod n.
Note that we do not have to factor the public modulus n, so this might be a serious
failure for the RSA cryptosystem. This attack, firstly proposed by Simmons and
Norris [29], was later extended by Williams and Schmid [31] (see also [7]) in the
following way. Let P(t) be a polynomial. They showed that if the ciphertext c has
a period P(g) such that c^{P(g)} ≡ 1 (mod n)
for some integer g, then the plaintext m can be recovered.
4.2. Generalizing the cycling attack
We can generalize the results of the previous paragraph to any Abelian finite
group G.
Theorem 3 Let G be an Abelian (multiplicatively written) finite group. Let m ∈ G be a
message and let c = m^e be the corresponding ciphertext, where gcd(e, #G) = 1.³
If we find an integer P, relatively prime to e, such that c^P = 1 in G, then the
plaintext m can be recovered by computing c^u, where eu ≡ 1 (mod P).
Proof: Let t = ord_G(m), i.e. t is the smallest positive integer such that m^t = 1
in G. By Lagrange's Theorem, t | #G, and since gcd(e, #G) = 1, c^P = m^{eP} = 1
implies that t | eP and thus t | P. Therefore, there exists u ∈ Z such that
eu ≡ 1 (mod P); letting eu = 1 + wP, we obtain
c^u = m^{eu} = m (m^P)^w = m.
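Specialized to G = Z_n*, this is the classical attack of Simmons and Norris: keep re-encrypting the ciphertext until it reappears. A minimal sketch, with hypothetical toy parameters of our choosing:

def cycling_attack(c, e, n, max_steps=10**6):
    """Search k with c^(e^k) = c (mod n); then c^(e^(k-1)) is the plaintext."""
    prev, cur = c, pow(c, e, n)
    for _ in range(max_steps):
        if cur == c:
            return prev               # prev = c^(e^(k-1)) = m
        prev, cur = cur, pow(cur, e, n)
    return None                       # period not found: attack failed

n, e = 2773, 17                       # hypothetical toy key (n = 47 * 59)
m = 42
assert cycling_attack(pow(m, e, n), e, n) == m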
We call this theorem the generalized cycling attack. This theorem indicates that
KMOV and Demytko's system are also susceptible to the cycling attack.
Detecting the integer P is equivalent to the problem of finding a polynomial P(t)
and an integer g such that P = P(g). Moreover, the relation c^{P(g)} = 1 is equivalent
to
P(g) ≡ 0 (mod ord_G(c)).   (16)
If #G = ∏_i p_i^{e_i} denotes the prime decomposition of the group order #G, then, since
ord_G(c) divides #G, Eq. (16) can be reduced to
P(g) ≡ 0 (mod p_i^{f_i})   (17)
for all primes p_i dividing ord_G(c), where p_i^{f_i} is the exact power of p_i in ord_G(c).
Here, we must check that these relations hold by picking a random polynomial
P(t) and a random integer g. This means that the success of the cycling attack depends on
the distribution of such polynomials and on the order of the ciphertext c.
Roughly speaking, if the order of G is smooth, we can expect that there are
many elements c ∈ G with small order. So, the primes p_i in Eq. (17) will be small,
and a polynomial P will be more easily found. Consequently, it might be desirable
to impose that #G contains at least one large prime factor in order to make the
cycling attack harder. We will now analyze this assumption in more detail for
elliptic curve-based systems.
4.3. Application to elliptic curve systems
As previously mentioned, an elliptic curve E_p(a,b) over the prime field F_p is not
necessarily cyclic, but isomorphic to Z_{n_1} ⊕ Z_{n_2} with n_2 | n_1 and n_2 | p - 1. Therefore,
for analyzing the cycling attack over elliptic curves, we have to estimate the number
of points in E_p(a,b) of a given order. If n_2 = 1 (i.e. E_p(a,b) is a cyclic group), then
the number of elements of order d is given by Euler's totient function, namely
φ(d). For the general case, we have:
Proposition 1 Let E_p(a,b) ≅ Z_{n_1} ⊕ Z_{n_2} be an elliptic curve over the prime field F_p. If we
write d = ∏_i p_i^{d_i}, then the number of elements of order d (for d | n_1) is
equal to
F(d) = φ(d) ∏_{p_i ∈ Ω_{d,n_2}} g_i, with g_i = p_i^{d_i - 1}(p_i + 1) if d_i ≤ c_i, and g_i = p_i^{c_i} if d_i > c_i,
where Ω_{d,n_2} is the set of primes p_i dividing gcd(d, n_2) and
c_i is the power of p_i which appears in the prime decomposition of n_2.
Furthermore, given the prime factorization of gcd(#E_p(a,b), p - 1), n_1 and n_2 can be
computed in probabilistic polynomial time.
Note that if Ω_{d,n_2} = ∅, then we take ∏_{p_i ∈ Ω_{d,n_2}} g_i = 1.
Proof: The first part of the proposition is proved in Appendix A. The second
part follows from Miller's probabilistic polynomial time algorithm for finding n_1
and n_2 (see [17, §5.4]).
We can now derive a lower bound on the number of elements whose order is
divisible by a large prime factor of the order of E_p(a,b).
Proposition 2 Let E_p(a,b) be an elliptic curve over the prime field F_p. Suppose
that #E_p(a,b) is exactly divisible by a prime factor l_p. If F_div(l_p) denotes the
number of elements of order divisible by l_p, then
F_div(l_p) / #E_p(a,b) ≥ 1 - 1/l_p.
Proof: See Appendix B.
This proposition indicates that if we randomly pick an element of E_p(a,b), it
has order divisible by l_p with probability at least φ(l_p)/l_p = 1 - 1/l_p. When l_p is large, this
probability is non-negligible (i.e. really "nearly 1").
RSA-type cryptosystems over elliptic curves are constructed on groups of the form
E_n(a,b), which can be considered as E_p(a,b) × E_q(a,b) by Chinese remaindering.
In the sequel, we will suppose that #E_p(a,b) (resp. #E_q(a,b)) contains a large
prime factor l_p (resp. l_q). With high probability, a random point P_p ∈ E_p(a,b)
(resp. P_q ∈ E_q(a,b)) will have order divisible by l_p (resp. l_q). Therefore a random
point P on E_n(a,b) (represented by (P_p, P_q) by Chinese remaindering) will
have order divisible by l_p and l_q with high probability.
As we discussed in Paragraph 4.2, the cycling attack reduces to finding a polynomial
P and an integer g with c^{P(g)} = 1. Over E_n(a,b),
this attack becomes "Find a polynomial P and an integer g so that [P(g)]C = O_n
for some ciphertext C ∈ E_n(a,b)". Equivalently, this can be formulated by an expression
of the form of Eq. (17). Since the order of the ciphertext C is supposed to be
divisible by l_p and l_q with high probability, we must have P(g) ≡ 0 (mod l_p) and
P(g) ≡ 0 (mod l_q) to mount a successful cycling attack. Williams and Schmid [31]
estimated that these relations are rarely fulfilled except when P(g) = e^k - 1 for
some integer k. We thus have to take care whether or not
e^k ≡ 1 (mod l_p),   (20)
and similarly for prime q. Letting ord_{l_p}(e) denote the smallest integer k satisfying
Eq. (20), k must be a multiple of ord_{l_p}(e). Consequently, the cycling attack will be
useless if ord_{l_p}(e) is large.
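Given the factorization of l_p - 1, the order ord_{l_p}(e) can be computed by successively stripping prime factors; a small helper of our own illustrates this.

def multiplicative_order(e, lp, factors):
    """Order of e modulo the prime lp, given lp - 1 = prod r^s as {r: s}."""
    t = lp - 1
    for r, s in factors.items():
        for _ in range(s):
            if pow(e, t // r, lp) == 1:
                t //= r               # the order still divides t // r
            else:
                break
    return t

# A large prime factor r_p of lp - 1 that divides ord_{l_p}(e) keeps it large:
assert multiplicative_order(2, 11, {2: 1, 5: 1}) == 10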
Note 1 In his fast generation algorithm of secure keys, Maurer [15] suggested to verify
that e^{(l_p - 1)/r_{i,p}} ≢ 1 (mod l_p) for all i, where l_p - 1 = ∏_i r_{i,p}^{s_{i,p}} is the prime
decomposition of l_p - 1. This criterion implies that ord_{l_p}(e) must be large, and the cycling
attack is then not applicable. Another method is to impose that l_p - 1 contains a large prime
factor r_p. The probability that ord_{l_p}(e) is divisible by r_p will then be 1 - 1/r_p.
Proof: Let l_p - 1 = ∏_i r_{i,p}^{s_{i,p}} be the prime decomposition
of l_p - 1. The number of elements in Z*_{l_p} whose order is divisible by r_p is at least
(l_p - 1)(1 - 1/r_p),
whence the result, e behaving as a random element of Z*_{l_p}.
This is known as the strong primes criterion.
Throughout this section, we have proven some conditions to preclude cycling attacks.
Putting it all together, we have:
Theorem 4 The cycling attack does not apply against KMOV if the secret prime
p has the following properties: (i) p+1 has a large prime factor l_p, and (ii) ord_{l_p}(e)
is large; and similarly for prime q.
Theorem 5 The cycling attack does not apply against Demytko's system if the
elliptic curves over F_p have the following properties: (i) #E_p(a,b) has a large prime
factor l_p and #Ē_p(a,b) has a large prime factor l'_p, and (ii) ord_{l_p}(e) and ord_{l'_p}(e)
are large; and similarly for prime q.
5. Factoring the RSA-modulus
5.1. Relation between unconcealed message and cycling attack
For a given ciphertext C ∈ E_n(a,b), the cycling attack detects an integer k satisfying
[e^k]C = C. This is equivalent to the message-concealing problem, where the
"message" is now a ciphertext instead of a cleartext. If E_p(a,b) ≅ Z_{n_{1,p}} ⊕ Z_{n_{2,p}}
and E_q(a,b) ≅ Z_{n_{1,q}} ⊕ Z_{n_{2,q}}, from Theorem 2, we know
that there are
Fix(E_n(a,b); e^k) = ∏_{i=1,2} gcd(e^k - 1, n_{i,p}) gcd(e^k - 1, n_{i,q})
unchanged ciphertexts C via encryption by e^k. Moreover, by Eq. (20), e^k ≡ 1 (mod l_p)
yields l_p | gcd(e^k - 1, n_{1,p}) for the prime l_p dividing #E_p(a,b); and
similarly for prime q. So the number of unchanged ciphertexts C is larger than
l_p l_q.
Suppose that primes p and q were chosen so that both #E_p(a,b) and #E_q(a,b)
contain a large prime factor l_p and l_q, respectively. Then, there may be many
ciphertexts C such that [e^k]C = C, and the corresponding cleartexts can be recovered.
This means that a cycling attack is really effective when applicable. To
prevent this attack, the designer also has to verify that ord_{l_p}(e) (resp. ord_{l_q}(e)) is
large (see Theorems 4 and 5).
5.2. Factoring by means of fixed points
In Section 4, we explained how the cycling attack can recover a plaintext. Here, we
will show that the knowledge of an unchanged ciphertext enables still more, i.e. to
completely break the system by factoring the RSA-modulus n = pq.
This can be illustrated by the elliptic curve factoring method (ECM) [13] introduced
by Lenstra. It can basically be described as follows. Suppose that n is the
product of two primes p and q. Consider an elliptic curve E_n(a,b) over the ring Z_n.
Assume that #E_p(a,b) or #E_q(a,b) is B-smooth. Then define
r = ∏_{primes p_i ≤ B} p_i^{⌊log n / log p_i⌋},
choose a random point P ∈ E_n(a,b) (note that one may first pick P and a at random
and then set b accordingly), and compute
[r]P in E_n(a,b) (and not in E_p(a,b) × E_q(a,b), because p and q are unknown).
As mentioned in Section 2, some points are not "realizable" because E_n(a,b) is
not a group. During the computation of [r]P, at step i, three situations can occur:
(i) [r_i]P ≡ O_p (mod p) and [r_i]P ≢ O_q (mod q); (ii) [r_i]P ≢ O_p (mod p)
and [r_i]P ≡ O_q (mod q); (iii) [r_i]P ≡ O_p (mod p) and [r_i]P ≡ O_q (mod q).
In cases (i) and (ii), the denominator of λ in the chord-and-tangent formulas (see
Eq. (2)) will have a non-trivial common factor with n; so n is factored. In case (iii), [r]P
is correctly computed: we obtain [r]P = O_n, no factor of n is found, and we then
re-iterate the process with another point P or with other parameters a and b.
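A single ECM trial can be sketched as follows (our code, reusing ec_mul and FactorFound from Section 2; the smoothness bound and prime list are illustrative).

import random

def ecm_trial(n, primes=(2, 3, 5, 7, 11, 13, 17, 19)):
    """One ECM attempt on n: returns a non-trivial factor of n, or None."""
    x, y, a = (random.randrange(n) for _ in range(3))
    b = (y * y - x ** 3 - a * x) % n          # random curve through P = (x, y)
    P = (x, y)
    try:
        for pi in primes:                     # r = prod pi^floor(log n/log pi)
            e_i = 1
            while pi ** (e_i + 1) <= n:
                e_i += 1
            P = ec_mul(pi ** e_i, P, a, n)
            if P is INFINITY:                 # case (iii): no factor revealed
                return None
    except FactorFound as err:                # cases (i)/(ii): n is factored
        return err.factor
    return None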
Let ord_p(P) and ord_q(P) be the orders of the point P in E_p(a,b) and E_q(a,b), respectively.
Let π be a prime. We can write ord_p(P) = π^{f_p} u_p and ord_q(P) = π^{f_q} u_q
with gcd(π, u_p) = gcd(π, u_q) = 1. Hence, if we
know an integer r of the form r = π^t u with t ≥ max(f_p, f_q) and u a multiple of
lcm(u_p, u_q), we must have [r]P = O_n in E_n(a,b). If f_p ≠ f_q, or without loss of
generality f_p < f_q, we define r' = r/π^{t - f_p}; then
[r']P ≡ O_p (mod p) and [r']P ≢ O_q (mod q),   (22)
and we find a non-trivial factor of n similarly as in ECM.
The message-concealing problem and the cycling attack are due to the presence of
fixed points P ∈ E_n(a,b) such that [r]P = P, with r = e for the
message-concealing problem and r = e^k for the cycling attack. The
knowledge of a fixed point P gives [r - 1]P = O_n. We are then in the conditions
of ECM, and the RSA-modulus can be factored with some probability as follows.
[Step 1] Choose a prime factor π of r - 1 and let π^t be the exact power of π dividing r - 1; set i := 1.
[Step 2] Compute r' = (r - 1)/π^i.
[Step 3] Compute [r']P in E_n(a,b).
If an error occurs (i.e. Eq. (22) is satisfied⁴), then n is factored. Otherwise,
if [r']P = O_n and i < t, set i := i + 1 and go to Step 2; else go to Step 1.
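In code, this stripping procedure might look as follows (our own sketch, reusing ec_mul and FactorFound; small_primes is an illustrative list of candidate factors π of r - 1).

def factor_from_fixed_point(P, r, a, n, small_primes=(2, 3, 5, 7, 11, 13)):
    """Given [r-1]P = O_n, strip prime powers from r - 1 to try to split n."""
    for pi in small_primes:          # Step 1: a prime factor pi of r - 1
        rp = r - 1
        while rp % pi == 0:
            rp //= pi                # Step 2: r' = (r - 1) / pi^i
            try:
                Q = ec_mul(rp, P, a, n)   # Step 3: compute [r']P in E_n(a, b)
            except FactorFound as err:
                return err.factor    # Eq. (22) is satisfied: n is factored
            if Q is not INFINITY:
                break                # orders coincide (f_p = f_q): next prime
    return None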
The next theorem says more about the probability of factoring the RSA-modulus
using one iteration of this method.
Theorem 6 Consider KMOV or Demytko's system. Let E_p(a,b) ≅ Z_{n_{1,p}} ⊕ Z_{n_{2,p}}
and E_q(a,b) ≅ Z_{n_{1,q}} ⊕ Z_{n_{2,q}}, and let π be a prime. Let γ_π denote the probability
that f_p + f_q ≥ 2. If we know a fixed point P ≠ O_n such that [r - 1]P = O_n and if r - 1 is divisible by π,
then we can factor the RSA-modulus with probability bounded below in terms of γ_π.
Proof: See Appendix C.
Assume for example that γ_2 ≈ 0.5 and that we know a point P such that
[r - 1]P = O_n. If π = 2 (which is the most probable case), then our algorithm will find
the secret factors of n with probability at least 15%. Otherwise, we re-iterate the
algorithm with another prime factor π of r - 1.
5.3. Remark on efficiency
Reconsider the cycling attack [e^k]C = C over E_n(a,b). From Eq. (20), k must be a
multiple of both ord_{l_p}(e) and ord_{l_q}(e) to apply the attack. However, what we ultimately
need to factor the modulus n is an integer r' such that, for example,
[r']C ≡ O_p (mod p) and [r']C ≢ O_q (mod q) (see Eq. (22)); or equivalently, such
that e^k ≡ 1 (mod l_p) and e^k ≢ 1 (mod l_q). This means that a cycling
attack just modulo p (or modulo q), rather than modulo both primes simultaneously,
enables to factor n. Therefore, k needs to be just a multiple of ord_{l_p}(e) or of
ord_{l_q}(e), not of both of them. This results in a higher probability of success.
6. Concluding remarks
In Section 4, we proved that if the conditions of Theorems 4 and 5 are fulfilled,
then cycling attacks are useless against elliptic curve-based RSA systems. This is the
elliptic version of the well-known strong primes criterion. For RSA, Rivest and
Silverman [25] claimed that this criterion is not required. They said:
"Strong primes offer little protection beyond that offered by random primes."
We will now analyze more accurately how valid this assertion is, and whether it remains
valid for elliptic curve-based systems. The analogue of Theorems 4 and 5 for the original
RSA is:
Theorem 7 Let n = pq be the RSA modulus and let e be the public encryption
exponent. The cycling attack does not apply against RSA if the secret prime p has
the following properties: (i) p - 1 has a large prime factor l_p, and (ii) l_p - 1 has a
large prime factor r_p (cf. Note 1); and similarly for prime q.
A prime p satisfying conditions (i) and (ii) of the previous theorem is said to be
a strong prime. Some authors also recommend that (iii) p + 1 has a large prime
factor. Condition (iii) is required in order to protect against the p + 1 factoring
algorithm [30].
In their paper, Rivest and Silverman only consider the primes p and q. They did
not take into account the second condition of Theorem 7.⁵ Our analysis is based
on a previous work of Knuth and Trabb Pardo [11] (see also [22, pp. 161-163]),
who rigorously calculated the distribution of the largest, second largest, ... prime
factors of random numbers. Also, they have tabulated:
Table 1. Proportion ρ(α) of (large) numbers N whose largest prime factor is ≤ N^{1/α}.
α      1.5    2.0    2.5    3.0     4.0     5.0      6.0       8.0
ρ(α)   0.59   0.31   0.13   0.049   0.0049  0.00035  0.000020  0.000000032
We can now more precisely quantify what "large" means in Theorem 7 in order
to prevent cycling attacks. A cycling attack amounts to finding an integer k such that
c^{e^k} ≡ c (mod n) for some ciphertext c, where e is the public encryption key and
n = pq is the RSA-modulus. From k, the plaintext m corresponding to c is then
given by m = c^{e^{k-1}} mod n. However, we noticed in §5.3 that it suffices to mount
a cycling attack modulo p (instead of modulo n) to factor the RSA-modulus. For
RSA, the secret prime factors are then recovered as follows. Suppose that there exists
an integer k such that c^{e^k} ≡ c (mod p). Then gcd(c^{e^k} - c, n)
gives p, and hence q = n/p. Knowing p and q, the secret key d is computed as
d = e^{-1} mod lcm(p - 1, q - 1), and the plaintext m is then given by m = c^d mod n.
From Eqs (17) and (20), if l_p denotes the largest prime factor of p - 1, k must be
(with probability 1 - 1/l_p)⁶ a multiple of ord_{l_p}(e) to apply the cycling attack modulo p;
we thus have k ≥ ord_{l_p}(e) with probability at least 1 - 1/l_p. From Knuth and
Trabb Pardo's results, we can derive how a typical integer k looks. We note that
an average-case analysis makes sense since the distribution of the largest prime
factor, the second largest prime factor, etc., is monotone. The average size of l_p is
p^{0.624}; similarly, the average size of the largest prime factor r_p
of l_p - 1 is l_p^{0.624} ≈ p^{0.39}. (Note that we suppose that l_p - 1 and p - 1 behave
like random numbers. This assumption was confirmed by experimental results using
the LiDIA package [14]: over 1 000 000 random 100-bit primes l_p, 423 were such that
l_p - 1 is an (l_p - 1)^{1/5}-smooth number, that is, a proportion of 0.000423 ≈ 10^{-3.37}.
This has to be compared with ρ(5) ≈ 10^{-3.45}.) The average size of the second
largest prime factor r'_p of l_p - 1 is l_p^{0.210} ≈ p^{0.13}. Since r_p
divides ord_{l_p}(e) with probability 1 - 1/r_p
(see Note 1), we have
k ≥ r_p ≈ p^{0.39}
with probability at least (1 - 1/l_p)(1 - 1/r_p). For a
512-bit RSA modulus n = pq, this probability is already greater than 1 - 10^{-29},
and it is greater than 1 - 10^{-59} for a 1024-bit modulus. In summary, we have:
Table 2. Lower bound K on a typical value for k
such that c^{e^k} ≡ c (mod p), for a t-bit RSA modulus.
t               512 bits   768 bits   1024 bits
Lower bound K   ~10^30     ~10^45     ~10^60
Albeit very high, the estimation of the bound K (see Table 2) is quite pessimistic;
in practice, k will be much larger than K, and a cycling attack (modulo p) will
thus have fewer chances to be effective. Indeed, if we take into account the third
largest prime factor r''_p of l_p - 1, we have k ≥ r_p r'_p r''_p
with probability at least (1 - 1/r_p)(1 - 1/r'_p)(1 - 1/r''_p).
More importantly, we have only taken into account the largest prime factor
l_p of p - 1. Let l'_p be the second largest prime factor of p - 1; its average size is
p^{0.210}. The ciphertext c has its order divisible by l_p l'_p with probability
at least (1 - 1/l_p)(1 - 1/l'_p). Therefore, from Eq. (17) (see also
Eq. (20)), k is very likely (i.e., with probability nearly 1)
a multiple of lcm(ord_{l_p}(e), ord_{l'_p}(e)). The largest prime factor s_p of l'_p - 1 has an
average size of (l'_p)^{0.624} ≈ p^{0.131}, so that k ≥ r_p s_p ≈ p^{0.52} with a probability of
at least (1 - 1/p^{0.131})(1 - 1/p^{0.210}). For example, for a 1024-bit RSA
modulus, we have k ≥ 10^{80} with probability at least 1 - 10^{-20}.
Consequently, k is expected to be very large, and a cycling attack will thus have
very little chance to be successful.
Hasse's Theorem [27, Theorem 1.1] indicates that #E_p(a,b) ∈ [p + 1 - 2√p, p + 1 + 2√p],
and we can thus consider that #E_p(a,b) = O(p); in particular, #E_p(a,b) and p have the same
bit-size. Therefore, from Theorems 4 and 5, the previous discussion still applies
to elliptic curve-based cryptosystems, and the conclusion of Rivest and Silverman
remains valid, i.e. the use of strong primes offers (quasi) no additional security
against cycling attacks.
However, as remarked by Pinch [21], a user might intentionally choose a "weak"
RSA-modulus. Suppose that a user chooses his public RSA-modulus n = pq such
that a cycling attack is possible. In that case, this user can repudiate a document
by asserting that an intruder has discovered by chance (since the probability of a cycling
attack is negligible) the weakness. If the use of strong primes is imposed in
standards [8], such arguments cannot be used for contesting documents in court.
In conclusion, from a mathematical point of view, strong primes are not needed,
but they may be useful for other purposes (e.g., legal issues). On the other hand,
since the generation of strong primes is only a little bit more time-consuming,
there is no reason not to use them.
Acknowledgments
We are grateful to Jean-Marc Couveignes for providing useful comments on a previous
version of this work. We also thank Markus Maurer for teaching us how to
use LiDIA.
Notes
1. φ(n) is Euler's totient function and denotes the number of positive integers not greater
than, and relatively prime to, n.
2. Note that this expression slightly differs from Eq. (11). This is because Eq. (11) counts the
number of x-fixed points; here we have to count the number of x-coordinates that are unchanged
by Demytko's encryption.
3. This condition is equivalent to requiring that π_e is a permutation map of G (see Lemma 1).
4. Or if [r']P ≢ O_p (mod p) and [r']P ≡ O_q (mod q).
5. See [25], p. 17: "Suppose r does not divide ord(e) mod N". Note also the typo: N should
be replaced by λ(N).
6. This is the probability that l_p divides ord_{Z_p^*}(c) (see Note 1).
References
Factoring via superencryption.
Security of number theoretic cryptosystems against random attack.
A new elliptic curve based analogue of RSA.
Strong RSA keys.
Strong primes are easy to find.
Critical remarks on some public-key cryptosystems.
International Organization for Standardization.
Elliptic curve cryptosystems.
New public-key schemes based on elliptic curves over the ring Zn.
Analysis of a simple factorization algorithm.
Efficient cryptosystems over elliptic curves based on a product of form-free primes.
The LiDIA Group.
Fast generation of secure RSA-moduli with almost maximal diversity.
Fast generation of prime numbers and secure public-key cryptographic parameters.
Elliptic curve public key cryptosystems.
Handbook of applied cryptography.
Use of elliptic curves in cryptography.
Protocol failures in cryptosystems.
On using Carmichael numbers for public-key encryption systems.
Prime numbers and computer methods for factorization.
Remarks on a proposed cryptanalytic attack on the M.I.T. public-key cryptosystem.
"Critical remarks on some public-key cryptosystems".
Are 'strong' primes needed for RSA?
A method for obtaining digital signatures and public-key cryptosystems.
The Arithmetic of Elliptic Curves.
Fast generation of random, strong RSA primes.
Preliminary comment on the M.I.T. public-key cryptosystem.
Some remarks concerning the M.I.T. public-key cryptosystem.