name | title | abstract | fulltext | keywords
---|---|---|---|---
89 | Evolutionary Learning with Kernels: A Generic Solution for Large Margin Problems | In this paper we embed evolutionary computation into statistical learning theory. First, we outline the connection between large margin optimization and statistical learning and see why this paradigm is successful for many pattern recognition problems. We then embed evolutionary computation into the most prominent representative of this class of learning methods, namely into Support Vector Machines (SVM). In contrast to former applications of evolutionary algorithms to SVMs we do not only optimize the method or kernel parameters . We rather use both evolution strategies and particle swarm optimization in order to directly solve the posed constrained optimization problem. Transforming the problem into the Wolfe dual reduces the total runtime and allows the usage of kernel functions. Exploiting the knowledge about this optimization problem leads to a hybrid mutation which further decreases convergence time while classification accuracy is preserved. We will show that evolutionary SVMs are at least as accurate as their quadratic programming counterparts on six real-world benchmark data sets. The evolutionary SVM variants frequently outperform their quadratic programming competitors. Additionally, the proposed algorithm is more generic than existing traditional solutions since it will also work for non-positive semidefinite kernel functions and for several, possibly competing, performance criteria. | INTRODUCTION
In this paper we will discuss how evolutionary algorithms
can be used to solve large margin optimization problems.
We explore the intersection of three highly active research
areas, namely machine learning, statistical learning theory,
and evolutionary algorithms. While the connection between
statistical learning and machine learning was analyzed before, embedding evolutionary algorithms into this connection
will lead to a more generic algorithm which can deal
with problems today's learning schemes cannot cope with.
Supervised machine learning is often about classification
problems. A set of data points is divided into several classes
and the machine learning method should learn a decision
function in order to decide into which class an unseen data
point should be classified.
The maximization of a margin between data points of different
classes, i. e. the distance between a decision hyperplane
and the nearest data points, interferes with the ideas
of statistical learning theory. This allows the definition of an
error bound for the generalization error. Furthermore, the
usage of kernel functions allows the learning of non-linear
decision functions. We focus on Support Vector Machines
(SVM) as they are the most prominent representatives for
large margin problems. Since SVMs guarantee an optimal solution for the given data set, they are currently one of the most widely used learning methods. Furthermore, many other optimization problems can also be formulated as large margin problems [26]. The relevance of large margin methods can be measured by the number of submissions to the main machine learning conferences over the past years (more than 30% of all accepted papers for ICML 2005 dealt with SVMs and other large margin methods).
Usually, the optimization problem posed by SVMs is solved
with quadratic programming. However, there are some drawbacks.
First, for kernel functions which are not positive
semidefinite no unique global optimum exists. In these cases
quadratic programming is not able to find satisfying solutions
at all. Moreover, most implementations do not even
terminate [8]. There exist several useful non-positive kernels
[15], among them the sigmoid kernel which simulates a
neural network [3, 23]. A more generic optimization scheme
should allow such non-positive kernels without the need for
omitting the more efficient dual optimization problem [17].
Second, SVMs should be able to optimize several performance
measures at the same time. Traditional SVMs try
to maximize the prediction accuracy alone. However, depending
on the application area other specific performance
criteria should be optimized instead of or additionally to
prediction accuracy. Although first attempts were made to
incorporate multivariate performance measures into SVMs
[13], the problem is not generally solved and no solution exists for competing criteria. This problem as well as the general trade-off between training error and capacity could be easily solved by a (multi-objective) evolutionary optimization
approach.
Former applications of evolutionary algorithms to SVMs
include the optimization of method and kernel parameters
[6, 19], the selection of optimal feature subsets [7], and the
creation of new kernel functions by means of genetic programming
[10]. The latter is particularly interesting since
it cannot be guaranteed that the resulting kernel functions
are again positive semi-definite.
Replacing the traditional optimization techniques by evolution
strategies or particle swarm optimization can tackle
both problems mentioned above. We will extract as much
information as possible from the optimization problem at
hand and develop and compare different search point operations. We will show that the proposed implementation leads to results as good as those of traditional SVMs on all real-world benchmark data sets. Additionally, the optimization is more generic since it also allows non-positive semi-definite kernel functions and the simultaneous optimization of different, possibly competing, criteria.
1.1 Outline
In Section 2 we give a short introduction into the concept
of structural risk minimization and the ideas of statistical
learning theory. We will also discuss an upper bound for
the generalization error. This allows us to formalize the optimization
problem of large margin methods in Section 3.
We will introduce SVMs for the classification of given data
points in Section 3.1 and extend the separation problem to
non-separable datasets (see Section 3.2) with non-linear hyperplanes
(see Section 3.3). This leads to a constrained optimization
problem for which we utilize evolution strategies
and particle swarm optimization in Section 4. We discuss
several enhancements and a new type of mutation before
we evaluate the proposed methods on real-world benchmark
datasets in Section 5.
STRUCTURAL RISK MINIMIZATION
In this section we discuss the idea of structural risk minimization
. Machine learning methods following this paradigm
have a solid theoretical foundation and it is possible to define
bounds for prediction errors.
Let $X \subseteq \mathbb{R}^m$ be a real-valued vector of random variables and let $Y \subseteq \mathbb{R}$ be another random variable. $X$ and $Y$ obey a fixed but unknown probability distribution $P(X, Y)$. Machine learning tries to find a function $f(x, \alpha)$ which predicts the value of $Y$ for a given input $x \in X$. The function class $f$ depends on a vector of parameters $\alpha$, e.g. if $f$ is the class of all polynomials, $\alpha$ might be the degree. We define a loss function $L(Y, f(X, \alpha))$ in order to penalize errors during prediction [9]. Every convex function with arity 2, positive range, and $L(x, x) = 0$ can be used as loss function [22]. This leads to a possible criterion for the selection of a function $f$, the expected risk:

$$R(\alpha) = \int L(y, f(x, \alpha))\, dP(x, y) \qquad (1)$$
Since the underlying distribution is not known we are not able to calculate the expected risk. However, instead of estimating the probability distribution in order to allow this calculation, we directly estimate the expected risk by using a set of known data points $T = \{(x_1, y_1), \ldots, (x_n, y_n)\} \subseteq X \times Y$. $T$ is usually called training data. Using this set of data points we can calculate the empirical risk:

$$R_{emp}(\alpha) = \frac{1}{n} \sum_{i=1}^{n} L\left(y_i, f(x_i, \alpha)\right) \qquad (2)$$

If training data is sampled according to $P(X, Y)$, the empirical risk approximates the expected risk as the number of samples grows:

$$\lim_{n \to \infty} R_{emp}(\alpha) = R(\alpha). \qquad (3)$$
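As a concrete illustration of equation (2), the empirical risk is simply the average loss over the training sample. The following minimal Python sketch is our own illustration; the squared loss, the toy data, and the function names are assumptions, not part of the paper:

```python
import numpy as np

def empirical_risk(model, loss, X, y):
    """Average loss of `model` over the training sample (equation 2)."""
    predictions = np.array([model(x) for x in X])
    return float(np.mean(loss(y, predictions)))

# Illustrative usage with a squared loss and a fixed linear toy model.
squared_loss = lambda y_true, y_pred: (y_true - y_pred) ** 2
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.1, 0.9, 2.2])
linear_model = lambda x: 1.0 * x[0]   # plays the role of f(x, alpha) with alpha fixed
print(empirical_risk(linear_model, squared_loss, X, y))
```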
It is, however, a well known problem that for a finite number of samples the minimization of $R_{emp}(\alpha)$ alone does not lead to a good prediction model [27]. For each loss function $L$, each candidate $\alpha$, and each set of tuples $T' \subseteq X \times Y$ with $T \cap T' = \emptyset$ there exists another parameter vector $\alpha'$ so that $L(y, f(x, \alpha)) = L(y, f(x, \alpha'))$ for all $x \in T$ and $L(y, f(x, \alpha)) > L(y, f(x, \alpha'))$ for all $x \in T'$. Therefore, the minimization of $R_{emp}(\alpha)$ alone does not guarantee the optimal selection of a parameter vector $\alpha$ for other samples according to the distribution $P(X, Y)$. This problem is often referred to as overfitting.
At this point we use one of the main ideas of statistical learning theory. Think of two different functions perfectly approximating a given set of training points. The first function is a linear function, i.e. a simple hyperplane in the considered space $\mathbb{R}^m$. The second function also hits all training points but is strongly wriggling in between. Naturally, if we had to choose between these two approximation functions, we would tend to select the simpler one, i.e. the linear hyperplane in this example. This derives from the observation that simpler functions behave better on unseen examples than very complicated functions. Since the mere minimization of the empirical risk according to the training data is not appropriate to find a good generalization, we incorporate the capacity² of the used function into the optimization problem (see Figure 1). This leads to the minimization of the structural risk

$$R_{struct}(\alpha) = R_{emp}(\alpha) + \lambda\,\Omega(\alpha) \qquad (4)$$

$\Omega$ is a function which measures the capacity of the function class $f$ depending on the parameter vector $\alpha$. Since the empirical risk is usually a monotonically decreasing function of $\Omega$, we use $\lambda$ to manage the trade-off between training error and capacity. Methods minimizing this type of risk function are known as shrinkage estimators [11].
2.1 Bound on the generalization performance
For certain functions $\Omega$ the structural risk is an upper bound for the expected risk. The capacity of the function $f$ for a given $\alpha$ can for example be measured with help of the Vapnik-Chervonenkis dimension (VC dimension) [27, 28]. The VC dimension is defined as the cardinality of the biggest set of tuples which can be separated with help of $f$ in all possible ways. For example, the VC dimension of linear hyperplanes in an $m$-dimensional space is $m+1$. Using the VC dimension as a measure for capacity leads to a probabilistic bound for the structural risk [27]. Let $f$ be a function class with finite VC dimension $h$ and $f(\alpha)$ the best solution for the empirical risk minimization for $T$ with $|T| = n$. Now choose some $\eta$ such that $0 \le \eta \le 1$. Then for losses smaller than some number $B$, the following bound holds with probability $1 - \eta$:

$$R(\alpha) \le R_{emp}(\alpha) + B\sqrt{\frac{h\left(\log\frac{2n}{h} + 1\right) - \log\frac{\eta}{4}}{n}} \qquad (5)$$

Surprisingly, this bound is independent of $P(X, Y)$. It only assumes that both the seen and the unseen data points are independently sampled according to some $P(X, Y)$. Please note that this bound also no longer contains a weighting factor $\lambda$ or any other trade-off at all. The existence of a guaranteed error bound is the reason for the great success of structural risk minimization in a wide range of applications.

²Although not the same, the capacity of a function resembles a measurement of the function complexity. In our example we measure the ability to "wriggle". More details can be found in [27].

Figure 1: The simultaneous minimization of empirical risk and model complexity gives a hint which function should be used in order to generalize the given data points.
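To make the quantities in bound (5) concrete, the following small Python sketch evaluates its right-hand side; all numeric values (B, h, the error, η, and the sample sizes) are made up for illustration only:

```python
import math

def vc_bound(r_emp, B, h, n, eta):
    """Right-hand side of bound (5): empirical risk plus capacity term."""
    capacity = math.sqrt((h * (math.log(2 * n / h) + 1) - math.log(eta / 4)) / n)
    return r_emp + B * capacity

# The bound gets tighter with more samples n and looser with a larger VC dimension h.
for n in (100, 1000, 10000):
    print(n, vc_bound(r_emp=0.05, B=1.0, h=10, n=n, eta=0.05))
```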
LARGE MARGIN METHODS
As discussed in the previous section we need to use a class
of functions whose capacity can be controlled. In this section
we will discuss a special form of structural risk minimization
, namely large margin approaches. All large margin
methods have one thing in common: they embed structural
risk minimization by maximizing a margin between a linear
function and the nearest data points. The most prominent
large margin method for classification tasks is the Support
Vector Machine (SVM).
3.1 Support Vector Machines
We constrain the number of possible values of Y to 2,
without loss of generality these values should be
-1 and
+1. In this case, finding a function f in order to decide
which of both predictions is correct for an unseen data point
is referred to as classification learning for the classes
-1
and +1. We start with the simplest case: learning a linear
function from perfectly separable data. As we shall see in
Section 3.2 and 3.3, the general case - non-linear functions
derived from non-separable data - leads to a very similar
problem.
If the data points are linearly separable, a linear hyperplane must exist in the input space $\mathbb{R}^m$ which separates both classes. This hyperplane is defined as

$$H = \{x \mid \langle w, x \rangle + b = 0\}, \qquad (6)$$
Figure 2: A simple binary classification problem for two classes -1 (empty bullets) and +1 (filled bullets). The separating hyperplane is defined by the vector w and the offset b. The distance between the nearest data point(s) and the hyperplane is called margin.
where $w$ is normal to the hyperplane, $|b|/\|w\|$ is the perpendicular distance of the hyperplane to the origin (offset or bias), and $\|w\|$ is the Euclidean norm of $w$. The vector $w$ and the offset $b$ define the position and orientation of the hyperplane in the input space. These parameters correspond to the function parameters $\alpha$. After the optimal parameters $w$ and $b$ were found, the prediction of new data points can be calculated as

$$f(x, w, b) = \mathrm{sgn}\left(\langle w, x \rangle + b\right), \qquad (7)$$

which is one of the reasons why we constrained the classes to -1 and +1.
Figure 2 shows some data points and a separating hyperplane. If all given data points are correctly classified by the hyperplane at hand the following must hold:

$$\forall i: \; y_i\left(\langle w, x_i \rangle + b\right) \ge 0. \qquad (8)$$

Of course, an infinite number of different hyperplanes exist which perfectly separate the given data points. However, one would intuitively choose the hyperplane which has the biggest amount of safety margin to both sides of the data points. Normalizing $w$ and $b$ in a way that the point(s) closest to the hyperplane satisfy $|\langle w, x_i \rangle + b| = 1$, we can transform equation 8 into

$$\forall i: \; y_i\left(\langle w, x_i \rangle + b\right) \ge 1. \qquad (9)$$
We can now define the margin as the perpendicular distance of the nearest point(s) to the hyperplane. Consider two points $x_1$ and $x_2$ on opposite sides of the margin, that is $\langle w, x_1 \rangle + b = +1$ and $\langle w, x_2 \rangle + b = -1$, and hence $\langle w, (x_1 - x_2) \rangle = 2$. The margin is then given by $1/\|w\|$.
It can be shown that the capacity of the class of separating hyperplanes decreases with increasing margin [21]. Maximizing the margin of a hyperplane therefore formalizes the structural risk minimization discussed in the previous section. Instead of maximizing $1/\|w\|$ we could also minimize $\frac{1}{2}\|w\|^2$, which will result in simpler equations later. This leads to the optimization problem

minimize $\;\frac{1}{2}\|w\|^2$ (10)

subject to $\;\forall i: y_i\left(\langle w, x_i \rangle + b\right) \ge 1.$ (11)
Function 10 is the objective function and the constraints from equation 11 are called inequality constraints. They form a constrained optimization problem. We will use a Lagrangian formulation of the problem. This allows us to replace the inequality constraints by constraints on the Lagrange multipliers which are easier to handle. The second reason is that after the transformation of the optimization problem, the training data will only appear in dot products. This will allow us to generalize the optimization to the non-linear case (see Section 3.3). We will now introduce positive Lagrange multipliers $\alpha_i$, $i = 1, \ldots, n$, one for each of the inequality constraints. The Lagrangian has the form

$$L_P(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{n} \alpha_i y_i \left(\langle w, x_i \rangle + b\right) + \sum_{i=1}^{n} \alpha_i. \qquad (12)$$
Finding a minimum of this function requires that the derivatives

$$\frac{\partial L_P(w, b, \alpha)}{\partial w} = w - \sum_{i=1}^{n} \alpha_i y_i x_i \qquad (13)$$

$$\frac{\partial L_P(w, b, \alpha)}{\partial b} = -\sum_{i=1}^{n} \alpha_i y_i \qquad (14)$$

are zero, i.e.

$$w = \sum_{i=1}^{n} \alpha_i y_i x_i \qquad (15)$$

$$0 = \sum_{i=1}^{n} \alpha_i y_i. \qquad (16)$$
The Wolfe dual, which has to be maximized, results from the Lagrangian by substituting 15 and 16 into 12, thus

$$L_D(w, b, \alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j \alpha_i \alpha_j \langle x_i, x_j \rangle. \qquad (17)$$
This leads to the dual optimization problem which must be solved in order to find a separating maximum margin hyperplane for a given set of data points:

maximize $\;\sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j \alpha_i \alpha_j \langle x_i, x_j \rangle$ (18)

subject to $\;\alpha_i \ge 0$ for all $i = 1, \ldots, n$ (19)

and $\;\sum_{i=1}^{n} \alpha_i y_i = 0.$ (20)
From an optimal vector $\alpha^*$ we can calculate the optimal normal vector $w^*$ using equation 15. The optimal offset can be calculated with help of equation 11. Please note that $w^*$ is a linear combination of those data points $x_i$ with $\alpha_i \neq 0$. These data points are called support vectors, hence the name support vector machine. Only support vectors determine the position and orientation of the separating hyperplane; other data points might as well be omitted during learning. In Figure 2 the support vectors are marked with circles. The number of support vectors is usually much smaller than the total number of data points.
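Since only support vectors enter equation 15, the decision function of equation 7 can be evaluated from the support vectors alone. Below is a minimal Python sketch of this prediction step; the linear kernel, the toy values, and all variable names are illustrative assumptions of ours:

```python
import numpy as np

def predict(x, support_vectors, sv_labels, sv_alphas, b, kernel):
    """Decision function built from equations 7 and 15:
    f(x) = sgn( sum_i alpha_i * y_i * k(x_i, x) + b )."""
    decision = sum(a * y * kernel(sv, x)
                   for a, y, sv in zip(sv_alphas, sv_labels, support_vectors)) + b
    return 1 if decision >= 0 else -1

# Illustrative usage with a plain dot-product (linear) kernel.
linear_kernel = lambda u, v: float(np.dot(u, v))
svs    = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
labels = [+1, -1]
alphas = [0.5, 0.5]
print(predict(np.array([2.0, 0.5]), svs, labels, alphas, b=0.0, kernel=linear_kernel))
```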
3.2 Non-separable data
We now consider the case that the given set of data points is not linearly separable. The optimization problem discussed in the previous section would not have a solution since in this case constraint 11 could not be fulfilled for all $i$. We relax this constraint by introducing positive slack variables $\xi_i$, $i = 1, \ldots, n$. Constraint 11 becomes

$$\forall i: \; y_i\left(\langle w, x_i \rangle + b\right) \ge 1 - \xi_i. \qquad (21)$$

In order to minimize the number of wrong classifications we introduce a correction term $C \sum_{i=1}^{n} \xi_i$ into the objective function. The optimization problem then becomes

minimize $\;\frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i$ (22)

subject to $\;\forall i: y_i\left(\langle w, x_i \rangle + b\right) \ge 1 - \xi_i.$ (23)
The factor $C$ determines the weight of wrong predictions as part of the objective function. As in the previous section we create the dual form of the Lagrangian. The slack variables $\xi_i$ vanish and we get the optimization problem

maximize $\;\sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j \alpha_i \alpha_j \langle x_i, x_j \rangle$ (24)

subject to $\;0 \le \alpha_i \le C$ for all $i = 1, \ldots, n$ (25)

and $\;\sum_{i=1}^{n} \alpha_i y_i = 0.$ (26)

It can easily be seen that the only difference to the separable case is the additional upper bound $C$ for all $\alpha_i$.
3.3 Non-linear learning with kernels
The optimization problem described by equations 24, 25, and 26 will deliver a linear separating hyperplane for arbitrary datasets. The result is optimal in the sense that no other linear function is expected to provide a better classification function on unseen data according to $P(X, Y)$. However, if the data is not linearly separable at all, the question arises how the described optimization problem can be generalized to non-linear decision functions. Please note that the data points only appear in the form of dot products $\langle x_i, x_j \rangle$. A possible interpretation of this dot product is the similarity of these data points in the input space $\mathbb{R}^m$. Now consider a mapping $\Phi: \mathbb{R}^m \to H$ into some other Euclidean space $H$ (called feature space) which might be performed before the dot product is calculated. The optimization would depend on dot products in this new space $H$, i.e. on functions of the form $\langle \Phi(x_i), \Phi(x_j) \rangle$. A function $k: \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}$ with the characteristic

$$k(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle \qquad (27)$$
is called kernel function or kernel. Figure 3 gives a rough idea how transforming the data points can help to solve non-linear problems with the optimization in a (higher dimensional) space where the points can be linearly separated. A fascinating property of kernels is that for some mappings $\Phi$ a kernel $k$ exists which can be calculated without actually performing $\Phi$. Since often the dimension of $H$ is greater than the dimension $m$ of the input space, and $H$ sometimes is even infinite dimensional, the usage of such kernels is a very efficient way to introduce non-linear decision functions into large margin approaches. Prominent examples for such efficient non-linear kernels are polynomial kernels with degree $d$

$$k(x_i, x_j) = \left(\langle x_i, x_j \rangle + \theta\right)^d, \qquad (28)$$

radial basis function kernels (RBF kernels)

$$k(x_i, x_j) = e^{-\frac{\|x_i - x_j\|^2}{2\sigma^2}} \qquad (29)$$
Figure 3: After the transformation of all data points into the feature space H the non-linear separation problem can be solved with a linear separation algorithm. In this case a transformation in the space of polynomials with degree 2 was chosen.
for a $\sigma > 0$, and the sigmoid kernel

$$k(x_i, x_j) = \tanh\left(\kappa \langle x_i, x_j \rangle - \theta\right) \qquad (30)$$

which can be used to simulate a neural network. $\kappa$ and $\theta$ are scaling and shifting parameters. Since the RBF kernel is easily interpretable and often yields good prediction performance, it is used in a wide range of applications. We will also use the RBF kernel for our experiments described in Section 5 in order to demonstrate the learning ability of the proposed SVM.
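For illustration, the three kernels of equations 28-30 translate directly into code. The following Python sketch uses the parameter names from the reconstruction above (degree d, shift θ, width σ, scale κ); the concrete default values are arbitrary:

```python
import numpy as np

def polynomial_kernel(xi, xj, degree=2, theta=1.0):
    # Equation 28: (<xi, xj> + theta)^d
    return (np.dot(xi, xj) + theta) ** degree

def rbf_kernel(xi, xj, sigma=1.0):
    # Equation 29: exp(-||xi - xj||^2 / (2 sigma^2))
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2 * sigma ** 2))

def sigmoid_kernel(xi, xj, kappa=1.0, theta=0.0):
    # Equation 30: tanh(kappa * <xi, xj> - theta); not positive semidefinite in general
    return np.tanh(kappa * np.dot(xi, xj) - theta)

a, b = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(polynomial_kernel(a, b), rbf_kernel(a, b), sigmoid_kernel(a, b))
```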
We replace the dot product in the objective function by kernel functions and achieve the final optimization problem for finding a non-linear separation for non-separable data points:

maximize $\;\sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j \alpha_i \alpha_j k(x_i, x_j)$ (31)

subject to $\;0 \le \alpha_i \le C$ for all $i = 1, \ldots, n$ (32)

and $\;\sum_{i=1}^{n} \alpha_i y_i = 0.$ (33)
It can be shown that if the kernel $k$, i.e. its kernel matrix, is positive definite, the objective function is concave [2]. The optimization problem therefore has a unique global maximum. However, in some cases a specialized kernel function must be used to measure the similarity between data points which is not positive definite, sometimes not even positive semidefinite [21]. In these cases the usual quadratic programming approaches might not be able to find a global maximum in feasible time.
EVOLUTIONARY COMPUTATION FOR LARGE MARGIN OPTIMIZATION
Since traditional SVMs are not able to optimize for non-positive semidefinite kernel functions, it is a very appealing idea to replace the usual quadratic programming approaches by an evolution strategies (ES) approach [1] or by particle swarm optimization (PSO) [14]. In this section we will describe both a straightforward application of these techniques and how we can exploit some information about our optimization problem and incorporate that information into our search operators.
4.1 Solving the dual problem and other simplifications
The used optimization problem is the dual problem for
non-linear separation of non-separable data developed in the
last sections (equations 31, 32, and 33). Of course it would
also be possible to directly optimize the original form of
our optimization problem depicted in equations 22 and 23.
That is, we could directly optimize the weight vectors and
the offset. As mentioned before, there are two drawbacks:
first, the costs of calculating the fitness function would be
much higher for the original optimization problem since the
fulfillment of all n constraints must be recalculated for each
new hyperplane. It is a lot easier to check whether $0 \le \alpha_i \le C$ holds for all $i$. Second, it would not be possible to allow non-linear learning with efficient kernel functions in the original formulation of the problem. Furthermore, the kernel matrix $K$ with $K_{ij} = k(x_i, x_j)$ can be calculated beforehand and
the training data is never used during optimization again.
This further reduces the needed runtime for optimization
since the kernel matrix calculation is done only once.
This is a nice example of a case where transforming the objective function beforehand is both more efficient and allows
enhancements which would not have been possible before
. Transformations of the fitness functions became a very
interesting topic recently [25].
Another efficiency improvement can be achieved by formulating
the problem with b = 0. All solution hyperplanes
must then contain the origin and the constraint 33 will vanish. This is a mild restriction for high-dimensional spaces
since the number of degrees of freedom is only decreased by
one. However, during optimization we do not have to cope
with this equality constraint which would take an additional
runtime of O(n).
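With these simplifications a candidate solution is just a vector α subject to the box constraints, and its fitness is the dual objective (31) evaluated on a precomputed kernel matrix. The following Python sketch follows the text's description (b = 0, precomputed K); the function and variable names are ours, not the paper's implementation:

```python
import numpy as np

def precompute_kernel_matrix(X, kernel):
    """Kernel matrix K_ij = k(x_i, x_j), computed once before the optimization."""
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

def dual_fitness(alpha, y, K):
    """Dual objective (31) for b = 0; y is the label vector, K the kernel matrix."""
    weighted = alpha * y                      # element-wise alpha_i * y_i
    return float(np.sum(alpha) - 0.5 * weighted @ K @ weighted)

def is_feasible(alpha, C):
    """Box constraints (32): 0 <= alpha_i <= C for all i."""
    return bool(np.all(alpha >= 0) and np.all(alpha <= C))
```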
4.2 EvoSVM and PsoSVM
We developed a support vector machine based on evolution
strategies optimization (EvoSVM). We utilized three
different types of mutation which will be described in this
section.
Furthermore, we developed another SVM based
on particle swarm optimization (PsoSVM) which is also described.
The first approach (EvoSVM-G, G for Gaussian mutation) merely utilizes a standard ES optimization. Individuals are the real-valued vectors $\alpha$ and mutation is performed by adding a Gaussian distributed random variable with standard deviation $C/10$. In addition, a variance adaptation is conducted during optimization (1/5 rule [18]). Crossover probability is high (0.9). We use tournament selection with a tournament size of 0.25 multiplied by the population size. The initial individuals are random vectors with $0 \le \alpha_i \le C$. The maximum number of generations is 1000 and the optimization is terminated if no improvement occurred during the last 5 generations. The population size is 10.
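A minimal sketch of this Gaussian mutation in Python; clipping the mutated values back into [0, C] is our assumption about how the box constraints are maintained, since the paper does not spell this out:

```python
import numpy as np

def gaussian_mutation(alpha, C, sigma=None, rng=None):
    """EvoSVM-G style mutation: add N(0, sigma) noise with sigma defaulting
    to C/10, then keep every alpha_i inside the box constraints [0, C]."""
    if rng is None:
        rng = np.random.default_rng()
    if sigma is None:
        sigma = C / 10.0
    mutated = alpha + rng.normal(0.0, sigma, size=alpha.shape)
    return np.clip(mutated, 0.0, C)

alpha = np.random.default_rng(0).uniform(0.0, 1.0, size=5)
print(gaussian_mutation(alpha, C=1.0))
```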
The second version is called EvoSVM-S (S for switching mutation). Here we utilize the fact that only a small number of input data points will become support vectors (sparsity). On the other hand, one can often observe that non-zero alpha values are equal to the upper bound $C$ and only a very small number of support vectors exists with $0 < \alpha_i < C$. Therefore, we just use the well known mutation of genetic algorithms and switch between 0 and $C$ with probability $1/n$ for each $\alpha_i$. The other parameters are equal to those described for the EvoSVM-G.
for i = 1 to n do {
if (random(0, 1) < 1/n) do {
if (alpha_i > 0) do {
alpha_i = 0;
} else do {
alpha_i = random(0, C);
}
}
}
Figure 4: A simple hybrid mutation which should speed up the search for sparser solutions. It contains elements from standard mutations from both genetic algorithms and evolution strategies.
Using this switching mutation inspired by genetic algorithms only allows $\alpha_i = 0$ or $\alpha_i = C$. Instead of a complete switch between 0 and $C$ or a smooth change of all values $\alpha_i$ like the Gaussian mutation does, we developed a hybrid mutation combining both elements. That means that we check for each $\alpha_i$ with probability $1/n$ if the value should be mutated at all. If the current value $\alpha_i$ is greater than 0, $\alpha_i$ is set to 0. If $\alpha_i$ is equal to 0, $\alpha_i$ is set to a random value with $0 \le \alpha_i \le C$. Figure 4 gives an overview of this hybrid mutation. The function random(a, b) returns a uniformly distributed random number between a and b. The other parameters are the same as described for the EvoSVM-G. We call this version EvoSVM-H (H for hybrid).
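The pseudocode of Figure 4 translates almost directly into Python; the version below is our own sketch (the use of NumPy and the function signature are our choices, not part of the original figure):

```python
import numpy as np

def hybrid_mutation(alpha, C, rng=None):
    """EvoSVM-H mutation (cf. Figure 4): with probability 1/n per position,
    set a non-zero alpha_i to 0, and a zero alpha_i to a random value in [0, C]."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(alpha)
    mutated = alpha.copy()
    mutate_mask = rng.random(n) < 1.0 / n
    for i in np.where(mutate_mask)[0]:
        mutated[i] = 0.0 if mutated[i] > 0 else rng.uniform(0.0, C)
    return mutated

alpha = np.array([0.0, 0.7, 1.0, 0.0, 0.2])
print(hybrid_mutation(alpha, C=1.0))
```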
As was mentioned before, the optimization problem usually is concave and the risk of local extrema is small. Therefore, we also applied a PSO technique. It should be investigated whether PSO, which is similar to the usual quadratic programming approaches for SVMs in the sense that the gradient information is exploited, is able to find a global optimum in shorter time. We call this last version PsoSVM and use a standard PSO with inertia weight 0.1, local best weight 1.0, and global best weight 1.0. The inertia weight is dynamically adapted during optimization [14].
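For reference, one iteration of such a standard PSO update could look as follows; keeping the particles inside [0, C] by clipping and the exact schedule for adapting the inertia weight are assumptions on our side:

```python
import numpy as np

def pso_step(pos, vel, personal_best, global_best, C,
             inertia=0.1, c_local=1.0, c_global=1.0, rng=None):
    """One velocity/position update of a standard PSO on the alpha vector."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = (inertia * vel
           + c_local * r1 * (personal_best - pos)
           + c_global * r2 * (global_best - pos))
    pos = np.clip(pos + vel, 0.0, C)   # keep the box constraints (32)
    return pos, vel
```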
EXPERIMENTS AND RESULTS
In this section we try to evaluate the proposed evolutionary
optimization SVMs. We compare our implementation to
the quadratic programming approaches usually applied to
large margin problems. The experiments demonstrate the
competitiveness in terms of classification error minimization,
runtime, and robustness.
We apply the discussed EvoSVM variants as well as the
PsoSVM on six real-world benchmark datasets. We selected
these datasets from the UCI machine learning repository
[16] and the StatLib dataset library [24], because they already
define a binary classification task, consist of real-valued
numbers only and do not contain missing values.
Therefore, we did not need to perform additional preprocessing
steps which might introduce some bias. The properties
of all datasets are summarized in Table 1. The default
error corresponds to the error a lazy default classifier would
make by always predicting the major class. Classifiers must
produce lower error rates in order to learn at all instead of
just guessing.
In order to compare the evolutionary SVMs described
in this paper with standard implementations we also applied
two other SVMs on all datasets. Both SVMs use a
Dataset | n | m | Source | Kernel parameter | Default error
---|---|---|---|---|---
Liver | 346 | 6 | UCI | 0.010 | 42.03
Ionosphere | 351 | 34 | UCI | 1.000 | 35.90
Sonar | 208 | 60 | UCI | 1.000 | 46.62
Lawsuit | 264 | 4 | StatLib | 0.010 | 7.17
Lupus | 87 | 3 | StatLib | 0.001 | 40.00
Crabs | 200 | 7 | StatLib | 0.100 | 50.00

Table 1: The evaluation datasets. n is the number of data points, m is the dimension of the input space. The kernel parameter was optimized for the comparison SVM learner mySVM. The last column contains the default error, i.e. the error for always predicting the major class.
slightly different optimization technique based on quadratic
programming. The used implementations were mySVM [20]
and LibSVM [4]. The latter is an adaptation of the widely
used SVMlight [12].
We use an RBF kernel for all SVMs and determine the best value of the kernel parameter with a grid search parameter optimization for mySVM. This ensures a fair comparison since the parameter is not optimized for one of the evolutionary SVMs. Possible parameter values were 0.001, 0.01, 0.1, 1, and 10. The optimal value for each dataset is also given in Table 1.
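A grid search of this kind is straightforward to sketch; in the following illustrative Python snippet the candidate values are the ones listed above, while the evaluate function is a placeholder of ours standing in for training mySVM and estimating its error:

```python
def grid_search(candidate_values, evaluate):
    """Return the parameter value with the lowest estimated error.
    `evaluate(value)` is assumed to train an SVM with that kernel parameter
    and return its estimated classification error."""
    best_value, best_error = None, float("inf")
    for value in candidate_values:
        error = evaluate(value)
        if error < best_error:
            best_value, best_error = value, error
    return best_value, best_error

grid = [0.001, 0.01, 0.1, 1, 10]
best, err = grid_search(grid, evaluate=lambda v: abs(v - 0.1))  # dummy evaluate
print(best, err)
```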
In order to determine the performance of all methods we perform a k-fold cross validation. That means that the dataset $T$ is divided into $k$ disjoint subsets $T_i$. For each $i \in \{1, \ldots, k\}$ we use $T \setminus T_i$ as training set and the remaining subset $T_i$ as test set. If $F_i$ is the number of wrong predictions on test set $T_i$, we calculate the average classification error

$$E = \frac{1}{k} \sum_{i=1}^{k} \frac{F_i}{|T_i|} \qquad (34)$$

over all test sets in order to measure the classification performance. In our experiments we choose k = 20, i.e. for
each evolutionary method the average and standard deviation
of 20 runs is reported. All experiments were performed
with the machine learning environment
Yale [5].
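The evaluation protocol behind equation 34 can be sketched as follows; the random fold assignment and the train/predict interface are illustrative and do not mirror the Yale implementation:

```python
import numpy as np

def cross_validation_error(X, y, train, k=20, rng=None):
    """Average classification error over k folds (equation 34).
    `train(X_train, y_train)` is assumed to return a predict(x) function."""
    if rng is None:
        rng = np.random.default_rng(0)
    indices = rng.permutation(len(X))
    folds = np.array_split(indices, k)
    errors = []
    for test_idx in folds:
        train_idx = np.setdiff1d(indices, test_idx)
        predict = train(X[train_idx], y[train_idx])
        wrong = sum(predict(x) != t for x, t in zip(X[test_idx], y[test_idx]))
        errors.append(wrong / len(test_idx))
    return float(np.mean(errors)), float(np.std(errors))
```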
Table 2 summarizes the results for different values of C.
It can be seen that the EvoSVM variants frequently yield
smaller classification errors than the quadratic programming
counterparts (mySVM and LibSVM). For C = 1, a statistically significant better result was achieved by using LibSVM
only for the Liver data set. For all other datasets the evolutionary
optimization outperforms the quadratic programming
approaches. The same applies for C = 0.1. For rather
small values of C most learning schemes were not able to
produce better predictions than the default classifier. For
C = 0.01, however, PsoSVM at least provides a similar
accuracy to LibSVM. The reason for higher errors of the
quadratic programming approaches is probably a too aggressive
termination criterion. Although this termination
behavior further reduces runtime for mySVM and LibSVM,
the classification error is often increased.
It turns out that the standard ES approach EvoSVM-G
using a mutation adding a Gaussian distributed random
variable often outperforms the other SVMs. However, the
C = 1

| Method | Liver | Ionosphere | Sonar | Lawsuit | Lupus | Crabs |
|---|---|---|---|---|---|---|
| EvoSVM-G | 34.71±8.60 (68) | 10.81±5.71 (80) | 14.03±4.52 (26) | 2.05±1.87 (52) | 25.20±11.77 (8) | 2.25±3.72 (25) |
| EvoSVM-S | 35.37±6.39 (4) | 8.49±3.80 (9) | 17.45±6.64 (6) | 2.40±1.91 (10) | 30.92±12.42 (<1) | 4.05±4.63 (2) |
| EvoSVM-H | 34.97±7.32 (7) | 6.83±3.87 (22) | 15.41±6.39 (10) | 2.01±1.87 (14) | 24.03±13.68 (1) | 3.95±4.31 (7) |
| PsoSVM | 34.78±4.95 (8) | 9.90±4.38 (9) | 16.94±5.61 (7) | 3.02±2.83 (3) | 25.22±7.67 (<1) | 3.40±3.70 (2) |
| mySVM | 33.62±4.31 (2) | 8.56±4.25 (4) | 15.81±5.59 (2) | 1.89±2.51 (1) | 25.28±8.58 (1) | 3.00±3.32 (1) |
| LibSVM | 32.72±5.41 (2) | 7.70±3.63 (3) | 14.60±4.96 (3) | 2.41±2.64 (1) | 24.14±12.33 (1) | 3.00±4.58 (1) |
| F Test | 3.20 (0.01) | 9.78 (0.00) | 6.19 (0.00) | 1.51 (0.19) | 11.94 (0.00) | 2.25 (0.05) |

C = 0.1

| Method | Liver | Ionosphere | Sonar | Lawsuit | Lupus | Crabs |
|---|---|---|---|---|---|---|
| EvoSVM-G | 33.90±4.19 (74) | 9.40±6.14 (89) | 21.72±6.63 (35) | 2.35±1.92 (50) | 24.90±10.51 (7) | 7.20±4.36 (27) |
| EvoSVM-S | 35.57±3.55 (4) | 7.12±3.54 (18) | 24.90±6.62 (4) | 4.47±2.31 (13) | 25.98±12.56 (<1) | 7.95±5.68 (2) |
| EvoSVM-H | 34.76±4.70 (5) | 6.55±4.61 (23) | 24.40±6.09 (11) | 4.16±3.14 (19) | 26.51±13.03 (1) | 6.50±5.02 (2) |
| PsoSVM | 36.81±5.04 (3) | 13.96±7.56 (10) | 24.18±6.11 (3) | 3.03±2.83 (3) | 29.86±12.84 (1) | 8.15±6.02 (1) |
| mySVM | 42.03±1.46 (2) | 35.90±1.35 (2) | 46.62±1.62 (2) | 7.17±2.55 (1) | 41.25±6.92 (1) | 6.50±4.50 (1) |
| LibSVM | 33.08±10.63 (2) | 11.40±6.52 (3) | 22.40±6.45 (3) | 4.55±3.25 (1) | 25.29±16.95 (1) | 21.00±12.41 (1) |
| F Test | 34.46 (0.00) | 492.88 (0.00) | 323.83 (0.00) | 20.64 (0.00) | 64.83 (0.00) | 100.92 (0.00) |

C = 0.01

| Method | Liver | Ionosphere | Sonar | Lawsuit | Lupus | Crabs |
|---|---|---|---|---|---|---|
| EvoSVM-G | 42.03±1.46 (75) | 35.90±1.35 (86) | 45.33±2.20 (39) | 7.17±2.55 (55) | 40.00±6.33 (7) | 26.20±12.66 (27) |
| EvoSVM-S | 42.03±1.46 (3) | 35.90±1.35 (9) | 46.62±1.62 (4) | 7.17±2.55 (3) | 40.00±6.33 (<1) | 8.58±4.35 (1) |
| EvoSVM-H | 42.03±1.46 (3) | 35.90±1.35 (20) | 46.27±1.42 (12) | 7.17±2.55 (3) | 40.00±6.33 (1) | 7.00±4.00 (2) |
| PsoSVM | 41.39±8.59 (3) | 35.90±1.35 (4) | 27.90±6.28 (3) | 7.17±2.55 (2) | 31.94±12.70 (1) | 10.05±7.26 (1) |
| mySVM | 42.03±1.46 (2) | 35.90±1.35 (2) | 46.62±1.62 (2) | 7.17±2.55 (1) | 40.00±6.33 (1) | 6.50±4.50 (1) |
| LibSVM | 42.03±1.46 (2) | 35.90±1.35 (3) | 28.46±10.44 (2) | 7.17±2.55 (1) | 26.11±16.44 (1) | 50.00±0.00 (1) |
| F Test | 0.52 (0.77) | 0.00 (1.00) | 442.46 (0.00) | 0.00 (1.00) | 78.27 (0.00) | 1095.94 (0.00) |

Table 2: Classification error ± standard deviation of all SVMs on the evaluation datasets for parameters C = 1, C = 0.1, and C = 0.01; the runtime T in seconds is given in parentheses. The last line of each table depicts the F test value and, in parentheses, the probability that the results are not statistically significant.
runtime for this approach is far too big to be feasible in practical situations. The mere GA-based switching mutation between 0 and C converges much faster but
is often less accurate. The remaining runtime differences
between EvoSVM-S and the quadratic programming counterparts
can surely be reduced by code optimization. The
used SVM implementations are mature and have been optimized
over the years whereas the implementations of the
evolutionary approaches follow standard recipes without any
code optimization.
The hybrid version EvoSVM-H combines the best elements
of both worlds. It converges nearly as fast as the
EvoSVM-S and is often nearly as accurate as the EvoSVM-G
. In some cases (Ionosphere, Lupus) it even outperforms
all other SVMs.
PsoSVM on the other hand does not provide the best
performance in terms of classification error. Compared to
the other evolutionary approaches, however, it converged
much earlier than the other competitors.
Please note that the standard deviations of the errors
achieved with the evolutionary SVMs are similar to the standard
deviations achieved with mySVM or LibSVM. We can
therefore conclude that the evolutionary optimization is as
robust as the quadratic programming approaches and differences
mainly derives from different subsets for training and
testing due to cross validation instead of the used random-ized
heuristics.
Therefore, evolutionary SVMs provide an interesting alternative
to more traditional SVM implementations. Beside
the similar results, EvoSVM is also able to cope with non-positive definite kernel functions and multivariate optimization.
CONCLUSION
In this paper we connected evolutionary computation with
statistical learning theory. The idea of large margin methods
was very successful in many applications from machine
learning and data mining.
We used the most prominent
representative of this paradigm, namely Support Vector Machines
, and employed evolution strategies and particle swarm
optimization in order to solve the constrained optimization
problem at hand. We developed a hybrid mutation which
decreases convergence time while the classification accuracy
is preserved.
An interesting property of large margin methods is that
the runtime for fitness evaluation is reduced by transforming
the problem into the dual problem. In our case, the algorithm
is both faster and provides space for other improvements
like incorporating a kernel function for non-linear
classification tasks. This is a nice example how a transformation
into the dual optimization problem can be exploited
by evolutionary algorithms.
We have seen that evolutionary SVMs are at least as accurate
as their quadratic programming counterparts. For
practical values of C the evolutionary SVM variants frequently
outperformed their competitors. We can conclude
that evolutionary algorithms proved as reliable as other optimization
schemes for this type of problems. In addition,
beside the inherent advantages of evolutionary algorithms
(e. g. parallelization, multi-objective optimization of training error and capacity) it is now also possible to employ
non positive semidefinite kernel functions which would lead
to unsolvable problems for other optimization techniques.
In our future work we plan to perform experiments with such non-positive semidefinite kernel functions. This also applies
for multi-objective optimization of both the margin and the
training error.
It turns out that the hybrid mutation delivers results
nearly as accurate as the Gaussian mutation and has a similar
convergence behavior compared to the switching mutation
known from GAs. Future improvements could start
with a switching mutation and can post-optimize with a
Gaussian mutation after a first convergence. Values always
remaining 0 or C during the first run could be omitted in
the post-optimization step. It is possible that this mutation
is even faster and more accurate than EvoSVM-H.
ACKNOWLEDGMENTS
This work was supported by the Deutsche Forschungsgemeinschaft
(DFG) within the Collaborative Research Center
"Reduction of Complexity for Multivariate Data Structures".
REFERENCES
[1] H.-G. Beyer and H.-P. Schwefel. Evolution strategies:
A comprehensive introduction. Journal Natural
Computing, 1(1):252, 2002.
[2] C. Burges. A tutorial on support vector machines for
pattern recognition. Data Mining and Knowledge
Discovery, 2(2):121-167, 1998.
[3] G. Camps-Valls, J. Martin-Guerrero, J. Rojo-Alvarez,
and E. Soria-Olivas. Fuzzy sigmoid kernel for support
vector classifiers. Neurocomputing, 62:501-506, 2004.
[4] C.-C. Chang and C.-J. Lin. LIBSVM: a library for
support vector machines, 2001.
[5] S. Fischer, R. Klinkenberg, I. Mierswa, and
O. Ritthoff. Yale: Yet Another Learning Environment
Tutorial. Technical Report CI-136/02, Collaborative
Research Center 531, University of Dortmund,
Dortmund, Germany, 2002.
[6] F. Friedrichs and C. Igel. Evolutionary tuning of
multiple svm parameters. In Proc. of the 12th
European Symposium on Artificial Neural Networks
(ESANN 2004), pages 519-524, 2004.
[7] H. Fröhlich, O. Chapelle, and B. Schölkopf. Feature selection for support vector machines using genetic algorithms. International Journal on Artificial Intelligence Tools, 13(4):791-800, 2004.
[8] B. Haasdonk. Feature space interpretation of svms
with indefinite kernels. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 27(4):482-492,
2005.
[9] T. Hastie, R. Tibshirani, and J. Friedman. The
Elements of Statistical Learning: Data Mining,
Inference, and Prediction. Springer Series in Statistics.
Springer, 2001.
[10] T. Howley and M. Madden. The genetic kernel
support vector machine: Description and evaluation.
Artificial Intelligence Review, 2005.
[11] W. James and C. Stein. Estimation with quadratic
loss. In Proceedings of the Fourth Berkeley Symposium
on Mathematics, Statistics and Probability,
pages 361-380, 1960.
[12] T. Joachims. Making large-scale SVM learning
practical. In B. Schölkopf, C. Burges, and A. Smola,
editors, Advances in Kernel Methods - Support Vector
Learning, chapter 11. MIT Press, Cambridge, MA,
1999.
[13] T. Joachims. A support vector method for
multivariate performance measures. In Proc. of the
International Conference on Machine Learning
(ICML), pages 377-384, 2005.
[14] J. Kennedy and R. C. Eberhart. Particle swarm
optimization. In Proc. of the International Conference
on Neural Networks, pages 1942-1948, 1995.
[15] H.-T. Lin and C.-J. Lin. A study on sigmoid kernels
for svm and the training of non-psd kernels by
smo-type methods, March 2003.
[16] D. Newman, S. Hettich, C. Blake, and C. Merz. UCI
repository of machine learning databases, 1998.
http://www.ics.uci.edu/~mlearn/MLRepository.html.
[17] C. Ong, X. Mary, S. Canu, and A. J. Smola. Learning
with non-positive kernels. In Proc. of the 21st
International Conference on Machine Learning
(ICML), pages 639-646, 2004.
[18] I. Rechenberg. Evolutionsstrategie: Optimierung
technischer Systeme nach Prinzipien der biologischen
Evolution. Frommann-Holzboog, 1973.
[19] T. Runarsson and S. Sigurdsson. Asynchronous
parallel evolutionary model selection for support
vector machines. Neural Information Processing,
3(3):59-67, 2004.
[20] S. Rüping. mySVM Manual. Universität Dortmund, Lehrstuhl Informatik VIII, 2000. http://www-ai.cs.uni-dortmund.de/SOFTWARE/MYSVM/.
[21] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[22] A. Smola, B. Schölkopf, and K.-R. Müller. General cost functions for support vector regression. In Proceedings of the 8th International Conference on Artificial Neural Networks, pages 79-83, 1998.
[23] A. J. Smola, Z. L. Ovari, and R. C. Williamson.
Regularization with dot-product kernels. In Proc. of
the Neural Information Processing Systems (NIPS),
pages 308-314, 2000.
[24] Statlib datasets archive.
http://lib.stat.cmu.edu/datasets/.
[25] T. Storch. On the impact of objective function
transformations on evolutionary and black-box
algorithms. In Proc. of the Genetic and Evolutionary
Computation Conference (GECCO), pages 833-840,
2005.
[26] B. Taskar, V. Chatalbashev, D. Koller, and
C. Guestrin. Learning structured prediction models: A
large margin approach. In Proc. of the International
Conference on Machine Learning (ICML), 2005.
[27] V. Vapnik. Statistical Learning Theory. Wiley, New
York, 1998.
[28] V. Vapnik and A. Chervonenkis. The necessary and
sufficient conditions for consistency in the empirical
risk minimization method. Pattern Recognition and
Image Analysis, 1(3):283-305, 1991.
| Support vector machines;statistical learning theory;kernel methods;SVM;evolution strategies;large margin;particle swarms;machine learning;hybrid mutation;evolutionary computation;kernels |
9 | A Flexible and Extensible Object Middleware: CORBA and Beyond | This paper presents a CORBA-compliant middleware architecture that is more flexible and extensible compared to standard CORBA. The portable design of this architecture is easily integrated in any standard CORBA middleware; for this purpose, mainly the handling of object references (IORs) has to be changed. To encapsulate those changes, we introduce the concept of a generic reference manager with portable profile managers. Profile managers are pluggable and in extreme can be downloaded on demand. To illustrate the use of this approach, we present a profile manager implementation for fragmented objects and another one for bridging CORBA to the Jini world. The first profile manager supports truly distributed objects, which allow seamless integration of partitioning , scalability, fault tolerance, end-to-end quality of service, and many more implementation aspects into a distributed object without losing distribution and location transparency. The second profile manager illustrates how our architecture enables fully transparent access from CORBA applications to services on non-CORBA platforms | INTRODUCTION
Middleware systems are heavily used for the implementation of
complex distributed applications. Current developments like mobile
environments and ubiquitous computing lead to new requirements
that future middleware systems will have to meet. Examples
for such requirements are the support for self-adaptation and self-optimisation
as well as scalability, fault-tolerance, and end-to-end
quality of service in the context of high dynamics. Heterogeneity in
terms of various established middleware platforms calls for cross-platform
interoperability. In addition, not all future requirements
can be predicted today. A proper middleware design should be well-prepared
for such future extensions.
CORBA is a well-known standard providing an architecture for object
-based middleware systems [5]. CORBA-based applications are
built from distributed objects that can transparently interact with
each other, even if they reside on different nodes in a distributed environment
. CORBA objects can be implemented in different programming
languages. Their interface has to be defined in a single,
language-independent interface description language (IDL). Problem
-specific extensions allow to add additional features to the underlying
base architecture.
This paper discusses existing approaches towards a more flexible
middleware infrastructure and proposes a novel modularisation pattern
that leads to a flexible and extensible object middleware. Our
design separates the handling of remote references from the object
request broker (ORB) core and introduces the concept of ORB-independent
portable profile managers, which are managed by a generic
reference manager. The profile managers encapsulate all
tasks related to reference handling, i.e., reference creation, reference
marshalling and unmarshalling, external representation of references
as strings, and type casting of representatives of remote objects
. The profile managers are independent from a specific ORB,
and may even be loaded dynamically into the ORB. Only small
modifications to existing CORBA implementations are necessary
to support such a design.
Our architecture enables the integration of a fragmented object
model into CORBA middleware platforms, which allows transparent
support of many implementation aspects of complex distributed
systems, like partitioning, scalability, fault-tolerance, and end-to-end
quality-of-service guarantees. It also provides a simple mechanism
for the integration of cross-platform interoperability, e.g., the
integration with services running on non-CORBA middleware platforms
, like Jini or .NET remoting. Our design was named AspectIX
and implemented as an extension to the open-source CORBA implementation
JacORB, but is easily ported to other systems.
This paper is organised as follows: The following section discusses
the monolithic design of most current middleware systems in more
detail. It addresses the extension features of CORBA and discusses
their lack of flexibility. Section 3 explains our novel approach to
middleware extensibility based on a generic reference manager
with pluggable profile managers. In Section 4, two CORBA extensions
and the corresponding profile managers are presented: One
for integrating the powerful fragmented-object model into the system
, and one for transparently accessing Jini services from CORBA
applications. Section 5 evaluates the implementation effort and run-time
overhead of our approach, and Section 6 presents some concluding
remarks.
MIDDLEWARE ARCHITECTURE AND EXTENSION POINTS
CORBA uses a monolithic object model: CORBA objects have to
reside on a specific node and are transparently accessed by client-side
proxies called stubs. The stubs use an RPC-based communication
protocol to contact the actual object, to pass parameters, and to
receive results from object invocations. A CORBA-based middleware
implementation is free to choose the actual protocol, but has
to support the Internet Inter-ORB Protocol (IIOP) for interoperability
.
CORBA uses interoperable object references (IORs) to address remote
objects. The IOR is a data structure composed of a set of profiles
. According to the standard, each profile may specify contact
information of the remote object for one specific interaction protocol
; for interoperability between ORBs of different vendors, an
IIOP profile needs to be present. In addition to protocol profiles, the
IOR may contain a set of tagged components. Each tagged component
is a name-value pair with a unique tag registered with the OMG
and arbitrary associated data; these components define protocol-independent
information, like a unique object ID.
In standard CORBA, IORs are created internally at the server ORB.
A server application creates a servant instance and registers the
servant with the ORB (or, to be more specific, with an object adapter
of the ORB). Usually this IOR contains an automatically created
IIOP profile that contains hostname, port, object adapter name, and
object ID for accessing the object via IIOP. Additionally, it may
contain other profiles representing alternative ways to access the
object.
An IOR can be passed to remote clients, either implicitly or explicitly
. If a reference to a remote object is passed as a parameter or return
value, the IOR data structure will be implicitly serialised and
transferred. Upon deserialisation, the receiving client ORB automatically
instantiates a local stub for accessing the remote object,
initialised with the information available in the IOR. If multiple
profiles exist in the IOR, the ORB may use a vendor-specific strategy
to select a single profile that is understood by the ORB. The
IIOP profile should be understood by all ORBs. Beside implicit
transfer, an explicit transfer is possible. The server application may
call an object_to_string method at the ORB, which serialises the IOR and transforms it to a string, an IOR-URL. This string
can be transferred to a client; the client application may call the local
ORB method
string_to_object
to create the stub for
the remote object referenced by the IOR.
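As an illustration of this explicit binding, the following sketch uses the standard CORBA Python language mapping (for instance as provided by omniORBpy); the IOR string and the IDL interface are placeholders, and the server-side call is shown only as a comment:

```python
# Sketch of explicit binding via stringified IORs using the standard CORBA
# Python mapping (e.g. omniORBpy). The IOR string and interface are placeholders.
import sys
from omniORB import CORBA

orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)

# Server side (assuming `servant_ref` is a registered object reference):
# ior_string = orb.object_to_string(servant_ref)

# Client side: turn the received string back into a stub and narrow it.
# obj = orb.string_to_object(ior_string)
# account = obj._narrow(Example.Account)   # Example.Account is a placeholder IDL type
```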
2.2 Status Quo of Extensible Middleware
Many practical tasks--e.g., authorisation, security, load balancing,
fault tolerance, or special communication protocols--require extensions
to the basic CORBA model. For example, fault-tolerant
replication requires that multiple communication addresses (i.e., of
all replicas) are known to a client. Typically this means that these
addresses have to be encoded into the remote reference; binding to
and invoking methods at such a remote object require a more complex
handling at the client side compared to the simple stub-service
design. As a second example, a peer-to-peer-like interaction between
users of a service might sometimes be desired. In this case,
the client-server structure needs to be completely abandoned.
A good example for the lack of extensibility of standard CORBA is
the fault-tolerant CORBA standard (FT-CORBA) [5, Chapter 23].
It was not possible to define this standard in a way that all platform
implementations are portable across different ORBs. Instead, each
FT-CORBA-compliant middleware has its vendor-specific implementation
inside of a single ORB. A system design such as we envision
allows to implement a generic "FT-CORBA plugin" that
makes any ORB, independent of its vendor, aware of fault-tolerance
mechanisms.
Existing concepts for interception, custom object adapters, and
smart proxies provide some mechanisms for such extensions. In
part, they are included in the CORBA standard; in part, they are
only available in non-standardised middleware implementations.
The portable interceptor specification supports interception within
the official CORBA standard. The specification defines request interceptors
and IOR interceptors as standardised way to extend the
middleware functionality. Using request interceptors, several hooks
may be inserted both at client and at server side to intercept remote
method calls. These hooks allow to redirect the call (e.g., for load
balancing), abort it with an exception (e.g., for access restriction),
extract and modify context information embedded in the request,
and perform monitoring tasks. A direct manipulation of the request
is not permitted by the specification. Multiple request interceptors
may be used simultaneously. Interceptors add additional overhead
on each remote method invocation; furthermore, they do not allow
to modify the remote invocations completely. IOR interceptors, on
the other hand, are called when a POA needs to create an IOR for a
service. This allows to insert additional data into the IOR (e.g., a
tagged component for context information that is later used in a request
interceptor).
CORBA allows to define custom POA implementations. The POA
is responsible for forwarding incoming invocation requests to a
servant implementation. This extension point allows to integrate
server-side mechanisms like access control, persistence, and life-cycle
management. It has, however, no influence on the interaction
of clients with a remote service.
Beside the standardised IIOP protocol, any CORBA implementation
may support custom invocation protocols. Additional IOR profiles
may be used for this purpose. However, establishing such extensions
is not standardised and every vendor may include his own
proprietary variant, which limits the interoperability of this approach
.
Smart proxies are a concept for extending ORB flexibility, which
is not yet standardised by the OMG, but which is implemented by
some ORB vendors, e.g., in the ACE ORB/Tao [11]. They allow to
replace the default CORBA stub by a custom proxy that may implement
some extended functionality. As such, they allow to implement
parts of the object's functionality at the client side.
Closely related to our research is the work on OpenORB at Lancaster
University [1]. This middleware project uses reflection to define
dynamic (re)configuration of componentised middleware services.
The main difference to our design is that it completely restructures
the middleware architecture, whereas our concept with a reference
manager and pluggable profile managers is integrated in any existing
CORBA implementation with only minimal modifications. It
nevertheless provides equal flexibility for reconfigurations. Component
technology could be used in the internal design of complex
profile managers.
PolyORB [10] is a generic middleware system that aims at providing
a uniform solution to build distributed applications. It supports
several personalities both at the application-programming interface
(API) level and the network-protocol level. The personalities are
compliant to several existing standards. This way, it provides middleware
-to-middleware interoperability. Implementations of personalities
are specific to PolyORB, unlike our profile managers,
which are intended to be portable between different ORB implementations. We do not address the issue of genericity at the API level.
DESIGNING A GENERIC REFERENCE MANAGER
The fundamental extension point of an object middleware is the
central handling of remote references. It is the task of any object
middleware to provide mechanisms to create remote references, to
pass them across host boundaries, and to use them for remote invocations
. As explained above, merely providing extension points at
the invocation level is insufficient for several complex tasks. The
essential point of our work is to provide a very early extension point
by completely separating the reference handling from the middleware
core.
The impact of such a design is not only that a single middleware implementation
gets more flexible. It is also highly desirable to provide
this extensibility in a vendor-independent way. That is, an extension
module should be portable across different middleware implementations
. Furthermore, these extensions should be dynamically
pluggable and, in the extreme, be loaded on demand by the
middleware ORB.
Our design provides such a middleware architecture. It is currently
designed as an extension to standard CORBA, and maintains interoperability
with any legacy CORBA system. Our design represents,
however, a generic design pattern that easily applies to any other
object middleware.
The only prerequisite made is that remote references are represented
by a sufficiently extensible data structure. In CORBA, the Interoperable Object Reference (IOR) provides such a data structure.
Each profile of the IOR represents an alternative way to contact the
object. Each profile type has its own data-type definition, described
in CORBA IDL. Hence, at the IOR level, CORBA is open to arbitrary
extensions. The IOR handling, however, is typically encapsulated
in the internals of a CORBA-compliant ORB implementation.
Currently, if a vendor uses the power of IORs for custom extensions
, these will only be implemented internally in the respective
ORB. The extension will not be portable.
3.1 Overview of our Design
Our approach introduces a generic interface for plugging in portable
extension modules for all tasks related to IOR profile handling. This
makes it easy to support extended features like fault-tolerant replication
, a fragmented object model, or transparent interaction with
other middleware systems. This improves the flexibility of a
CORBA middleware. We factored out the IOR handling of the
ORB and put it into pluggable modules. This way, custom handlers
for IOR profiles may be added to the ORB without modifying the
ORB itself. Dynamically downloading and installing such handlers
at run-time further contributes to the richness of this approach.
Factoring out the basic remote-reference handling of the ORB core
into a pluggable module affects five core functions of the middleware
: first, the creation of new object references; second, the marshalling
process, which converts object references into an externally
meaningful representation (i.e., a serialised IOR); third, the unmarshalling
process, which has to convert such a representation into
a local representation (e.g., a stub, a smart proxy, or a fragment);
and fourth, the explicit binding operation, which turns some symbolic
reference (e.g., a stringified IOR) into a local representation
and vice versa. A fifth function is not as obvious as the others: The
type of a remote object reference can be changed as long as the remote
object supports the new type. In CORBA this is realised by a
special narrow operation. As in some cases the narrow operation
needs to create a new local representation for a remote object, this
operation has to be considered too.
An extension to CORBA will have to change all five functions for
its specific needs. Thus, we collect those functions in a module that
we call a profile manager. A profile manager is usually responsible
for a single type of IOR profile, but there may be reasons to allow
profile managers to manage multiple profile types. Profile managers
are pluggable modules. A part of the ORB called reference manager
manages all available profile managers and allows for registration
of new profile-manager modules.
The basic design can be found as a UML class diagram in Figure 1. Sometimes an application needs to access the reference manager
directly. By calling resolve_initial_references(), a generic operation for resolving references to system-dependent objects,
it can retrieve a reference to the reference manager pseudo object
from the ORB. At the reference manager, profile managers can
be registered. As profile managers are responsible for a single or for
multiple IOR profile types, the registration requires a parameter
identifying those profile types. For identification a unique profile
tag is used. Those tags are registered with the OMG to ensure their
uniqueness. With the registration at the reference manager, it is exactly
known which profile managers can handle what profile types.
Several tasks of reference handling are invoked at the reference
manager and forwarded to the appropriate profile manager. The architecture
resembles the chain-of-responsibility pattern introduced
by Gamma et al. [2].
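To make this split concrete, the following Java sketch shows how the two interfaces of Figure 1 might be declared. The operation names follow the class diagram; the parameter lists, return types, and the use of org.omg.IOP.IOR are assumptions made for illustration, and each interface would live in its own source file.

// Sketch of the two central interfaces; signatures are illustrative assumptions.
public interface ProfileManager {
    void insertProfile(org.omg.IOP.IOR ior, Object... profileData); // manager-dependent parameters
    org.omg.CORBA.Object profileToObject(org.omg.IOP.IOR ior);      // build a stub, smart proxy, or fragment
    org.omg.IOP.IOR objectToIor(org.omg.CORBA.Object obj);          // null if the reference is not "ours"
    org.omg.IOP.IOR iriToIor(String iri);
    String iorToIri(org.omg.IOP.IOR ior);
    org.omg.CORBA.Object narrow(org.omg.CORBA.Object obj, String repositoryId);
}

public interface ReferenceManager {
    void setObjID(org.omg.IOP.IOR ior, String uuid);                // UUID stored as tagged component
    String getObjID(org.omg.IOP.IOR ior);
    org.omg.IOP.IOR createNewIor(String repositoryId);
    void registerProfileManager(int profileTag, ProfileManager manager);
    ProfileManager getProfileManager(int profileTag);
    ProfileManager[] getProfileManagers();
    org.omg.CORBA.Object iorToObject(org.omg.IOP.IOR ior);
    org.omg.IOP.IOR objectToIor(org.omg.CORBA.Object obj);
    String objectToIri(org.omg.CORBA.Object obj);
    String iorToString(org.omg.IOP.IOR ior);
    org.omg.IOP.IOR stringToIor(String stringifiedIor);
    org.omg.CORBA.Object narrow(org.omg.CORBA.Object obj, String repositoryId);
}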
3.2 Refactoring the Handling of References
In the following, we describe the handling of the five core functions
of reference handling in our architecture:
Creating object references.
In traditional CORBA, new object
references are created by registering a servant at a POA of the server
. The POA usually maintains a socket for accepting incoming invocation
requests, e.g., in form of IIOP messages. The POA encodes
the contact address of the socket into an IIOP profile and creates
an appropriate IOR.
Figure 1: UML class diagram of the CORBA extension. The ORB holds a ReferenceManager (operations: setObjID, getObjID, createNewIor, registerProfileManager, getProfileManager, getProfileManagers, iorToObject, objectToIor, objectToIri, iorToString, stringToIor, narrow), which aggregates n ProfileManagers (operations: insertProfile, profileToObject, objectToIor, iriToIor, iorToIri, narrow).
The details of the POA implementation vary from ORB to ORB. In general, the registration at the POA
creates an IOR and an internal data structure containing all necessary
information for invocation handling.
To be as flexible as possible our extension completely separates the
creation of the IOR from the handling in a POA. The creation of an
IOR requires the invocation of createNewIor() at the reference manager. As standard CORBA is not able to clearly identify
object references referring to the same object, we added some operations
to the reference manager that allow for integrating a universally
unique identifier (UUID) into the IOR. The UUID is stored as
a tagged component and in principle can be used by any profile
manager.
For filling the IOR with profiles, an appropriate profile manager
must be identified. Operations at the reference manager allow the
retrieval of profile managers being able to handle a specific profile
type. A profile manager has to provide an operation insertProfile() that adds a profile of a specific type into a given IOR. This
operation has manager-dependent parameters so that each manager
is able to create its specific profile. Instead of creating the IOR itself
, the POA of our extended ORB has to create the IOR by asking
the IIOP profile manager to add the appropriate information
(host, port, POA name and object ID) to a newly created IOR. As
an object may be accessible by multiple profiles, an object adapter
can ask multiple profile managers for inserting profiles into an IOR.
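As a sketch of this creation path (refMgr denotes the reference manager obtained via resolve_initial_references(); the concrete values, the parameter list of the IIOP manager's insertProfile, and the setObjID call are illustrative assumptions):

// Create an empty IOR, add a UUID, and let the IIOP profile manager fill in its profile.
String host = "robots.example.org"; int port = 4711;
String poaName = "SearchPOA"; byte[] objectId = "obj-42".getBytes();

org.omg.IOP.IOR ior = refMgr.createNewIor("IDL:example/SearchService:1.0");
refMgr.setObjID(ior, java.util.UUID.randomUUID().toString());        // UUID tagged component

ProfileManager iiop = refMgr.getProfileManager(org.omg.IOP.TAG_INTERNET_IOP.value);
iiop.insertProfile(ior, host, port, poaName, objectId);               // manager-dependent parameters
// Further profile managers may add alternative profiles for the same object here.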
Marshalling of object references
. A CORBA object passed as a
method parameter in a remote invocation needs to be serialised as
an IOR. In classic CORBA, this is done "magically" by the ORB by
accessing internal data structures. In the Java language mapping, for
example, a CORBA object reference is represented as a stub that
delegates to a sub-type of org.omg.CORBA.Delegate. Instances of this type store the IOR.
In our architecture, we cannot assume any specific implementation
of an object reference as each profile manager may need a different
one. Thus, there is no generic way to retrieve the IOR to be serialised. Instead, the reference manager is asked to convert the object
reference into an IOR that in the end can be serialised. Therefore,
the reference manager provides an operation called objectToIor(). The reference manager, in turn, will ask all known profile
managers to do the job. Profile managers will usually check
whether the object reference is an implementation of their middleware
extension. If so, the manager will know how to retrieve the
IOR. If not, the profile manager will return a null reference, and the
reference manager will turn to the next profile manager.
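Inside the reference manager, this delegation amounts to a simple loop. The following is a minimal sketch: the null-return convention is taken from the text, while the exception raised when no manager matches is an assumption.

// Ask every registered profile manager until one recognises the reference.
public org.omg.IOP.IOR objectToIor(org.omg.CORBA.Object ref) {
    for (ProfileManager pm : getProfileManagers()) {
        org.omg.IOP.IOR ior = pm.objectToIor(ref);
        if (ior != null) {
            return ior;            // this manager implements the reference's extension
        }
    }
    // Behaviour when no manager matches is not prescribed by the text; assumed here:
    throw new org.omg.CORBA.MARSHAL("no profile manager can serialise this reference");
}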
Unmarshalling of object references.
If a standard ORB receives
a serialised IOR as a method parameter or return result, it implicitly
converts it into a local representation and passes this representation
(typically a stub) to the application. This creation of the stub needs
to be factored out of the marshalling system of the ORB to handle
arbitrary reference types.
Our design delegates the object creation within unmarshalling to the
reference manager by calling iorToObject(). The reference
manager maintains an ordered list of profile types and corresponding
profile managers. According to this list, each profile manager is
asked to convert the IOR into a local representation by calling profileToObject(). This way the reference manager already analyses
the contents of the IOR and only asks those managers that are
likely to be able to convert the IOR into an object reference. A profile
manager checks the profile and the tagged components, and
tries to create a local representation of the object reference. For example
, an IIOP profile manager will analyse the IIOP profile. A
CORBA-compliant stub is created and initialised with the IOR. The
stub is returned to the reference manager, which returns it to the application
. If a profile manager is not able to convert the profile or
not able to contact the object for arbitrary reasons, it will throw an
exception. In this case the reference manager will follow its list and
ask the next profile manager. If none of the managers can deal with
the IOR, an exception is thrown to the caller. This is compatible
with standard CORBA for the case that no profile is understood by
the ORB.
The order of profile types and managers defines the ORB-dependent
strategy of referencing objects. As the first matching profile
type and manager wins, generic managers (e.g., for IIOP) should be
at the end of the list whereas more specific managers should be at
the beginning.
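A minimal sketch of this lookup follows; the ordered list, the exception-based fall-through, and the final failure are described above, while the field orderedProfileTags and the helper containsProfile are assumptions.

// Try profile managers in the configured order; specific managers come first, IIOP last.
public org.omg.CORBA.Object iorToObject(org.omg.IOP.IOR ior) {
    for (int profileTag : orderedProfileTags) {
        if (!containsProfile(ior, profileTag)) {
            continue;                                   // IOR carries no profile of this type
        }
        ProfileManager pm = getProfileManager(profileTag);
        try {
            return pm.profileToObject(ior);             // stub, smart proxy, fragment, ...
        } catch (RuntimeException cannotBind) {
            // manager could not convert the profile or contact the object; try the next one
        }
    }
    throw new org.omg.CORBA.INV_OBJREF("no profile manager understands this IOR");
}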
Explicit binding to remote references.
A user application may
explicitly call the ORB method string_to_object, passing
some kind of stringified representation of the references. Usually,
this may either be a string representation of the marshalled IOR, or
a corbaloc or corbaname URI.
This ORB operation can be split into two steps: First, the string is
parsed and converted into an IOR object. As the stringified IOR can
have an extension-specific IRI format--an IRI is the internationalised
version of a URI--each profile manager is asked for conversion by using iriToIor(). The generic IOR format will be handled
by the reference manager itself. Second, this IOR object is converted
into a local representation using the same process as used
with unmarshalling.
Figure 2: Sequence chart for ORB::object_to_string (the ORB delegates to the ReferenceManager, which queries the registered ProfileManagers via objectToIor until one returns an IOR while the others return null, and then returns the stringified IOR).
Calling object_to_string() is handled in a similar way.
First, the reference passed as parameter is converted into an IOR
object in the same way as for marshalling. Second, the IOR object
is converted into a string. The IOR is encoded as a hex string representing
a URI in the IOR scheme. The complete interaction is
shown as sequence chart in Fig. 2.
As sometimes an application may want to convert an object reference
into a more-readable IRI, our extension also provides an operation
called objectToIri(). In a first step, the object reference
is once again converted into an IOR. The second step, the conversion
into an IRI, is done with the help of the profile managers by invoking iorToIri(). Those may provide profile-specific URL
schemes that may not be compatible with standard CORBA. As the
conversion of an IRI into an object reference can be handled in the
profile manager this is not a problem.
Narrow operation.
The narrow operation is difficult to implement
. In the Java language mapping the operation is located in a
helper class of the appropriate type, expecting an object reference
of arbitrary other type. The implementation in a standard ORB assumes
an instance compatible to the basic stub class, which knows
a delegate to handle the actual invocations. After successfully
checking the type conformance, the helper class will create a new
stub instance of the appropriate type and connect it to the same delegate
. With any CORBA extension it cannot be assumed that object
references conform to the basic stub class.
In our extension the helper class invokes the operation narrow() at the reference manager. Besides the existing object reference, a
qualified type name of the new type is passed to the operation as a
string. The reference manager once again will call every profile
manager for the narrow operation. A profile manager can check
whether the object reference belongs to its CORBA extension. If
yes, the manager will take care of the narrow operation. If not, a null
result is returned and the reference manager will turn to the next
profile manager.
As a profile manager has to create a type-specific instance, it can
use the passed type name to create that instance. In languages that
provide a reflection API (e.g., Java) this is not difficult to realise. In
other languages a generic implementation in a profile manager may
be impossible. Another drawback is that reflection is not very efficient
. An alternative implementation is the placement of profile-specific
code into the helper class (or in other classes and functions
of other language mappings). Our own IDL compiler called
IDLflex [8] can easily be adapted to generate slightly different helper
classes. As a compromise helper classes may have profile-specific
code for the most likely profiles, but if other profiles are used, the
above mentioned control flow through reference and profile managers
is used. Thus, most object references can be narrowed very fast,
and the ORB is still open to object-reference implementations of
profile managers that may even have been downloaded on demand.
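A generated helper might then look roughly like the following sketch; "Printer" is an invented example interface, and the fast path as well as the stored reference-manager field are assumptions made for illustration.

// Hypothetical generated helper with a profile-manager-aware narrow operation.
public final class PrinterHelper {
    private static ReferenceManager refMgr;   // assumed to be initialised at ORB start-up

    public static Printer narrow(org.omg.CORBA.Object obj) {
        if (obj == null) {
            return null;
        }
        if (obj instanceof Printer) {         // fast path: already the right local type
            return (Printer) obj;
        }
        // Generic path: the responsible profile manager builds the typed representation.
        return (Printer) refMgr.narrow(obj, "IDL:example/Printer:1.0");
    }
}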
EXTENSIONS TO CORBA BASED ON PROFILE MANAGERS
This section illustrates two applications of our design. The first example
presents the AspectIX profile manager, which integrates
fragmented objects into a CORBA middleware. The second example
provides a transparent gateway from CORBA to Jini.
4.1 AspectIX Profile Manager
The AspectIX middleware supports a fragmented object model
[4,7]. Unlike the traditional RPC-based client-server model, the
fragmented object model does no longer distinguish between client
stubs and the server object. From an abstract point of view, a fragmented
object is an entity with unique identity, interface, behaviour
, and state, as in classic object-oriented design. The implementation
, however, is not bound to a certain location, but may be distributed
arbitrarily over various fragments. Any client that wants to
access the fragmented object needs a local fragment, which provides
an interface identical to that of a traditional stub. This local
fragment may be specific for this object and this client. Two objects
with the same interface may lead to completely different local fragments
.
This internal structure gives a high degree of freedom on where
state and functionality of the object is located and how the interaction
between fragments is done. The internal distribution and interaction
is not only transparent on the external interface, but may
even change dynamically at run-time. Fragmented objects can easily
simulate the traditional client-server structure by using the same
fragment type at all client locations that works as a simple stub.
Similarly, the fragmented object model allows a simple implementation
of smart proxies by using the smart proxy as fragment type
for all clients. Moreover, this object model allows arbitrary internal
configurations that partition the object, migrate it dynamically, or
replicate it for fault-tolerance reasons. Finally, the communication
between fragments may be arbitrarily adjusted, e.g., to ensure quality
-of-service properties or use available special-purpose communication
mechanisms. All of these mechanisms are fully encapsulated
in the fragmented objects and are not directly visible on the outer
interface that all client application use.
Supporting a fragmented object model is clearly an extension to the
CORBA object model. With our architecture it is very easy to integrate
the new model. Just a new profile manager has to be developed
and plugged into our ORB. When a client binds to a fragmented
object, a more complex task than simply loading a local stub is
needed. In our system, the local fragment is internally composed of
three components, as shown in Fig. 3: the View, the Fragment Interface
(FIfc), and the Fragment Implementation (FImpl). The fragment
implementation is the actual code that provides the fragment
behaviour. The fragment interface is the interface that the client uses
. Due to type casts, a client may have more than one interface instance
for the same fragmented object. Interfaces are instantiated in
CORBA
narrow
operations; all existing interfaces delegate invocations
to the same implementation. The View is responsible for all
management tasks; it stores the object ID and IOR, keeps track of
all existing interfaces, and manages dynamical reconfigurations
that exchange the local FImpl. The management needs to update all references from FIfcs to the FImpl and has to coordinate method invocations at the object that run concurrently to reconfigurations.
Figure 3: Internal structure of a fragment (Client, Fragment Interface, View, Fragment Implementation)
To integrate such a model into a profile-manager-aware CORBA
system, it is necessary to create IORs with a special profile for fragmented
objects (APX profile), and to instantiate the local fragment
(View-FImpl-FIfc) when a client implicitly or explicitly binds to
such an IOR.
The IOR creation is highly application specific, thus it is not fully
automatic as in traditional CORBA. Instead, the developer of the
fragmented object may explicitly define, which information needs
to be present in the object. The reference manager creates an empty
IOR for a specified IDL type, and subsequently the APX profile
manager can be used to add a APX profile to this IOR. This profile
usually consists of information about the initial FImpl type that a
client needs to load and contact information on how to communicate
with other fragments of the fragmented object. The initial FImpl
type may be specified as a simple Java class name in a Java-only
environment, or as a DLS name (dynamic loading service, [3]) in a
heterogeneous environment, to dynamically load the object-specific
local fragment implementation. The contact information may,
e.g., indicate a unique ID, which is used to retrieve contact addresses
from a location service) or a multicast address for a fragmented
object that uses network multicast for internal communication.
When an ORB binds to an IOR with an APX profile, the corresponding
profile manager first checks, if a fragment of the specific
object already exists; if so, a reference to the existing local fragment
is returned. Otherwise, a new default view is created and connected
to a newly instantiated FImpl. Profile information specifies how
this FImpl is loaded (direct Java class name for Java-only environments
, a code factory reference, or a unique ID for lookup to the
global dynamic loading service (DLS)). Finally, a default interface
is built and returned to the client application.
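The binding step just described might be sketched as follows. All type and helper names (ApxProfile, View, FragmentImplementation, DefaultView, decodeApxProfile, loadFImpl, localFragments) are placeholders; only the check-for-existing-fragment and create-default-View flow is taken from the text.

// APX profile manager: turn an IOR with an APX profile into a local fragment.
public org.omg.CORBA.Object profileToObject(org.omg.IOP.IOR ior) {
    String objectId = getObjID(ior);                       // UUID stored as tagged component
    View view = localFragments.get(objectId);              // fragment of this object already loaded?
    if (view == null) {
        ApxProfile profile = decodeApxProfile(ior);        // initial FImpl type + contact information
        FragmentImplementation fimpl = loadFImpl(profile); // Java class name, code factory, or DLS
        view = new DefaultView(objectId, ior, fimpl);      // default view connects View and FImpl
        localFragments.put(objectId, view);
    }
    return view.createDefaultInterface();                  // FIfc that is handed to the client
}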
4.2 Jini Profile Manager
Jini is a Java-based open software architecture for network-centric
solutions that are highly adaptive to change [9]. It extends the Java
programming model with support for code mobility in the network;
leasing techniques enable self-healing and self-configuration
of the network. The Jini architecture defines a way for clients
and servers to find each other on the network. Service providers
supply clients with portable Java objects that implement the remote
access to the service. This interaction can use any kind of technology
such as Java RMI, SOAP, or CORBA.
The goal of this CORBA extension is to seamlessly integrate Jini
services into CORBA. Jini services should be accessible to CORBA
clients like any other CORBA object. Jini services offer a Java interface
. This interface can be converted into an IDL interface using
the Java-to-IDL mapping from the OMG [6]. Our CORBA extension
provides CORBA-compatible representatives, special proxy
objects that appear as CORBA object references, but forward invocations
to a Jini service. Such references to Jini services can be registered
in a CORBA naming service and can be passed as parameters
to any other CORBA object. CORBA clients do not have to
know that those references refer to Jini services.
The reference to a Jini service is represented as a CORBA IOR. The
Jini profile manager offers operations to create a special Jini profile
that refers to a Jini service. It provides operations to marshal references
to such services by retrieving the original IOR from the specialised
proxy, to unmarshal IORs to a newly created proxy, and to
type cast a proxy to another IDL type.
The Jini profile stores a Jini service ID, and optionally a group name
and the network address of a Jini lookup service. The profile manager
uses automatic multicast-based discovery to find a set of
lookup services where Jini services usually have to register their
proxy objects. Those lookup services are asked for the proxy of the
service identified by the unique service ID. If an address of a lookup
service is given in the profile, the profile manager will only ask this
service for a proxy. The retrieved proxy is encapsulated in a wrapper
object that on the outside looks like a CORBA object reference.
Inside, it maps parameters from their IDL types to the corresponding
Java types and forwards the invocation to the original Jini
proxy.
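A sketch of this binding step is shown below, assuming the standard Jini lookup API (ServiceTemplate, ServiceRegistrar); the profile decoder, the registrar discovery helper, and the wrapper factory are placeholders.

// Jini profile manager: resolve the service proxy named in the profile and wrap it.
public org.omg.CORBA.Object profileToObject(org.omg.IOP.IOR ior) throws java.rmi.RemoteException {
    JiniProfile profile = decodeJiniProfile(ior);          // service ID, optional group and lookup address
    net.jini.core.lookup.ServiceTemplate template =
        new net.jini.core.lookup.ServiceTemplate(profile.serviceId(), null, null);
    Object proxy = null;
    for (net.jini.core.lookup.ServiceRegistrar registrar : lookupServicesFor(profile)) {
        proxy = registrar.lookup(template);                // ask the lookup service for the proxy
        if (proxy != null) {
            break;
        }
    }
    if (proxy == null) {
        throw new org.omg.CORBA.OBJECT_NOT_EXIST("Jini service not found at any lookup service");
    }
    return wrapAsCorbaReference(ior, proxy);               // wrapper maps IDL types to Java types
}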
Jini services may provide a lease for service usage. In the IOR, a
method can be named that is supposed to retrieve a lease for the
service. This lease will be locally managed by the profile manager
and automatically extended if it is due to expire.
This extension shows that it is possible to encapsulate access to other
middleware platforms inside of profile managers. In case of the
Jini profile manager our implementation is rather simple: return
parameters referring to other Jini services are not (yet) converted to
CORBA object references. We also did not yet implement Jini IOR
profiles that encapsulate abstract queries: In this case not a specific
service ID is stored in the profile, but query parameters for the
lookup at the lookup services. This way, it will be possible to create
IORs with abstract meaning, e.g., encapsulating a reference to the
nearest colour printer service (assumed that such services are registered
as Jini services at a lookup service).
EVALUATION
Two aspects need to be discussed to evaluate our design: First, the
effort that is needed to integrate our concept into an existing ORB;
second, the run-time overhead that this approach introduces. As we
used JacORB as basis for our implementation, we compare our implementation
with the standard JacORB middleware.
5.1 Implementation Cost
The integration of a generic reference manager into JacORB version
2.2 affected two classes:
org.jacorb.orb.ORB and org.jacorb.orb.CDROutputStream. In ORB, the
methods object_to_string, string_to_object, and _getObject
(which is used for demarshalling) need to be
replaced. In addition, the reference manager is automatically loaded
at ORB initialisation and made available as initial reference. In
CDROutputStream, the method write_Object needs to
be re-implemented to access the reference manager. These changes
amount to less than 100 lines of code (LOC). The generic reference
manager consists of about 500 LOC; the IIOP profile manager contains
150 LOC in addition to the IIOP implementation reused from
JacORB.
These figures show that our design easily integrates into an existing
CORBA ORB. Moreover, the generic reference manager and profile
managers may be implemented fully independent of ORB internals
, making them portable across ORBs of different vendors, with
the obvious restriction to the same implementation programming
language.
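As an illustration of how small this hook is, the re-implemented marshalling method might look roughly like the following. This is a sketch only: the referenceManager field and the nullIor() helper are assumptions, and JacORB-internal details are omitted.

// In CDROutputStream: marshal an object reference via the reference manager.
public void write_Object(org.omg.CORBA.Object ref) {
    if (ref == null) {
        org.omg.IOP.IORHelper.write(this, nullIor());   // a nil reference is an IOR without profiles
        return;
    }
    org.omg.IOP.IOR ior = referenceManager.objectToIor(ref);
    org.omg.IOP.IORHelper.write(this, ior);             // standard CDR encoding of the IOR struct
}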
5.2 Run-time Measurements of our Implementation
We performed two experiments to evaluate the run-time cost of our
approach; all tests were done on Intel PC 2.66 GHz with Linux
2.4.27 operating system and Java 1.5.0, connected with a 100 MBit/s LAN. In all cases, the generic reference manager in our ORB was
connected with three profile managers (IIOP, APX, Jini).
The first experiment examines the binding cost. For this purpose, a
stringified IOR of a simple CORBA servant with one IIOP profile
is generated. Then, this reference is repeatedly passed to the ORB
method string_to_object, which parses the IOR and loads
a local client stub. Table 1 shows the average time per invocation of
100,000 iterations.
The second test analyses the marshalling cost of remote references.
An empty remote method with one reference parameter is invoked
100,000 times. These operations involve first a serialisation of the
reference at client-side and afterwards a deserialisation and binding
at the servant side; all operations are delegated to the reference
manager in our ORB. Table 2 shows the results of this test.
Both experiments show that the increase in flexibility and extensibility
is paid for with a slight decrease in performance. It is to be
noted that our reference implementation has not yet been optimised
for performance, so further improvement might be possible.
CONCLUSION
We have presented a novel, CORBA-compliant middleware architecture
that is more flexible and extensible than standard CORBA.
It defines a portable reference manager that uses dynamically loadable
profile managers for different protocol profiles.
The concept is more flexible than traditional approaches. Unlike
smart proxies, it does not only modify the client-side behaviour, but
allows the complete system structure to be modified. In contrast to vendor-specific transport protocols, it provides a general extension to
CORBA that allows portable profile managers to be implemented, which
in the extreme may even be dynamically loaded as plug-ins at run-time. Different from CORBA portable interceptors, it gives the
service developer full control over the IOR creation process, has
less overhead than interceptors, and allows arbitrary modification
of client requests.
The concept itself is not limited to CORBA. In fact, it defines a generic
design pattern for any existing and future middleware platform.
Extracting all tasks related to the handling of remote references into
an extensible module allows middleware platforms to be created that are
easily extended to meet even unanticipated future requirements.
Modern developments like ubiquitous computing and increased scalability
and reliability demands make it likely that such extensions
will be demanded. Our design principle allows future systems to be
implemented with best efficiency and least implementation
effort.
We have presented in some detail two applications of our architecture
. Besides traditional IIOP for compliance with CORBA, we
have implemented a profile manager for fragmented objects and
one for accessing Jini services transparently as CORBA objects.
Currently, we are working on profile managers to handle fault-tolerant
CORBA and a bridge to Java RMI.
REFERENCES
[1] G. Coulson, G. Blair, M. Clarke, N. Parlavantzas: The design of
a configurable and reconfigurable middleware platform.
Distributed Computing 15(2): 2002, pp 109-126
[2] E. Gamma, R. Helm, R. Johnson, J. Vlissides: Design
patterns. Elements of reusable object-oriented software.
Addison-Wesley, 1995.
[3] R. Kapitza, F. Hauck: DLS: a CORBA service for dynamic
loading of code. Proc. of the OTM'03 Conferences; Springer,
LNCS 2888, 2003
[4] M. Makpangou, Y. Gourhand, K.-P. Le Narzul, M. Shapiro:
Fragmented objects for distributed abstractions. Readings in
Distr. Computing Systems, IEEE Comp. Society Press, 1994,
pp. 170-186
[5] Object Management Group: Common object request broker
architecture: core specification, version 3.0.3; OMG
specification formal/04-03-12, 2004
[6] Object Management Group: Java language mapping to OMG
IDL, version 1.3; OMG specification formal/2003-09-04,
2003
[7] H. Reiser, F. Hauck, R. Kapitza, A. Schmied: Integrating
fragmented objects into a CORBA environment. Proc. of the
Net.ObjectDays, 2003
[8] H. Reiser, M. Steckermeier, F. Hauck: IDLflex: A flexible and
generic compiler for CORBA IDL. Proc. of the Net.Object
Days, Erfurt, 2001, pp 151-160
[9] Sun Microsystems: Jini technology architectural overview.
White paper, Jan 1999
[10] T. Vergnaud, J. Hugues, L. Pautet, F. Kordon: PolyORB: a
schizophrenic middleware to build versatile reliable
distributed applications. Proc. of the 9th Int. Conf. on Reliable
Software Technologies Ada-Europe 2004 (RST'04);
Springer, LNCS 3063, 2004, pp 106-119
[11] N. Wang, K. Parameswaran, D. Schmidt: The design and
performance of meta-programming mechanisms for object
request broker middleware. Proc. of the 6th USENIX
Conference on Object-Oriented Technology and Systems
(COOTS'01), 2001
Table 1: Execution time of ORB::string_to_object invocations
Standard JacORB 2.2.1: 0.22 ms
AspectIX ORB with reference manager: 0.28 ms
Overhead: 0.06 ms (27%)
Table 2: Complete remote invocation time with one reference parameter
Standard JacORB 2.2.1: 0.64 ms
AspectIX ORB with reference manager: 0.72 ms
Overhead: 0.08 ms (12.5%)
| integration;Flexible and extensible object middleware;IIOP;Software architecture for middleware;CORBA;object oriented;Extensions;Extensibility;extensibility;Middleware architecture;Distiributed applications;Ubiquitous computing;Object references;Middleware;extensible and reconfigurable middleware;Interoperability;distributed objects;implementation;Fault tolerant CORBA;Reference manager;profile manager;middleware interoperability;middleware architecture;Profile manager;Remote object;encapsulation;Flexibility;IOR;Middleware platform;Middleware systems |
90 | Fan-out Measuring Human Control of Multiple Robots | A goal of human-robot interaction is to allow one user to operate multiple robots simultaneously. In such a scenario the robots provide leverage to the user's attention. The number of such robots that can be operated is called the fan-out of a human-robot team. Robots that have high neglect tolerance and lower interaction time will achieve higher fan-out. We define an equation that relates fan-out to a robot's activity time and its interaction time. We describe how to measure activity time and fan-out. We then use the fan-out equation to compute interaction effort. We can use this interaction effort as a measure of the effectiveness of a human-robot interaction design. We describe experiments that validate the fan-out equation and its use as a metric for improving human-robot interaction. | INTRODUCTION
As computing becomes smaller, faster and cheaper the
opportunity arises to embed computing in robots that
perform a variety of "dull, dirty and dangerous" tasks that
humans would rather not perform themselves. For the
foreseeable future robots will not be fully autonomous, but
will be directed by humans. This gives rise to the field of
human-robot interaction (HRI). Human-robot interaction
differs from traditional desktop GUI-based direct
manipulation interfaces in two key ways. First, robots must
operate in a physical world that is not completely under
software control. The physical world imposes its own
forces, timing and unexpected events that must be handled
by HRI. Secondly, robots are expected to operate
independently for extended periods of time. The ability for
humans to provide commands that extend over time and can
accommodate unexpected circumstances complicates the
HRI significantly. This problem of developing interfaces
that control autonomous behavior while adapting to the
unexpected is an interesting new area of research.
We are very interested in developing and validating metrics
that guide our understanding of how humans interact with
semiautonomous robots. We believe that such laws and
metrics can focus future HRI development. What we are
focused on are not detailed cognitive or ergonomic models
but rather measures for comparing competing human-robot
interfaces that have some validity. In this paper we look at a
particular aspect of HRI, which is the ability for an
individual to control multiple robots simultaneously. We
refer to this as the fan-out of a human-robot team. We
hypothesize that the following fan-out equation holds,
FO = AT / IT
where
FO=fan-out or the number of robots a human can
control simultaneously,
AT=activity time or the time that a robot is actively
effective after receiving commands from a user,
IT=interaction time or the time that it takes for a
human to interact with a given robot.
In this paper we develop the rationale for the fan-out
equation and report several experiments validating this
equation. We show that the equation does describe many
phenomena surrounding HRI but that the relationships are
more complex than this simple statement of fan-out implies.
We also describe the experimental methodologies
developed in trying to understand fan-out. We present them
as tools for evaluating and measuring design progress in
HRI systems.
The robotic task domain that we have focused on is search
and rescue where robots must cover an indoor, urban or
terrain environment in search of victims, threats, problems,
or targets. Although we have restricted our work to this
domain, we are hopeful that our methods and metrics will
extend to other HRI domains.
PRIOR WORK
Others have done work on human-robot interaction.
Sheridan has outlined 5 levels of robot control by users
[14]. The levels range from teleoperation, where the user is
directly engaged with the actuators of robot, through
various levels of computer intervention between the user,
the sensors and the actuators, to full autonomy with users
merely setting very high-level goals. Fong and Thorpe [8,
9] demonstrated collaborative control where the human and
the robot share the initiative, with the robot seeking
guidance when needed. These, along with a variety of other
approaches, are characterized by their system architecture.
Although human-robot interfaces are provided, there is little
study of the nature of that interface or of how to evaluate
the quality of the interface.
There have been a number of proposals for new modalities
for controlling robots including haptics, gestures, and PDAs
[7]. Others have looked at the visualization and context
memory problems that arise when driving robots. The
Egosphere is one such solution [6].
There is also a great deal of work on using multiple robots
on a task. There are fully autonomous swarming approaches
such as Bruemmer, et al [3]. These have very little human
intervention because the desired task is preprogrammed.
Other autonomous robot teams have done janitorial tasks,
box pushing and earth moving [12, 13]. All of these teams
have used very little human intervention. Other multi-robot
systems have robots operating in formations [2, 4, 16] or
according to predefined deployment behaviors [15]. These
approaches allow users to direct the work of a number of
robots simultaneously. Fong et al. [10] point out the
problems with dividing human attention among multiple
robots and propose a collaborative control model for
driving. In essence their proposals increase the neglect and
activity time of the robots to achieve higher fan-out. Others
have used a "select and command" model for controlling
multiple robots [11].
However, none of these have been carefully evaluated as to
the advantages or decrease in effort afforded by the various
user interface designs. In most cases the control architecture
is intertwined with the human-robot interface making it
hard to distinguish which part of the solution is contributing
to progress. In this paper we describe a model for isolating
and measuring the human-robot interface for teams of
robots.
SAMPLE ROBOT WORLD
To explain our fan-out ideas, we pose the example robot
world shown in figure 1. In this world there are robots,
targets and obstacles (trees & rocks). The task is for all
targets to be touched by robots as soon as possible. This is
an abstraction of a search task.
We can assume a simple-minded robot that accepts a
direction and a distance from its user and will move in that
direction until it either travels the indicated distance or
encounters an obstacle, in which case it stops. In figure 1
the robot has three legs to its journey each characterized by
a different user command. However, the robot's guidance
may not be perfect. It may drift to the left on the first leg,
run into the trees and stop early. Its odometry may be faulty
and it may overrun the end of leg one necessitating
additional commands from the user to extricate it from the
dead-end of rocks and trees.
Figure 1 Simple Robot World
This example illustrates two measures that are important to
our model of fan-out. The first is neglect-time. That is the
time the robot can run while being ignored by its user.
Neglect time is a measure of a robot's autonomy. This is
very similar to Crandall's neglect tolerance [5]. Unlike
Crandall's work, we are interested in multiple robots rather
than efficient interfaces to a single robot. The second
measure is activity-time, which is the time the robot will
make effective progress before it either gets a new
command from the user, it stops making effective progress
or it completes the command the operator gave it. Neglect
time and activity time are not the same. For example, if the
user does not trust the odometry, he may watch the robot to
make certain it does not overshoot the end of leg 1. The
robot is independently active, but is not being neglected.
This difference has an important impact on multiplexing
human attention among multiple robots.
The relationship between activity time (AT) and neglect
time (NT) is determined by the amount of overlap (O)
between robot activity and interaction time (IT). Overlap is
the percentage of the interaction time where the robot is
also active.
AT = O * IT + NT
This relationship is illustrated by driving a car. The
interaction time and the activity time of a car are almost
completely overlapped (O=1.0). A car is almost always
moving when the driver is steering it. In addition, the
neglect time for a car is very small, therefore AT is not
much larger than IT. Plugging this into the fan-out equation
we see that a person cannot drive more than one car at once.
In the case of a manufacturing robot, the robot is not at all
active during setup (O=0.0) but the robot will run for days
or months after a day of setup. Thus AT is many times
larger than IT and the fan-out is quite high. The
experimental models that we finally used are based on AT
and IT. The relationship between O, NT and AT does not
impact our comparisons of various human-robot interfaces.
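As a small worked example with assumed numbers: suppose a robot is active during half of a 4-second interaction (O = 0.5, IT = 4 s) and can then be neglected for NT = 10 s. Then
AT = O * IT + NT = 0.5 * 4 s + 10 s = 12 s, and FO = AT / IT = 12 / 4 = 3,
so a user could keep roughly three such robots busy.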
In our simple robot world we can give the robot more
intelligence. If it encounters an obstacle it can bounce off
and continue trying to make progress towards its next check
point. Thus the robot will operate longer without
intervention (increased AT) and the user can trust it more
(increased NT). Adding some local vision and planning, the
robot might also detect the cul-de-sac of trees and rocks and
not enter there without more explicit instructions. Again the
robot can be trusted more and NT can increase. Increasing a
robot's trusted intelligence can increase its neglect time and
thus increase fan-out.
RATIONALE FOR FAN-OUT
The primary reason for our interest in fan-out is that it is a
measure of how much leverage of human attention is being
provided by a robot's automated capabilities. Autonomy is
not an end unto itself. We may study it in that way, but in
reality we want automation because it expands human
capacity. Fan-out is a measure of the leverage of human
attention.
The ability for a human to control multiple robots depends
upon how long that robot can be neglected. If a robot makes
no progress while being neglected, the human will have no
attention to devote to other robots. However, as will be
shown, it is difficult to measure neglect time. Instead we
measure activity time, which is an average amount of time
that a robot functions between instructions from the user. If
we divide the average activity time by the amount of time a
user must interact with each robot, then we get the fan-out
equation.
FO = AT / IT
However, the relationships are not as simple as this analysis
might indicate. We will discuss these interrelationships
along with the experimental data. The key point, we
believe, in understanding these complexities is that IT is not
monolithic. Our current hypothesis is that there are at least
4 components to interaction time. They are:
1. Robot Monitoring and Selection - reviewing the state of all robots and deciding which robot needs the user's attention next.
2. Context Switching - when switching attention between robots the user must spend time reacquiring the goals and problems of the new robot.
3. Problem Solving - having reacquired the situation the user must analyze the problem and plan a course of action for the robot.
4. Command Expression - the user must manipulate input devices to tell the robot what to do.
Traditional direct-manipulation/desktop interfaces generally
exhibit only components 3 and 4. The experiments that we
have performed show the effects of some of these
components in the ways that the data deviates from the
predictions of the fan-out equation.
Having broken down IT, it seems that IT should increase
with FO. This is because the more robots there are to
control, the greater the monitoring and robot selection time.
Also, the more diverse the situations the robots find themselves
in, the greater the context-switching time. As we will see in
the data, smarter robots can offload some of the problem
solving time from the user and thus reduce IT.
MEASURING HRI
Our hypothesis is that the fan-out equation provides a
model for how humans interact with multiple robots. The
challenge in validating the fan-out equation is that
interaction time (consisting of planning, monitoring and
solving) occurs mostly in the user's mind and is therefore
difficult to measure directly.
Measuring Neglect Time(NT) and Activity Time(AT)
The properties of NT and AT are characteristics of a robot's
ability to function in a world of given complexity. These
times are functions of robot ability, task complexity and the
user's understanding of the robot's ability.
In measuring either NT or AT we must control for the
complexity of the task being posed. If the task posed in
figure 1 had half as many trees, or no rocks, the task itself
would be simpler and the robot could be safely neglected
for a longer time. In essence, the more challenges a robot
must face, for which it does not have sufficient autonomy,
the lower NT and AT will be. The nature of the challenges
will also have an impact. Therefore any measurements of
NT or AT must control for the nature of the tasks being
posed. We term this task complexity.
Our first approach to measuring NT ignored the role of the
user in determining the robot's activity. We assumed that
there was some measurement of NT in the context of a
given task complexity. To measure NT we would randomly
place a robot at some location in the world, give it a random
goal and then measure the average time that the robot
would operate before reaching the goal or failing to make
progress.
However, this approach failed to produce data that was
consistent with the fan-out equation. After reviewing the
videotapes of actual usage we found that this a priori
measurement consistently overestimated NT. We identified
three problems with this approach. The first is demonstrated
on leg 1 of the robot route in figure 1. The robot could
feasibly be neglected until it ran into the cul-de-sac of trees
and rocks. However, users regularly saw such situations and
redirected the robot early to avoid them. The second reason
was that users frequently did not trust the robot to work out
low-level problems. Users would regularly give the robots
shorter-term goals that the user believed were possible.
Thirdly, we did not have a good measure for how much a
robot's activity overlapped the user's attention to the robot.
All of these failings led to NT predictions that were much
larger than actual usage.
A simpler and more accurate measure we found was
activity time (AT). We measure the time from a user's
command of a robot until that robot either stops making
progress or another user command is issued to it. We
average these times across all robots in an experiment. This
measure of activity time fit better with the fan-out equation
and was much easier to measure.
This activity time measure is dependent on determining
when a robot is making effective progress and when it is
not. In our simple robot world, robots stop when they reach
an obstacle. Thus a robot is active when it is moving.
Because of the nature of our test user interface robots are
always getting closer to the goal if they are moving.
Therefore our simplistic measure of effective progress
works in this case. For more intelligent robots this is more
complicated. An intelligent robot must balance goal
progress with obstacle or threat avoidance. This can lead to
interesting feedback and deadlock problems, which cannot
always be detected. These issues form the basis for many of
the conundrums of Isaac Asimov's robopsychology [1]. In
many situations, however, we can detect lack of progress
and thus the end of an activity.
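Average activity time can be computed directly from a log of per-robot activity intervals, as in the following minimal Java sketch; the interval representation (command time and end-of-progress or next-command time) is an assumption.

// Each interval runs from a user command to the robot's next command or loss of progress.
static double averageActivityTime(java.util.List<double[]> intervals) {
    double sum = 0.0;
    for (double[] interval : intervals) {     // interval[0] = command time, interval[1] = end time
        sum += interval[1] - interval[0];
    }
    return intervals.isEmpty() ? 0.0 : sum / intervals.size();
}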
Measuring FO
Our next challenge is to measure fan-out (FO). This, of
course, is the measure that we want to maximize because it
is an estimate of the leverage provided by our human-robot
teams. Our first approach to fan-out measurement was the
fan-out plateau. For a given task, we must have a measure
of task effectiveness. In our simple robot world, this might
be the total time required to locate all N targets. In other
scenarios it might be the total amount of terrain surveyed or
the total number of purple hummingbirds sighted. Parker
has identified a variety of task effectiveness metrics for use
with robot teams[13]. The fan-out plateau is shown in
figure 2. As we give a user more robots to work with, the
task effectiveness should rise until it reaches a point where
the user's attention is completely saturated. At this point
adding more robots will not increase task effectiveness and
may in some situations cause a decrease if superfluous
robots still demand user attention.
The attractiveness of the fan-out plateau measure is that it
directly measures the benefits of more robots on actual task
accomplishment. The disadvantage is that it is very
expensive to measure. We might hypothesize that for a
given HRI team, the fan-out plateau would be between 4
and 12 robots. We then must take 8 experimental runs to
find the plateau (if our hypothesis was correct). Individual
differences in both users and tasks require that we must take
many runs at each of the 8 possibilities in order to develop a
statistically significant estimate of the fan-out plateau.
Since realistic tasks take 20 minutes to several hours to
accomplish, this measurement approach rapidly consumes
an unrealistic amount of time.
Figure 2 Fan-out Plateau (total task effectiveness vs. number of robots)
An alternative measure of FO comes from our ability to
determine which robots are active and which are not. If we
have knowledge of activity as needed by our AT
measurement, then we can sample all of the robots at
regular time intervals and compute the average number of
active robots. What we do to measure FO is to give the user
many more robots than we think is reasonable and then
measure the average number of active robots across the
task. This gives us a strong estimate of actual fan-out that is
relatively easy to measure. Note that this is only an estimate
of fan-out because the large number of robots introduces its
own cognitive load. We believe, however, that ignoring
unneeded robots will not impact the value of the metrics for
comparisons among competing HRI solutions.
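This sampling estimate is simply a time average of the number of active robots, as in the following sketch; the boolean activity samples are an assumed representation of the logged data.

// Estimate fan-out as the average number of robots active across all samples.
static double estimateFanOut(boolean[][] activeAtSample) {
    // activeAtSample[t][r] is true if robot r was making effective progress at sample t.
    double total = 0.0;
    for (boolean[] sample : activeAtSample) {
        int active = 0;
        for (boolean robotActive : sample) {
            if (robotActive) {
                active++;
            }
        }
        total += active;
    }
    return activeAtSample.length == 0 ? 0.0 : total / activeAtSample.length;
}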
Task saturation
A key problem that we have discovered in measuring fan-out
is the concept of task saturation. This is where the task
does not warrant as many robots as the user could
effectively control. A simple example is in figure 1. If we
add another robot to the task, the effectiveness will not go
up because one robot can reach the target just as fast as two
or three. The problem is that the task does not justify more
workers. We will see this effect in the experiments.
Measuring IT
To improve human-robot interaction (HRI) what we really
want is a measure of the interaction time (IT). IT is the
measure that will tell us if the user interface is getting better
or just the robotic automation. Our problem, however, is
that we do not have a way to directly measure IT. There are
so many things that can happen in a user's mind that we
cannot tap into. To measure the Fitt's law effects or
keystroke effects will only measure the command
expression component of the interface. Our experience is
that in a multi-robot scenario, command expression is a
minor part of the interaction time.
Solving the fan-out equation for IT can give us a method
for its measurement.
IT = AT / FO
However, this measure of IT is only valid if the fan-out
equation is valid and the FO and AT measures are true
measures. As has been shown in the preceding discussion
we have good estimates for both AT and FO but it would be
too strong of a claim to say that we had accurately
measured either value. Our approach then is to replace IT
with what we call interaction effort (IE). Interaction effort
is a unitless value that is defined as:
IE = AT / FO
This is obviously derived from the fan-out equation but it
makes no claims of being an exact time. It is a measure of
how much effort is required to interact as part of a human-robot
team. Unlike interaction time, interaction effort does
not give us an absolute scale for measuring interaction time. The interaction effort measure does give us a way to
compare the interactive efficiency of two HRI designs. A
comparison tool is sufficient as a measure of progress in
HRI design.
Validating the fan-out equation
Our model of IE depends upon the validity of the fan-out
equation, which is difficult to prove without measuring IT
or IE directly.
Our approach to validating the fan-out equation is as
follows. If we have 1) a set of robots that have varying
abilities and thus varying neglect times, 2) all robots have
identical user interfaces and 3) we use the various types of
robots on similar tasks of the same task complexity, then if
the fan-out equation is valid, the measure of IE should be
constant across all such trials. This should be true because
the user interface is constant and IE should be determined
by the user interface. The experiments described in the
remainder of this paper will show where this does and does
not hold.
ROBOT SIMULATIONS
As a means of validating the fan-out equations we chose
robot simulations rather than actual robots. We did this for
several reasons. The first is that it is much easier to control
the task conditions. When trying to validate the fan-out
equation we need careful controls. These are hard to
achieve with real robots in real situations. Secondly we are
trying to discover laws that model how humans interact
with multiple independent robot agents. The physical
characteristics of those agents should not change the laws.
Third we want to test robots with a variety of levels of
intelligence. Changing a simulated robot's sensory and
reasoning capacity is much simpler than building the
corresponding robots. To perform the experiments that we
did, we would have needed a fleet of 15 robots (5 each of 3
types), with identical interfaces.
There is one way in which the real world differs sharply
from our simulated world. In the real world, robots crash
into obstacles, fall into holes, and run into each other.
Safety is a real issue and lack of safety reduces the user's
trust. As discussed earlier reduced trust leads to reduced
activity times. In our simulations, robots never crash or fail;
therefore trust is higher than reality. However, we believe
that this will be reflected in different activity times and
should not affect the validity of the fan-out equation.
The task
For our fan-out experiments we chose a maze-searching
task. We built a random maze generator that can
automatically generate tasks of a given complexity. We
defined task complexity as the dimensions of the maze,
density of obstacles and number of targets. Using our
random maze generator we were able to create a variety of
tasks of a given complexity. After random placement of
obstacles and targets the maze was automatically checked
to make certain that all targets were reachable. Our measure
of task effectiveness was the time required for all targets to
be touched by a robot.
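The reachability check can be done with a breadth-first flood fill over the maze grid, as in this sketch; the grid encoding and the choice of start cell are assumptions.

// Verify that every target cell can be reached from a free start cell.
static boolean allTargetsReachable(boolean[][] obstacle, int[][] targets, int startRow, int startCol) {
    int rows = obstacle.length, cols = obstacle[0].length;
    boolean[][] visited = new boolean[rows][cols];
    java.util.ArrayDeque<int[]> queue = new java.util.ArrayDeque<>();
    visited[startRow][startCol] = true;
    queue.add(new int[] { startRow, startCol });
    int[][] moves = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
    while (!queue.isEmpty()) {                               // breadth-first flood fill
        int[] cell = queue.poll();
        for (int[] m : moves) {
            int r = cell[0] + m[0], c = cell[1] + m[1];
            if (r >= 0 && r < rows && c >= 0 && c < cols && !visited[r][c] && !obstacle[r][c]) {
                visited[r][c] = true;
                queue.add(new int[] { r, c });
            }
        }
    }
    for (int[] t : targets) {
        if (!visited[t[0]][t[1]]) {
            return false;                                    // regenerate or repair the maze
        }
    }
    return true;
}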
All robots had the same user interface as shown in figure 3.
The user controls are quite simple and the same for all
experiments. Each robot has a goal represented by a small
square that the user can drag around. The robot will attempt
to reach the goal. The variation in robots is in how they deal
with obstacles. For less intelligent robots the user can set a
series of very short-term goals with no obstacles. For more
intelligent robots more distant goals can be used with the
robot working out the intervening obstacles.
Figure 3 Dragging Robots
A major variation of this user interface, that we used in
most of our tests, obscures all regions of the maze that have
not been visited by robots, as in figure 4. The idea is that
until a robot reaches an area and broadcasts what it finds,
the terrain is unknown.
Figure 4 Obscured World
Three types of robots
To test our fan-out theory of constant IE for a given user-interface
we developed three types of simulated robots. The
first type (simple) heads directly towards its current goal
until it reaches the goal or runs into an obstacle. This is a
relatively simple robot with little intelligence.
The second type (bounce) bounces off obstacles and
attempts to get closer to the goal even if there is no direct
path. It never backs up and thus gets trapped in cul-de-sacs.
The bouncing technique solves many simple obstacle
avoidance problems but none that require any global
knowledge. This robot stops whenever it cannot find a local
movement that would get it closer to the goal than its
current position.
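The bounce behaviour can be sketched as a purely local rule (illustrative only; the grid model and four-neighbour step set are assumptions):

// One step of the "bounce" robot: move to any free neighbouring cell that is
// strictly closer to the goal; return null to stop when no such cell exists.
static int[] bounceStep(int[] pos, int[] goal, boolean[][] obstacle) {
    int[][] moves = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
    int[] best = null;
    double bestDist = Math.hypot(pos[0] - goal[0], pos[1] - goal[1]);
    for (int[] m : moves) {
        int r = pos[0] + m[0], c = pos[1] + m[1];
        if (r < 0 || r >= obstacle.length || c < 0 || c >= obstacle[0].length || obstacle[r][c]) {
            continue;                                 // blocked: "bounce off" the obstacle
        }
        double d = Math.hypot(r - goal[0], c - goal[1]);
        if (d < bestDist) {                           // only moves that make progress count
            bestDist = d;
            best = new int[] { r, c };
        }
    }
    return best;                                      // null marks the end of effective activity
}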
The third type of robot (plan) has a "sensor radius". It
assumes that the robot can "see" all obstacles within the
sensor radius. It then uses a shortest path algorithm to plan
a way to reach the point on its sensor perimeter that was
closest to the goal. This planning is performed after every
movement. This robot stops whenever its current position
was closer to the goal than any reachable point in its sensor
perimeter. This robot can avoid local dead-ends, but not
larger structures where the problems are larger than its
sensor radius.
We measured average neglect time for each of the types of
robots using the random placement/task method. As robot
intelligence increased, neglect time increased also. This
gave us three types of simulated robots with identical tasks
and user interfaces.
VALIDATING THE FAN-OUT EQUATION
To validate the fan-out equation we performed a number of
experiments using our simulated robot world. Our
experimental runs were of two types. In our initial runs
individual university students were solicited and
compensated to serve as test drivers. They were each given
about 30 minutes of training and practice time with each
type of robot and then given a series of mazes to solve
using various types of robots.
Task Saturation
Task saturation showed up in the early tests. In our first
tests we started all robots in the upper left hand corner of
the world. Since this is a search task there is an expanding
"frontier" of unsearched territory. This frontier limits the
number of robots that can effectively work in an area. Fan-out
was low because the problem space was too crowded to
get many robots started, and once lead robots were out of
the way, users tended to focus on the lead robots rather than
bring in others from behind as the frontier expanded.
Because none of our users worked for more than 2 hours on
the robots, there was no time to teach or develop higher-level
strategies such as how to marshal additional workers.
We resolved the frontier problem by evenly distributing
robots around the periphery of the world. This is less
realistic for a search scenario, but eliminated the frontier
problem.
We originally posited two interface styles, one with the
world entirely visible (light worlds) and the other with areas
obscured when not yet visited by a robot (dark worlds). We
thought of this as two UI variants. Instead it was two task
variants. In the dark worlds the task is really to survey the
world. Once surveyed, touching the targets is trivial. In the
light worlds the problem was path planning to the targets.
Since reaching known targets is a smaller problem than
searching a world, task saturation occurred much earlier.
Because of this all of our races were run with dark worlds
(Figure 4).
Figure 5 shows the relationship between fan-out and
remaining targets. The dark thin line is the number of
remaining targets not yet touched and the lighter jagged line
is the average number of active robots. This graph is the
average of 18 runs from 8 subjects using planning type
robots. Other experiments showed similar graphs. In the
very early part of the run it takes time to get many robots
moving. Then as targets are located, the problem becomes
smaller and the fan-out reduces along with it. The crossover
occurs because, in a dark world, knowing that only one or two
targets remain is not helpful: any of the unsearched
areas could contain those targets.
Figure 5 Task Saturation (Type 3 robot: average remaining targets and average active robots plotted against time)
Test Data
These individual tests gave us good feedback and helped us
refine our ideas about fan-out and interaction time.
However, unmotivated subjects distorted the results. We
had a number of subjects who just spent the time and
collected their money without seriously trying to perform
the tasks quickly. It was clear that attempting to supervise
multiple robots is more mentally demanding than supervising only one.
In many cases fan-out was not high even though from
viewing the videotapes, the subjects were easily capable of
doing better. To resolve this issue we held a series of "robo-races".
Groups of 8 people were assembled in a room, each
with identical workstations and problem sets. Each trial was
conducted as a race with monetary prizes to first, second
and third place in each trial. The motivation of subjects was
better and the fan-out results were much higher and more
uniform.
In our first race there were 8 participants all running 8 races
using the dark worlds. The density of obstacles was 35%
with 18 robots available and 10 targets to find. We ran 2
races with simple robots and 3 races each for the bounce
and plan robots for a total of 64 trial runs. The measured
fan-out and activity time along with the computed
interaction time is shown in figure 6. Analysis of variance
shows that there is no statistical difference in the interaction
times across the three robot types. This supports our fan-out
equation hypothesis.
Robot Type   Mean Fan-out   Mean Activity Time   Computed Interaction Effort
Simple       1.46           4.36                 3.06
Bounce       2.94           7.82                 2.77
Plan         5.11           14.42                2.88
Figure 6 Test 1 - 18 robots, 10 targets, 35% obstacles
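For reference, the "Computed Interaction Effort" column follows from the fan-out equation's relationship between activity time and fan-out: interaction effort is activity time divided by fan-out. The sketch below applies that relationship to the means in Figure 6; the small discrepancies from the published column presumably come from averaging per trial rather than dividing the pooled means, so treat the exact reproduction as an assumption.

# Mean fan-out and mean activity time from Figure 6 (Test 1).
test1 = {"Simple": (1.46, 4.36), "Bounce": (2.94, 7.82), "Plan": (5.11, 14.42)}
for robot, (fan_out, activity_time) in test1.items():
    print(robot, round(activity_time / fan_out, 2))   # assumed form: IE = AT / FO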
To evaluate our hypothesis that activity time and thus fan-out
is determined by task complexity we ran a second
identical competition except that the obstacle density was
22%. The data is shown in figure 7. Activity time clearly
increases with a reduction in task complexity along with
fan-out, as we predicted. The interaction time computations
are not statistically different as we hypothesized.
Robot Type   Mean Fan-out   Mean Activity Time   Computed Interaction Effort
Simple       1.84           4.99                 2.88
Bounce       3.36           11.36                3.38
Plan         9.09           24.18                2.69
Figure 7 Test 2 - 18 robots, 10 targets, 22% obstacles
One of our goals in this work was to develop a measure of
interaction effort that could serve as a measure of the
effectiveness of a human-robot interface. To test this we ran
a third competition of 8 subjects in 8 races. Test 3 was the
same as test 1 except that we reduced the resolution of the
display from 1600x1200 to 800x600. This meant that the
mazes would not fit on the screen and more scrolling would
be required. This is obviously an inferior interface to the
one used in test 1. Figure 8 compares the fan-out and the
interaction effort of tests 1 and 3.
Mean Interaction Effort
Robot Type   no scroll   scrolled   diff
Simple       3.06        4.48       46%
Bounce       2.77        3.47       25%
Plan         2.88        3.63       26%
Figure 8 Compare scrolled and unscrolled interfaces
Test 3 shows that inferior interfaces produce higher
interaction effort, which is consistent with our desire to use
interaction effort as a measure of the quality of a human-robot
interface.
However, figure 8 also shows a non-uniform interaction
effort across robot types for the scrolling condition. This is
not consistent with our fan-out equation hypothesis. Since
all three robots had the same user interface they should
exhibit similar interaction effort measures. Analysis of
variance shows that the bounce and plan robots have
identical interaction effort but that the simple robot is
different from both of them.
We explain the anomaly in the simple robots by the fact
that interaction effort masks many different components as
described earlier and fan-out partially determines those
components. Figure 9 shows the fan-out measures for test 3.
The fan-out for the simple robots is barely above 1
indicating that the user is heavily engaged with a single
robot. We watched the user behavior and noticed that the
interaction is dominated by expressing commands to the
robot with very little planning. With the bounce and plan
robots the fan-out is much higher and users spend more
time planning for the robot and less time trying to input
commands to them.
Robot Type   Mean Fan-out
Simple       1.12
Bounce       2.47
Plan         3.97
Figure 9 Fan-out for Test 3 (scrolled world)
It appears that when fan-out drops very low the nature of
the human-robot interaction changes and the interaction
effort changes also. To understand this effect better we ran
a fourth competition where we varied the speed of the
robots. Varying the speed of the robot will change its
neglect time without changing either the robot's
intelligence or the user interface. A slower robot will take
longer to run into an obstacle and therefore can be
neglected longer. We used the same worlds and interface as
in test 1, but we varied speeds across each run with only
two robot types. The results are shown in figure 10.
Robot Type   Robot Speed   Mean Fan-out   Mean Activity Time   Computed Interaction Effort
Simple       3             2.54           7.21                 3.05
Simple       6             1.21           3.51                 3.09
Simple       9             0.89           3.26                 3.67
Bounce       3             4.44           13.54                3.51
Bounce       6             3.11           9.60                 3.10
Bounce       9             1.97           5.76                 2.94
Bounce       12            1.82           4.42                 2.51
Bounce       15            1.62           4.04                 2.53
Figure 10 Test 4 - Varying Robot Speed
For each class of robot, increasing the robot's speed
decreases activity time and correspondingly reduces fan-out
. Again with the fastest simple robots, the fan-out drops
very low (the user cannot even keep one robot going all the
time) and the interaction effort is quite different from the
slower robots. This confirms the change in interaction when
fan-out drops. This indicates that the fan-out equation does
not completely capture all of the relationships between fan-out
and interaction. This is also confirmed when we look at
the interaction effort for the bounce robots. As speed
increases, fan-out drops, as we would expect. However,
interaction effort also drops steadily by a small amount.
This would confirm the robot monitoring and context
switching effort that we hypothesized. As fan-out is
reduced, these two components of interaction effort should
correspondingly reduce while other interactive costs remain
constant. This would explain the trend in the data from test
4.
CONCLUSIONS
It is clear from the tests that the fan-out equation does model
many of the effects of human interaction with multiple
robots. The experiments also indicate that interaction effort,
as computed from activity time and fan-out, can be used to
compare the quality of different HRI designs. This gives us
a mechanism for evaluating the user interface part of
human-robot interaction. However, it is also clear that fan-out
has more underlying complexity than the equation
would indicate. This is particularly true with very low fan-out.
| multiple robots;Human-robot interaction;interaction time;fan-out;interaction effort;human-robot interaction;neglect time;user interface;fan-out equation;activity time |
91 | Fast String Sorting Using Order-Preserving Compression | We give experimental evidence for the benefits of order-preserving compression in sorting algorithms. While, in general, any algorithm might benefit from compressed data because of reduced paging requirements, we identified two natural candidates that would further benefit from order-preserving compression, namely string-oriented sorting algorithms and word-RAM algorithms for keys of bounded length. The word-RAM model has some of the fastest known sorting algorithms in practice. These algorithms are designed for keys of bounded length, usually 32 or 64 bits, which limits their direct applicability for strings. One possibility is to use an order-preserving compression scheme, so that a bounded-key-length algorithm can be applied. For the case of standard algorithms, we took what is considered to be among the fastest non-word-RAM string sorting algorithms, Fast MKQSort, and measured its performance on compressed data. The Fast MKQSort algorithm of Bentley and Sedgewick is optimized to handle text strings. Our experiments show that order-preserving compression techniques result in savings of approximately 15% over the same algorithm on noncompressed data. For the word-RAM, we modified Andersson's sorting algorithm to handle variable-length keys. The resulting algorithm is faster than the standard Unix sort by a factor of 1.5X. Last, we used an order-preserving scheme that is within a constant additive term of the optimal HuTucker, but requires linear time rather than O(m log m), where m = |Σ| is the size of the alphabet. | INTRODUCTION
In recent years, the size of corporate data collections has grown rapidly. For
example, in the mid-1980s, a large text collection was in the order of 500 MB.
Today, large text collections are over a thousand times larger. At the same
time, archival legacy data that used to sit in tape vaults is now held on-line in
large data warehouses and regularly accessed. Data-storage companies, such
as EMC, have emerged to serve this need for data storage, with market capitalizations
that presently rival that of all but the largest PC manufacturers.
Devising algorithms for these massive data collections requires novel techniques
. Because of this, over the last 10 years there has been renewed interest
in research on indexing techniques, string-matching algorithms, and very large
database-management systems among others.
Consider, for example, a corporate setting, such as a bank, with a large collection
of archival data, say, a copy of every bank transaction ever made. Data
is stored in a data-warehouse facility and periodically accessed, albeit perhaps
somewhat unfrequently. Storing the data requires a representation that is succinct
, amenable to arbitrary searches, and supports efficient random access.
Aside from savings in storage, a no less important advantage of a succinct
representation of archival data is a resulting improvement in performance of
sorting and searching operations. This improvement is twofold: First, in general
, almost any sorting and searching algorithm benefits from operating on
smaller keys, as this leads to a reduced number of page faults as observed by
Moura et al. [1997]. Second, benefits can be derived from using a word-RAM
(unit-cost RAM) sorting algorithm, such as that of [Andersson 1994; Andersson
and Nilsson 1998]. This algorithm sorts n w-bits keys on a unit-cost RAM with
word size w in time O(n log n). As one can expect, in general, this algorithm
cannot be applied to strings as the key length is substantially larger than the
word size. However, if all or most of the keys can be compressed to below the
word size, then this algorithm can be applied--with ensuing gains in performance
.
There are many well-known techniques for compressing data, however, most
of them are not order preserving and do not support random access to the data
as the decoding process is inherently sequential [Bell et al. 1990; Knuth 1997].
Hence, it is important that the compression technique be static as well as order
preserving. This rules out many of the most powerful compression techniques,
such as those based on the Ziv-Lempel method [Ziv and Lempel 1978], which
are more attuned to compression of long text passages in any event (we note,
however, that it is possible to implement certain search operations on Lempel-Ziv
encoded text, as shown by Farach and Thorup [1998] and Kärkkäinen and
Ukkonen [1996]). The need to preserve order also eliminates many dictionary
techniques, such as Huffman codes [Knuth 1997; Antoshenkov 1997].
Hence, we consider a special type of static code, namely, order-preserving
compression schemes, which are similar to Huffman codes. More precisely, we
are given an alphabet Σ with an associated frequency p_i for each symbol s_i.
The compression scheme E : Σ → {0, 1}^* maps each symbol into an element
of a set of prefix-free strings over {0, 1}. The goal is to minimize the entropy
∑_{s_i ∈ Σ} p_i |E(s_i)|, with the added condition that if s_i < s_j then E(s_i) < E(s_j) in the
lexicographic ordering of {0, 1}^*. This is in contrast to Huffman codes, which,
in general, do not preserve the order of the initial alphabet. Such a scheme is
known as order preserving.
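As a tiny illustration (not from the paper): for a three-symbol alphabet a < b < c with frequencies 0.2, 0.7, 0.1, a Huffman code may assign b the codeword 0 and a, c the codewords 10 and 11, which is shorter on average but violates order, since E(a) = 10 comes after E(b) = 0 lexicographically; an order-preserving code must keep E(a) < E(b) < E(c), for example 00, 01, 1. A check of the kind below verifies the order-preserving condition for a candidate code.

def is_order_preserving(code):
    # code: dict mapping symbols (compared in their natural order) to bit strings.
    symbols = sorted(code)
    words = [code[s] for s in symbols]
    return all(words[i] < words[i + 1] for i in range(len(words) - 1))

print(is_order_preserving({"a": "10", "b": "0", "c": "11"}))   # False: Huffman-like code
print(is_order_preserving({"a": "00", "b": "01", "c": "1"}))   # True: order preserving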
Optimal order-preserving compression schemes were introduced by Gilbert
and Moore [1959] who gave an O(n^3) algorithm for computing the optimal code.
This was later improved by Hu and Tucker, who gave an O(n log n) algorithm,
which is optimal [Hu 1973]. In this paper we use a linear time-encoding
algorithm that approximates the optimal order-preserving compression scheme
to within a constant additive term to produce a compressed form of the data.
The savings are of significance when dealing with large alphabets, which arise
in applications, such as DNA databases and compression, on words. We test
the actual quality of the compression scheme on real string data and obtain
that the compressed image produced by the linear algorithm is within 0.4%
and 5.2% of the optimal, in the worst case. The experiments suggest that
the compression ratio is, in practice, much better than what is predicted by
theory.
Then, using the compressed form, we test the performance of the sorting
algorithm against the standard Unix sort in the Sun Solaris OS. Using data
from a 1-GB world wide web crawl, we study first the feasibility of compressing
the keys to obtain 64-bit word-length keys. In this case we obtain that only 2%
of the keys cannot be resolved within the first 64 significant bits. This small
number of keys is flagged and resolved in a secondary stage. For this modified
version of Andersson's algorithm, we report a factor of 1.5X improvement over the timings
reported by Unix sort.
As noted above, an important advantage of order-preserving compression
schemes is that they support random access. As such we consider as a likely
application scenario that the data is stored in compressed format. As an example
, consider a database-management system (DBMS). Such systems often use
sorting as an intermediate step in the process of computing a
join statement.
The DBMS would benefit from an order-preserving compression scheme, first,
by allowing faster initial loading (copying) of the data into memory and, second,
by executing a sorting algorithm tuned for compressed data, which reduces both
processing time and the amount of paging required by the algorithm itself if
the data does not fit in main memory. The contribution of each of these aspects
is highlighted later in Tables V and VI.
The paper is laid out as follows. In Section 2, we introduce the linear time
algorithm for encoding and observe that its approximation term follows from
a theorem of Bayer [1975]. In Section 3, we compare the compression ratio
empirically for string data using the Calgary corpus. In Section 4, we compare
the performance of sorting algorithms aided by order-preserving data
compression.
ORDER-PRESERVING COMPRESSION
We consider the problem of determining code-words (encoded forms) such that
the compression ratio is as high as possible. Code-words may or may not have
the prefix property. In the prefix property case, the problem reduces to finding
an optimum alphabetic binary tree.
2.1 Problem Definition
Formally, the problem of finding optimal alphabetic binary trees can be stated
as follows: Given a sequence of n positive weights w_1, w_2, ..., w_n, find a binary
tree in which all weights appear in the leaves such that
- The weights on the leaves occur in order when traversing the tree from left
to right. Such a tree is called an alphabetic tree.
- The sum ∑_{1 ≤ i ≤ n} w_i l_i is minimized, where l_i is the depth (distance from the root)
of the ith leaf from the left. If so, this is an optimal alphabetic tree.
If we drop the first condition, the problem becomes the well-known problem of
building Huffman trees, which is known to have the same complexity as sorting.
2.2 Previous Work
Mumey introduced the idea of finding optimal alphabetic binary trees in linear
time for some special classes of inputs in Mumey [1992]. One example is a simple
case solvable in O(n) time when the values of the weights in the initial sequence
are all within a term of two. Mumey showed that the region-based method,
described in Mumey [1992], exhibits linear time performance for a significant
variety of inputs. Linear time solutions were discovered for the following special
cases: when the input sequence of nodes is a sorted sequence, a bitonic sequence,
weights exponentially separated, and weights within a constant factor [see
Larmore and Przytycka 1998].
Moura et al. [1997] considered the benefits of constructing a suffix tree
over compressed text using an order-preserving code. In their paper, they observe
dramatic savings in the construction of a suffix tree over the compressed
text.
2.3 A Simple Linear-Approximation Algorithm
Here, we present an algorithm that creates a compression dictionary in linear
time on the size of the alphabet and whose compression ratio compares very
favorably to that of optimal algorithms, which have O(n log n) running time,
where n is the number of symbols or tokens for the compression scheme. For the
purposes of presentation, we refer to the symbols or tokens as characters in an
alphabet Σ. In practice these "characters" might well correspond to, for example,
entire English words or commonly occurring three- or four-letter combinations.
In this case, the "alphabet" can have tens of thousands of tokens, and, hence,
the importance of linear-time algorithms for creating the compression scheme.
The idea of the proposed algorithm is to divide the set of weights into two
almost equal size subsets and solve the problem recursively for them. As we
show in Section 2.4, this algorithm finds a compression scheme within an additive
term of 2 bits of the average code length found by Huffman or HuTucker
algorithms.
2.4 Algorithm
Let w_1, w_2, ..., w_n be the weights of the alphabet characters or word codes to
be compressed, in alphabetical order. The procedure Make(i, j) described below
finds a tree in which tokens with weights w_i, w_{i+1}, ..., w_j are in the leaves:

Procedure Make(i, j)
1. if (i == j) return a tree with one node containing w_i.
2. Find k such that |(w_i + w_{i+1} + ... + w_k) - (w_{k+1} + ... + w_j)| is minimum.
3. Let T_1 = Make(i, k) and T_2 = Make(k + 1, j).
4. Return tree T with left subtree T_1 and right subtree T_2.
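The following is a minimal Python sketch of procedure Make, using a plain linear scan for the split point in step 2 (the refined version below replaces this scan with a logarithmic-time search); the function names and the nested-tuple tree representation are illustrative choices, not the authors' implementation.

def make(weights, lo, hi):
    # Returns an alphabetic tree over weights[lo..hi]: a leaf index or a (left, right) pair.
    if lo == hi:
        return lo
    total = sum(weights[lo:hi + 1])
    left, best_k, best_diff = 0, lo, float("inf")
    for k in range(lo, hi):                      # linear scan; the paper does this step faster
        left += weights[k]
        diff = abs(left - (total - left))
        if diff < best_diff:
            best_k, best_diff = k, diff
    return (make(weights, lo, best_k), make(weights, best_k + 1, hi))

def codewords(tree, prefix=""):
    # Label left branches 0 and right branches 1 to read the code off the tree.
    if isinstance(tree, int):
        return {tree: prefix or "0"}
    left, right = tree
    out = codewords(left, prefix + "0")
    out.update(codewords(right, prefix + "1"))
    return out

For example, codewords(make([5, 1, 1, 5], 0, 3)) yields the codewords 00, 01, 10, 11 for the four symbols in order.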
In the next two subsections we study (a) the time complexity of the proposed
algorithm and (b) bounds on the quality of the approximation obtained.
2.5 Time Complexity
First observe that, aside from line 2, all other operations take constant time.
Hence, so long as line 2 of the algorithm can be performed in logarithmic time,
then the running time T (n) of the algorithm would be given by the recursion
T(n) = T(k) + T(n - k) + O(log k), and, thus, T(n) = O(n). Therefore, the critical
part of the algorithm is line 2: how to divide the set of weights into two subsets
of almost the same size in logarithmic time.
Suppose that for every k, the value of a_k = b_i + w_i + w_{i+1} + ... + w_k is given
for all k ≥ i, where b_i is a given integer to be specified later. Notice that the
a_j's form an increasing sequence, as a_i < a_{i+1} < ... < a_j. Now the expression
in line 2 of the algorithm can be rewritten as follows:

|(w_i + w_{i+1} + ... + w_k) - (w_{k+1} + ... + w_j)| = |a_j - 2a_k + b_i|

Thus, given the value of a_k for all k, one can easily find the index u for which
|a_j - 2a_u + b_i| is minimum using a variation of a one-sided binary search, known
as galloping. Define a_k := w_1 + w_2 + ... + w_k and, hence, b_i = a_{i-1} = w_1 + ... + w_{i-1}, and modify
the algorithm as follows:

Procedure Make(i, j, b)
1. If (i == j) return a tree with one node containing w_i.
2. Let k = Minimize(i, j, b, 1).
3. Let T_1 = Make(i, k, b) and T_2 = Make(k + 1, j, a_k).
4. Return tree T with left subtree T_1 and right subtree T_2.

where the Minimize procedure is a one-sided binary search for the element
closest to zero in the sequence {a_j - 2a_k - 2b}_k. More precisely:
Procedure Minimize(i, j, b, g)
1. if (j - i ≤ 1) return min{|a_j - 2a_i - 2b|, |a_j + 2b|}.
2. let ℓ = i + g, u = j - g.
3. if (a_j - 2a_ℓ - 2b) > 0 and (a_j - 2a_u - 2b) < 0 then return Minimize(i, j, b, 2g).
4. if (a_j - 2a_ℓ - 2b) < 0 then return Minimize(ℓ - g/2, ℓ, b, 1).
5. if (a_j - 2a_u - 2b) > 0 then return Minimize(u, u + g/2, b, 1).
The total time taken by all calls to Minimize is given by the recursion
T(n) = T(k) + T(n - k) + log k if k ≤ n/2, and T(n) = T(k) + T(n - k) + log(n - k)
otherwise, where n is the number of elements in the entire range and k is
the position of the element found in the first call to Make. This recursion has
solution T(n) ≤ 2n - log n - 1, as can easily be verified:

T(n) = T(k) + T(n - k) + log k ≤ 2n - log(n - k) - 2 ≤ 2n - log n - 1

when k ≤ n/2. The case k > n/2 is analogous.
To compute total time, we have that, at initialization time, the algorithm
calculates a_k for all k and then makes a call to Make(1, n, 0). The total cost of
line 2 is linear and, hence, the entire algorithm takes time O(n).
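A sketch of the logarithmic-time split in Python, assuming the prefix sums a_k have been precomputed (here Python's bisect stands in for the one-sided galloping search of procedure Minimize, so the constants differ from the paper's analysis, but the returned split point is the same):

import bisect

def split_point(a, i, j):
    # a[k] = w_1 + ... + w_k with a[0] = 0; symbols are 1-indexed and i < j.
    # Minimizes |(w_i + ... + w_k) - (w_{k+1} + ... + w_j)| = |a[j] - 2*a[k] + a[i-1]|.
    target = (a[j] + a[i - 1]) / 2.0              # value of a[k] that zeroes the expression
    k = bisect.bisect_left(a, target, i, j)       # first index in [i, j) with a[k] >= target
    candidates = ([k] if k < j else []) + ([k - 1] if k - 1 >= i else [])
    return min(candidates, key=lambda k: abs(a[j] - 2 * a[k] + a[i - 1]))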
2.6 Approximation Bounds
Recall that a binary tree T can be interpreted as representing a coding for symbols
corresponding to its leaves by assigning 0/1 labels to left/right branches,
respectively. Given a tree T and a set of weights associated to its leaves, we
denote as E(T) the expected number of bits needed to represent each of these
symbols using codes represented by the tree. More precisely, if T has n leaves
with weights w_1, ..., w_n and depths l_1, ..., l_n, then

E(T) = (∑_{i=1}^{n} w_i l_i) / W(T)

where W(T) is defined as the total sum of the weights ∑_{i=1}^{n} w_i. Note that
W(T) = 1 for the case of probability frequency distributions.

THEOREM 2.1. Let T be the tree generated by our algorithm, and let T_OPT be
the optimal static binary order-preserving code. Then

E(T) ≤ E(T_OPT) + 2
This fact can be proved directly by careful study of the partition
mechanism, depending on how large the central weight w_k is. However, we observe
that a much more elegant proof can be derived from a rarely cited work
by Paul Bayer [1975]. Consider a set of keys k_1, ..., k_n, with probabilities
p_1, ..., p_n for successful searches and q_0, q_1, ..., q_n for unsuccessful searches.
Let

H = ∑_{i=1}^{n} -p_i lg p_i + ∑_{i=0}^{n} -q_i lg q_i

denote the entropy of the associated probability distribution. Observe that from
Shannon's source-coding theorem [Shannon 1948], we know that H ≤ E(T) for any tree T.
Definition 2.2.
A weight-balanced tree is an alphabetic binary search tree
constructed recursively from the root by minimizing the difference between the
weights of the left and right subtrees.
That is, a weight-balanced tree minimizes
|W (L) - W (R)| in a similar fashion
to procedure Make above.
THEOREM 2.3 [BAYER 1975]. Let S_OPT denote the optimal alphabetic binary
search tree, with keys in internal nodes and unsuccessful searches in external
nodes. Let S denote a weight-balanced tree on the same keys. Then

E(S) ≤ H + 2 ≤ E(S_OPT) + 2
With this theorem at hand, we can now proceed with the original proof of
Theorem 2.1.
PROOF OF THEOREM 2.1. We are given a set of symbols s_i and weights w_i with
1 ≤ i ≤ n. Consider the weight-balanced alphabetical binary search tree on
n - 1 keys with successful search probabilities p_i = 0 and unsuccessful search
probabilities q_i = w_{i-1}. Observe that there is a one-to-one mapping between
alphabetic search trees for this problem and order-preserving codes. Moreover,
the costs of the corresponding trees coincide. It is not hard to see that the tree
constructed by Make corresponds to the weight-balanced tree, and that the
optimal alphabetical binary search tree S_OPT and the optimum HuTucker code
tree T_OPT also correspond to each other. Hence from Bayer's theorem we have

E(T) ≤ H + 2 ≤ E(S_OPT) + 2 = E(T_OPT) + 2

as required.
This shows that in theory the algorithm proposed is fast and has only a small
performance penalty in terms of compression over both the optimal encoding
method and the information theoretical lower bound given by the entropy.
EXPERIMENTS ON COMPRESSION RATIO
In this section we experimentally compare the performance of the algorithm in
terms of compression against other static-compression codes. We compare three
algorithms: Huffman, HuTucker, and our algorithm on a number of random
frequency distributions. We compared alphabets of size n, for variable n. In
the case of English, this corresponds to compression on words, rather than on
single characters. Each character was given a random weight between 0 and
100,000, which is later normalized. The worst-case behavior of our algorithm
in comparison with the HuTucker and Huffman algorithms is shown in Table I.
For each sample, we calculated the expected number of bits required by each
algorithm on that frequency distribution and report the least favorable ratio
observed.
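The per-distribution quantity being compared is the expected code length, the sum of p_i times the codeword length of s_i. A helper of the kind used for such a comparison might look like this (illustrative, not the authors' test harness):

import random

def expected_bits(code, probs):
    # code: symbol -> bit string; probs: symbol -> normalized frequency.
    return sum(probs[s] * len(code[s]) for s in code)

n = 26
raw = [random.randint(1, 100000) for _ in range(n)]   # random weights, then normalized
probs = {i: raw[i] / sum(raw) for i in range(n)}
# e.g., with the make/codewords sketch from Section 2.4:
#   expected_bits(codewords(make(raw, 0, n - 1)), probs)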
We also compared the performance of the proposed linear-time algorithm
with Huffman and HuTucker compression using the Calgary corpus, a common
benchmark in the field of data compression. This is shown in Table II.
We report both the compression ratio of each of the solutions as well as the
comparative performance of the linear-time solution with the other two well-known
static methods. As we can see, the penalty on the compression factor
of the linear-time algorithm over Huffman, which is not order-preserving, or
HuTucker, which takes time O(n log n), is minimal.

Table I. Comparison of the Three Algorithms
Alphabet size n          Linear/Huffman   HuTucker/Huffman   Linear/HuTucker
n = 26   (10000 tests)   1.1028           1.0857             1.0519
n = 256  (10000 tests)   1.0277           1.0198             1.0117
n = 1000 (3100 tests)    1.0171           1.0117             1.0065
n = 2000 (1600 tests)    1.0147           1.0100             1.0053
n = 3000 (608 tests)     1.0120           1.0089             1.0038

Table II. Comparison Using the Calgary Corpus
File         Size (in bits)   Huff. (%)   Lin. (%)   H-T (%)   Lin./Huff.   Lin./H-T   H-T/Huff.
bib.txt      890088           65          68         67        1.0487       1.0140     1.0342
book1.txt    6150168          57          61         59        1.0727       1.0199     1.0518
book2.txt    4886848          60          63         62        1.0475       1.0159     1.0310
paper1.txt   425288           62          65         64        1.0378       1.0075     1.0301
paper2.txt   657592           57          60         60        1.0520       1.0098     1.0418
paper3.txt   372208           58          61         60        1.0421       1.0099     1.0318
paper4.txt   106288           59          63         61        1.0656       1.0321     1.0324
paper5.txt   95632            62          65         64        1.0518       1.0151     1.0361
paper6.txt   304840           63          66         64        1.0495       1.0198     1.0290
progc.txt    316888           65          68         66        1.0463       1.0315     1.0143
progl.txt    573168           59          63         61        1.0637       1.0324     1.0302
progp.txt    395032           61          64         63        1.0583       1.0133     1.0443
trans.txt    749560           69          72         70        1.0436       1.0243     1.0187
news.txt     3016872          65          67         67        1.0403       1.0103     1.0296
geo          819200           70          72         71        1.0173       1.0098     1.0074
obj1         172032           74          76         75        1.0220       1.0149     1.0070
obj2         1974512          78          80         80        1.0280       1.0103     1.0175
pic          4105728          20          21         21        1.0362       1.0116     1.0242
It is important to observe that for the data set tested, the difference between
the optimal HuTucker and the linear compression code was, in all cases, below
0.2 bits, which is much less than the worst-case additive term of 2 predicted by
Bayer's theorem.
STRING SORTING USING A WORD-RAM ALGORITHM
In this section, we compare the performance of sorting on the compressed text
against the uncompressed form [as in Moura et al. 1997], including Bentley
and Sedgewick's FastSort [Bentley and Sedgewick 1997], as well as Andersson's
word-RAM sort [Andersson 1994]. Traditionally, word-RAM algorithms
operate on unbounded length keys, such as strings, by using radix/bucket-sort
variants which iteratively examine the keys [Andersson and Nilsson 1994]. In
contrast, our method uses order-preserving compression to first reduce the size
of the keys, then sort by using fixed-key size word-RAM algorithms [Andersson
et al. 1995], sorting keys into buckets. We observed experimentally that this
suffices to sort the vast majority of the strings when sorting 100 MB files of
web-crawled text data. In this case, each of the buckets contained very few elements,
in practice. We also tested the algorithms on the Calgary corpus, which
is a standard benchmark in the field of text compression.

Table III. Percentage of Buckets Multiply Occupied (Web Crawl)
File           Size in Tokens   % Words Sharing a Bucket
                                Uncompressed   HuTucker   Linear
Alphanumeric   515277           16             2          2
Alpha only     373977           21             3          3
We consider the use of a word-RAM sorting algorithm to create a dictionary
of the words appearing in a given text. The standard word-RAM algorithms
have as a requirement that the keys fit within the word size of the RAM machine
being used. Modern computers have word sizes of 32 or 64 bits. In this
particular example, we tested Andersson's 32 bit implementation [Andersson
and Nilsson 1998] of the O(n log log n) algorithm by Andersson et al. [1995]. We
also tested the performance of Bentley and Sedgewick's [1997] MKQSort running
on compressed data using order-preserving compression. The algorithm is
a straightforward implementation of the code in Bentley and Sedgewick [1997].
Observe that one can use a word-RAM algorithm on keys longer than w bits
by initially sorting on the first w bits of the key and then identifying "buckets"
where two or more strings are "tied," i.e., share the first w bits. The algorithm
proceeds recursively on each of these buckets until there are no further ties.
This method is particularly effective if the number of ties is not too large.
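A sketch of this two-pass scheme (illustrative Python; the paper's implementation modifies Andersson's word-RAM sort in C): sort once on the first w bits of each compressed key, then finish any bucket whose keys are tied on those bits with an ordinary comparison sort.

from itertools import groupby

def sort_by_word_prefix(keys, w=64):
    # keys: compressed keys as bit strings; because the compression is order
    # preserving, comparing the bit strings is consistent with comparing the
    # original uncompressed strings.
    def prefix(k):
        return int(k[:w].ljust(w, "0"), 2)      # first w bits, zero padded, as an integer

    first_pass = sorted(keys, key=prefix)       # stand-in for the word-RAM sort
    result = []
    for _, bucket in groupby(first_pass, key=prefix):
        bucket = list(bucket)
        if len(bucket) > 1:                     # "tied" keys: resolve in a second pass
            bucket.sort()
        result.extend(bucket)
    return result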
To study this effect, we consider two word dictionaries and a text source. One
is collected from a 1-GB crawl of the world wide web, the second from all the
unique words appearing in the Calgary corpus; the last one is a more recent
3.3-GB crawl of the world wide web. This is motivated by an indexing application
for which sorting a large number of words was required. In principle, the
result is equally applicable to other settings, such as sorting of alphanumeric
fields in a database.
In the case of the web crawl we considered two alternative tokenization
schemes. The first one tokenizes on alphanumeric characters while the second
ignores numbers in favor of words only. Table III shows the number of buckets
that have more than one element after the first pass in the uncompressed
and the compressed form of the text. We report both the figure for the proposed
linear algorithm and for the optimal HuTucker scheme. Observe the
dramatic reduction in the number of buckets that require further processing in
the compressed data.
In fact the numbers of ties in the compressed case is sufficiently small that
aborting the recursion after the first pass and using a simpler sorting algorithm
on the buckets is a realistic alternative. In comparison, in the uncompressed
case the recursion reaches depth three in the worst case before other types of
sorting become a realistic possibility.
In the case of the Calgary corpus, the number of buckets with ties in the
uncompressed form of the text ranged from 3 to 8%. After compressing the text,
the number of ties, in all cases, rounded to 0.0%. The specific figures for a subset
of the Calgary corpus are shown in Table IV.
Table IV. Percentage of Buckets Multiply Occupied on the Text Subset of the Calgary Corpus
         Percentage Words Tied
File     Uncompressed   HuTucker   Linear
paper1   5              0          0
paper2   4              0          0
paper3   6              0          0
paper4   4              0          0
paper5   3              0          0
paper6   4              0          0
book1    3              0          0
book2    8              0          0
bib      7              0          0
Notice that in all cases the performance of the optimal HuTucker algorithm
and the linear algorithm is comparable. We should also emphasize that while
the tests in this paper used compression on alphanumeric characters only, the
compression scheme can be applied to entire words [see, for example, Mumey
1992]. In this case, the size of the code dictionary can range in the thousands
of symbols, which makes the savings of a linear-time algorithm particularly
relevant.
Last, we considered a series of ten web crawls from Google, each of approximately
100 MB in size (3.3 GB in total). In this case, we operate under the
assumption that the data is stored in the suitable format to the corresponding
algorithm. We posit that it is desirable to store data in compressed format, as
this results also in storage savings while not sacrificing searchability because of
the order-preserving nature of the compression. We tokenized and sorted each of
these files to create a dictionary, a common preprocessing step for some indexing
algorithms. The tokenization was performed on nonalphanumeric characters.
For this test, we removed tokens larger than 32 bits from the tokenized file. In
practice, these tokens would be sorted using a second pass, as explained earlier.
We first studied the benefits of order-preserving compression alone by comparing
the time taken to sort the uncompressed and compressed forms of the text.
The tokens were sorted using the Unix
sort routine, Unix qsort, Fast MKQSort,
and Andersson's sort algorithm [Andersson and Nilsson 1998]. Table V shows a
comparison of CPU times among the different sorting algorithms, while Table
VI shows the comparison in performance including I/O time for copying the
data into memory. Note that there are observed gains both in CPU time alone
and in CPU plus I/O timings as a result of using the data-compressed form.
We report timings in individual form for the first three crawls as well as
the average sorting time across all 10 files together with the variance. On the
data provided, Fast MKQSort is the best possible choice with the compressed
variant, being 20% faster than the uncompressed form. These are substantial
savings for such a highly optimized algorithm.
While, in this case, we focused on key lengths below w bits, the savings from
compression can be realized by most other sorting, searching, or indexing mechanisms
, both by the reduction of the key length field and by the reduced demands
in terms of space. To emphasize, there are two aspects of order-preserving
compression, which have a positive impact on performance. The first is that
when comparing two keys byte-per-byte, we are now, in fact, comparing more
than one key at once, since compressed characters fit at a rate of more than one per
byte. Second, the original data size is reduced. This leads to a decrease in the
amount of paging to external memory, which is often the principal bottleneck
for algorithms on large data collections.

Table V. CPU Time (in Seconds) as Reported by the Unix Time Utility
Algorithm                     Data 1   Data 2   Data 3   Average   Variance
QSort                         2.41     2.29     2.33     2.33      0.04
QSort on compressed           2.17     2.18     2.20     2.20      0.05
Andersson                     0.80     0.82     0.81     0.81      0.01
Fast Sort                     0.88     0.89     0.86     0.88      0.02
Fast Sort on compressed       0.73     0.75     0.75     0.74      0.02
Binary Search                 1.27     1.29     1.26     1.28      0.02
Binary Search on compressed   1.08     1.09     1.09     1.09      0.02

Table VI. Total System Time as Reported by the Unix time Utility
Algorithm                     Data 1   Data 2   Data 3   Average   Variance
Unix Sort                     5.53     5.40     5.30     5.41      0.07
Unix Sort compressed          5.43     5.43     5.53     5.42      0.06
QSort                         2.78     2.64     2.68     2.69      0.05
QSort on compressed           2.45     2.47     2.48     2.48      0.05
Andersson                     3.63     3.60     3.67     3.61      0.04
Fast Sort                     1.24     1.25     1.22     1.24      0.02
Fast Sort on compressed       1.00     1.04     1.02     1.03      0.02
Binary Search                 1.63     1.64     1.62     1.65      0.02
Binary Search on compressed   1.36     1.38     1.37     1.38      0.02
CONCLUSIONS
In this work, we studied the benefits of order-preserving compression for
sorting strings in the word-RAM model. First, we propose a simple linear-approximation
algorithm for optimal order-preserving compression, which acts
reasonably well in comparison with optimum algorithms, both in theory and
in practice. The approximation is within a constant additive term of both the
optimum scheme and the information theoretical ideal, i.e., the entropy of the
probabilistic distribution associated to the character frequency. We then test
the benefits of this algorithm using the sorting algorithm of Andersson for the
word-RAM, as well as Bentley and Sedgewick's fast MKQSort. We present experimental
data based on a 1-GB web crawl, showing that Fast MKQSort and
Andersson are more efficient for compressed data.
ACKNOWLEDGEMENTS
We wish to thank Ian Munro for helpful discussions on this topic, as well
as anonymous referees of an earlier version of this paper for their helpful
comments.
REFERENCES
ANDERSSON, A. 1994. Faster deterministic sorting and searching in linear space. In Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science (FOCS 1996). 135-141.
ANDERSSON, A. AND NILSSON, S. 1994. A new efficient radix sort. In FOCS: IEEE Symposium on Foundations of Computer Science (FOCS).
ANDERSSON, A. AND NILSSON, S. 1998. Implementing radixsort. ACM Journal of Experimental Algorithmics 3, 7.
ANDERSSON, A., HAGERUP, T., NILSSON, S., AND RAMAN, R. 1995. Sorting in linear time? In STOC: ACM Symposium on Theory of Computing (STOC).
ANTOSHENKOV, G. 1997. Dictionary-based order-preserving string compression. VLDB Journal: Very Large Data Bases 6, 1 (Jan.), 26-39. (Electronic edition.)
BAYER, P. J. 1975. Improved bounds on the costs of optimal and balanced binary search trees. Master's thesis, Massachusetts Institute of Technology (MIT), Cambridge, MA.
BELL, T. C., CLEARY, J. G., AND WITTEN, I. H. 1990. Text Compression. Prentice Hall, Englewood Cliffs, NJ.
BENTLEY, J. L. AND SEDGEWICK, R. 1997. Fast algorithms for sorting and searching strings. In Proceedings of the 8th ACM-SIAM Symposium on Discrete Algorithms (SODA '97). 360-369.
FARACH, M. AND THORUP, M. 1998. String matching in Lempel-Ziv compressed strings. Algorithmica 20, 4, 388-404.
GILBERT, E. N. AND MOORE, E. F. 1959. Variable-length binary encoding. Bell System Technical Journal 38, 933-968.
HU, T. C. 1973. A new proof of the T-C algorithm. SIAM Journal on Applied Mathematics 25, 1 (July), 83-94.
KÄRKKÄINEN, J. AND UKKONEN, E. 1996. Lempel-Ziv parsing and sublinear-size index structures for string matching. In Proceedings of the 3rd South American Workshop on String Processing (WSP '96). 141-155.
KNUTH, D. E. 1997. The Art of Computer Programming: Fundamental Algorithms, 3rd ed., vol. 1. Addison-Wesley, Reading, MA.
LARMORE, L. L. AND PRZYTYCKA, T. M. 1998. The optimal alphabetic tree problem revisited. Journal of Algorithms 28, 1 (July), 1-20.
MOURA, E., NAVARRO, G., AND ZIVIANI, N. 1997. Indexing compressed text. In Proceedings of the 4th South American Workshop on String Processing (WSP '97). Carleton University Press, Ottawa, Ontario. 95-111.
MUMEY, B. M. 1992. Some new results on constructing optimal alphabetic binary trees. Master's thesis, University of British Columbia, Vancouver, British Columbia.
SHANNON, C. E. 1948. A mathematical theory of communication. Bell System Technical Journal 27, 379-423, 623-656.
ZIV, J. AND LEMPEL, A. 1978. Compression of individual sequences via variable-rate coding. IEEE Transactions on Information Theory IT-24, 5.
Received June 2004; revised January 2006; accepted October 2004 and January 2006
| Order-preserving compression;String sorting;random access;Order-preserving compression scheme;linear time algorithm;Sorting algorithms;word-RAM sorting algorithm;RAM;sorting;unit-cost;compression scheme;compression ratio;word-RAM;data collection;Keys of bounded length |
92 | FBRAM: A new Form of Memory Optimized for 3D Graphics | FBRAM, a new form of dynamic random access memory that greatly accelerates the rendering of Z-buffered primitives, is presented . Two key concepts make this acceleration possible. The first is to convert the read-modify-write Z-buffer compare and RGBα blend into a single write only operation. The second is to support two levels of rectangularly shaped pixel caches internal to the memory chip. The result is a 10 megabit part that, for 3D graphics, performs read-modify-write cycles ten times faster than conventional 60 ns VRAMs. A four-way interleaved 100 MHz FBRAM frame buffer can Z-buffer up to 400 million pixels per second. Working FBRAM prototypes have been fabricated. | INTRODUCTION
One of the traditional bottlenecks of 3D graphics hardware has been
the rate at which pixels can be rendered into a frame buffer. Modern
interactive 3D graphics applications require rendering platforms
that can support 30 Hz animation of richly detailed 3D scenes. But
existing memory technologies cannot deliver the desired rendering
performance at desktop price points.
The performance of hidden surface elimination algorithms has been
limited by the pixel fill rate of 2D projections of 3D primitives.
While a number of exotic architectures have been proposed to improve
rendering speed beyond that achievable with conventional
DRAM or VRAM, to date all commercially available workstation
3D accelerators have been based on these types of memory chips.
This paper describes a new form of specialized memory, Frame
Buffer RAM (FBRAM). FBRAM increases the speed of Z-buffer
operations by an order of magnitude, and at a lower system cost
than conventional VRAM. This speedup is achieved through two
architectural changes: moving the Z compare and RGBα blend operations
inside the memory chip, and using two levels of appropriately
shaped and interleaved on-chip pixel caches.
PREVIOUS WORK
After the Z-buffer algorithm was invented [3], the first Z-buffered
hardware systems were built in the 1970's from conventional
DRAM memory chips. Over time, the density of DRAMs increased
exponentially, but without corresponding increases in I/O bandwidth
. Eventually, video output bandwidth requirements alone exceeded
the total DRAM I/O bandwidth.
Introduced in the early 1980's, VRAM [18][20] solved the video
output bandwidth problem by adding a separate video port to a
DRAM. This allowed graphics frame buffers to continue to benefit
from improving bit densities, but did nothing directly to speed rendering
operations. More recently, rendering architectures have
bumped up against a new memory chip bandwidth limitation: faster
rendering engines have surpassed VRAM's input bandwidth. As a
result, recent generations of VRAM have been forced to increase
the width of their I/O busses just to keep up. For the last five years,
the pixel fill (i.e. write) rates of minimum chip count VRAM frame
buffers have increased by less than 30%.
Performance gains have mainly been achieved in commercially
available systems by brute force. Contemporary mid-range systems
have employed 10-way and 20-way interleaved VRAM designs
[1][14]. Recent high-end architectures have abandoned VRAM altogether
in favor of massively interleaved DRAM: as much as 120-way
interleaved DRAM frame buffers [2]. But such approaches do
not scale to cost effective machines.
More radical approaches to the problem of pixel fill have been explored
by a number of researchers. The most notable of these is the
pixel-planes architecture [9][16], others include [7][8][11][4]. [12]
and [10] contain a good summary of these architectures. What these
architectures have in common is the avoidance of making the rendering
of every pixel an explicit event on external pins. In the limit,
only the geometry to be rendered need enter the chip(s), and the final
pixels for video output exit.
These research architectures excel at extremely fast Z-buffered fill of
large areas. They achieve this at the expense of high cost, out-of-order
rendering semantics, and various overflow exception cases. Many of
these architectures ([16][11][4]) require screen space pre-sorting of
primitives before rendering commences. As a consequence, intermediate
geometry must be sorted and stored in large batches.
Unfortunately, the benefits from the fast filling of large polygons
are rapidly diminishing with today's very finely tessellated objects.
That is, the triangles are getting smaller [6]. The number of pixels
filled per scene is not going up anywhere near as quickly as the total
number of polygons. As 3D hardware rendering systems are finally
approaching motion fusion rates (real time), additional improvements
in polygon rates are employed to add more fine detail, rather
than further increases in frame rates or depth complexity.
Z-BUFFERING AND OTHER PIXEL PROCESSING OPERATIONS
Fundamental to the Z-buffer hidden surface removal algorithm are
the steps of reading the Z-buffer's old Z value for the current pixel
being rendered, numerically comparing this value with the new one
just generated, and then, as an outcome of this compare operation,
either leaving the old Z (and RGB) frame buffer pixel values alone,
or replacing the old Z (and RGB) value with the new.
With conventional memory chips, the Z data must traverse the data
pins twice: once to read out the old Z value, and then a second time to
write the new Z value if it wins the comparison. Additional time must
be allowed for the data pins to electrically "turn around" between
reading and writing. Thus the read-modify-write Z-buffer transaction
implemented using a straightforward read-turn-write-turn operation
is four times longer than a pure write transaction. Batching of reads
and writes (n reads, turn, n writes, turn) would reduce the read-modify
-write cost to twice that of a pure write transaction for very large n,
but finely tessellated objects have very small values of n, and still suffer
a 3-4× penalty.
This is the first problem solved by FBRAM. Starting with a data
width of 32 bits per memory chip, FBRAM now makes it possible
for the Z comparison to be performed entirely inside the memory
chip. Only if the internal 32 bit numeric comparison succeeds does
the new Z value actually replace the old value. Thus the fundamental
read-modify-write operation is converted to a pure write operation
at the data pins.
Because more than 32-bits are needed to represent a double buffered
RGBZ pixel, some way of transmitting the results of the Z
comparison across multiple chips is required. The Z comparison result
is communicated on a single external output signal pin of the
FBRAM containing the Z planes, instructing FBRAM chips containing
other planes of the frame buffer whether or not to write a
new value.
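To make the division of labor concrete, here is a small behavioral model in Python (an illustrative sketch, not the chip interface): the FBRAM holding the Z planes performs the compare internally and drives a single pass/fail signal that gates the writes in the FBRAMs holding the color planes, so only writes ever appear on the data pins.

class ZChip:
    def __init__(self, n):
        self.z = [float("inf")] * n
    def write_z(self, addr, new_z):
        passed = new_z < self.z[addr]     # internal compare: the old Z never crosses the pins
        if passed:
            self.z[addr] = new_z
        return passed                     # models the single result output pin

class ColorChip:
    def __init__(self, n):
        self.rgb = [(0, 0, 0)] * n
    def write_rgb(self, addr, rgb, pass_in):
        if pass_in:                       # write enable driven by the Z chip's result pin
            self.rgb[addr] = rgb

# One Z-buffered pixel update becomes two pure writes, with no read on the pins.
zchip, cchip = ZChip(1280 * 1024), ColorChip(1280 * 1024)
addr = 640 * 1024 + 512
ok = zchip.write_z(addr, 0.25)
cchip.write_rgb(addr, (255, 0, 0), ok)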
The Z-buffer operation is the most important of the general class of
read-modify-write operations used in rendering. Other important
conditional writes which must be communicated between
FBRAMs include window ID compare [1] and stenciling.
Compositing functions, rendering of transparent objects, and antialiased
lines require a blending operation, which adds a specified
fraction of the pixel RGB value just generated to a fraction of the
pixel RGB value already in the frame buffer. FBRAM provides four
8-bit 100 MHz multiplier-adders to convert the read-modify-write
blending operation into a pure write at the pins. These internal
blend operations can proceed in parallel with the Z and window ID
compare operations, supported by two 32-bit comparators. One of
the comparators supports magnitude tests (>, ≥, <, ≤, =, ≠), the other
supports match tests (=, ≠). Also, traditional boolean bit-operations
(for RasterOp) are supported inside the FBRAM. This collection of
processing units is referred to as the pixel ALU.
Converting read-modify-write operations into pure write operations
at the data pins permits FBRAM to accept data at a 100 MHz
rate. To match this rate, the pixel ALU design is heavily pipelined,
and can process pixels at the rate of 100 million pixels per second.
Thus in a typical four-way interleaved frame buffer design the maximum
theoretical Z-buffered pixel fill rate of an FBRAM based system
is 400 mega pixels per second. By contrast, comparable frame
buffers constructed with VRAM achieve peak rates of 33-66 mega
pixels per second [5][14].
Now that pixels are arriving and being processed on-chip at
100 MHz, we next consider the details of storing data.
DRAM FUNDAMENTALS
Dynamic memory chips achieve their impressive densities (and
lower costs) by employing only a single transistor per bit of storage.
These storage cells are organized into pages; typically there are several
thousand cells per page. Typical DRAM arrays have hundreds
or thousands of pages. Per bit sense amplifiers are provided which
can access an entire page of the array within 120 ns. These sense
amplifiers retain the last data accessed; thus they function as a several
thousand bit page buffer. The limited number of external I/O
pins can perform either a read or a write on a small subset of the
page buffer at a higher rate, typically every 40 ns.
FBRAM starts with these standard DRAM components, and adds a
multiported high speed SRAM and pixel ALU. All of this is organized
within a caching hierarchy, optimized for graphics access patterns
, to address the bandwidth mismatch between the high speed
pins and the slow DRAM cells.
PIXEL CACHING
The cache system design goal for FBRAM is to match the 100 MHz
read-modify-write rate of the pixel ALU with the 8 MHz rate of the
DRAM cells. Figure 1 illustrates this cache design challenge.
Caches have long been used with general purpose processors; even
a small cache can be very effective [17]. But caches have been
much less used with graphics rendering systems.
The data reference patterns of general purpose processors exhibit
both temporal and spatial locality of reference. Temporal locality is
exhibited when multiple references are made to the same data within
a short period of time. Spatial locality is exhibited when multiple
references within a small address range are made within a short period
of time. Caches also reduce the overall load on the memory bus
by grouping several memory accesses into a single, more efficient
block access.
Graphics hardware rendering does not exhibit much temporal locality
, but does exhibit spatial locality with a vengeance. Raster rendering
algorithms for polygons and vectors are a rich source of spatial
locality.
Although the bandwidth available inside a dynamic memory chip is
orders of magnitude greater than that available at the pins, this in-Dynamic
Memory
ALU
?
32-bits @ 100MHz
2
32-bits @ 100MHz
10,240-bits @ 8MHz
Figure 1. Bandwidth mismatch between pixel ALU and DRAM.
FBRAM
ternal bandwidth is out of reach for architectures in which the pixel
cache is external to the memory chips. Others have recognized the
potential of applying caching to Z-buffered rendering [13], but they
were constrained to building their caches off chip. Such architectures
can at best approach the rendering rate constrained by the
memory pin bandwidth. As a result, these caching systems offer little
or no performance gain over SIMD or MIMD interleaved pixel
rendering.
With FBRAM, by contrast, the pixel caches are internal to the individual
memory chips. Indeed, as will be seen, two levels of internal
caches are employed to manage the data flow. The miss rates are
minimized by using rectangular shaped caches. The miss costs are
reduced by using wide and fast internal busses, augmented by an
aggressive predictive pre-fetch algorithm.
Each successive stage from the pins to the DRAM cells has slower
bus rates, but FBRAM compensates for this with wider busses. Because
the bus width increases faster than the bus rate decreases,
their product (bus bandwidth) increases, making caching a practical
solution.
FBRAM INTERNAL ARCHITECTURE
Modern semiconductor production facilities are optimized for a
certain silicon die area and fabrication process for a given generation
of technology. FBRAM consists of 10 megabits of DRAM, a
video buffer, a small cache, and a graphics processor, all implemented
in standard DRAM process technology. The result is a die
size similar to a 16 megabit DRAM. A 10 megabit FBRAM is
320×1024×32 in size; four FBRAMs exactly form a standard
1280×1024×32 frame buffer.
Figure 2 is an internal block diagram of a single FBRAM [15]. The
DRAM storage is broken up into four banks, referred to as banks
A,B,C, and D. Each bank contains 256 pages of 320 words (32 bits
per word). Each bank is accessed through a sense amplifier page
buffer capable of holding an entire 320 word page (10,240 bits).
Banks can be accessed at a read-modify-write cycle time of 120 ns.
Video output pixels can be copied from the page buffer to one of
two ping-pong video buffers, and shifted out to the display.
FBRAM has a fast triple-ported SRAM register file. This register
file is organized as eight blocks of eight 32-bit words. Capable of
cycling at 100 MHz, two of the ports (one read, one write) of the
register file allow 10 ns throughput for pipelined 32-bit read-modi-DRAM
Bank
B
Video Buffer
Video Buffer
DRAM Bank
A
DRAM Bank
C
DRAM Bank
D
SRAM
2Kb
ALU
256
640
640
640
640
16
32
Video
Data
Global Bus
32
32
Render
Data
Page Buffer
Page Buffer
Page Buffer
Page Buffer
2.5Mb
10Kb
10Kb
10Kb
10Kb
2.5Mb
2.5Mb
2.5Mb
FBRAM
Figure 2. Internal block diagram of a single FBRAM.
fy-write ALU operations: Z-buffer compare, RGB
blend, or boolean
-operations. The third port allows parallel transfer of an entire
block (8 words) to or from a page buffer at a 20 ns cycle time via a
256-bit "Global Bus".
FBRAM has two independent sets of control and address lines: one
for the two ALU ports of the SRAM register file; the other for operations
involving a DRAM bank. This allows DRAM operations
to proceed in parallel with SRAM operations. The cache control
logic was intentionally left off-chip, to permit maximum flexibility
and also to keep multiple chips in lock step.
FBRAM AS CACHE
Internally, the SRAM register file is a level one pixel cache (L1$),
containing eight blocks. Each block is a 2 wide by 4 high rectangle
of (32-bit) pixels. The cache set associativity is determined external
to the FBRAM, permitting fully associative mapping. The L1$ uses
a write back policy; multiple data writes to each L1$ block are accumulated
for later transfer to the L2$.
Taken together, the four sense amplifier page buffers constitute a
level two pixel cache (L2$). The L2$ is direct mapped; each page
buffer is mapped to one of the pages of its corresponding DRAM
bank. Each L2$ entry contains one page of 320 32-bit words shaped
as a 20 wide by 16 high rectangle of pixels. The L2$ uses a write
through policy; data written into a L2$ entry goes immediately into
its DRAM bank as well.
The Global Bus connects the L1$ to the L2$. A 2×4 pixel block can
be transferred between the L1$ and L2$ in 20 ns.
Four parallel "sense amplifier buses" connect the four L2$ entries
to the four DRAM banks. A new 20×16 pixel DRAM page can be
read into a given L2$ entry from its DRAM bank as often as every
120 ns. Reads to different L2$ entries can be launched every 40 ns.
FOUR WAY INTERLEAVED FBRAM FRAME BUFFER
The previous sections described a single FBRAM chip. But to fully
appreciate FBRAM's organization, it is best viewed in one of its
natural environments: a four way horizontally interleaved three
chip deep 1280×1024 96-bit double buffered RGBZ frame buffer.
Figure 3 shows the chip organization of such a frame buffer, with
two support blocks (render controller and video output). Figure 4 is
a logical block diagram considering all 12 chips as one system.
Figure 3. A four-way interleaved frame buffer system composed of 12 FBRAMs (1280×1024, double buffered 32-bit RGB plus 32-bit Z), with a rendering controller and video output support.
The
discussions of the operations of FBRAM to follow are all based on
considering all 12 memory chips as one memory system.
Horizontally interleaving four FBRAMs quadruples the number of
data pins; now four RGBZ pixels can be Z-buffered, blended, and
written simultaneously. This interleaving also quadruples the size
of the caches and busses in the horizontal dimension. Thus the L1$
can now be thought of as eight cache blocks, each 8 pixels wide by
4 pixels high. Taken together, the individual Global Buses in the 12
chips can transfer an 8×4 pixel block between the L1$ and L2$. The
four L2$ entries are now 80 pixels wide by 16 pixels high (see Figure 4).
All three levels of this memory hierarchy operate concurrently.
When the addressed pixels are present in the L1$, the four way interleaved
FBRAMs can process 4 pixels every 10 ns. On occasion,
the L1$ will not contain the desired pixels (an "L1$ miss"), incurring
a 40 ns penalty ("L1$ miss cost"): 20 ns to fetch the missing
block from the L2$ for rendering, 20 ns to write the block back to
the L2$ upon completion. Even less often, the L2$ will not contain
the block of pixels needed by the L1$ (an "L2$ miss"), incurring a
40-120 ns penalty ("L2$ miss cost") depending upon the scheduling
status of the DRAM bank.
This example four way interleaved frame buffer will be assumed
for the remainder of this paper.
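To make the arithmetic of this three-level hierarchy concrete, the back-of-the-envelope sketch below (not from the paper) turns assumed per-pixel miss rates and the miss costs quoted above into a sustained pixel rate; it pessimistically assumes that no miss is ever hidden by prefetching, so real FBRAM systems with a good controller do better.

```python
# Pessimistic model of the four-way interleaved FBRAM hierarchy: 4 pixels
# per 10 ns when both caches hit, a 40 ns L1$ miss cost (20 ns fetch plus
# 20 ns writeback), and a 40-120 ns L2$ miss cost. Miss rates are assumed
# inputs; no overlap between rendering and miss service is modeled.

def effective_pixel_rate(l1_miss_rate, l2_miss_rate, l2_miss_cost_ns=80.0):
    """Return the sustained rate in Mpixels/second for per-pixel miss rates."""
    hit_time_ns = 10.0 / 4.0              # 4 pixels processed every 10 ns
    l1_miss_cost_ns = 40.0
    ns_per_pixel = (hit_time_ns
                    + l1_miss_rate * l1_miss_cost_ns
                    + l2_miss_rate * l2_miss_cost_ns)
    return 1000.0 / ns_per_pixel

# Thin lines touch a new 8x4 block every 4-8 pixels and a new page far
# less often; large filled areas approach 1/32 and 1/1280.
print(effective_pixel_rate(l1_miss_rate=1/6,  l2_miss_rate=1/40))    # thin lines
print(effective_pixel_rate(l1_miss_rate=1/32, l2_miss_rate=1/1280))  # large areas
```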
RECTANGULAR CACHES REDUCE MISS RATE
The organization so far shows pixels moving between fast, narrow
data paths to slow, wide ones. As can be seen in Figure 4, there is
sufficient bandwidth between all stages to, in theory, keep up with
the incoming rendered pixels, so long as the right blocks and pages
are flowing. We endeavor to achieve this through aggressive prefetching
of rectangular pixel regions.
Figure 4. A logical representation of a four-way horizontally interleaved frame buffer composed of 12 FBRAMs: writes (or reads) at the pins move 4×1 pixels every 10 ns (400 Mpixels/second); the ALU(4) performs read-modify-write on 4×1 pixels every 10 ns (800 Mpixels/second); the Global Bus reads or writes 8×4-pixel blocks (8 blocks, 8 wide by 4 high, in the L1$) every 20 ns (1600 Mpixels/second); each DRAM bank can write-modify-read an 80×16-pixel page (one page per bank in the L2$) every 120 ns (10600 Mpixels/second per bank), with accesses to different banks overlappable every 40 ns (32 Gpixels/second).
Locality of reference in graphics rendering systems tends to be to
neighboring pixels in 2D. Because of this, graphics architects have
long desired fast access to square regions [19]. Unfortunately, the
standard VRAM page and video shift register dimensions result in
efficient access only to long narrow horizontal regions. FBRAM
solves this problem by making both caches as square as possible.
Because the L1$ blocks are 8×4 pixels, thin line rendering algorithms
tend to pass through four to eight pixels per L1$ block, resulting
in a L1$ miss every fourth to eighth pixel (a "miss rate" of
1/4 to 1/8). Parallel area rendering algorithms can aim to utilize all
32 pixels in a block, approaching a miss rate of 1/32.
Similarly, because the L2$ blocks are 80×16 pixels, L2$ miss rates
are on the order of 1/16 to 1/80 for thin lines, and asymptotically approach
1/1280 for large areas.
These simplistic miss rate approximations ignore fragmentation effects
: lines may end part way through a block or page, polygon edges
usually cover only a fraction of a block or page. In addition, fragmentation
reduces the effective pin bandwidth, as not all four horizontally
interleaved pixels ("quads") can be used every cycle.
FBRAM's block and page dimensions were selected to minimize
the effects of fragmentation. Table 1 displays the average number
of L1$ blocks (B), and L2$ pages (P) touched when rendering various
sizes of thin lines and right isosceles triangles (averaged over
all orientations and positions), for a range of alternative cache aspect
ratios. FBRAM's own dimensions are the 80×16 page and 8×4 block columns.
Note that smaller primitives consume more blocks and pages per
rendered pixel, due to fragmentation effects. Although the table implies
that a page size of 40×32 is better than 80×16, practical limitations
of video output overhead (ignored in this table, and to be discussed
in section 13) dictated choosing 80×16.
OPERATING THE FRAME BUFFER
For non-cached rendering architectures, theoretical maximum performance
rates can be derived from statistics similar to Table 1.
This is pessimistic for cached architectures such as FBRAM. Because
of spatial locality, later primitives (neighboring triangles of a
strip) will often "re-touch" a block or page before it is evicted from
the cache, requiring fewer block and page transfers. Additional
simulations were performed to obtain the quad, page, and block
transfer rates. The left half of Table 2 shows the results for
FBRAM's chosen dimensions.
Equation 1 can be used to determine the upper bound on the number
of primitives rendered per second using FBRAM.

Table 1. Average number of Pages or Blocks touched per primitive
                     Average Pages/Prim                 Average Blocks/Prim
                320×4    160×8    80×16    40×32      32×1     16×2     8×4
10 Pix Vec       2.61     1.84     1.48     1.36       7.57     4.58    3.38
20 Pix Vec       4.21     2.68     1.97     1.71      14.1      8.15    5.76
50 Pix Vec       9.02     5.20     3.42     2.78      33.8     18.9    12.9
100 Pix Vec     17.1      9.42     5.85     4.57      66.6     36.8    24.9
25 Pix Tri       2.96     2.02     1.60     1.46       9.75     6.12    4.68
50 Pix Tri       3.80     2.45     1.89     1.67      13.8      8.72    6.67
100 Pix Tri      4.97     3.05     2.24     1.94      20.0     12.8     9.89
1000 Pix Tri    14.2      8.05     5.41     4.49      82.5     59.6    50.5

The performance
is set by the slowest of the three data paths (quads at the pins and
ALU, blocks on the global bus, pages to DRAM):
primitives/sec = min( R_Q / Q, R_B / B, R_P / P )     (1)
where the denominators Q, B, and P are obtained from the left half
of Table 2, and the numerators R_Q, R_B, R_P are the bus rates for
quads, blocks and pages. Referring again to Figure 4, R_Q is 100 million
quads/sec through the ALU (4 pixels/quad), R_B is 25 million
blocks/sec (40 ns per block, one 20 ns prefetch read plus one 20 ns
writeback) and R_P is 8.3 million pages/sec (120 ns per page).
The right half of Table 2 gives the three terms of Equation 1; the
performance limit for each case is the minimum of the three terms.
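As a quick check, the sketch below replays the arithmetic of Equation 1 for a few rows of Table 2, using the Q, B, and P values from the Average Misses/Prim columns and the bus rates R_Q, R_B, R_P quoted above; it reproduces the corresponding performance-limit entries.

```python
# Evaluate Equation 1: primitives/sec = min(R_Q/Q, R_B/B, R_P/P).
R_Q, R_B, R_P = 100e6, 25e6, 8.3e6      # quads/s, blocks/s, pages/s

table2_left = {                          # (Quad, Block, Page) misses per primitive
    "10 Pix Vec":   (8.75, 2.35, 0.478),
    "50 Pix Tri":   (20.2, 3.04, 0.422),
    "1000 Pix Tri": (286., 46.7, 4.37),
}

for prim, (Q, B, P) in table2_left.items():
    quad, block, page = R_Q / Q, R_B / B, R_P / P
    print(f"{prim}: limit {min(quad, block, page)/1e6:.2f} Mprim/s "
          f"(quad {quad/1e6:.2f}, block {block/1e6:.2f}, page {page/1e6:.2f})")
```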
Equation 1 assumes that whenever the L1$ is about to miss, the rendering
controller has already brought the proper block in from the L2$ into
the L1$. Similarly, whenever the L2$ is about to miss, the rendering
controller has already brought the proper page in from the DRAM bank
into the L2$. To achieve such clairvoyance, the controller must know
which pages and blocks to prefetch or write back. The FBRAM philosophy
assumes that the rendering controller queues up the pixel operations
external to the FBRAMs, and snoops this write queue to predict
which pages and blocks will be needed soon. These needed pages and
blocks are prefetched using the DRAM operation pins, while the
SRAM operation pins are used to render pixels into the L1$ at the same
time. Cycle accurate simulation of such architectures has shown this
technique to be quite effective.
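The controller itself is outside the scope of this paper; the sketch below is only an illustration of the snooping policy described above, under assumed page and block shapes. It scans a hypothetical queue of pending pixel writes and emits prefetch commands for any page or block that is not already resident, while the head of the queue is being rendered.

```python
# Illustrative prefetch "snooper" (assumed shapes: 80x16-pixel pages,
# 8x4-pixel blocks; writebacks, bank scheduling, and timing are ignored).

PAGE_W, PAGE_H, BLK_W, BLK_H = 80, 16, 8, 4

def prefetch_commands(write_queue, resident_pages, resident_blocks, lookahead=8):
    """write_queue: upcoming (x, y) pixel writes; returns DRAM-side commands."""
    commands = []
    for x, y in write_queue[:lookahead]:
        page = (x // PAGE_W, y // PAGE_H)
        block = (page, (x % PAGE_W) // BLK_W, (y % PAGE_H) // BLK_H)
        if page not in resident_pages:
            commands.append(("ACP", page))     # bring the page into its L2$ entry
            resident_pages.add(page)
        if block not in resident_blocks:
            commands.append(("RDB", block))    # bring the block into the L1$
            resident_blocks.add(block)
    return commands

queue = [(1, y) for y in range(10, 20)]        # the vertical vector of Appendix A
print(prefetch_commands(queue, set(), set()))
```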
Although pages can only be fetched to one L2$ entry every 120 ns,
it is possible to fetch pages to different L2$ entries every 40 ns. To
reduce the prefetching latency, banks A, B, C and D are interleaved
in display space horizontally and vertically, as shown in Figure 5,
ensuring that no two pages from the same bank are adjacent horizontally
, vertically, or diagonally. This enables pre-fetching any
neighboring page while rendering into the current page.
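A small sketch of this interleaving, with the checkerboard assignment assumed from Figure 5 and from the bank labels in Appendix A (an illustration, not the chip specification): banks A and B alternate across even page rows and C and D across odd page rows, so no neighboring pages share a bank.

```python
# Assumed page-to-bank checkerboard (80x16-pixel pages): even page rows
# alternate A, B across the screen; odd page rows alternate C, D.

PAGE_W, PAGE_H = 80, 16

def bank_of(x, y):
    col, row = x // PAGE_W, y // PAGE_H
    return "ABCD"[(row % 2) * 2 + (col % 2)]

# A vertical vector (vector d) alternates between two banks every 16 scan
# lines, so the next page can be prefetched while the current one is drawn.
print([bank_of(1, y) for y in range(0, 64, 16)])   # ['A', 'C', 'A', 'C']

# Sanity check: no horizontally, vertically, or diagonally adjacent pages
# come from the same bank.
assert all(bank_of(x, y) != bank_of(x + dx, y + dy)
           for x in range(0, 1280 - PAGE_W, PAGE_W)
           for y in range(0, 1024 - PAGE_H, PAGE_H)
           for dx, dy in ((PAGE_W, 0), (0, PAGE_H), (PAGE_W, PAGE_H)))
```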
As an example, while pixels of vector b in Figure 5 are being rendered
into page 0 of bank A, the pre-fetch of page 0 of bank C can
be in progress. Usually the pre-fetch from C can be started early
enough to avoid idle cycles between the last pixel in page 0 of bank
A and the first pixel in page 0 of bank C.
The key idea is that even for vertical vectors, such as vector d, we
can pre-fetch pages of pixels ahead of the rendering as fast as the
rendering can cross a page. Even though vector c rapidly crosses
three pages, they can still be fetched at a 40ns rate because they are
from three different banks. Appendix A gives a detailed cycle by
cycle example of rendering a 10 pixel vector.

Table 2. FBRAM Performance Limits
                  Average Misses/Prim              Million Prim/sec
                  Quad     Block    Page           Quad Perf   Block Perf   Page Perf
10 Pix Vec         8.75     2.35    0.478          11.4        10.6         17.4
20 Pix Vec        16.4      4.71    0.955           6.10        5.31         8.72
50 Pix Vec        38.9     11.8     2.40            2.57        2.12         3.47
100 Pix Vec       76.7     23.4     4.83            1.30        1.07         1.72
25 Pix Tri        11.6      1.70    0.308           8.62       14.7         27.0
50 Pix Tri        20.2      3.04    0.422           4.95        8.22        19.7
100 Pix Tri       36.1      6.54    0.605           2.77        3.82        13.8
1000 Pix Tri     286.      46.7     4.37            0.350       0.535        1.91
When vectors are chained (vector e), the last pixel of one segment
and the first pixel of the next segment will almost always be in the
same bank and page. Even when segments are isolated, the probability
is 75% that the last pixel of one segment and first pixel of next
segment will be in different banks, thus enabling overlapping of
DRAM bank fetches to L2$.
PIXEL RECTANGLE FILL OPERATIONS
As fast as the FBRAM pixel write rate is, it is still valuable to provide
optimizations for the special case of large rectangle fill. These
specifically include clearing to a constant value or to a repeating
pattern. Fast clearing of the RGBZ planes is required to achieve
high frame rates during double buffered animation.
FBRAM provides two levels of acceleration for rectangle filling of
constant data. Both are obtained by bypassing the bandwidth bottlenecks
shown in Figure 4.
In the first method, once an 8×4 L1$ block has been initialized to a
constant color or pattern, the entire block can be copied repeatedly
to different blocks within the L2$ at global bus transfer rates. This
feature taps the full bandwidth of the global bus, bypassing the pin/
ALU bandwidth bottleneck. Thus regions can be filled at a 4× higher
rate (1.6 billion pixels per second, for a four-way interleaved
frame buffer).
The second method bypasses both the pin/ALU and the Global Bus
bottlenecks, effectively writing 1,280 pixels in one DRAM page cycle
. First, the method described in the previous paragraph is used to
initialize all four pages of the L2$, then these page buffers are rapidly
copied to their DRAM banks at a rate of 40 ns per page. Thus
for large areas, clearing to a constant color or screen aligned pattern
can proceed at a peak rate of 32 billion pixels per second
(0.25 terabytes/sec), assuming a four-way interleaved design.
WINDOW SYSTEM SUPPORT
The most important feature of FBRAM for window system support
is simply its high bandwidth; however two window system specific
optimizations are also included.
Full read-modify-write cycles require two Global Bus transactions:
a prefetching read from the L2$, and copyback write to the L2$.
Most window system operations do not require read-modify-write
cycles when rendering text and simple 2D graphics.
Figure 5. The upper left corner of the frame buffer, showing the vertically and horizontally interleaved banks A-D, pages 0-255, and example primitives a-h.
For such write-
only operations, the number of Global Bus transactions can be cut
in half, improving performance. This is accomplished by skipping
the pre-fetching read of a new block from the L2$ to L1$.
Vertical scrolling is another frequent window system operation accelerated
by FBRAM. This operation is accelerated by performing
the copy internal to the FBRAM. This results in a pixel scroll rate
of up to 400 million pixels per second.
VIDEO OUTPUT
VRAM solved the display refresh bandwidth problem by adding a
second port, but at significant cost in die area. FBRAM also provides
a second port for video (see Figure 2), but at a smaller area penalty.
Like VRAM, FBRAM has a pair of ping-pong video buffers, but
unlike VRAM, they are much smaller in size: 80 pixels each for a
four-way interleaved FBRAM frame buffer vs. 1,280 pixels each
for a five-way interleaved VRAM frame buffer. These smaller buffers
save silicon and enable a rectangular mapping of pages to the
display, but at the price of more frequent video buffer loads.
The FBRAM video buffers are loaded directly from the DRAM
bank page buffers (L2$, 80×16 pixels), selecting one of the 16 scan
lines in the page buffer. The cost of loading a video buffer in both
FBRAM and VRAM is typically 120-200 ns.
To estimate an upper bound for FBRAM video refresh overhead for
a 1280×1024 76 Hz non-interlaced video display, assume that all
rendering operations cease during the 200 ns video buffer load interval.
During each frame, a grand total of 3.28 ms
(200 ns × 1280×1024 pixels / 80 pixels) of video buffer loads are
needed for video refresh. Thus 76 Hz video refresh overhead could
theoretically take away as much as 25% of rendering performance.
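The 25% figure is straightforward arithmetic; the snippet below reproduces it as a worst-case bound that charges every 200 ns buffer load entirely against rendering time.

```python
pixels_per_frame = 1280 * 1024
pixels_per_buffer_load = 80          # one scan-line slice of an 80x16 page
load_time_s = 200e-9
frame_rate_hz = 76

loads = pixels_per_frame / pixels_per_buffer_load       # 16,384 loads per frame
load_time_per_frame = loads * load_time_s                # ~3.28 ms
overhead = load_time_per_frame * frame_rate_hz           # fraction of each second
print(f"{load_time_per_frame * 1e3:.2f} ms/frame, {overhead:.0%} worst-case overhead")
```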
The actual video overhead is only 5-10% for several reasons. First,
the pixel ALU can still access its side of the L1$ during video refresh
, because video transfers access the L2$. Second, although one
of the four banks is affected by video refresh, global bus transfers
to the other three banks can still take place. Finally, it is usually possible
to schedule video transfers so that they do not conflict with
rendering, reducing the buffer load cost from 200 to 120 ns.
For high frame rate displays, the raster pattern of FBRAM video
output refresh automatically accomplishes DRAM cell refresh, imposing
no additional DRAM refresh tax.
FBRAM PERFORMANCE
The model developed in section 10 gave theoretical upper bounds
on the performance of a four-way interleaved FBRAM system. But
to quantify the performance obtainable by any real system built
with FBRAM, a number of other factors must be considered.
First, a 10% derating of the section 10 model should be applied to
account for the additional overhead due to video and content refresh
described in section 13.
The sophistication of the cache prediction and scheduling algorithm
implemented will also affect performance. Equation 1 assumed that
the cache controller achieves complete overlap between the three
data paths; this is not always possible. More detailed simulations
show that aggressive controllers can achieve 75% (before video
tax) of the performance results in table 2.
Taking all of these effects into account, simulations of buildable
four-way interleaved FBRAM systems show sustained rates of 3.3
million 50 pixel Z-buffered triangles per second, and 7 million 10
pixel non-antialiased Z-buffered vectors per second. FBRAM systems
with higher external interleave factor can sustain performances
in the tens of millions of small triangles per second range.
All of our simulations assume that the rest of the graphics system
can keep up with the FBRAM, delivering four RGBZ pixels every
10 ns. While this is a formidable challenge, pixel interpolation and
vertex floating point processing ASICs are on a rapidly improving
performance curve, and should be able to sustain the desired rates.
FBRAM performance can be appreciated by comparing it with the
pixel fill rate of the next generation Pixel Planes rasterizing chips
[16], although FBRAM does not directly perform the triangle
rasterization function. The pixel fill rate for a single FBRAM chip is
only a factor of four less than the peak (256 pixel rectangle) fill rate
of a single Pixel Planes chip, but has 400 times more storage capacity
.
Next let us contrast the read-modify-write performance of FBRAM
to a 60 ns VRAM. Assuming no batching, VRAM page mode requires
in excess of 125 ns to do what FBRAM does in 10 ns; a
12.5× speed difference.
Batching VRAM reads and writes to minimize bus-turns, as described
in section 3, does not help as much as one might think. Typical
VRAM configurations have very few scan lines per page,
which causes fragmentation of primitives, limiting batch sizes. Table
1 shows that for a 320×4 page shape, a 50 pixel triangle touches
3.8 pages, averaging 13 pixels per page. For a five way interleaved
frame buffer, an average of only 2.6 pixels can be batched per chip.
OTHER DRAM OFFSHOOTS
A veritable alphabet soup of new forms of DRAM is at various stages
of development by several manufacturers: CDRAM, DRAM, FBRAM,
RAMBUS, SDRAM, SGRAM, SVRAM, VRAM, and WRAM. For
3D graphics, FBRAM is distinguished as the only technology to directly
support Z-buffering, alpha blending, and ROPs. Only FBRAM converts
read-modify-write operations into pure write operations; this
alone accounts for a 3-4× performance advantage at similar clock rates.
Other than CDRAM, only FBRAM has two levels of cache, and efficient
support of rectangular cache blocks. It is beyond the scope of this
paper to derive precise comparative 3D rendering performance for all
these RAMs, but FBRAM appears to be several times faster than any
of these alternatives.
FUTURES
The demand for faster polygon rendering rates shows no sign of
abating for some time to come. However, as was observed at the
end of section 2, the number of pixels filled per scene is not going
up anywhere near as rapidly. Future increases in pixel resolution,
frame rate, and/or depth complexity are likely to be modest.
Future predictions of where technology is going are at best approximations
, and their use should be limited to understanding trends.
With these caveats in mind, Figure 6 explores trends in polygon
rendering rate demand vs. memory technologies over the next several
years. The figure shows the projected pixel fill rate (including
fragmentation effects) demanded as the polygon rate increases over
time (from the data in [6]). It also displays the expected delivered
pixel fill rates of minimum chip count frame buffers implemented
using FBRAM and VRAM technologies (extrapolating from Equation
1 and from the systems described in [14][5]). The demand
curve is above that achievable inexpensively with conventional
VRAM or DRAM, but well within the range of a minimum chip
count FBRAM system.
The trend curve for FBRAM has a steeper slope because, unlike
VRAM, FBRAM effectively decouples pixel rendering rates from
the inherently slower DRAM single transistor access rates. This
will allow future versions of FBRAM to follow the more rapidly increasing
SRAM performance trends. FBRAM still benefits from
the inherently lower cost per bit of DRAM technology.
The "excess" pixel fill rate shown for FBRAM in Figure 6 combined
with FBRAM's high bit density will permit cost-effective,
one pass, full scene antialiasing using super-sampled frame buffers.
CONCLUSIONS
In the past, the bandwidth demands of video output led to the creation
of VRAM to overcome DRAM's limitations. In recent years,
the demands of faster and faster rendering have exceeded VRAM's
bandwidth. This led to the creation of FBRAM, a new form of random
access memory optimized for Z-buffer based 3D graphics rendering
and window system support. A ten fold increase in Z-buffered
rendering performance for minimum chip count systems is
achieved over conventional VRAM and DRAM. Given statistics on
the pixel fill requirements of the next two generations of 3D graphics
accelerators, FBRAM may remove the pixel fill bottleneck from
3D accelerator architectures for the rest of this century.
ACKNOWLEDGEMENTS
FBRAM is a joint development between SMCC and Mitsubishi
Electric Corporation. The authors would like to acknowledge the
efforts of the entire Mitsubishi team, and in particular K. Inoue, H.
Nakamura, K. Ishihara, Charles Hart, Julie Lin, and Mark Perry.
On the Sun side, the authors would like to thank Mary Whitton, Scott
Nelson, Dave Kehlet, and Ralph Nichols, as well as all the other engineers
who reviewed drafts of this paper.
REFERENCES
1.
Akeley, Kurt and T. Jermoluk. High-Performance Polygon
Rendering, Proceedings of SIGGRAPH '88 (Atlanta, GA, Aug
1-5, 1988). In Computer Graphics 22, 4 (July 1988), 239-246.
2.
Akeley, Kurt. Reality Engine Graphics. Proceedings of SIGGRAPH
`93 (Anaheim, California, August 1-6, 1993). In
Computer Graphics, Annual Conference Series, 1993, 109-116
.
3.
Catmull, E. A Subdivision Algorithm for Computer Display of
Curved Surfaces, Ph.D. Thesis, Report UTEC-CSc-74-133,
Computer Science Dept., University of Utah, Salt Lake City,
UT, Dec. 1974.
Figure 6. Pixel fill rate needed to match anticipated triangle fill rate demand (1993-2001), compared with the anticipated pixel fill rate delivered by minimum chip count FBRAM and VRAM systems.
4.
Deering, Michael, S. Winner, B. Schediwy, C. Duffy and N.
Hunt. The Triangle Processor and Normal Vector Shader: A
VLSI system for High Performance Graphics. Proceedings of
SIGGRAPH '88 (Atlanta, GA, Aug 1-5, 1988). In Computer
Graphics 22, 4 (July 1988), 21-30.
5.
Deering, Michael, and S. Nelson. Leo: A System for Cost Effective
Shaded 3D Graphics. Proceedings of SIGGRAPH `93
(Anaheim, California, August 1-6, 1993). In Computer
Graphics, Annual Conference Series, 1993, 101-108.
6.
Deering, Michael. Data Complexity for Virtual Reality:
Where do all the Triangles Go? Proceedings of IEEE VRAIS
`93 (Seattle, WA, Sept. 18-22, 1993). 357-363.
7.
Demetrescu, S. A VLSI-Based Real-Time Hidden-Surface
Elimination Display System, Master's Thesis, Dept. of Computer
Science, California Institute of Technology, Pasadena
CA, May 1980.
8.
Demetrescu, S. High Speed Image Rasterization Using Scan
Line Access Memories. Proceedings of 1985 Chapel Hill Conference
on VLSI, pages 221-243. Computer Science Press,
1985.
9.
Fuchs, Henry, and J. Poulton. Pixel Planes: A VLSI-Oriented
Design for a Raster Graphics Engine. In VLSI Design, 2,3
(3rd quarter 1981), 20-28.
10. Foley, James, A. van Dam, S. Feiner and J Hughes. Computer
Graphics: Principles and Practice, 2nd ed., Addison-Wesley
, 1990.
11. Gharachorloo, Nader, S. Gupta, E. Hokenek, P. Bala-subramanina
, B. Bogholtz, C. Mathieu, and C. Zoulas.
Subnanosecond Rendering with Million Transistor Chips.
Proceedings of SIGGRAPH '88 (Boston, MA, July 31, Aug 4,
1989). In Computer Graphics 22, 4 (Aug. 1988), 41-49.
12. Gharachorloo, Nader, S. Gupta, R. Sproull, and I. Sutherland
. A Characterization of Ten Rasterization Techniques.
Proceedings of SIGGRAPH '89 (Boston, MA, July 31, Aug 4,
1989). In Computer Graphics 23, 3 (July 1989), 355-368.
13. Goris, A., B. Fredrickson, and H. Baeverstad. A Config-urable
Pixel Cache for Fast Image Generation. In IEEE CG&A
7,3 (March 1987), pages 24-32, 1987.
14. Harrell, Chandlee, and F. Fouladi. Graphics Rendering Architecture
for a High Performance Desktop Workstation. Proceedings
of SIGGRAPH `93 (Anaheim, California, August 1-6, 1993). In Computer Graphics, Annual Conference Series,
1993, 93-100.
15. M5M410092 FBRAM Specification. Mitsubishi Electric,
1994.
16. Molnar, Steven, J. Eyles, J. Poulton. PixelFlow: High-Speed
Rendering Using Image Composition. Proceedings of SIGGRAPH
'92 (Chicago, IL, July 26-31, 1992). In Computer
Graphics 26, 2 (July 1992), 231-240.
17. Patterson, David, and J. Hennessy. Computer Architecture:
a Quantitative Approach, Morgan Kaufmann Publishers, Inc.,
1990.
18. Pinkham, R., M. Novak, and K. Guttag. Video RAM Excels
at Fast Graphics. In Electronic Design 31,17, Aug. 18, 1983,
161-182.
19. Sproull, Robert, I. Sutherland, and S. Gupta. The 8 by 8
Display. In ACM Transactions on Graphics 2, 1 (Jan 1983),
35-56.
20. Whitton, Mary. Memory Design for Raster Graphics Displays
. In IEEE CG&A 4,3 (March 1984), 48-65, 1984.
APPENDIX A: Rendering a 10 pixel Vector
This appendix demonstrates the detailed steps involved in scheduling
FBRAM rendering, using the 10 pixel long, one pixel wide vertical
Z-buffered vector shown in Figure 7. This figure shows the
memory hierarchy elements touched by the vector at three levels of
detail: the coarsest (left most) shows banks (A..D) and pages
(0..255), the intermediate detail (middle) shows blocks in the L2$,
and the finest (right most) shows pixel quads.
The example vertical vector starts at x=1, y=10, and ends at y=19.
Table 3 gives the bank, page, L2$ block, and quad for each pixel in
the vector. Note the spatial locality of pixels.
Table 4 below shows the schedule of commands and data issued to
the FBRAM, and the resulting internal activities. Note that independent
controls are available, and permit parallel L1$ and L2$ activities
. The following abbreviations are used in Table 4:
L1$[n]: Block n of the L1$.
L2$[n]: Block n of the L2$.
ACP: Access page (DRAM to L2$ transfer).
RDB: Read block (L2$ to L1$ transfer).
MWB: Masked write block (L1$ to L2$ transfer).
PRE: Precharge bank (free L2$ entry).
read x: Read pixel x from L1$ to ALU.
write x: Write pixel x from ALU to L1$.
Figure 7. A 10 pixel vector near the upper left corner of the screen, shown at 3 levels of detail (bank:page, L2$ block, and pixel quad).
We follow the first pixel at (1, 10) through the cache hierarchy. The
pixel's page (page 0 of bank A) is transferred to the L2$ entry A in
cycles 1 to 4 (notice that the next 5 pixels are transferred too). The
pixel's block is then transferred from L2$ entry A to L1$[0] in cycles
5 and 6 (the next pixel is transferred too). The pixel is read from
the L1$[0] to the pipelined ALU in cycle 7. The old and new pixels
are merged (Z-buffered, blended) during cycles 8 to 11. The resulting
pixel is written back to the L1$[0] in cycle 12. The pixel's block
is transferred from the L1$[0] back to the L2$ entry A (and DRAM
page 0 of bank A) in cycles 14 and 15.
The second pixel at (1,11) hits in both L1$ and L2$, and can follow
one cycle behind the first pixel, arriving back in the L1$ in cycle 13.
The pixel at (1,12) misses in the L1$, but hits in the L2$, requiring
an RDB from L2$ entry A to L1$[1]. The pixel at (1,16) misses in
both caches, requiring a L2$ access of bank C, and followed by a
transfer from L2$ entry C to L1$[2]. All the other pixels hit in both
caches, and are scheduled like the second pixel.
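The hit/miss pattern of Table 3 follows mechanically from the cache shapes; the sketch below replays the ten pixels under assumed mappings (80×16-pixel pages assigned to banks as in Figure 5, 8×4-pixel L1$ blocks, one L2$ entry per bank) and reports the same L2$/L1$ hit and miss sequence. Page and block identifiers are coordinates here rather than the chip's internal numbering.

```python
PAGE_W, PAGE_H, BLK_W, BLK_H = 80, 16, 8, 4

def bank_of(x, y):                      # assumed interleave (see Figure 5)
    return "ABCD"[((y // PAGE_H) % 2) * 2 + ((x // PAGE_W) % 2)]

l2 = {}        # bank -> page currently held in that bank's L2$ entry
l1 = set()     # (bank, page, block) blocks currently held in the L1$

for y in range(10, 20):                 # the 10 pixel vector at x = 1
    x = 1
    bank = bank_of(x, y)
    page = (x // PAGE_W, y // PAGE_H)
    block = ((x % PAGE_W) // BLK_W, (y % PAGE_H) // BLK_H)
    l2_hit = l2.get(bank) == page
    l1_hit = (bank, page, block) in l1
    print(f"(1,{y:2d}) bank {bank}: L2$ {'hit' if l2_hit else 'miss'}, "
          f"L1$ {'hit' if l1_hit else 'miss'}")
    l2[bank] = page                     # fill on miss (prefetch not modeled)
    l1.add((bank, page, block))
```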
Table 3. Bank, Page, L2$ Block, and Quad for each pixel in the vector
X   Y    Bank   Page   L2$ Block   Quad   L2$    L1$
1   10    A      0        2         4     miss   miss
1   11                              6     hit    hit
1   12                    3         0     hit    miss
1   13                              2     hit    hit
1   14                              4     hit    hit
1   15                              6     hit    hit
1   16    C      0        0         0     miss   miss
1   17                              2     hit    hit
1   18                              4     hit    hit
1   19                              6     hit    hit
Table 4. Schedule of operations for rendering a 10 pixel vector. Over cycles 1-23, the L2$ command and activity rows show the page accesses (ACP) of page 0 of banks A and C, the block transfers (RDB and MWB) between L2$ entries A and C and L1$[0]-L1$[2], and a final precharge (PRE) of bank A, while the L1$ command and activity rows show the per-quad reads into the ALU, the merges (Z-buffer, blend), and the writes back to L1$[0]-L1$[2]; the outermost rows of the table give the
Commands and Data to FBRAM | FBRAM;Video output bandwidth;Dynamic random access memory;RGBa blend;dynamic memory;DRAM;Rendering rate;Z buffer;rendering;graphics;caching;memory;Dynamic memory chips;Pixel processing;3D graphics hardware;Acceleration;3D graphics;Z-buffer;Optimisation;Z-compare;pixel caching;VRAM;FBRam;Z-buffering;parallel graphics algorithms;Video buffers;pixel processing;Pixel Cache;Frame buffer;SRAM;Caches |
94 | Focused Named Entity Recognition Using Machine Learning | In this paper we study the problem of finding most topical named entities among all entities in a document, which we refer to as focused named entity recognition. We show that these focused named entities are useful for many natural language processing applications, such as document summarization , search result ranking, and entity detection and tracking. We propose a statistical model for focused named entity recognition by converting it into a classification problem . We then study the impact of various linguistic features and compare a number of classification algorithms. From experiments on an annotated Chinese news corpus, we demonstrate that the proposed method can achieve near human-level accuracy. | INTRODUCTION
With the rapid growth of online electronic documents,
many technologies have been developed to deal with the
enormous amount of information, such as automatic summarization
, topic detection and tracking, and information
retrieval. Among these technologies, a key component is to
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
SIGIR'04, July 2529, 2004, Sheffield, South Yorkshire, UK.
Copyright 2004 ACM 1-58113-881-4/04/0007 ...
$
5.00.
identify the main topics of a document, where topics can be
represented by words, sentences, concepts, and named entities
. A number of techniques for this purpose have been
proposed in the literature, including methods based on position
[3], cue phrases [3], word frequency, lexical chains [1]
and discourse segmentation [13]. Although word frequency
is the easiest way to represent the topics of a document,
it was reported in [12] that position methods produce better
results than word counting based methods.
Important sentence extraction is the most popular method
studied in the literature. A recent trend in topic sentence
extraction is to employ machine learning methods. For example
, trainable classifiers have been used in [8, 21, 5, 11]
to select sentences based on features such as cue phrase,
location, sentence length, word frequency and title, etc.
All of the above methods share the same goal of extracting
important sentences from documents. However, for topic
representation, sentence-level document summaries may still
contain redundant information. For this reason, other representations
have also been suggested. For example, in [17],
the authors used structural features of technical papers to
identify important concepts rather than sentences. The authors
of [9] presented an efficient algorithm to choose topic
terms for hierarchical summarization according to a proba-bilistic
language model. Another hybrid system, presented
in [7], generated summarizations with the help of named entity
foci of an article. These named entities include people,
organizations, places, and untyped names.
In this paper, we study the problem of finding important
named entities from news articles, which we call focused
named entity recognition. A news article often reports
an event that can be effectively summarized by the five W
(who, what, when, where, and why) approach. Many of the
five W's can be associated with appropriate named entities
in the article. Our definition of focused named entities is
mainly concerned with Who and What. Therefore it is almost
self-evident that the concept of focused named entity
is important for document understanding and automatic information
extraction. In fact, a number of recent studies
have already suggested that named entities are useful for
text summarization [15, 4, 7, 16]. Moreover, we shall illustrate
that focused named entities can be used in other text
processing tasks as well. For example, we can rank search
results by giving more weights to focused named entities.
We define focused named entities as named entities that
are most relevant to the main topic of a news article. Our
task is to automatically select these focused named entities
from the set of all entities in a document. Since focused
named entity recognition is a newly proposed machine learning
task, we need to determine whether it is well-posed.
That is, whether there exists a sufficient level of agreement
on focused named entities among human reviewers.
A detailed study on this matter will be reported in Section
5.2. The conclusion of our study is that there is indeed
a sufficient level of agreement. Encouraged by this study,
we further investigated the machine learning approach to
this problem, which is the focus of the paper. We discuss
various issues encountered in the process of building a machine
learning based system, and show that our method can
achieve near human performance.
The remainder of this paper is organized as follows. In
Section 2 we introduce the problem of focused named entity
recognition and illustrate its applications. Section 3 describes
a general machine learning approach to this problem.
In Section 4, we present features used in our system. Section
5 presents a study of human-level agreement on focused
named entities, and various experiments which illustrate the
importance of different features. Some final conclusions will
be given in section 6.
THE PROBLEM
Figure 1 is an example document.1 This article reports
that Boeing Company would work with its new Research
and Technology Center to develop a new style of electric
airplane. On the upper half of the page, we list all named
entities appearing in the article and mark the focused entities
. Among the twelve named entities, "Boeing Company"
and its "Research and Technology Center" are most relevant
to the main topic. Here we call "Boeing Company" and "Research
and Technology Center" the focuses. Clearly, focused
named entities are important for representing the main topic
of the content. In the following, we show that the concept
of focused named entity is useful for many natural language
processing applications, such as summarization, search ranking
and topic detection and tracking.
2.1
Using Focused Named Entity for
Summarization
We consider the task of automatic summarization of the
sample document in Figure 1. A traditional method is to select
sentences with highest weights, where sentence weights
are calculated by averaging term frequencies of words it contains
.
The resulting summarization is given in Figure 2.
Using focused named entities, we consider two methods to
refine the above summarization. The first method is to increase
the weight of the focused named entity "Boeing" in
the sentences, leading to the summary in Figure 3. The
other method simply picks sentences containing the focused
named entity "Boeing" as in Figure 4. From this example,
we can see that summarization using focused named entities
gives more indicative description of an article.
2.2
Using Focused Named Entity for Ranking
Search Results
Suppose we want to find news about World Cup football
match from a collection of news articles. First we search
1 The original article can be accessed at http://www.boeing.com/news/releases/2001/q4/nr 011127a.html.
Figure 1: Sample document with focused named entities
marked
Boeing To Explore Electric Airplane
Fuel cells and electric motors will not replace
jet engines on commercial transports, but they could
one day replace gas turbine auxiliary power units.
Unlike a battery, which needs to be recharged,
fuel cells keep working as long as the fuel lasts.
"Fuel cells show the promise of one day providing
efficient, essentially pollution-free electrical power
for commercial airplane primary electrical power
needs," Daggett said.
Figure 2: Summary using term frequency weighting
Boeing To Explore Electric Airplane
Boeing Commercial Airplanes will develop and test an electrically powered demonstrator airplane as
part of a study to evaluate environmentally friendly
fuel cell technology for future Boeing products.
Fuel cells and electric motors will not replace
jet engines on commercial transports, but they could
one day replace gas turbine auxiliary power units.
"By adapting this technology for aviation, Boeing
intends to demonstrate its leadership in the
pursuit of delivering environmentally preferred products
."
Figure 3: Summary weighted by focused named entities
Boeing To Explore Electric Airplane
Boeing Commercial Airplanes will develop and test an electrically powered demonstrator airplane as
part of a study to evaluate environmentally friendly
fuel cell technology for future Boeing products.
The airplane manufacturer is working with Boeing's
new Research and Technology Center in Madrid,
Spain, to modify a small, single-engine airplane by
replacing its engine with fuel cells and an electric
motor that will turn a conventional propeller.
Boeing Madrid will design and integrate the experimental
airplane's control system.
"By adapting this technology for aviation, Boeing
intends to demonstrate its leadership in the
pursuit of delivering environmentally preferred products
."
Figure 4: Summary using sentences containing focused
named entities
for documents containing the key phrase "World Cup". The
ranking function, which determines which document is more
relevant to the query, is very important to the search quality.
Since our query is a single phrase, the ranked search results
, displayed in Table 1, are based on the term frequency
of the phrase "World Cup". It is clear that without deeper
text understanding, term frequency is a quite reasonable
measure of relevancy. However, although some articles may
contain more "World Cup" than others, they may actually
focus less on the World Cup event in which we are interested.
Therefore a better indication of document relevancy
is whether a document focuses on the entity we are interested
in. A simple method is to re-order the search results
first by whether the query entity is focused or not, and then
by its term-frequency. It is quite clear that this method
gives higher quality ranking.
In this example, we use a Chinese corpus for demonstration,
so the original search results are in Chinese, which we
have translated into English for reading convenience.
2.3
Other Uses of Focused Named Entity
We believe that focused named entities are also helpful in
text clustering and categorization tasks such as topic detection
and tracking. This is because if focused named entities
are automatically recognized, then the event for each document
can be described more precisely. Since focused named
entities characterize what an article talks about, it is natural
to organize articles based on them. Therefore by giving
more weights to focused named entities, we believe that we
can potentially obtain better quality clustering and more
accurate topic detection and tracking.
Our study of the focused named entity recognition problem
is motivated by its potential applications as illustrated
above. Experiments in section 5.2 indicate that there is a
sufficient agreement on focused named entities among human
reviewers. Therefore our goal is to build a system that
can automatically detect focused named entities among all
named entities in a document. We shall mention that although
this paper only studies named entities, the basic idea
can be extended to tasks such as finding important words and
noun phrases in a document.
LEARNING BASED FOCUSED NAMED ENTITY RECOGNITION
Focused named entity recognition can be regarded as a
binary classification problem. Consider the set of all named
entities in a document extracted by a named entity recognition
system. Each entity in this set can be labeled yes if it
is a focused entity, or no if it is not. We formally define a
two-class categorization problem as one to determine a label
y ∈ {-1, 1} associated with a vector x of input variables.
However, in order to build a successful focused named entity
extractor, a number of issues have to be studied. First,
named entities that refer to the same person or organization
need to be grouped together; second, what features are
useful; and third, how well different learning algorithms
perform on this task. These issues will be carefully studied.
3.1
Coreference Resolution
Coreference is a common phenomenon in natural language.
It means that an entity can be referred to in different ways
and in different locations of the text. Therefore for focused
named entity recognition, it is useful to apply a coreference
resolution algorithm to merge entities with the same referents
in a given document. There are different kinds of
coreference according to the basic coreference types, such
as pronominal coreference, proper name coreference, apposition
, predicate nominal, etc. Here in our system, we only
consider proper name coreference, which is to identify all
variations of a named entity in the text.
Although it is possible to use machine learning methods
for coreference resolution (see [20] as an example), we shall
use a simpler scheme, which works reasonably well. Our
coreference resolution method can be described as follows.
1. Partitioning: The set of named entities is divided into
sub-sets according to named entity types, because coreference
only occurs among entities with the same types.
2. Pair-wise comparison: Within each sub-set, pair-wise
comparison is performed to detect whether each entity-pair
is an instance of coreference. In this study, we use
a simple algorithm which is based on string-matching
only. Since we work with Chinese data, we split each
entity into single Chinese characters. We study two
different schemes here: using either exact string matching
or partial string matching to decide coreference.
In the case of exact string matching, two entities are
considered to be a coreference pair only when they
are identical. In the case of partial string matching, if
characters in the shorter entity form a (non-consecutive)
sub-string of the longer entity, then the two entities are
considered to be a coreference pair.
3. Clustering: Merge all coreference pairs created in the
second step into the same coreference chains. This step
can also be done differently. For example, by using a
sequential clustering method.
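As an illustration only, the following sketch implements the three steps with plain string matching; the paper operates on Chinese characters, while this toy example uses ordinary characters, and the entity names and types are made up.

```python
from itertools import combinations

def is_subsequence(short, long):
    """True if the characters of `short` occur in `long` in order."""
    it = iter(long)
    return all(ch in it for ch in short)

def coreference_chains(entities, partial=True):
    """entities: list of (name, type) pairs; returns chains as sets of names."""
    chains = []
    for etype in {t for _, t in entities}:               # 1. partition by type
        names = [n for n, t in entities if t == etype]
        parent = {n: n for n in names}
        def find(n):
            while parent[n] != n:
                n = parent[n]
            return n
        for a, b in combinations(names, 2):              # 2. pair-wise comparison
            short, long = sorted((a, b), key=len)
            same = is_subsequence(short, long) if partial else short == long
            if same:
                parent[find(a)] = find(b)                # record coreference pair
        groups = {}                                      # 3. merge into chains
        for n in names:
            groups.setdefault(find(n), set()).add(n)
        chains.extend(groups.values())
    return chains

print(coreference_chains([("Boeing Company", "ORG"), ("Boeing", "ORG"),
                          ("Research and Technology Center", "ORG"),
                          ("Madrid", "LOC")]))
```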
Although the coreference resolution algorithm described
above is not perfect, it is not crucial since the results will
Table 1: Search result of "World Cup"
focus/not
tf
title
focus
20
Uncover the Mystery of World Cup Draws
focus
11
Brazil and Germany Qualified, Iran Kicked out
focus
9
Preparing for World Cup, China Football Federation and Milutinovic Snatch the Time
focus
6
Sun Wen Understands the Pressure Milutinovic and China Team Faced
focus
5
Korea Leaves More Tickets to China Fans
focus
4
Paraguay Qualified, but Head Coach Dismissed
no
4
LiXiang: Special Relationships between Milutinovic and I
no
3
Three Stars on Golden Eagle Festival
focus
3
Adidas Fevernova, the Official 2002 FIFA World Cup Ball, Appears Before the Public in Beijing
no
2
China's World Top 10 Start to Vote
focus
2
Qualified vs. Kicked out: McCarthy Stays on, Blazevic Demits
focus
2
China Attends Group Match in Korea, But not in the Same Group With Korea
no
2
Don't Scare Peoples with Entering WTO
no
1
Kelon Tops China's Home Appliance Industry in CCTV Ads Bidding
no
1
Lou Lan: Great Secrets Behind
focus
1
Australia Beats Uruguay by One Goal
no
1
Chang Hong's "King of Precision Display": Good Friends of Football Fans
be passed to a machine learning algorithm in a later stage,
which can offset the mistakes made in the coreference stage.
Our experiment shows that by using coreference resolution,
the overall system performance can be improved appreciably
.
3.2
Classification Methods
In this paper, we compare three methods: a decision tree
based rule induction system, a naive Bayes classifier, and a
regularized linear classification method based on robust risk
minimization.
3.2.1
Decision Tree
In text-mining application, model interpretability is an
important characteristic to be considered in addition to the
accuracy achieved and the computational cost. The requirement
of interpretability can be satisfied by using a rule-based
system, such as rules obtained from a decision tree. Rule-based
systems are particularly appealing since a person can
examine the rules and modify them. It is also much easier
to understand what a system does by examining its rules.
We shall thus include a decision tree based classifier in this
study. In a typical decision tree training algorithm, there are
usually two stages. The first stage is tree growing where a
tree is built by greedily splitting each tree node based on
a certain figure of merit. However after the first stage, the
tree can overfit the training data, therefore a second stage
involving tree pruning is invoked. In this stage, one removes
overfitted branches of the tree so that the remaining portion
has better predictive power. In our decision tree package,
the splitting criteria during tree growth is similar to that
of the standard C4.5 program [18], and the tree pruning is
done using a Bayesian model combination approach. See [6]
for detailed description.
3.2.2
Naive Bayes
Another very popular binary classification method is naive
Bayes. In spite of its simplicity, it often achieves reasonable
performance in practical applications. It can be regarded as
a linear classification method, where we seek a weight vector
w and a threshold θ such that w^T x < θ if its label y = -1
and w^T x ≥ θ if its label y = 1. A score of value w^T x - θ can
be assigned to each data point as a surrogate for the
likelihood of x to be in class.
In this work, we adopt the multinomial model described
in [14]. Let {(x_1, y_1), ..., (x_n, y_n)} be the set of training
data. The linear weight w is given by w = w_1 - w_{-1}, and
θ = θ_1 - θ_{-1}. Denote by x_{i,j} the j-th component of the
data vector x_i; then the j-th component w^c_j of w^c (c = ±1)
is given by

    w^c_j = log( (λ + Σ_{i: y_i = c} x_{i,j}) / (λ d + Σ_{j=1}^{d} Σ_{i: y_i = c} x_{i,j}) ),

and θ_c (c = ±1) is given by

    θ_c = - log( |{i : y_i = c}| / n ).

The parameter λ > 0 in the above formulation is a smoothing
(regularization) parameter. [14] fixed λ to be 1, which
corresponds to the Laplacian smoothing.
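The estimates above translate directly into a few lines of code. The sketch below computes w and θ exactly as defined, with λ = 1 (Laplacian smoothing) by default; the feature matrix is a made-up toy example, not data from the paper.

```python
import numpy as np

def train_multinomial_nb(X, y, lam=1.0):
    """X: count features (n x d); y: labels in {-1, +1}. Returns (w, theta)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    n, d = X.shape
    w, theta = {}, {}
    for c in (+1, -1):
        Xc = X[y == c]
        counts = Xc.sum(axis=0)                          # sum_i x_{i,j} over class c
        w[c] = np.log((lam + counts) / (lam * d + counts.sum()))
        theta[c] = -np.log(len(Xc) / n)
    return w[+1] - w[-1], theta[+1] - theta[-1]

def predict(w, theta, x):
    return 1 if np.dot(w, x) >= theta else -1

X = [[3, 1, 0], [2, 0, 1], [0, 2, 4], [1, 0, 5]]         # toy count vectors
y = [+1, +1, -1, -1]
w, theta = train_multinomial_nb(X, y)
print(predict(w, theta, [2, 1, 0]))                      # -> 1
```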
3.2.3
Robust Risk Minimization Method
Similar to naive Bayes, this method is also a linear prediction
method.
Given a linear model p(x) = w^T x + b,
we consider the following prediction rule: predict y = 1 if
p(x) ≥ 0, and predict y = -1 otherwise. The classification
error (we shall ignore the point p(x) = 0, which is assumed
to occur rarely) is

    I(p(x), y) = 1 if p(x) y ≤ 0, and 0 if p(x) y > 0.
A very natural way to compute a linear classifier is by finding
a weight (ŵ, b̂) that minimizes the average classification
error in the training set:

    (ŵ, b̂) = arg min_{w,b} (1/n) Σ_{i=1}^{n} I(w^T x_i + b, y_i).
Unfortunately this problem is typically NP-hard computationally.
It is thus desirable to replace the classification error
loss I(p, y) with another formulation that is computationally
more desirable. Large margin methods such as SVM
employ modified loss functions that are convex. Many loss
functions work well for related classification problems such
as text-categorization [24, 10].
The specific loss function we consider here is

    h(p, y) = -2py               if py < -1,
              (1/2)(py - 1)^2    if py ∈ [-1, 1],
              0                  if py > 1.
That is, our linear weights are computed by minimizing the
following average loss on the training data:

    (ŵ, b̂) = arg min_{w,b} (1/n) Σ_{i=1}^{n} h(w^T x_i + b, y_i).
This method, which we refer to as RRM (robust risk minimization
), has been applied to linguistic processing [23] and
text categorization [2] with good results. Detailed algorithm
was introduced in [22].
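The detailed training procedure is in [22]; purely to make the objective concrete, the sketch below minimizes the average of the loss h by plain batch gradient descent on a made-up toy data set. It is not the algorithm used in the experiments.

```python
import numpy as np

def h_grad(p, y):
    """Derivative dh/dp of the piecewise loss h(p, y) defined above."""
    py = p * y
    if py < -1:
        return -2.0 * y
    if py <= 1:
        return (py - 1.0) * y
    return 0.0

def train_rrm(X, y, lr=0.1, epochs=200):
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        gw, gb = np.zeros(d), 0.0
        for xi, yi in zip(X, y):
            g = h_grad(np.dot(w, xi) + b, yi)
            gw += g * xi
            gb += g
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

X = [[1.0, 0.0], [0.9, 0.2], [0.0, 1.0], [0.1, 0.8]]     # toy data
y = [+1, +1, -1, -1]
w, b = train_rrm(X, y)
print(np.sign(np.asarray(X) @ w + b))                    # [ 1.  1. -1. -1.]
```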
FEATURES
We assume that named entities are extracted by a named
entity recognition system. Many named entity recognition
techniques have been reported in the literal, most of them
use machine learning approach. An overview of these methods
can be found in [19]. In our system, for the purpose of
simplicity, we use human annotated named entities in the
experiments. In the learning phase, each named entity is
considered as an independent learning instance. Features
must reflect properties of an individual named entity, such
as its type and frequency, and various global statistical measures
either at the document scale or at the corpus scale.
This section describes features we have considered in our
system, our motivations, and how their values are encoded.
4.1
Entity Type
Four entity types are defined: person, organization, place,
and proper nouns. The type of a named entity is a very
useful feature. For example, person and organization are
more likely to be the focus than a place. Each entity type
corresponds to a binary feature-component in the feature
vector, taking a value of either one or zero. For example,
a person type is encoded as [1 0 0 0], and an organization
type is encoded as [0 1 0 0].
4.2
In Title or Not
Whether a named entity appears in the title or not is an
important indicator of whether it is a focused entity. This
is because title is a concise summary of what an article is
about. The value of this feature is binary (0 or 1).
4.3
Entity Frequency
This feature is the number of times that the named entity
occurs in the document. Generally speaking, the more frequently
it occurs, the more important it is. The value of this
feature is just the frequency of the named entity.
4.4
Entity Distribution
This feature is somewhat complicated. The motivation is
that if a named entity occurs in many different parts of a
document, then it is more likely to be an important entity.
Therefore we use the entropy of the probability distribution
that measures how evenly an entity is distributed in a document
.
Consider a document which is divided into m sections.
Suppose that each named entity's probability distribution is
given by {p_1, ..., p_i, ..., p_m}, where
p_i = (occurrences in the i-th section) / (total occurrences in the document).
The entropy of the named entity distribution is computed
by entropy = - Σ_{i=1}^{m} p_i log p_i. In our experiments, we select
m = 10. This feature contributes a real valued feature-component
to the feature vector.
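A small sketch of this feature (the minus sign is the conventional one for entropy, so an entity spread evenly over the document scores high and a clustered one scores near zero); the positions and document length are made up.

```python
import math

def distribution_entropy(positions, doc_length, m=10):
    """positions: offsets of one entity's mentions within the document."""
    counts = [0] * m
    for pos in positions:
        counts[min(m - 1, pos * m // doc_length)] += 1
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in probs)

print(distribution_entropy([10, 480, 900, 1400, 1990], doc_length=2000))  # spread out
print(distribution_entropy([10, 20, 40, 55, 80], doc_length=2000))        # clustered -> 0.0
```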
4.5
Entity Neighbor
The context in which a certain named entity appears is
quite useful. In this study, we only consider a simple feature
which counts its left and right neighboring entity types.
If several named entities of the same type are listed side by
side, then it is likely that the purpose is for enumeration, and
the listed named entities are not important. Each neighboring
side has five possible types -- four named entity types
plus a normal-word (not a named entity) type. For example,
consider a person mentioned three times in the document.
Among the three mentions, the left neighbors are two person
names and one common word, and the right neighbors are
one place name and two common words. Then the entity
neighbor feature components are [2 0 0 0 1 0 0 1 0 2].
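The encoding of this example can be written down directly; the sketch below counts left and right neighbor types over the five categories (the type labels are illustrative) and reproduces the vector above.

```python
TYPES = ["PER", "ORG", "LOC", "PROPER", "WORD"]   # four entity types + common word

def neighbor_feature(mentions):
    """mentions: list of (left_type, right_type) pairs for one entity."""
    left, right = [0] * len(TYPES), [0] * len(TYPES)
    for l, r in mentions:
        left[TYPES.index(l)] += 1
        right[TYPES.index(r)] += 1
    return left + right

# Two person-name left neighbors and one common word; one place-name right
# neighbor and two common words.
print(neighbor_feature([("PER", "LOC"), ("PER", "WORD"), ("WORD", "WORD")]))
# -> [2, 0, 0, 0, 1, 0, 0, 1, 0, 2]
```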
4.6
First Sentence Occurrence
This feature is inspired by the position method [3, 12] in
sentence extraction. Its value is the occurrences of the entity
appearing in the beginning sentence of a paragraph.
4.7
Document Has Entity in Title or Not
This feature indicates whether any entity exists in the title
of the document, and thus takes binary value of 0 or 1.
4.8
Total Entity Count
This feature is the total number of entities in the document
, which takes integer value. The feature reflects the
relative importance of an entity in the entity set.
4.9
Document Frequency in the Corpus
This is a corpus level feature. If a named entity has a low
frequency in the document collection, but relatively high
frequency in the current document, then it is likely to be a
focused entity. When this feature is used, the term frequency
feature in section 4.3 will be computed using (tf/docsize) * log(N/df),
where N is the total number of documents in the corpus and df is the number of documents that a
named entity occurs in.
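A one-line sketch of this corpus-level weighting, reading N as the total number of documents in the collection; the example numbers are made up.

```python
import math

def corpus_weighted_tf(tf, docsize, df, N):
    return (tf / docsize) * math.log(N / df)

# An entity mentioned 5 times in a 400-token article but present in only
# 3 of 1,325 documents outweighs one that appears in half the collection.
print(corpus_weighted_tf(tf=5, docsize=400, df=3, N=1325))
print(corpus_weighted_tf(tf=5, docsize=400, df=662, N=1325))
```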
EXPERIMENTS
In this section, we study the following issues: corpus annotation
, human-level agreement on focused named entities,
performance of machine learning methods compared with a
baseline, influence of different features, and the impact of
coreference module to the overall performance.
5.1
Corpus Annotation
We select fifteen days of Beijing Youth Daily news in
November 2001 as our testing corpus, which contains 1,325
articles. The text, downloaded from http://bjyouth.ynet.com,
is in Chinese. The articles belong to a variety of categories,
including politics, economy, laws, education, science, entertainments
, and sports.
Since different people may have different opinions on the
focused named entities, a common set of rules should be
agreed upon before the whole corpus is to be annotated.
We use the following method to come up with a general
guideline for annotating focused named entities.
First, the named entities in each document were annotated
by hand. Then, we selected twenty documents from
the corpus and invited twelve people to mark focused named
entities. Nine of the twelve people are experts in natural
language processing, so their opinions are very valuable for
defining focused named entities. Based on the survey result,
entities marked by more than half of the survey participants
were defined as focused named entities. We obtained fifty
focused named entities for the twenty articles. By studying
the focused named entities in those articles, we were able to
design specifications for focused named entity annotation.
The whole corpus was then marked according to the specifications.
5.2 Human Agreement Statistics
In our survey, fifty entities were identified as focused entities
out of the total of 341 entities in the 20 documents.
Table 2 shows that, among the 50 focused entities, 5
entities are agreed as focused by all 12 persons, 7 entities
are agreed by 11 persons, etc.
Table 2: Human agreement statistics

  num of focused named entities    5    7    5    8    7   10    8
  num of persons agreeing         12   11   10    9    8    7    6
Let N_k denote the number of persons agreeing on
focused named entity k; then the human agreement level
Agree_k on the k-th focused named entity is Agree_k = N_k / 12.
The average agreement on the 50 focused named entities
is Average_Agree = (Σ_{k=1}^{50} Agree_k) / 50 = 72.17%, with variance
2.65%. We also computed the precision and the recall for the
survey participants with respect to the fifty focused named
entities.
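For completeness, the per-annotator scores reported in Table 3 can be computed as sketched below; the set-based representation of the annotations is an assumption made for the example.

```python
def annotator_scores(marked, gold):
    """Precision, recall and F1 of one annotator's marked entities
    against the agreed set of focused named entities (gold)."""
    marked, gold = set(marked), set(gold)
    tp = len(marked & gold)
    precision = tp / len(marked) if marked else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return 100 * precision, 100 * recall, 100 * f1
```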
Table 3 shows that the best human annotator
achieves an F1 measure of 81.32%. Some of the participants
marked either too many or too few named entities,
and thus had much lower performance numbers. This problem
was fixed when the whole corpus was annotated using
specifications induced from this small-scale experiment.

Table 3: Human annotation performance

  user id   precision   recall   F1
  1         90.24       74.00    81.32
  2         86.05       74.00    79.57
  3         83.33       70.00    76.09
  4         84.21       64.00    72.73
  5         96.55       56.00    70.89
  6         90.63       58.00    70.73
  7         71.74       66.00    68.75
  8         73.81       62.00    67.39
  9         57.14       80.00    66.67
  10        48.19       80.00    60.15
  11        38.60       88.00    53.66
  12        33.33       94.00    49.21
5.3 Corpus Named Entity Statistics
We consider two data sets in our experiments: one is the
whole corpus of 1,325 articles, and the other is a subset of
726 articles with named entities in their titles. Table 4 shows
that there are in total 3,001 focused entities among 18,371 entities
in the whole corpus, which means that 16.34 percent of the
entities are marked as focused. On average, there are 2.26
focused named entities per article, which is consistent
with the small-scale survey result.
Table 4: Corpus statistics on named entities

  set   docnum   entities   focuses   focus percent   focus/doc
  1     1,325    18,371     3,001     16.34%          2.26
  2     726      10,697     1,669     15.60%          2.30
5.4 Baseline Results
Since named entities in title or with high frequency are
more likely to be the focal entities, we consider three baseline
methods. The first method marks entities in titles to
be the foci; the second method marks most frequent entities
in each article to be the focal entities; the third method is
a combination of the above two, which selects those entities
either in title or occurring most frequently. We use partial
string matching for coreference resolution in the three
baseline experiments.
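A minimal sketch of the combined (third) baseline is given below; the document representation is an assumption made for the example, and coreferent mentions are assumed to have already been merged by the partial string matching mentioned above.

```python
def baseline_title_topk(doc_entities, title_entities, k=1):
    """Mark an entity as focused if it appears in the title or is among
    the k most frequent entities of the document.

    doc_entities: dict mapping entity -> frequency in the document.
    title_entities: set of entities occurring in the title.
    """
    by_freq = sorted(doc_entities, key=doc_entities.get, reverse=True)
    top_k = set(by_freq[:k])
    return {e for e in doc_entities if e in title_entities or e in top_k}
```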
Named entities occurring in the title are more likely to be
the focus of the document, but they only represent a small
portion of all focal entities. Baseline experiment 1 shows the
precision of this method is quite high but the recall is very
low.
Baseline experiment 2 implies that most of the top 1
named entities are focused entities, but again the recall is
very low. However, if more named entities are selected, the
precision decreases significantly, so that the F1 measure
does not improve. The top-3 performance is the worst, with
an F1 measure of only 50.47%. Note that several named entities
may have the same occurrence frequency in one document,
which introduces uncertainty into the method.
By combining named entities from the title and with high
frequency, we obtain better results than either of the two
basic baseline methods. The best performance is achieved
by combining the in-title and top 1 named entities, which
achieves F1 measures of 66.68% for data set 1, and 70.51%
for data set 2.
5.5 Machine Learning Results
Since in our implementation the decision tree and naive Bayes
methods only take integer features, we encode the floating-point
features into integer values using a simple equal-interval
binning method. If a feature x is observed to have values
bounded by x_min and x_max, then the bin width is computed
as δ = (x_max - x_min) / k and the bin boundaries are at x_min + iδ,
where i = 1, ..., k - 1. The method is applied to each continuous
feature independently and k is set to 10. Although
more sophisticated discretization methods exist, the equal-interval
binning method performs quite well in practice.
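A minimal sketch of this discretization, assuming the feature values are given as a list of floats (the function name is ours):

```python
def equal_interval_bin(values, k=10):
    """Map each continuous value to an integer bin index in [0, k-1]."""
    x_min, x_max = min(values), max(values)
    if x_max == x_min:
        return [0] * len(values)
    delta = (x_max - x_min) / k
    return [min(int((v - x_min) / delta), k - 1) for v in values]
```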
Machine learning results are obtained from five-fold cross-validation.
Coreference resolution is done with partial string matching.
The test results are reported in Table 6.
This experiment shows that good performance can be
achieved by using machine learning techniques. The RRM
performance on both data sets is significantly better than
the baseline results, and comparable to that of the best human
annotator we observed in our small-scale experiment
in Section 5.2.
Table 5: Baseline results

  Corpus       Method       Focuses   focus/doc   Precision   Recall   F1
  726 docs     title        992       1.36        83.47       49.61    62.23
  1,325 docs   top1         1,580     1.19        88.54       46.62    61.08
               top2         4,194     3.17        54.48       76.14    63.52
               top3         7,658     5.78        35.13       89.64    50.47
  726 docs     title+top1   1,247     1.72        82.44       61.59    70.51
               title+top2   2,338     3.22        56.93       79.75    66.43
               title+top3   4,165     5.74        36.06       89.99    51.49
  1,325 docs   title+top1   2,011     1.52        83.09       55.68    66.68
               title+top2   4,388     3.31        53.78       78.64    63.88
               title+top3   7,738     5.84        34.94       90.10    50.36
Table 6: Machine learning results

               RRM                     Decision Tree           Naive Bayes
  Dataset      P      R      F1        P      R      F1        P      R      F1
  726 docs     88.51  80.54  84.27     87.29  78.02  82.37     69.32  90.28  78.37
  1,325 docs   84.70  78.23  81.32     83.83  74.61  78.89     69.14  89.08  77.82
5.6 Influence of Features
The goal of this section is to study the impact of different
features with different algorithms. Results are reported in
Table 7. Feature id corresponds to the feature subsection
number in section 4.
Experiment A uses frequency-based features only. It is
quite similar to the bag-of-words document model for text
categorization, with the entity-frequency and in-title information.
By adding more sophisticated document-level features,
the performance can be significantly improved. For
the RRM method, F1 finally reaches 81.32%. It is interesting
to observe that the corpus-level feature (experiment F
versus G) has different impacts on the three algorithms. It
is a good feature for naive Bayes, but not for the RRM and
decision tree. Whether corpus-level features can appreciably
enhance the classification performance requires more careful
investigation.
The experiments also indicate that the three learning algorithms
do not perform equally well. RRM appears to have
the best overall performance. The naive Bayes method requires
all features to be independent, which is a quite unrealistic
assumption in practice. The main problem for the decision
tree is that it easily fragments the data, so that the probability
estimates at the leaf nodes become unreliable. This is
also the reason why voted decision trees (using procedures
like boosting or bagging) perform better.
The decision tree can find rules readable by a human. For
example, one such rule reads: if a named entity appears
at least twice, its left and right neighbors are normal words,
its discrete distribution entropy is greater than 2, and the
entity appears in the title, then the probability of it being a
focused entity is 0.87.
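Expressed as code, the induced rule would look roughly like the check below. The feature names are ours, the neighbor condition is simplified to a single dominant neighbor type per side, and 0.87 is the leaf probability quoted above; this is an illustration, not the tree produced by the learner.

```python
def rule_focused_probability(entity):
    """One human-readable rule induced by the decision tree (illustrative)."""
    if (entity["frequency"] >= 2
            and entity["left_neighbor"] == "normal"
            and entity["right_neighbor"] == "normal"
            and entity["distribution_entropy"] > 2
            and entity["in_title"]):
        return 0.87   # probability of being a focused entity at this leaf
    return None       # rule does not fire; other branches of the tree apply
```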
5.7 Coreference Resolution
In order to understand the impact of coreference resolution
on the performance of focused named entity recognition,
we did the same set of experiments as in section 5.5, but with
exact string matching only for coreference resolution in the
feature extraction process. Table 8 reports the five-fold cross
validation results. On average the performance is decreased
by about 3 to 5 percent. This means coreference resolution
plays an important role in the task. The reason is that it
maps variations of a named entity into a single group, so
that features such as occurrence frequency and entity distribution
can be estimated more reliably. We believe that with
more sophisticated analysis such as pronominal coreference
resolution, the classification performance can be further improved.
CONCLUSIONS AND FUTURE WORK
In this paper, we studied the problem of focused named
entity recognition. We gave examples to illustrate that focused
named entities are useful for many natural language
processing applications.
The task can be converted into
a binary classification problem. We focused on designing
linguistic features, and compared the performance of three
machine learning algorithms. Our results show that the machine
learning approach can achieve near human-level accuracy.
Because our system is trainable and the features we use
are language independent, it is easy to build a similar
classification model for other languages. Our method can
also be generalized to related tasks such as finding important
words and noun-phrases in a document.
In the future, we will integrate focused named entity recognition
into real applications, such as information retrieval,
automatic summarization, and topic detection and tracking,
so that we can further study and evaluate its influence on
these systems.
ACKNOWLEDGMENTS
We thank Honglei Guo for providing the original corpus
with named entity annotation. Jianmin Jiang helped us to
set up named entity annotation server, which made it much
easier to view and annotate the corpus. We thank Zhaoming
Qiu, Shixia Liu, Zhili Guo for their valuable comments on
the experiments. We are also grateful to our colleagues who
spent their time to participate in the focused named entity
survey.
Table 7: Performance of different features with different algorithms

                           RRM                     Decision Tree           Naive Bayes
  ID   Features            P      R      F1        P      R      F1        P      R      F1
  A    2+3+7               79.11  20.86  32.96     77.48  61.81  68.70     96.39  33.47  49.67
  B    1+2+3               71.95  82.08  76.60     71.06  72.31  71.23     93.29  42.91  58.76
  C    1+2+3+7             73.32  81.43  76.87     70.90  78.63  74.54     92.58  48.65  63.74
  D    1+2+3+7+5           70.60  84.99  76.98     74.42  75.85  75.09     85.44  61.96  71.71
  E    1+2+3+7+5+8         86.15  75.89  80.68     74.42  75.85  75.09     66.56  86.14  75.07
  F    1+2+...+7+8         85.98  77.37  81.44     79.62  78.30  78.92     66.40  89.44  76.19
  G    1+2+...+8+9         84.70  78.23  81.32     83.83  74.61  78.89     69.14  89.08  77.82
Table 8: Machine learning test result with exact string-matching for coreference resolution

               RRM                     Decision Tree           Naive Bayes
  data set     P      R      F1        P      R      F1        P      R      F1
  726 docs     84.43  75.21  79.49     83.13  73.68  78.10     67.85  85.64  75.64
  1,325 docs   81.67  72.60  76.74     79.60  70.45  74.69     66.77  83.56  74.20
REFERENCES
[1] R. Barzilay and M. Elhadad. Using lexical chains for
text summarization. In Proceedings of the ACL
Intelligent Scalable Text Summarization Workshop
(ISTS'97), pages 10-17, 1997.
[2] F. J. Damerau, T. Zhang, S. M. Weiss, and
N. Indurkhya. Text categorization for a comprehensive
time-dependent benchmark. Information Processing &
Management, 40(2):209-221, 2004.
[3] H. P. Edmundson. New methods in automatic
abstracting. Journal of The Association for
Computing Machinery, 16(2):264-285, 1969.
[4] J. Y. Ge, X. J. Huang, and L. Wu. Approaches to
event-focused summarization based on named entities
and query words. In DUC 2003 Workshop on Text
Summarization, 2003.
[5] E. Hovy and C.-Y. Lin. Automated text
summarization in summarist. In I. Mani and
M. Maybury, editors, Advances in Automated Text
Summarization, pages 81-94. MIT Press, 1999.
[6] D. E. Johnson, F. J. Oles, T. Zhang, and T. Goetz. A
decision-tree-based symbolic rule induction system for
text categorization. IBM Systems Journal, 41:428-437,
2002.
[7] M.-Y. Kan and K. R. McKeown. Information
extraction and summarization: domain independence
through focus types. Columbia University Computer
Science Technical Report CUCS-030-99.
[8] J. M. Kupiec, J. Pedersen, and F. Chen. A trainable
document summarizer. In SIGIR '95, pages 68-73,
1995.
[9] D. Lawrie, W. B. Croft, and A. Rosenberg. Finding
topic words for hierarchical summarization. In SIGIR
'01, pages 349-357, 2001.
[10] F. Li and Y. Yang. A loss function analysis for
classification methods in text categorization. In ICML
'03, pages 472-479, 2003.
[11] C.-Y. Lin. Training a selection function for extraction.
In CIKM '99, pages 1-8, 1999.
[12] C.-Y. Lin and E. Hovy. Identifying topics by position.
In Proceedings of the Applied Natural Language
Processing Conference (ANLP-97), pages 283-290,
1997.
[13] D. Marcu. From discourse structures to text
summaries. In Proceedings of the ACL'97/EACL'97
Workshop on Intelligent Scalable Text Summarization,
pages 82-88. ACL, 1997.
[14] A. McCallum and K. Nigam. A comparison of event
models for naive bayes text classification. In
AAAI/ICML-98 Workshop on Learning for Text
Categorization, pages 41-48, 1998.
[15] J. L. Neto, A. Santos, C. Kaestner, A. Freitas, and
J. Nievola. A trainable algorithm for summarizing
news stories. In Proceedings of PKDD'2000 Workshop
on Machine Learning and Textual Information Access,
September 2000.
[16] C. Nobata, S. Sekine, H. Isahara, and R. Grishman.
Summarization system integrated with named entity
tagging and ie pattern discovery. In Proceedings of
Third International Conference on Language
Resources and Evaluation (LREC 2002), 2002.
[17] C. D. Paice and P. A. Jones. The identification of
important concepts in highly structured technical
papers. In SIGIR '93, pages 69-78. ACM, 1993.
[18] J. R. Quinlan. C4.5: Programs for Machine Learning.
Morgan Kaufmann, 1993.
[19] E. F. T. K. Sang and F. D. Meulder. Introduction to
the conll-2003 shared task: Language-independent
named entity recognition. In Proceedings of
CoNLL-2003, pages 142-147, 2003.
[20] W.-M. Soon, H.-T. Ng, and C.-Y. Lim. A machine
learning approach to coreference resolution of noun
phrases. Computational Linguistics, 27(4):521-544,
2001.
[21] S. Teufel and M. Moens. Sentence extraction as a
classification task. In ACL/EACL-97 Workshop on
Intelligent and Scalable Text Summarization, 1997.
[22] T. Zhang. On the dual formulation of regularized
linear systems. Machine Learning, 46:91-129, 2002.
[23] T. Zhang, F. Damerau, and D. E. Johnson. Text
chunking based on a generalization of Winnow.
Journal of Machine Learning Research, 2:615-637,
2002.
[24] T. Zhang and F. J. Oles. Text categorization based on
regularized linear classification methods. Information
Retrieval, 4:5-31, 2001.
| named entities;Information retrieval;naive Bayes;topic identification;classification model;sentence extraction;Summarization;entity recognition;Natural language processing applications;ranking;information retrieval;Linguistic features;automatic summarization;Classification methods;robust risk minimization;decision tree;electronic documents;machine learning;Focused named entity recognition;Statistical model;Features;Machine learning approach;text summarization;Main topics;natural language processing |
95 | Formally Deriving an STG Machine | Starting from P. Sestoft semantics for lazy evaluation, we define a new semantics in which normal forms consist of variables pointing to lambdas or constructions. This is in accordance with the more recent changes in the Spineless Tagless G-machine (STG) machine, where constructions only appear in closures (lambdas only appeared in closures already in previous versions). We prove the equivalence between the new semantics and Sestoft's. Then, a sequence of STG machines are derived, formally proving the correctness of each derivation. The last machine consists of a few imperative instructions and its distance to a conventional language is minimal. The paper also discusses the differences between the final machine and the actual STG machine implemented in the Glasgow Haskell Compiler. | INTRODUCTION
The Spineless Tagless G-machine (STG) [6] is at the heart
of the Glasgow Haskell Compiler (GHC) [7] which is perhaps
the Haskell compiler generating the most efficient code. For
a description of Haskell language see [8]. Part of the secret
for that is the set of analysis and transformations carried
out at the intermediate representation level. Another part
of the explanation is the efficient design and implementation
of the STG machine.
A high level description of the STG can be found in [6].
If the reader is interested in a more detailed view, then the
only available information is the Haskell code of GHC (about
80.000 lines, 12.000 of which are devoted to the implementation
of the STG machine) and the C code of its different
runtime systems (more than 40.000 lines)[1].
In this paper we provide a step-by-step derivation of the
STG machine, starting from a description higher-level than
that of [6] and arriving at a description lower-level than that.
Our starting point is a commonly accepted operational
semantics for lazy evaluation provided by Peter Sestoft in
[10] as an improvement of John Launchbury's well-known
definition in [4]. Then, we present the following refinements:
1. A new operational semantics, which we call semantics
S3 --acknowledging that semantics 1 and 2 were defined
by Mountjoy in a previous attempt [5]--, where
normal forms may appear only in bindings.
2. A first machine, called STG-1, derived from S3 in
which explicit replacement of pointers for variables is
done in expressions.
3. A second machine STG-2 introducing environments in
closures, case alternatives, and in the control expression
.
4. A third machine, called ISTG (I stands for imperative)
with a very small set of elementary instructions, each
one very easy to be implemented in a conventional
language such as C.
5. A translation from the language of STG-2 to the language
of ISTG in which the data structures of STG-2
are represented (or implemented) by the ISTG data
structures.
e  →  x                              -- variable
    |  λx.e                          -- lambda abstraction
    |  e x                           -- application
    |  letrec x_i = e_i in e         -- recursive let
    |  C x_i                         -- constructor application
    |  case e of C_i x_ij → e_i      -- case expression

        Figure 1: Launchbury's normalized λ-calculus
At each refinement, a formal proof of the soundness and
completeness of the lower level with respect to the upper
one is carried out¹. In the end, the final implementation is
shown correct with respect to Sestoft's operational semantics.
The main contribution of the work is showing that an efficient
machine such as STG can be presented, understood,
and formally reasoned about at different levels of abstraction
. Also, there are some differences between the machine
we arrive at and the actual STG machine implemented in
the Glasgow Haskell Compiler. We argue that some design
decisions in the actual STG machine are not properly justified
.
The plan of the paper is as follows: after this introduction
, in Section 2, a new language called FUN is introduced
and the semantics S3 for this language is defined. Two theorems
relating Launchbury's original language and semantics
to the new ones are presented. Section 3 defines the
two machines STG-1 and STG-2. Some propositions show
the consistency between both machines and the correctness
and completeness of STG-1 with respect to S3, even though
the latter creates more closures in the heap and produces
different (but equivalent) normal forms. Section 4 defines
machine ISTG and Section 5 defines the translation from
STG-2 expressions to ISTG instructions. Two invariants are
proved which show the correctness of the translation. Section
6 discusses the differences between our translation and
the actual implementation done by GHC. Finally, Section 7
concludes.
A NEW SEMANTICS FOR LAZY EVALUATION
We begin by reviewing the language and semantics given
by Sestoft as an improvement to Launchbury's semantics.
Both share the language given in Figure 1, where A_i denotes
a vector A_1, . . . , A_n of subscripted entities. It is a normalized
λ-calculus, extended with recursive let, constructor
applications and case expressions. Sestoft's normalization
process forces constructor applications to be saturated and
all applications to only have variables as arguments. Weak
head normal forms are either lambda abstractions or constructions
. Throughout this section, w will denote (weak
head) normal forms.
Sestoft's semantic rules are given in Figure 2. There, a
judgement Γ : e ⇓_A Δ : w denotes that expression e, with
its free variables bound in heap Γ, reduces to normal form
w and produces the final heap Δ. When fresh pointers are
created, freshness is understood w.r.t. dom(Γ) ∪ A, where
A contains the addresses of the closures under evaluation
¹ The details of the proofs can be found in a technical report
at one of the author's page http://dalila.sip.ucm.es/~albertoe.
                          Γ : λx.e ⇓_A Γ : λx.e                    Lam

                          Γ : C p_i ⇓_A Γ : C p_i                  Cons

            Γ : e ⇓_A Δ : λx.e'     Δ : e'[p/x] ⇓_A Θ : w
            ---------------------------------------------         App
                         Γ : e p ⇓_A Θ : w

                        Γ : e ⇓_{A ∪ {p}} Δ : w
            ---------------------------------------------         Var
                  Γ[p ↦ e] : p ⇓_A Δ[p ↦ w] : w

                    Γ ∪ [p_i ↦ ê_i] : ê ⇓_A Δ : w
            ---------------------------------------------         Letrec   (p_i fresh)
                Γ : letrec x_i = e_i in e ⇓_A Δ : w

        Γ : e ⇓_A Δ : C_k p_j     Δ : e_k[p_j/x_kj] ⇓_A Θ : w
        ------------------------------------------------------    Case
              Γ : case e of C_i x_ij → e_i ⇓_A Θ : w

                   Figure 2: Sestoft's natural semantics
(see rule Var). The notation ê in rule Letrec means the replacement
of the variables x_i by the fresh pointers p_i. This
is the only rule where new closures are created and added
to the heap. We use the term pointers to refer to dynamically
created free variables, bound to expressions in the
heap, and the term variables to refer to (lambda-bound, let-bound
or case-bound) program variables. We consistently
use p, p_i, . . . to denote free variables and x, y, . . . to denote
program variables.
J. Mountjoy [5] had the idea of changing Launchbury-Sestoft's
language and semantics in order to get closer to the
STG language, and then to derive the STG machine from
the new semantics.
He developed two different semantics: In the first one,
which we call semantics S1, the main change was that normal
forms were either constructions (as they were in Sestoft's
semantics) or variables pointing to closures containing
λ-abstractions, instead of just λ-abstractions. The reason
for this was to forbid a λ-abstraction in the control expression
as it happens in the STG machine. Another change
was to force applications to have the form x x_1, i.e. consisting
of a variable in the functional part. This is also what
the STG language requires. These changes forced Mountjoy
to modify the source language and to define a normalization
from Launchbury's language to the new one. Mountjoy
proved that the normalization did not change the normal
forms arrived at by both semantics.
The second semantics, which we call semantics S2, forced
applications to be done at once to n arguments instead of
doing it one by one. Correspondingly, λ-abstractions were
allowed to have several arguments. This is exactly what
the STG machine requires. Semantics S2 was informally
derived and contained some mistakes. In particular, (cf. [5,
pag. 171]) rule App_M makes a λ-abstraction appear in
the control expression, in contradiction with the desire of
having λ-abstractions only in the heap.
Completing and correcting Mountjoy's work we have defined
a new semantics S3 in which the main changes in the
source language w.r.t. Mountjoy's are the following:
1. We force constructor applications to appear only in
bindings, i.e. in heap closures. Correspondingly, normal
forms are variables pointing to either λ-abstractions
or constructions. We will use the term lambda forms to
refer to λ-abstractions or constructions alike. The motivation
for this decision is to generate more efficient
code as it will be seen in Section 5.1.
e    →  e x_i^n                      -- n > 0, application
      |  x                           -- variable
      |  letrec x_i = lf_i in e      -- recursive let
      |  case e of alt_i             -- case expression
alt  →  C x_j → e                    -- case alternative
lf   →  λ x_i^n . e                  -- n > 0, lambda abstraction
      |  C x_i^n                     -- constructor application
      |  e                           -- expression

                    Figure 3: Language FUN
2. We relax applications to have the form e x_i^n, where e is
an arbitrary expression (excluding, of course, lambda
forms). The initial motivation for this was not to introduce
unjustified restrictions. In the conclusions we
discuss that the generated code is also more efficient
than the one produced by restricting applications to
be of the form x x_i^n.
Additionally, our starting point is Sestoft's semantics instead
of Launchbury's. The main difference is that Sestoft
substitutes fresh pointers for program variables in rule Letrec
while Launchbury substitutes fresh variables for all bound
variables in rule Var instead.
The syntax of the language, called FUN, is shown in Figure
3. Notice that applications are done to n arguments
at once, e x_i^n being an abbreviation of (. . . (e x_1) . . .) x_n,
and that consequently λ-abstractions may have several arguments.
Its operational semantics, called S3, is given in
Figure 4. For simplicity, we have eliminated the set A of
pending updates appearing in Sestoft's semantics. This set
is not strictly needed provided that the fresh name generator
for pointers does not repeat previously generated names
and provided that pointers' names are always distinguishable
from program variables. In rule Case_S3, expression e_k
is the righthand side expression of the alternative alt_k. The
notation Γ[p ↦ e] highlights the fact that (p ↦ e) ∈ Γ,
and Γ ∪ [p ↦ e] means the disjoint union of Γ and (p ↦ e).
Please notice that this notation may not coincide with other
notations in which Γ and Γ[p ↦ e] denote different heaps.
Finally notice that, besides rule Letrec_S3, also rule App_S3
creates closures in the heap.
Language FUN is at least as expressive as Launchbury's
λ-calculus. The following normalization function transforms
any Launchbury's expression into a semantically equivalent
FUN expression.
Definition 1. We define the normalization function N :
Launch → FUN:

  N x                           = x
  N (e x)                       = (N e) x
  N (λx.e)                      = letrec y = N' (λx.e) in y      , y fresh
  N (C x_i)                     = letrec y = C x_i in y          , y fresh
  N (letrec x_i = e_i^n in e)   = letrec x_i = N' e_i^n in N e
  N (case e of C_i y_ij → e_i)  = case N e of C_i y_ij → N e_i

and the auxiliary functions N', N'' : Launch → FUN:

  N' (C x_i^n)                  = C x_i^n
  N' e                          = N'' e                          , e ≠ C x_i^n
  N'' (λx.e)                    = λx. N'' e
  N'' e                         = N e                            , e ≠ λx.e
The following proposition proves that the normalization
functions are well defined.
Proposition 1.
1. Let e Launch then N e FUN
2. Let e lf then N e lf
3. Let e = x.e and e = C x
i
then, N e = N e
4. (N e)[p
i
/x
i
] = N (e[p
i
/x
i
])
5. (N e)[p
i
/x
i
] = N (e[p
i
/x
i
])
Proof.
1. By structural induction on e.
2. Trivial.
3. Trivial.
4. By definition of N and of substitutions.
5. By definition of N and of substitutions.
To see that both semantics reduce an expression to equivalent
normal forms, first we prove that the normalization
does not change the meaning of an expression within Sestoft's
semantics. Then, we prove that both semantics reduce
any FUN expression to equivalent normal forms, provided
that such normal forms exist.
2.1 Soundness and completeness
The following two propositions prove that the normalization
does not change the meaning of an expression. We use
the following notation: denotes a one-to-one renaming of
pointers, and
means that . This is needed
to express the equivalence between the heaps of both semantics
up to some renaming . As S3 generates more closures
than Sestoft's, it is not possible to guarantee that the fresh
pointers are exactly the same in the two heaps.
Proposition 2. (Sestoft Sestoft
) For all e Launch
we have:
{ } : e : w
{ } : N e
: w
.
w
= N ( w)
N
Proof. By induction on the number of reductions of
Launchbury expressions.
Proposition 3. (Sestoft
Sestoft) For all e FUN
we have:
{ } : e : w
e Launch.
N e = e
{ } : e : w
.
N ( w ) = w
N
                     Γ[p ↦ λ x_i^n . e] : p ⇓ Γ : p                                Lam_S3

                       Γ[p ↦ C p_i] : p ⇓ Γ : p                                    Cons_S3

               Γ : e ⇓ Δ[p ↦ λ x_i^n y_i^m . e'] : p
   ---------------------------------------------------------   m, n > 0, q fresh  App_S3
     Γ : e p_i^n ⇓ Δ ∪ [q ↦ λ y_i^m . e'[p_i/x_i^n]] : q

   Γ : e ⇓ Δ[p ↦ λ x_i^m . e'] : p     Δ : e'[p_i/x_i^m] p_{m+1} . . . p_n ⇓ Θ[q ↦ w] : q
   ---------------------------------------------------------------------------------------   n ≥ m   App_S3
                          Γ : e p_i^n ⇓ Θ : q

                        Γ : e ⇓ Δ[q ↦ w] : q
               ---------------------------------------                            Var_S3
                 Γ[p ↦ e] : p ⇓ Δ[p ↦ w] : q

                Γ ∪ [p_i ↦ ^lf_i] : ê ⇓ Δ[p ↦ w] : p
               ---------------------------------------   p_i fresh                Letrec_S3
                 Γ : letrec x_i = lf_i in e ⇓ Δ : p

    Γ : e ⇓ Δ[p ↦ C_k p_j] : p     Δ : e_k[p_j/y_kj] ⇓ Θ[q ↦ w] : q
   ------------------------------------------------------------------             Case_S3
                 Γ : case e of C_i y_ij → e_i ⇓ Θ : q

                            Figure 4: Semantics S3
Proof. By induction on the number of reductions of
Launchbury expressions.
Now we prove the equivalence between the two semantics
. We consider only FUN expressions because it has been
proved that the normalization does not change the meaning
of an expression.
Proposition 4. (Sestoft S3, completeness of S3) For
all e FUN we have:
{ } : e : w
{ } : e [p w ] : p
.
w = w
Proof. By induction on the number of reductions of
FUN expressions.
Proposition 5. (S3 Sestoft, soundness of S3) For all
e FUN we have:
{ } : e [p w] : p
{ } : e : w
.
w = w
Proof. By induction on the number of reductions of
FUN expressions.
Once adapted the source language to the STG language,
we are ready to derive an STG-like machine from semantics
S3.
A VERY SIMPLE STG MACHINE
Following a similar approach to Sestoft MARK-1 machine
[10], we first introduce a very simple STG machine, which
we will call STG-1, in which explicit variable substitutions
are done. A configuration in this machine is a triple (, e, S)
where represents the heap, e is the control expression and
S is the stack. The heap binds pointers to lambda forms
which, in turn, may reference other pointers. The stack
stores three kinds of objects: arguments p
i
of pending applications
, case alternatives alts of pending pattern matchings,
and marks #p of pending updates.
In Figure 5, the transitions of the machine are shown.
They look very close to the lazy semantics S3 presented in
Section 2. For instance, the single rule for letrec in Figure
5 is a literal transcription of the Letrec
S3
rule of Figure
4. The semantic rules for case and applications are split
each one into two rules in the machine. The semantic rule
for variable is also split into two in order to take care of
updating the closure. So, in principle, an execution of the
STG-1 machine could be regarded as the linearization of the
semantic derivation tree by introducing an auxiliary stack.
But sometimes appearances are misleading. The theorem
below shows that in fact STG-1 builds less closures in the
heap than the semantics and it may arrive to different (but
semantically equivalent) normal forms. In order to prove
the soundness and completeness of STG-1, we first enrich
the semantics with a stack parameter S in the rules. The
new rules for
S
(only those which modify S) are shown in
Figure 6. It is trivial to show that the rules are equivalent to
the ones in Figure 4 as the stack is just an observation of the
derivations. It may not influence them. The following theorem
establishes the correspondence between the (enriched)
semantics and the machine.
Proposition 6. Given , e and S, then : e
S
[p
w] : p iff (, e, S)
( , p , S ), where
1.
2. if [p C p
in
] then S = S , p = p and [p
C p
in
]
3. if [p x
in
.e ] then there exists m 0 s.t. [p
y
im
.x
in
.e ] and S = q
im
: S and e = e [q
i
/y
i
m
]
  Heap                          Control                      Stack                 rule
  Γ                             letrec {x_i = lf_i} in e     S                     letrec (1)
  ⟹  Γ[p_i ↦ lf_i[p_j/x_j]]     e[p_i/x_i]                   S

  Γ                             case e of alts               S                     case1
  ⟹  Γ                          e                            alts : S

  Γ[p ↦ C_k p_i]                p                            C_j y_ji → e_j : S    case2
  ⟹  Γ                          e_k[p_i/y_ki]                S

  Γ                             e p_i^n                      S                     app1
  ⟹  Γ                          e                            p_i^n : S

  Γ[p ↦ λ x_i^n . e]            p                            p_i^n : S             app2
  ⟹  Γ                          e[p_i/x_i^n]                 S

  Γ[p ↦ e']                     p                            S                     var1
  ⟹  Γ                          e'                           #p : S

  Γ[p ↦ λ x_i^k y_i^m . e]      p                            p_i^k : #q : S        var2
  ⟹  Γ[q ↦ p p_i^k]             p                            p_i^k : S

  Γ[p ↦ C_k p_i]                p                            #q : S                var3
  ⟹  Γ[q ↦ C_k p_i]             p                            S

  (1) p_i are distinct and fresh w.r.t. Γ, letrec {x_i = lf_i} in e, and S

                         Figure 5: The STG-1 Machine
Proof. By induction on the number of reductions of
FUN expressions.
The proposition shows that the semantic rule App
S3
of
Figure 4 is not literally transcribed in the machine. The
machine does not create intermediate lambdas in the heap
unless an update is needed. Rule app2 in Figure 5 applies a
lambda always to all its arguments provided that they are in
the stack. For this reason, a lambda with more parameters
than that of the semantics may be arrived at as normal
form of a functional expression. Also for this reason, an
update mark may be interspersed with arguments in the
stack when a lambda is reached (see rule var2 in Figure 5).
A final implication is that the machine may stop with m
pending arguments in the stack and a variable in the control
expression pointing to a lambda with n > m parameters.
The semantics always ends a derivation with a variable as
normal form and an empty stack.
Again, following Sestoft and his MARK-2 machine, once
we have proved the soundness and completeness of STG-1,
we introduce STG-2 having environments instead of explicit
variable substitutions. Also, we add trimmers to this machine
so that environments kept in closures and in case alternatives
only reference the free variables of the expression
instead of all variables in scope. A configuration of STG-2
is a quadruple (, e, E, S) where E is the environment of e,
the alternatives are pairs (alts, E), and a closure is a pair
(lf , E). Now expressions and lambda forms keep their original
variables and the associated environment maps them to
pointers in the heap. The notation E |
t
means the trimming
of environment E to the trimmer t. A trimmer is just a collection
of variable names. The resulting machine is shown
in Figure 7.
Proposition 7.
Given a closed expression e
0
.
({ }, e
0
, [ ])
STG-1
- (, q, p
in
) where either:
[q C q
im
] n = 0
or [q x
im
.e] m > n 0
if and only if ({ }, e
0
, { }, [ ])
STG-2
- (, x, E[x q], p
in
)) and
either:
[q (C x
im
, {x
i
q
im
})] n = 0
or [q (x
im
.e , E )] m > n 0 e = E e .
Proof. By induction on the number of reductions.
AN IMPERATIVE STG MACHINE
In this Section we `invent' an imperative STG machine,
called ISTG, by defining a set of machine instructions and
an operational semantics for them in terms of the state transition
that each instruction produces. In fact, this machine
tries to provide an intermediate level of reasoning between
the STG-2 machine and the final C implementation. In
the actual GHC implementation, `below' the operational description
of [6] we find only a translation to C. By looking
at the compiler and at the runtime system listings, one can
grasp some details, but many others are lost. We think that
the gap to be bridged is too high. Moreover, it is not possible
to reason about the correctness of the implementation
when so many details are introduced at once. The ISTG architecture
has been inspired by the actual implementation of
the STG machine done by GHC, and the ISTG instructions
have been derived from the STG-2 machine by analyzing the
elementary steps involved in every machine transition.
An ISTG machine configuration consists of a 5-tuple (is, S,
node, , cs), where is is a machine instruction sequence ended
with the instruction ENTER or RETURNCON, S is the
stack, node is a heap pointer pointing to the closure under
execution (the one to which is belongs to), is the heap and
cs is a code store where the instruction sequences resulting
from compiling the program expressions are kept.
We will use the following notation: a for pointers to closures
in , as and ws for lists of such pointers, and p for
            Γ : e ⇓_{p_i^n : S} Δ[p ↦ λ x_i^n y_i^m . e'] : p
   --------------------------------------------------------------   m, n > 0, q fresh   App_S3
     Γ : e p_i^n ⇓_S Δ ∪ [q ↦ λ y_i^m . e'[p_i/x_i^n]] : q

   Γ : e ⇓_{p_i^n : S} Δ[p ↦ λ x_i^m . e'] : p     Δ : e'[p_i/x_i^m] p_{m+1} . . . p_n ⇓_S Θ[q ↦ w] : q
   ------------------------------------------------------------------------------------------------------   n ≥ m   App_S3
                              Γ : e p_i^n ⇓_S Θ : q

                     Γ : e ⇓_{#p : S} Δ[q ↦ w] : q
               ---------------------------------------                                  Var_S3
                  Γ[p ↦ e] : p ⇓_S Δ[p ↦ w] : q

    Γ : e ⇓_{alts : S} Δ[p ↦ C_k p_j] : p     Δ : e_k[p_j/y_kj] ⇓_S Θ[q ↦ w] : q
   ---------------------------------------------------------------------------------    Case_S3
                     Γ : case e of alts ⇓_S Θ : q

                      Figure 6: The enriched semantics
pointers to code fragments in cs. By cs[p ↦ is] we denote
that the code store cs maps pointer p to the instruction
sequence is and, by cs[p ↦ is_i^n], that cs maps p to a
vectored set of instruction sequences is_1, . . . , is_n, each one
corresponding to an alternative of a case expression with
n constructors C_1, . . . , C_n. Also, S ! i will denote the i-th
element of the stack S counting from the top and starting
at 0. Likewise, node →_Γ ! i will denote the i-th free variable of
the closure pointed to by node in Γ, this time starting at 1.
Stack S may store pointers a to closures in , pointers p
to code sequences and code alternatives in cs, and update
marks #a indicating that closure pointed to by a must be
updated. A closure is a pair (p, ws) where p is a pointer
to an instruction sequence is in cs, and ws is the closure
environment, having a heap pointer for every free variable
in the expression whose translation is is.
These representation decisions are very near to the GHC
implementation. In its runtime system all these elements
(stack, heap, node register and code) are present [9]. Our
closures are also a small simplification of theirs.
In Figure 8, the ISTG machine instructions and its operational
semantics are shown. The machine instructions
BUILDENV, PUSHALTS and UPDTMARK roughly correspond
to the three possible pushing actions of machine
STG-2. The SLIDE instruction has no clear correspondence
in the STG-2. As we will see in Section 5, it will be used
to change the current environment when a new closure is
entered. Instructions ALLOC and BUILDCLS will implement
heap closure creation in the letrec rule of STG-2. Both
BUILDENV and BUILDCLS make use of a list of pairs, each
pair indicating whether the source variable is located in the
stack or in the current closure. Of course, it is not intended
this test to be done at runtime. An efficient translation of
these `machine' instructions to an imperative language will
generate the appropriate copy statement for each pair.
Instructions ENTER and RETURNCON are typical of
the actual STG machine as described in [6]. It is interesting
to note that it has been possible to describe our previous
STG machines without any reference to them. In our view,
they belong to ISTG, i.e. to a lower level of abstraction. Finally
, instruction ARGCHECK, which implements updates
with lambda normal forms, is here at the same level of abstraction
as RETURNCON, which implements updates with
constructions normal forms. Predefined code is stored in cs
for updating with a partial application and for blackholing
a closure under evaluation. The corresponding code pointers
are respectively called p
n+1
pap
and p
bh
in Figure 8. The
associated code is the following:
  p_bh          = [ ]
  p_pap^{n+1}   = [BUILDENV [(NODE, 1), . . . , (NODE, n + 1)],
                   ENTER]
The code of a blackhole just blocks the machine as there
is no instruction to execute. There is predefined code for
partial applications with different values for n. The code
just copies the closure into the stack and jumps to the first
pointer that is assumed to be pointing to a -abstraction
closure.
The translation to C of the 9 instructions of the ISTG
should appear straightforward for the reader. For instance,
BUILDCLS and BUILDENV can be implemented by a sequence
of assignments, copying values to/from the stack
an the heap; PUSHALTS, UPDTMARK and ENTER do
straightforward stack manipulation; SLIDE is more involved
but can be easily translated to a sequence of loops moving
information within the stack to collapse a number of stack
fragments. The more complex ones are RETURNCON and
ARGCHECK. Both contains a loop which updates the heap
with normal forms (respectively, constructions and partial
applications) as long as they encounter update marks in the
stack. Finally, the installation of a new instruction sequence
in the control made by ENTER and RETURNCON are implemented
by a simple jump.
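To make the stack restructuring concrete, the following is a minimal sketch of what a SLIDE implementation could look like, using a Python list as the stack (top at the end). The representation and function name are assumptions made for illustration; they are not the ISTG or GHC implementation.

```python
def slide(stack, blocks):
    """SLIDE (n_1, m_1) ... (n_l, m_l): for each block, keep the topmost
    n_k entries and drop the m_k entries below them, collapsing the
    preserved fragments onto the remaining stack.  Top is the list end;
    blocks are given from the top of the stack downwards."""
    kept = []
    for n_k, m_k in blocks:
        top = stack[len(stack) - n_k:] if n_k else []   # entries to preserve
        del stack[len(stack) - n_k - m_k:]              # drop the whole block
        kept = top + kept                               # keep original order
    stack.extend(kept)
    return stack
```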
FORMAL TRANSLATION FROM STG-2 TO ISTG
In this Section, we provide first the translation schemes for
the FUN expressions and lambda forms and then prove that
this translation correctly implements the STG-2 machine on
top of the ISTG machine. Before embarking into the details,
we give some hints to intuitively understand the translation:
The ISTG stack will represent not only the STG-2
stack, but also (part of) the current environment E
and all the environments associated to pending case
alternatives. So, care must be taken to distinguish between
environments and other objects in the stack.
The rest of the current environment E is kept in the
current closure. The translation knows where each free
  Heap                                Control                          Environment        Stack               rule
  Γ                                   letrec {x_i = lf_i |_{t_i}} in e   E                S                   letrec (1)
  ⟹  Γ[p_i ↦ (lf_i, E' |_{t_i})]      e                                E'                 S

  Γ                                   case e of alts |_t               E                  S                   case1
  ⟹  Γ                                e                                E                  (alts, E |_t) : S

  Γ[p ↦ (C_k x_i, {x_i ↦ p_i})]       x                                E{x ↦ p}           (alts, E') : S      case2 (2)
  ⟹  Γ                                e_k                              E'{y_ki ↦ p_i}     S

  Γ                                   e x_i^n                          E{x_i ↦ p_i^n}     S                   app1
  ⟹  Γ                                e                                E                  p_i^n : S

  Γ[p ↦ (λ x_i^n . e, E')]            x                                E{x ↦ p}           p_i^n : S           app2
  ⟹  Γ                                e                                E'{x_i ↦ p_i^n}    S

  Γ[p ↦ (e, E')]                      x                                E{x ↦ p}           S                   var1
  ⟹  Γ                                e                                E'                 #p : S

  Γ[p ↦ (λ x_i^k y_i^n . e, E')]      x                                E{x ↦ p}           p_i^k : #q : S      var2 (3)
  ⟹  Γ[q ↦ (x x_i^k, E'')]            x                                E''                p_i^k : S

  Γ[p ↦ (C_k x_i, E')]                x                                E{x ↦ p}           #q : S              var3
  ⟹  Γ[q ↦ (C_k x_i, E')]             x                                E'                 S

  (1) p_i are distinct and fresh w.r.t. Γ, letrec {x_i = lf_i} in e, and S.  E' = E ∪ {x_i ↦ p_i}
  (2) Expression e_k corresponds to alternative C_k y_ki → e_k in alts
  (3) E'' = {x ↦ p, x_i ↦ p_i^k}

                              Figure 7: The STG-2 machine
variable is located by maintaining two compile-time
environments and . The first one corresponds to
the environment kept in the stack, while the second one
corresponds to the free variables accessed through
the node pointer.
The stack can be considered as divided into big blocks
separated by code pointers p pointing to case alternatives
. Each big block topped with such a pointer
corresponds to the environment of the associated alternatives
.
In turn, each big block can be considered as divided
into small blocks, each one topped with a set of arguments
of pending applications. The compile-time
environment is likewise divided into big and small
blocks, so reflecting the stack structure.
When a variable is reached in the current instruction
sequence, an ENTER instruction is executed. This
will finish the current sequence and start a new one.
The upper big block of the stack must be deleted (corresponding
to changing the current environment) but
arguments of pending applications must be kept. This
stack restructuring is accomplished by a SLIDE operation
with an appropriate argument.
Definition 2. A stack environment is a list [(
k
, m
k
, n
k
),
. . . , (
1
, m
1
, n
1
)] of blocks. It describes the variables in the
stack starting from the top. In a block (, m, n), is an
environment mapping exactly m- | n | program variables
to disjoint numbers in the range 1..m- | n |. The empty
environment, denoted
is the list [({}, 0, 0)].
A block (, m, n) corresponds to a small block in the above
explanation. Blocks with n = -1, are topped with a code
pointer pointing to alternatives. So, they provide a separation
between big blocks. The upper big block consists of all
the small blocks up to (and excluding) the first small block
with n = -1. Blocks with n > 0 have m - n free variables
and are topped with n arguments of pending applications.
The upper block is the only one with n = 0 meaning that it
is not still closed and that it can be extended.
Definition 3. A closure environment with n variables is
a mapping from these variables to disjoint numbers in the
range 1..n.
Definition 4. The offset of a variable x in from the top
of the stack, denoted x, is given by
x
def
= (
k
i=l
m
i
) l
x, being x dom
l
If the initial closed expression to be translated has different
names for bound variables, then the compile time environments
and will never have duplicate names. It will be
proved below that every free variable of an expression being
compiled will necessarily be either in or in , and never in
both. This allows us to introduce the notation (, ) x to
mean
(, ) x
def
=
(STACK , x)
if x dom
(NODE , x)
if x dom
The stack environment may suffer a number of operations:
closing the current small block with a set of arguments, enlarging
the current small block with new bindings, and closing
the current big block with a pointer to case alternatives.
These are formally defined as follows.
Definition 5. The following operations with stack environments
are defined:
1. ((, m, 0) : ) + n
def
= ({}, 0, 0) : (, m + n, n) :
  Instructions                 Stack                            Node    Heap                                  Code
  control
  [ENTER]                      a : S                            node    Γ[a ↦ (p, ws)]                        cs[p ↦ is]
  ⟹  is                        S                                a       Γ                                     cs

  [RETURNCON C_k^m]            p : S                            node    Γ                                     cs[p ↦ is_i^n]
  ⟹  is_k                      S                                node    Γ                                     cs

  [RETURNCON C_k^m]            #a : S                           node    Γ[a ↦ (p_bh, as), node ↦ (p, ws)]     cs
  ⟹  [RETURNCON C_k^m]         S                                node    Γ[a ↦ (p, ws)]                        cs

  ARGCHECK m : is              a_i^m : S                        node    Γ                                     cs
  ⟹  is                        a_i^m : S                        node    Γ                                     cs

  ARGCHECK m : is              a_i^n : #a : S                   node    Γ[a ↦ (p_bh, ws)]                     cs            n < m
  ⟹  ARGCHECK m : is           a_i^n : S                        node    Γ[a ↦ (p_pap^{n+1}, node : a_i^n)]    cs

  heap
  ALLOC m : is                 S                                node    Γ                                     cs            (1)
  ⟹  is                        a_m : S                          node    Γ'                                    cs

  BUILDCLS i p z_i^n : is      S                                node    Γ                                     cs            (2)
  ⟹  is                        S                                node    Γ[S!i ↦ (p, a_i^n)]                   cs

  stack
  BUILDENV z_i^n : is          S                                node    Γ                                     cs            (2)
  ⟹  is                        a_i^n : S                        node    Γ                                     cs

  PUSHALTS p : is              S                                node    Γ                                     cs
  ⟹  is                        p : S                            node    Γ                                     cs

  UPDTMARK : is                S                                node    Γ[node ↦ (p, ws)]                     cs
  ⟹  is                        #node : S                        node    Γ[node ↦ (p_bh, ws)]                  cs

  SLIDE (n_k, m_k)^l : is      a_kj^{n_k} : b_kj^{m_k} (l blocks) : S   node   Γ                              cs
  ⟹  is                        a_kj^{n_k} (l blocks) : S               node   Γ                              cs

  (1) a_m is a pointer to a new closure with space for m free variables, and Γ' is the resulting
      heap after the allocation
  (2) a_i = S!i if z_i = (STACK, i);  a_i = node →_Γ !i if z_i = (NODE, i)

                              Figure 8: The ISTG machine
2. ((, m, 0) : )+({x
i
j
i
n
}, n)
def
= ({x
i
m + j
i
n
},
m+n, 0) :
3. ((, m, 0) : )
+
+
def
= ({}, 0, 0) : (, m + 1, -1) :
5.1 Translation schemes
Functions trE and trA respectively translate a FUN expression
and a case alternative to a sequence of ISTG machine
instructions; function trAs translates a set of alternatives
to a pointer to a vectored set of machine instruction
sequences in the code store; and function trB translates a
lambda form to a pointer to a machine instruction sequence
in the code store. The translation schemes are shown in
Figure 9.
The notation . . . & cs[p . . .] means that the corresponding
translation scheme has a side effect which consists
of creating a code sequence in the code store cs and pointing
it by the code pointer p.
Proposition 8. (static invariant) Given a closed expression
e
0
with different bound variables and an initial call
trE e
0
{}, in all internal calls of the form trE e :
1. The stack environment has the form = (, m, 0) : .
Moreover, there is no other block ( , m , n) in with
n = 0. Consequently, all environment operations in
the above translation are well defined.
2. All free variables of e are defined either in or in .
Moreover, dom dom = .
3. The last instruction generated for e is ENTER. Consequently
, the main instruction sequence and all sequences
corresponding to case alternatives and to non-constructor
closures, end in an ENTER.
Proof. (1) and (2) are proved by induction on the tree
structure of calls to trE ; (3) is proved by structural induction
on FUN expressions.
In order to prove the correctness of the translation, we
only need to consider ISTG machine configurations of the
form (is, S
I
, node,
I
, cs) in which is is generated by a call
to trE for some expression e and environments , . We call
these stable configurations. We enrich then these configurations
with three additional components: the environments
trE (e x
in
)
= [BUILDENV (, ) x
i
n
] ++
trE e ( + n)
trE (case e of alts |
x
in
)
= [BUILDENV zs, PUSHALTS p] ++ trE e
+
+
( - xs)
where p
= trAs alts
= + ({xs
j
m - j + 1
m
}, m)
xs
= [x | x x
in
x dom ]
zs
= [(node, x) | x xs]
m
= | xs |
trE (letrec x
i
= lf
i
|
y
ij mi
n
in e) = [ALLOC m
n
, . . . , ALLOC m
1
] ++
[BUILDCLS (i - 1) p
i
zs
i
n
] ++
trE e
where
= + ({x
i
n - i + 1
n
}, n)
p
i
= trB (lf
i
|
y
ij mi
),
i {1..n}
zs
i
= ( , ) y
ij
m
i
,
i {1..n}
trE x
= [BUILDENV [(, ) x],
SLIDE ((1, 0) : ms),
ENTER]
where ms
= map (\( , m, n) (n, m - n)) (takeWhile nn )
nn ( , m, -1) = False
nn
= True
trAs (alt
i
n
)
= p & cs[p trA alt
i
n
]
trA (C x
in
e)
= trE e {x
i
i
n
}
trB (C
n
k
x
in
|
x
in
)
= p & cs[p [RETURNCON C
n
k
]]
trB (x
il
.e |
y
j n
)
= p & cs[p [ARGCHECK l] ++ trE e ]
where
= [({x
i
l - i + 1
l
}, l, 0)]
= {y
j
j
n
}
trB (e |
y
j n
)
= p & cs[p [UPDTMARK ] ++ trE e
]
where
= {y
j
j
n
}
Figure 9: Translation schemes from STG-2 to ISTG
and used to generate is, and an environment stack S
env
containing a sequence of stack environments. The environments
in S
env
are in one to one correspondence with case
pointers stored in S
I
. Initially S
env
is empty. Each time
an instruction PUSHALTS is executed (see trE definition
for case), the environment the corresponding alternatives
are compiled with, is pushed onto stack S
env
. Each
time a RETURNCON pops a case pointer, stack S
env
is
also pop-ed. So, enriched ISTG configurations have the form
(is, , , S
I
, S
env
, node,
I
, cs).
Definition 6. A STG-2 environment E is equivalent to an
ISTG environment defined by , , S
I
,
I
and node, denoted
E (, S
I
, ,
I
, node) if dom E dom dom and
x dom E
E x = S
I
! ( x)
if x dom
E x = node
I
! ( x)
if x dom
Definition 7. A STG-2 stack S is equivalent to a triple
(, S
I
, S
env
) of an ISTG enriched configuration, denoted S
(, S
I
, S
env
), if
1. Whenever = (, m, 0) : , then S
I
= a
im
: S
I
and
S ( , S
I
, S
env
)
2. Whenever = (, m, n) : , n > 0, then S = a
in
: S ,
S
I
= a
in
: b
j
m-n
: S
I
and S ( , S
I
, S
env
)
3. Whenever = (, m, -1) : , then S = (alts, E) : S ,
S
I
= p
alts
: S
I
, S
env
=
alts
: S
env
, p
alts
= trAs alts
alts
,
E (
alts
, S
I
, , , ) and S (
alts
, S
I
, S
env
)
4. Whenever S = #a : S and S
I
= #a : S
I
, then S
(, S
I
, S
env
)
5. Additionally, [ ] ({}, [ ], [ ])
Definition 8. A STG-2 heap is equivalent to an ISTG
pair (
I
, cs), denoted (
I
, cs), if for all p we have [p
(lf |
x
in
, E)] if and only if
I
[p (q, ws)], cs[q is], is =
trB (lf |
x
in
) and ws = E x
i
n
.
Definition 9. A STG-2 configuration is equivalent to an
ISTG enriched stable configuration, denoted (, e, E, S)
(is, , , S
I
, S
env
, node,
I
, cs) if
1. (
I
, cs)
2. is = trE e
3. E (, S
I
, ,
I
, node)
4. S (, S
I
, S
env
)
Proposition 9. (dynamic invariant) Given a closed expression
e
0
with different bound variables and initial STG-2
and ISTG configurations, respectively ({}, e
0
, {}, [ ]) and
(trE e
0
{},
, {}, [ ], [ ], , {}, cs), where cs is the code
store generated by the whole translation of e
0
, then both machines
evolve through equivalent configurations.
Proof. By induction on the number of transitions of
both machines. Only transitions between ISTG stable configurations
are considered.
Corollary 10. The translation given in Section 5.1 is
correct.
DIFFERENCES WITH THE ACTUAL STG MACHINE
There are some differences between the machine translation
presented in Section 5 and the actual code generated
by GHC. Some are just omissions, other are non-substantial
differences and some other are deeper ones.
In the first group it is the treatment of basic values, very
elaborated in GHC (see for example [3]) and completely ignored
here. We have preferred to concentrate our study in
the functional kernel of the machine but, of course, a formal
reasoning about this aspect is a clear continuation of our
work.
In the second group it is the optimization of update implementation
. In GHC, updates can be done either by indirection
or by closure creation, depending on whether there
is enough space or not in the old closure to do update in
place. This implies to keep closure size information somewhere
. GHC keeps it in the so called info table, a static part
shared by all closures created from the same bind. This
table forces an additional indirection to access the closure
code. Our model has simplified these aspects. We understand
also that stack restructuring, as the one performed by
our SLIDE instruction, is not implemented in this way by
GHC. Apparently, stubbing of non used stack positions is
done instead. An efficiency study could show which implementation
is better. The cost of our SLIDE instruction is
in O(n), being n the number of arguments to be preserved
in the stack when the current environment is discarded.
Perhaps the deeper difference between our derived machine
and the actual STG is our insistence in that FUN applications
should have the form e x
in
instead of x x
in
as it is
the case in the STG language. This decision is not justified
in the GHC papers and perhaps could have a noticeable negative
impact in performance. In a lazy language, the functional
part of an application should be eagerly evaluated,
but GHC does it lazily. This implies constructing a number
of closures that will be immediately entered (and perhaps
updated afterwards), with a corresponding additional cost
both in space and time. Our translation avoids creating and
entering these closures. If the counter-argument were having
the possibility of sharing functional expressions, this is
always available in FUN since a variable is a particular case
of an expression. What we claim is that the normalization
process in the Core-to-STG translation should not introduce
unneeded sharing.
CONCLUSIONS
We have presented a stepwise derivation of a (well known)
abstract machine starting from Sestoft's operational semantics
, going through several intermediate machines and arriving
at an imperative machine very close to a conventional imperative
language. This strategy of adding a small amount
of detail in each step has allowed us both to provide insight
on fundamental decisions underlying the STG design
and, perhaps more importantly, to be able to show the correctness
of each refinement with respect to the previous one
and, consequently, the correctness of the whole derivation.
To our knowledge, this is the first time that formal translation
schemes and a formal proof of correctness of the STG
to C translation has been done.
Our previous work [2] followed a different path: it showed
the soundness and completeness of a STG-like machine called
STG-1S (laying somewhere between machines STG-2 and
ISTG of this paper) with respect to Sestoft's semantics.
The technique used was also different: a bisimulation between
the STG-1S machine and Sestoft's MARK-2 machine
was proved. We got the inspiration for the strategy followed
here from Mountjoy [5] and in Section 2 we have explained
the differences between his and our work. The previous machines
of all these works, including STG-1 and STG-2 of this
paper, are very abstract in the sense that they deal directly
with functional expressions. The new machine ISTG introduced
here is a really low level machine dealing with raw
imperative instructions and pointers. Two contributions of
this paper have been to bridge this big gap by means of
the translations schemes and the proof of correctness of this
translation.
Our experience is that formal reasoning about even well
known products always reveals new details, give new insight,
makes good decisions more solid and provides trust in the
behavior of our programs.
REFERENCES
[1] A. at URL: http://www.haskell.org/ghc/.
[2] A. Encina and R. Peña. Proving the Correctness of
the STG Machine. In Implementation of Functional
Languages, IFL'01. Selected Papers. LNCS 2312,
pages 88-104. Springer-Verlag, 2002.
[3] S. P. Jones and J. Launchbury. Unboxed values as first
class citizens in a non-strict functional language.
Conference on Functional Programming Languages
and Computer Architecture FPCA'91, LNCS 523,
September 1991.
[4] J. Launchbury. A Natural Semantics for Lazy
Evaluation. In Proc. Conference on Principles of
Programming Languages, POPL'93. ACM, 1993.
[5] J. Mountjoy. The Spineless Tagless G-machine,
Naturally. In Third International Conference on
Functional Programming, ICFP'98, Baltimore. ACM
Press, 1998.
[6] S. L. Peyton Jones. Implementing Lazy Functional
Languages on Stock Hardware: the Spineless Tagless
G-machine, Version 2.5. Journal of Functional
Programming, 2(2):127-202, April 1992.
[7] S. L. Peyton Jones, C. V. Hall, K. Hammond, W. D.
Partain, and P. L. Wadler. The Glasgow Haskell
Compiler: A Technical Overview. In Joint Framework
for Inf. Technology, Keele, pages 249-257, 1993.
[8] S. L. Peyton Jones and J. Hughes, editors. Report on
the Programming Language Haskell 98. URL
http://www.haskell.org, February 1999.
[9] S. L. Peyton Jones, S. Marlow, and A. Reid. The STG
Runtime System (revised).
http://www.haskell.org/ghc/docs, 1999.
[10] P. Sestoft. Deriving a Lazy Abstract Machine. Journal
of Functional Programming, 7(3):231-264, May 1997.
| Lazy evaluation;operational semantics;Functional programming;STG machine;compiler verification;Closures;Translation scheme;Abstract machine;Stepwise derivation;Operational semantics;abstract machines;Haskell compiler |
96 | Building Bridges for Web Query Classification | Web query classification (QC) aims to classify Web users' queries, which are often short and ambiguous, into a set of target categories. QC has many applications including page ranking in Web search, targeted advertisement in response to queries, and personalization. In this paper, we present a novel approach for QC that outperforms the winning solution of the ACM KDDCUP 2005 competition, whose objective is to classify 800,000 real user queries. In our approach, we first build a bridging classifier on an intermediate taxonomy in an offline mode. This classifier is then used in an online mode to map user queries to the target categories via the above intermediate taxonomy. A major innovation is that by leveraging the similarity distribution over the intermediate taxonomy, we do not need to retrain a new classifier for each new set of target categories, and therefore the bridging classifier needs to be trained only once. In addition, we introduce category selection as a new method for narrowing down the scope of the intermediate taxonomy based on which we classify the queries. Category selection can improve both efficiency and effectiveness of the online classification. By combining our algorithm with the winning solution of KDDCUP 2005, we made an improvement by 9.7% and 3.8% in terms of precision and F1 respectively compared with the best results of KDDCUP 2005. | INTRODUCTION
With exponentially increasing information becoming available
on the Internet, Web search has become an indispensable
tool for Web users to gain desired information. Typically,
Web users submit a short Web query consisting of a
few words to search engines. Because these queries are short
and ambiguous, how to interpret the queries in terms of a
set of target categories has become a major research issue.
In this paper, we call the problem of generating a ranked list
of target categories from user queries the query classification
problem, or QC for short.
The importance of QC is underscored by many services
provided by Web search. A direct application is to provide
better search result pages for users with interests in different
categories. For example, the users issuing a Web query
"apple" might expect to see Web pages related to the fruit
apple, or they may prefer to see products or news related to
the computer company. Online advertisement services can
rely on the QC results to promote different products more
accurately. Search result pages can be grouped according
to the categories predicted by a QC algorithm. However,
the computation of QC is non-trivial, since the queries are
usually short in length, ambiguous and noisy (e.g., wrong
spelling). Direct matching between queries and target categories
often produces no result. In addition, the target categories
can often change, depending on the new Web contents
as the Web evolves, and as the intended services change as
well.
KDDCUP 2005 ( http://www.acm.org/sigkdd/kddcup )
highlighted the interests in QC, where 800,000 real Web
queries are to be classified into 67 target categories. Each
query can belong to more than one target category. For this
task, there is no training data provided. As an example of
a QC task, given the query "apple", it should be classified
into "Computers
\Hardware; Living\Food&Cooking".
The winning solution in the KDDCUP 2005 competition,
which won on all three evaluation metrics (precision, F1 and
creativity), relied on an innovative method to map queries
to target categories.
By this method, an input query is
first mapped to an intermediate category, and then a second
mapping is applied to map the query from the intermediate
category to the target category. However, we note that this
method suffers from two potential problems. First, the classifier
for the second mapping function needs to be trained
whenever the target category structure changes. Since in
real applications, the target categories can change depending
on the needs of the service providers, as well as the
distribution of the Web contents, this solution is not flexible
enough. What would be better is to train the classifiers once
and then use them in future QC tasks, even when the target
categories are different. Second, the winners used the Open
Directory Project (ODP) taxonomy as the intermediate taxonomy
. Since the ODP contains more than 590,000 different
categories, it is costly to handle all mapping functions. It is
better to select a portion of the most relevant parts of the
intermediate categories.
In this paper, we introduce a novel QC algorithm that
solves the above two problems. In particular, we first build
a bridging classifier on an intermediate taxonomy in an offline
mode. This classifier is then used in online mode to map
users' queries to the target categories via the above intermediate
taxonomy. Therefore, we do not have to build the
classifier each time the target categories change. In addition,
we propose a category-selection method to select the categories
in the intermediate taxonomy so that the effectiveness
and efficiency of the online classification can be improved.
The KDDCUP 2005 winning solution included two kinds
of base classifiers and two ensemble classifiers of them. By
comparing our new method with any base classifier in the
winner's solution for the KDDCUP 2005 competition, we
found that our new method can improve the performance
by more than 10.4% and 7.1% in terms of precision and
F1 respectively, while our method does not require extra
resources such as WordNet [8]. The proposed method can
even achieve a similar performance to the winner's ensemble
classifiers that achieved the best performance in the KDDCUP
2005 competition. Furthermore, by combining our
method with the base classifiers in the winner's solution,
we can improve the classification results by 9.7% in terms
of precision and 3.8% in terms of F1 as compared to the
winner's results.
The rest of the paper is organized as follows. We define
the query classification problem in Section 2. Section 3
presents the methods of enriching queries and target categories
. In Section 4, we briefly introduce the previous methods
and put forward a new method. In Section 5, we compare
the approaches empirically on the tasks of KDDCUP
2005 competition. We list some related works in Section 6.
Section 7 gives the conclusion of the paper and some possible
future research issues.
PROBLEM DEFINITION
The query classification problem is not as well-formed as
other classification problems such as text classification. The
difficulties include short and ambiguous queries and the lack
of training data. In this section, inspired by KDDCUP 2005,
we give a stringent definition of the QC problem.
Query Classification:
* The aim of query classification is to classify a user query Q_i into a ranked list of n categories C_{i1}, C_{i2}, ..., C_{in}, among a set of N categories {C_1, C_2, ..., C_N}. Among the output, C_{i1} is ranked higher than C_{i2}, C_{i2} is higher than C_{i3}, and so on.
* The queries are collected from real search engines, submitted by Web users. The meaning and intention of the queries are subjective.
* The target categories are a tree with each node representing
a category. The semantic meaning of each
category is defined by the labels along the path from
the root to the corresponding node.
In addition, the training data must be found online because
, in general, labeled training data for query classification
are very difficult to obtain.
Figure 1 illustrates the target taxonomy of the KDDCUP
2005 competition.
Because there are no data provided
to define the content and the semantics of a category,
as in conventional classification problems, a new solution
needs to be found. As mentioned above, an added difficulty
is that the target taxonomy may change frequently. The
queries in this problem are from the MSN search engine
(http://search.msn.com). Several examples of the queries
are shown in Table 1. Since a query usually contains very
few words, the sparseness of queries becomes a serious problem
as compared to other text classification problems.
Table 1: Examples of queries.
1967 shelby mustang
actress hildegarde
a & r management" property management Maryland
netconfig.exe
[Figure 1: An Example of the Target Taxonomy. The target taxonomy is a tree whose nodes include categories such as Computers, Living, Sports, and Tools & Hardware, with child categories such as Hardware, Software, and Other.]
QUERY AND CATEGORY ENRICHMENT
In this section, we discuss the approaches for enriching
queries and categories, which are critical for the query classification
task.
3.1
Enrichment through Search Engines
Since queries and categories usually contain only a few
words in the QC problem, we need to expand them to obtain
richer representations. One straightforward method is to
submit them to search engines to get the related pages (for
categories, we can take their labels as the queries and submit
them to search engines, such as "Computers
\Hardware" in
Figure 1).
The returned Web pages from search engines
provide the context of the queries and the target categories,
which can help determine the meanings/semantics of the
queries and categories.
Given the search results for a query or category, we need
to decide what features should be extracted from the pages
to construct the representation. Three kinds of features are
considered in this paper: the title of a page, the snippet
generated by the search engines, and the full plain text of a
page. The snippet is in fact a short query-based summary
of a Web page in which the query words occur frequently.
The full plain text is all the text in a page with the html
tags removed. Since the title of a page is usually very short
(5.2 words on average for our data set), we combine it with
other kinds of features together. These features are studied
in our experiments.
Besides the above textual features, we can also obtain the
category information of a Web page through the directory
information from search engines. For example, Google's "Di-rectory
Search" can provide the labels of the returned Web
pages. Such labels will be leveraged to classify a query, as
stated in Section 4.1.
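In practice, the enrichment step amounts to turning the titles and snippets of the top-n result pages into a bag-of-words representation of the query or category. The minimal Python sketch below illustrates this step; it assumes the search results have already been fetched, and names such as enrich and results are illustrative rather than taken from the paper.

    from collections import Counter
    import re

    def tokenize(text):
        # Lowercase and split on non-alphanumeric characters.
        return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

    def enrich(results, top_n=40):
        # Combine the title and snippet of each of the top-n result pages
        # into a single term-frequency representation of the query.
        bag = Counter()
        for title, snippet in results[:top_n]:
            bag.update(tokenize(title))
            bag.update(tokenize(snippet))
        return bag

    # Two hypothetical result pages for the query "apple".
    results = [("Apple - Mac and iPhone", "News about the computer company."),
               ("Apple fruit facts", "Apples are a popular fruit.")]
    print(enrich(results).most_common(5))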
3.2
Word Matching Between Categories
The query classification problem can be converted to a traditional
text classification problem by finding some training
data online for each category in the target taxonomy. Our
method of collecting the training data is by finding documents
in certain intermediate taxonomies that are found
online. To do so, we need to construct mapping functions
between the intermediate categories and the target categories
. Given a certain category in an intermediate taxonomy
, we say that it is directly mapped to a target category
if and only if the following condition is satisfied: one or
more terms in each node along the path in the target category
appear along the path corresponding to the matched
intermediate category. For example, the intermediate category
"Computers
\Hardware \Storage" is directly mapped to
the target category "Computers
\Hardware" since the words
"Computers" and "Hardware" both appear along the path
Computers
Hardware Storage as shown in Figure 2.
We call this matching method direct matching.
After constructing the above mapping functions by exact
word matching, we may still miss a large number of
mappings. To obtain a more complete mapping function,
we expand the words in the labels of the target taxonomy
through a thesaurus such as WordNet [8]. For example, the keyword "Hardware" is extended to "Hardware & Devices & Equipments". Then an intermediate category such as "Computers\Devices" can now be mapped to "Computers\Hardware". This matching method is called extended matching in this paper.
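The following small Python sketch illustrates the direct and extended matching just described, under the simplifying assumptions that a category is a backslash-separated path of labels and that the thesaurus is given as a plain expansion dictionary (a stand-in for WordNet); all names are illustrative.

    # Direct / extended matching between an intermediate and a target category.
    def path_terms(path):
        # "Computers\Hardware\Storage" -> [{"computers"}, {"hardware"}, {"storage"}]
        return [set(node.lower().split()) for node in path.split("\\")]

    def is_mapped(intermediate, target, expansions=None):
        expansions = expansions or {}
        inter_terms = set().union(*path_terms(intermediate))
        for node in path_terms(target):
            # Extend each target term with its synonyms (extended matching).
            extended = set(node)
            for term in node:
                extended.update(expansions.get(term, []))
            if not (extended & inter_terms):
                return False
        return True

    # Direct matching: "computers" and "hardware" occur along the intermediate path.
    print(is_mapped("Computers\\Hardware\\Storage", "Computers\\Hardware"))
    # Extended matching: "devices" is treated as a synonym of "hardware".
    print(is_mapped("Computers\\Devices", "Computers\\Hardware",
                    expansions={"hardware": {"devices", "equipments"}}))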
[Figure 2: Illustration of the matching between taxonomies. (1) The intermediate taxonomy contains the path Computers → Hardware → Storage; (2) the target taxonomy contains the path Computers → Hardware.]
CLASSIFICATION APPROACHES
In this section, we first describe the state-of-the-art query
classification methods. Then we describe our new bridging
classifier to address the disadvantages of the existing methods.
4.1
Classification by Exact Matching
As described in Section 3.1, a query can be expanded
through search engines which results in a list of related Web
pages together with their categories from an intermediate
taxonomy. A straightforward approach to QC is to leverage
the categories by exact matching. We denote the categories in the intermediate taxonomy and the target taxonomy as C^I and C^T, respectively. For each category in C^I, we can detect whether it is mapped to any category in C^T according to the matching approaches given in Section 3.2. After that, the most frequent target categories to which the returned intermediate categories have been successfully mapped are regarded as the classification result. That is:

c^* = \arg\max_{C_j^T} \sum_{i=1}^{n} I(C^I(i) \text{ is mapped to } C_j^T)        (1)

In Equation (1), I(\cdot) is the indicator function whose value is 1 when its parameter is true and 0 otherwise. C^I(i) is the category in the intermediate taxonomy for the i-th page returned by the search engine. n result pages are used for query classification, and the parameter n is studied in our experiments.
It is not hard to imagine that the exact matching approach
tends to produce classification results with high precision
but low recall. It produces high precision because this
approach relies on the Web pages which are associated with
the manually annotated category information. It produces
low recall because many search result pages have no intermediate
categories. Moreover, the exact matching approach
cannot find all the mappings from the existing intermediate
taxonomy to the target taxonomy which also results in low
recall.
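A minimal Python sketch of the exact matching classifier of Equation (1) is given below. It assumes the intermediate category of each returned page and the intermediate-to-target mapping have already been obtained; both inputs are hypothetical placeholders.

    from collections import Counter

    def exact_match_classify(page_categories, mapping, top_k=5):
        # page_categories: intermediate category of each returned result page.
        # mapping: intermediate category -> set of target categories it maps to.
        votes = Counter()
        for c_int in page_categories:
            for c_tgt in mapping.get(c_int, ()):
                votes[c_tgt] += 1          # I(C^I(i) is mapped to C^T_j)
        return [c for c, _ in votes.most_common(top_k)]

    mapping = {"Computers\\Hardware\\Storage": {"Computers\\Hardware"},
               "Home\\Cooking\\Fruits": {"Living\\Food & Cooking"}}
    pages = ["Computers\\Hardware\\Storage", "Home\\Cooking\\Fruits",
             "Computers\\Hardware\\Storage"]
    print(exact_match_classify(pages, mapping))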
4.2
Classification by SVM
To alleviate the low-recall problem of the exact matching
method, some statistical classifiers can be used for QC. In
the KDDCUP 2005 winning solution, Support Vector Machine
(SVM) was used as a base classifier. Query classification
with SVM consists of the following steps: 1) construct
the training data for the target categories based on mapping
functions between categories, as discussed in Section
3.2. If an intermediate category C^I is mapped to a target category C^T, then the Web pages in C^I are mapped into C^T; 2) train SVM classifiers for the target categories; 3) for
each Web query to be classified, use search engines to get its
enriched features as discussed in Section 3.1 and classify the
query using the SVM classifiers. The advantage of this QC
method is that it can improve the recall of the classification
result. For example, assume two intermediate categories, C_1^I and C_2^I, are semantically related to a target category C_1^T. C_1^I can be matched with C_1^T through word matching but C_2^I cannot. For a query to be classified, if a search engine only returns pages of C_2^I, this query cannot be classified into the target category if the exact matching classification method is used. However, if the query is classified by a statistical classifier, it can still be assigned the target category C_1^T, as the classifier is trained using pages of C_1^I, which may also contain terms of C_2^I because the two intermediate categories are similar in topic.
Although statistical classifiers can help increase the recall
of the exact matching approach, they still need the exact
matching for collecting the training data. What is more, if
the target taxonomy changes, we need to collect the training
data by exact matching and train statistical classifiers again.
In the following sections, we develop a new method to solve
the above problems.
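As a rough illustration of the SVM-based pipeline, the Python sketch below trains a classifier on pages that were mapped from intermediate to target categories and then classifies the enriched query text. The paper does not prescribe a particular SVM implementation; scikit-learn is used here only for illustration, the training documents are hypothetical, and the ranked multi-label setting is simplified to a single predicted label.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Hypothetical training pages already mapped to target categories.
    train_docs = ["hard drive ssd motherboard review",
                  "apple pie recipe baking fruit dessert"]
    train_labels = ["Computers\\Hardware", "Living\\Food & Cooking"]

    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(train_docs, train_labels)

    # The query "apple" is represented by the snippets of its result pages.
    enriched_query = "apple mac hardware laptop store"
    print(clf.predict([enriched_query])[0])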
4.3
Our New Method: Classifiers by Bridges
4.3.1
Taxonomy-Bridging Algorithm
We now describe our new QC approach called taxonomy-bridging
classifier, or bridging classifier in short, by which
we connect the target taxonomy and queries by taking an
intermediate taxonomy as a bridge. The idea is illustrated
in Figure 3, where two vertical lines separate the space into
three parts. The square in the left part denotes the queries
to be classified; the tree in the right part represents the
target taxonomy; the tree in the middle part is an existing
intermediate taxonomy. The thickness of the dotted lines
reflects the similarity relationship between two nodes. For example, we can see that the relationship between C_i^T and C_j^I is much stronger than that between C_i^T and C_k^I. Given a category C_i^T in the target taxonomy and a query q_k to be classified, we can judge the similarity between them by the distributions of their relationships to the categories in the intermediate taxonomy. By defining the relationship and similarity under a probabilistic framework, the above idea can be expressed by Equation (2).
[Figure 3: Illustration of the Bridging Classifier. The queries Q (left), the intermediate taxonomy C^I (middle), and the target taxonomy C^T (right) are connected by dotted lines whose thickness indicates the strength of the relationship between a query q_k, an intermediate category C_j^I, and a target category C_i^T.]
p(C_i^T | q) = \sum_{C_j^I} p(C_i^T, C_j^I | q)
             = \sum_{C_j^I} p(C_i^T | C_j^I, q) \, p(C_j^I | q)
             \approx \sum_{C_j^I} p(C_i^T | C_j^I) \, p(C_j^I | q)
             = \sum_{C_j^I} p(C_i^T | C_j^I) \, \frac{p(q | C_j^I) \, p(C_j^I)}{p(q)}
             \propto \sum_{C_j^I} p(C_i^T | C_j^I) \, p(q | C_j^I) \, p(C_j^I)        (2)
In Equation (2), p(C_i^T | q) denotes the conditional probability of C_i^T given q. Similarly, p(C_i^T | C_j^I) and p(q | C_j^I) denote the probability of C_i^T and q given C_j^I, respectively. p(C_j^I) is the prior probability of C_j^I, which can be estimated from the Web pages in C^I. If C_i^T is represented by a set of words (w_1, w_2, \ldots, w_n) where each word w_k appears n_k times, p(C_i^T | C_j^I) can be calculated through Equation (3):

p(C_i^T | C_j^I) = \prod_{k=1}^{n} p(w_k | C_j^I)^{n_k}        (3)

where p(w_k | C_j^I) stands for the probability that the word w_k occurs in class C_j^I, which can be estimated by the principle of maximum likelihood. p(q | C_j^I) can be calculated in the same way as p(C_i^T | C_j^I).
A query q can be classified according to Equation (4):

c^* = \arg\max_{C_i^T} p(C_i^T | q)        (4)
To make our bridging classifier easier to understand, we can explain it in another way by rewriting Equation (2) as Equation (5):

p(C_i^T | q) = \sum_{C_j^I} p(C_i^T, C_j^I | q)
             = \sum_{C_j^I} p(C_i^T | C_j^I, q) \, p(C_j^I | q)
             \approx \sum_{C_j^I} p(C_i^T | C_j^I) \, p(C_j^I | q)
             = \sum_{C_j^I} \frac{p(C_j^I | C_i^T) \, p(C_i^T)}{p(C_j^I)} \, p(C_j^I | q)
             = p(C_i^T) \sum_{C_j^I} \frac{p(C_j^I | C_i^T) \, p(C_j^I | q)}{p(C_j^I)}        (5)
Let us consider the numerator on the right side of Equation (5). Given a query q and C_i^T, p(C_j^I | C_i^T) and p(C_j^I | q) are fixed, and \sum_{C_j^I} p(C_j^I | C_i^T) = 1, \sum_{C_j^I} p(C_j^I | q) = 1. p(C_j^I | C_i^T) and p(C_j^I | q) represent the probability that C_i^T and q belong to C_j^I. It is easy to prove that p(C_i^T | q) tends to be larger when q and C_i^T tend to belong to the same category in the intermediate taxonomy. The denominator p(C_j^I) reflects the size of category C_j^I and acts as a weighting factor. It guarantees that the higher the probability that q and C_i^T belong to a smaller category (where size refers to the number of nodes underneath the category in the tree) in the intermediate taxonomy, the higher the probability that q belongs to C_i^T. Such an observation agrees with our intuition, since a larger category tends to contain more sub-topics while a smaller category contains fewer sub-topics. Thus we can say with higher confidence that q and C_i^T are related to the same sub-topic when they belong to the same smaller category.
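The Python sketch below illustrates how Equations (2)-(4) can be turned into code, assuming the word distributions p(w | C_j^I) and the priors p(C_j^I) have already been estimated from the pages of the intermediate taxonomy. The tiny dictionaries are placeholders, and a realistic implementation would compute the products of Equation (3) in log space to avoid numeric underflow.

    from collections import Counter

    word_probs = {  # p(w | C^I_j), already smoothed; illustrative values only
        "Comp/Storage": {"disk": 0.4, "drive": 0.3, "apple": 0.1, "fruit": 0.01},
        "Home/Fruits":  {"fruit": 0.4, "apple": 0.3, "disk": 0.01, "drive": 0.01},
    }
    priors = {"Comp/Storage": 0.5, "Home/Fruits": 0.5}   # p(C^I_j)

    def likelihood(words, cat):
        # Equation (3): product over the words of p(w | C^I_j)^{n_w}.
        prob = 1.0
        for w, n in words.items():
            prob *= word_probs[cat].get(w, 1e-6) ** n
        return prob

    def bridge_score(query_words, target_words):
        # Equation (2): sum over the intermediate categories.
        return sum(likelihood(target_words, c) * likelihood(query_words, c) * priors[c]
                   for c in word_probs)

    query = Counter({"apple": 1})
    targets = {"Computers\\Hardware": Counter({"disk": 1, "drive": 1}),
               "Living\\Food & Cooking": Counter({"fruit": 1, "apple": 1})}
    # Equation (4): pick the target category with the highest score.
    print(max(targets, key=lambda t: bridge_score(query, targets[t])))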
4.3.2
Category Selection
The intermediate taxonomy may contain an enormous number of categories, and some of them are irrelevant to the query classification task for the predefined target taxonomy. Therefore, to reduce the computational complexity, we should perform "Category Selection" in a similar sense to "Feature Selection" in text classification [15]. Two approaches
are employed in this paper to evaluate the goodness
of a category in the intermediate taxonomy. After sorting
the categories according to the scores calculated by the following
two approaches, category selection can be fulfilled by
selecting the top n categories.
Total Probability (TP): this method gives a score to each category in the intermediate taxonomy according to its probability of generating the categories in the target taxonomy, as shown in Equation (6):

Score(C_j^I) = \sum_{C_i^T} P(C_i^T | C_j^I)        (6)
Mutual Information (MI): MI is a criterion commonly used in statistical language modeling of word associations and other related applications [15]. Given a word t and a category c, the mutual information between t and c is defined as:

MI(t, c) = \log \frac{P(t \wedge c)}{P(t) \, P(c)}        (7)

By considering the two-way contingency table for t and c, where A is the number of times t and c co-occur, B is the number of times that t occurs without c, C is the number of times c occurs without t, and N is the total number of documents, the mutual information between t and c can be estimated using:

MI(t, c) \approx \log \frac{A \cdot N}{(A + C)(A + B)}        (8)
Since the name of a category in the target taxonomy usually contains more than one term, we define the "mutual information" between a category in the intermediate taxonomy C_j^I and a category in the target taxonomy C_i^T as:

MI(C_i^T, C_j^I) = \frac{1}{|C_i^T|} \sum_{t \in C_i^T} MI(t, C_j^I)        (9)

where |C_i^T| is the number of terms in the name of C_i^T.
To measure the goodness of C_j^I in a global category selection, we combine the category-specific scores of C_j^I by:

MI_{avg}(C_j^I) = \sum_{C_i^T} MI(C_i^T, C_j^I)        (10)
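A small Python sketch of the MI-based category selection (Equations (8)-(10)) follows, assuming simple document-level co-occurrence counts over the intermediate-taxonomy pages; the toy corpus and category names are illustrative.

    import math

    def mi_term_category(term, cat, docs):
        # docs: list of (set_of_terms, intermediate_category) pairs.
        N = len(docs)
        A = sum(1 for t, c in docs if term in t and c == cat)      # t and c co-occur
        B = sum(1 for t, c in docs if term in t and c != cat)      # t without c
        C = sum(1 for t, c in docs if term not in t and c == cat)  # c without t
        if A == 0:
            return 0.0
        return math.log(A * N / ((A + C) * (A + B)))               # Equation (8)

    def mi_category_pair(target_name, cat, docs):
        terms = target_name.lower().split()
        return sum(mi_term_category(t, cat, docs) for t in terms) / len(terms)  # Eq. (9)

    def mi_avg(cat, target_names, docs):
        return sum(mi_category_pair(name, cat, docs) for name in target_names)  # Eq. (10)

    docs = [({"disk", "drive"}, "Comp/Storage"), ({"apple", "fruit"}, "Home/Fruits"),
            ({"hardware", "disk"}, "Comp/Storage")]
    targets = ["Computers Hardware", "Living Food"]
    cats = {"Comp/Storage", "Home/Fruits"}
    # Rank the intermediate categories and keep only the top-scoring ones.
    print(sorted(cats, key=lambda c: mi_avg(c, targets, docs), reverse=True))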
4.3.3
Discussions
As we can see, in the bridging classifier, we do not need
to train a classifier function between an intermediate taxonomy
and the target taxonomy. We only need to build the
classifiers on the intermediate taxonomy once and it can be
applied to any target taxonomy. The framework can be extended
in two directions. One is to include some training
data for each target category. With the training data, we
do not have to treat the labels of the target categories as
queries and retrieve related Web pages through search engines
to represent the categories. We can extract features
from the training data directly. The second extension is to
use other sophisticated models such as the n-gram model [9]
or SVM [10] for computing p(C_i^T | C_j^I) and p(q | C_j^I).
EXPERIMENTS
In this section, we first introduce the data set and the
evaluation metrics. Then we present the experiment results
and give some discussions.
5.1
Data Set and Evaluation Metrics
5.1.1
Data sets
In this paper, we use the data sets from the KDDCUP
2005 competition, which is available on the Web (http://www.acm.org/sigs/sigkdd/kdd2005/kddcup.html). One of the
data sets contains 111 sample queries together with the category
information. These samples are used to exemplify the
format of the queries by the organizer. However, since the
category information of these queries is truthful, they can
serve as the validation data. Another data set contains 800
queries with category information labeled by three human
labelers. In fact, the organizers provided 800,000 queries in
total which are selected from the MSN search logs for testing
the submitted solutions. Since manually labeling all the
800,000 queries is too expensive and time consuming, the
organizers randomly selected 800 queries for evaluation.
We denote the three human query-labelers (and sometimes
the dataset labeled by them if no confusion is caused)
as L1, L2 and L3, respectively. Each query has at most
five labels in ranked order. Table 2 shows the average precision
and F1 score values of each labeler when evaluated
against the other two labelers. The average values among
the three labelers are around 0.50 which indicates that the
query classification problem is not an easy task even for human
labelers. In this paper, all the experiments use only
the 800 queries, except in the ensemble classifiers, where we
use the 111 sample queries to tune the weight of each single
classifier.
Table 2: The Average Scores of Each Labeler When Evaluated Against the Other Two Labelers
        L1      L2      L3      Average
F1      0.538   0.477   0.512   0.509
Pre     0.501   0.613   0.463   0.526
The existing intermediate taxonomy used in the paper
is from Open Directory Project (ODP, http://dmoz.org/).
We crawled 1,546,441 Web pages from ODP which spanned
over 172,565 categories. The categories have a hierarchical
structure as shown in Figure 2(1).
We can consider the
hierarchy at different levels. Table 3 shows the number of
categories on different levels. The first row counts all the
categories while the second row counts only the categories
containing more than 10 Web pages. Table 4 summarizes the
statistics of Web page numbers in the categories with more
than 10 documents on different levels. As we can see, when
we move down to the lower levels along the hierarchy, more
categories appear while each category contains fewer Web
pages. In order to remove noise, we consider the categories
with more than 10 pages in this paper.
Table 3: Number of Categories on Different Levels
             Top 2   Top 3   Top 4    Top 5    Top All
#doc > 0     435     5,300   24,315   56,228   172,565
#doc > 10    399     4,011   13,541   23,989   39,250
Table 4: Statistics of the Numbers of Documents in the Categories on Different Levels
            Top 2     Top 3     Top 4    Top 5    Top All
Largest     211,192   153,382   84,455   25,053   920
Smallest    11        11        11       11       11
Mean        4,044.0   400.8     115.6    61.6     29.1
5.1.2
Evaluation Measurements
In KDDCUP 2005, precision, performance and creativity
are the three measures to evaluate the submitted solutions.
"creativity" refers to the novelty of the solutions judged by
experts. The other two measures are defined according to
the standard measures to evaluate the performance of classification
, that is, precision, recall and F1-measure [12]. Pre-135
cision (P) is the proportion of actual positive class members
returned by the system among all predicted positive class
members returned by the system. Recall (R) is the proportion
of predicted positive members among all actual positive
class members in the data. F1 is the harmonic mean of precision
and recall as shown below:
F1 = 2 \cdot P \cdot R / (P + R)        (11)
"performance" adopted by KDDCUP 2005 is in fact F1.
Therefore, we denote it by F1 instead of "performance" for
simplicity.
As 3 labelers were asked to label the queries, the results
reported are averaged over the values evaluated on each of
them.
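For concreteness, the short Python sketch below computes precision, recall and F1 of a predicted category set against each labeler and averages the scores over the three labelers; the per-query formulation and the example labels are a simplification for illustration.

    def prf1(predicted, truth):
        predicted, truth = set(predicted), set(truth)
        tp = len(predicted & truth)
        p = tp / len(predicted) if predicted else 0.0
        r = tp / len(truth) if truth else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0     # Equation (11)
        return p, r, f1

    labelers = [["Computers\\Hardware"],                                  # L1
                ["Computers\\Hardware", "Living\\Food & Cooking"],        # L2
                ["Living\\Food & Cooking"]]                               # L3
    predicted = ["Computers\\Hardware", "Living\\Food & Cooking"]
    scores = [prf1(predicted, truth) for truth in labelers]
    avg = [sum(s[i] for s in scores) / len(scores) for i in range(3)]
    print([round(v, 3) for v in avg])   # averaged precision, recall, F1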
5.2
Results and Analysis
5.2.1
Performance of Exact matching and SVM
In this section, we study the performance of the two methods
which tightly depend on word matching: exact matching
and SVM, as well as the effect of query and category
expansion. Table 5 shows the results of the category expansion
through intermediate taxonomy by word matching,
that is the results of collecting training data for the target
taxonomy. Each element in the table represents the number
of documents collected for the target categories. The first
row contains the results by direct matching while the second
row contains the results after expanding the category
names through extended matching. We can see that after
extending the names of the target categories, the number
of documents collected for the target categories increases.
We expect that the expansion with the help of WordNet
should provide more documents to reflect the semantics of
the target categories which is verified by Table 6.
Table 5: Number of Pages Collected for Training under Different Category Expansion Methods
                     Min   Max       Median   Mean
Direct Matching      4     126,397   2,389    14,646
Extended Matching    22    227,690   6,815    21,295
Table 6 presents the result comparisons of the exact matching
method and SVM. We enrich the query by retrieving the
relevant pages through Google (http://www.google.com). The
top n returned pages are used to represent the query where
n varies from 20 to 80, with the step size of 20.
Two
approaches are used to extract features from the returned
pages. One is to extract the snippet of the returned pages
and the other is to extract all the text in the Web pages except
the HTML tags. The Web pages' titles will be added to
both of these two kinds of features. The column "0" means
that we use only the terms in the query without enrichment.
In our experiments, we expand the target categories through
the ODP taxonomy; that is, we collect the training data
for the target categories from ODP. When constructing the
mapping relationship as shown in Section 3.2, if we use direct
matching, we denote SVM and the exact matching method
with "SVM-D" and "Extact-D" respectively. Otherwise,if
we use the extended matching method, we denote SVM and
the exact matching method with "SVM-E" and "Extact-E"
respectively. The exact matching method needs the category
list of the retrieved Web pages for each query. The
Table 6: Performance of Exact Matching and SVM
(1) Measured by F1
n                        0       20      40      60      80
Exact-D    Null                  0.251   0.249   0.247   0.246
Exact-E    Null                  0.385   0.396   0.386   0.384
SVM-D      snippet       0.205   0.288   0.292   0.291   0.289
SVM-D      full text             0.254   0.276   0.267   0.273
SVM-E      snippet       0.256   0.378   0.383   0.379   0.379
SVM-E      full text             0.316   0.340   0.327   0.336
(2) Measured by Precision
n                        0       20      40      60      80
Exact-D    Null                  0.300   0.279   0.272   0.268
Exact-E    Null                  0.403   0.405   0.389   0.383
SVM-D      snippet       0.178   0.248   0.248   0.244   0.246
SVM-D      full text             0.227   0.234   0.242   0.240
SVM-E      snippet       0.212   0.335   0.321   0.312   0.311
SVM-E      full text             0.288   0.309   0.305   0.296
category information is obtained through Google's "Direc-tory
Search" service (http://www.google.com/dirhp).
From Table 6 we can see that "Exact-E" is much better
than "Exact-D", and "SVM-E" is much better than "SVM-D"
. This indicates that the extended matching with the
help of WordNet can achieve a more proper representation
of the target category. We can also observe that "Exact-E"
performs better than "SVM-E". Another observation
is that the "snippet" representation outperforms "full text"
consistently. The reason is that the "snippet" provides a
more concise context of the query than the "full text" which
tends to introduce noise. We can also see that most of the
classifiers achieve the highest performance when the queries
are represented by the top 40 search result pages. Therefore,
in the later experiments, we use snippets of the top 40 pages
to represent queries.
5.2.2
Performance of the Bridging Classifier
As we can see in the above experiments, the thesaurus
WordNet plays an important role in both the exact matching
method and SVM since it can help expand the words in
the labels of the target categories, which can further improve
the mapping functions. However, the effect of a thesaurus
may be limited due to the following reasons: 1) there may
be no thesaurus in some fields; 2) it is hard to determine the
precise expansion of the words even with a high-quality thesaurus
, especially with the rapidly changing usage of words
on the Web. Therefore, we put forward the bridging classifier
which only relies on the intermediate taxonomies.
In order to expand a target category, we can treat its name
as a query and submit it to search engines. We use the snippet
of the top n returned pages to represent a category since
we learned from the query expansion that snippet performs
better than "full text". The parameter n varies from 20 to
100. Table 7 shows the results when "top all" categories in
the ODP taxonomy are used for bridging the queries and the
target taxonomy. The effect of different levels of the intermediate
taxonomy will be studied later. From Table 7, we can
see that the bridging classifier achieves the best performance
when n equals 60. The best F1 and precision achieved by
the bridging classifier is higher than those achieved either by
the exact matching method or SVM. The relative improvement
is more than 10.4% and 7.1% in terms of precision
and F1 respectively. The main reason for the improvement
is that the bridging classifier can make thorough use of the
finer-grained intermediate taxonomy in a probabilistic way, while the previous methods, including the exact matching method and SVM, exploit the intermediate taxonomy in a hard way when constructing the mapping function, as shown in Section 3.2.
Table 7: Performances of the Bridging Classifier with Different Representations of Target Categories
n            20      40      60      80      100
F1           0.414   0.420   0.424   0.421   0.416
Precision    0.437   0.443   0.447   0.444   0.439
Table 8: Performances of the Bridging Classifier with Different Granularity
             Top 2   Top 3   Top 4   Top 5   Top All
F1           0.267   0.285   0.312   0.352   0.424
Precision    0.270   0.291   0.339   0.368   0.447
Table 8 shows the performance of the bridging classifier
when we change the granularity of the categories in the intermediate
taxonomy. To change the granularity of the categories
, we use the categories on the top L level by varying L.
It is clear that the categories have larger granularity when
L is smaller. From Table 8, we can see that the performance
of the bridging classifier improves steadily by reducing the
granularity of categories. The reason is that categories with
large granularity may be a mixture of several target categories, which prohibits distinguishing between the target categories.
[Figure 4: Effect of category selection. F1 and precision (y-axis, roughly 0.25 to 0.50) are plotted against the number of selected intermediate categories (x-axis, 4,000 to 39,250) for the curves MI-F1, MI-Pre, TP-F1 and TP-Pre.]
However, reducing the granularity of categories in the intermediate
taxonomy will certainly increase the number of
the intermediate categories which will thus increase the computation
cost. One way to solve this problem is to do category
selection. Figure 4 shows the performance of the bridging
classifier when we select the categories from all the ODP
taxonomy through the two category selection approaches
proposed in Section 4.3.2. We can see that when the category
number is around 18,000, the performance of the bridging
classifier is comparable to, if not better than, the previous
approaches, including the exact matching method and
SVM. MI works better than TP in that MI can not only
measure the relevance between the categories in the target
taxonomy and those in the intermediate taxonomy, but also
favors the categories which are more powerful to distinguish
the categories in the target taxonomy. However, TP only
cares about the merit of relevance.
5.2.3
Ensemble of Classifiers
The winner of the KDDCUP 2005 competition found that
the best result was achieved by combining the exact matching
method and SVM. In the winning solution, besides the
exact matching method on Google's directory search, two
other exact matching methods are developed using LookS-mart
(http://www.looksmart.com) and a search engine based
on Lemur (http://www.lemurproject.org) and their crawled
Web pages from ODP [11]. Two classifier-combination strategies
are used, with one aiming at higher precision (denoted
by EV, where 111 samples are used as the validation data to
tune the weight of each base classifier) and the other aiming
at higher F1 (denoted by EN in which the validation
data set is ignored). EV assigns a weight to a classifier proportional
to the classifier's precision while EN gives equal
weights to all classifiers. We follow the same strategy to
combine our new method with the winner's methods, which
is denoted as "Exact-E"+"SVM-E"+Bridging as shown in
Table 9. The numbers in the parentheses are the relative improvement
. Note that the bridging classifier alone achieves
a similar F1 measurement to the KDDCUP 2005 winning solution
("Exact-E"+"SVM-E" with the EV combination strategy
) but improves the precision by 5.4%. From Table 9 we
can also find that the combination of the bridging classifier
and the KDDCUP 2005 winning solution can improve the
performance by 9.7% and 3.8% in terms of precision and
F1, respectively, when compared with the winning solution.
This indicates that the bridging classifier works in a different
way from the exact matching method and SVM, and they
are complementary to each other.
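The Python sketch below illustrates, in simplified form, the two combination strategies: EN gives every base classifier an equal weight, while EV weights each base classifier in proportion to its precision on the validation queries. The score-combination rule and all numbers are illustrative, not taken from the winning solution.

    def combine(rank_scores, weights):
        # rank_scores: list of {category: score} dicts, one per base classifier.
        combined = {}
        for scores, w in zip(rank_scores, weights):
            for cat, s in scores.items():
                combined[cat] = combined.get(cat, 0.0) + w * s
        return sorted(combined, key=combined.get, reverse=True)

    base_outputs = [{"Computers\\Hardware": 0.9, "Living\\Food & Cooking": 0.4},  # Exact-E
                    {"Computers\\Hardware": 0.6, "Sports\\Other": 0.5},           # SVM-E
                    {"Living\\Food & Cooking": 0.8, "Computers\\Hardware": 0.7}]  # Bridging
    validation_precision = [0.40, 0.32, 0.45]        # hypothetical, from the 111 samples
    en = combine(base_outputs, [1.0] * len(base_outputs))    # EN: equal weights
    ev = combine(base_outputs, validation_precision)         # EV: precision-weighted
    print(en[:3], ev[:3])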
Table 9: Performances of Ensemble Classifiers
                    "Exact-E"+"SVM-E"    "Exact-E"+"SVM-E"+Bridging
EV    F1            0.426                0.429 (+0.007)
      Precision     0.424                0.465 (+0.097)
EN    F1            0.444                0.461 (+0.038)
      Precision     0.414                0.430 (+0.039)
RELATED WORK
Though not much work has been done on topical query
classification, some work has been conducted on other kinds
of query classification problems. Gravano et al. classified
the Web queries by geographical locality [3] while Kang et
al. proposed to classify queries according to their functional
types [4].
Beitzel et al. [2] studied the same problem as the one we pursue in this paper, with the goal of classifying the queries according to their topic(s). They used two primary data sets
containing the queries from the AOL web search service.
These queries were manually classified into a set of 18 categories
. The main difference between our problem and that
of [2] is that we did not have training data as given input. In
fact, it is a very difficult and time consuming task to provide
enough training examples, especially when the target taxonomy
is complicated. Another potential problem related
to the training data, as pointed out in [2], is caused by the
ongoing changes in the query stream, which makes it hard to
systematically cover the space of queries. In this paper, we
just rely on the structure and category names of the target
taxonomy without training data, which is consistent with
the task of KDDCUP 2005.
KDDCUP 2005 provides a test bed for the Web query
classification problem.
There are a total of 37 solutions
from 32 teams attending the competition. As summarized
by the organizers [6], most solutions expanded the queries
through search engines or WordNet and expanded the category
by mapping between some pre-defined/existing taxonomy
to the target taxonomy. Some solutions require human
intervention in the mapping process [5, 13].
Besides classifying the queries into target taxonomy, we
can also cluster the queries to discover some hidden taxonomies
through unsupervised methods. Both Beeferman
[1] and Wen [14] used search engines' clickthrough data to
cluster the queries. The former makes no use of the actual
content of the queries and URLs, but only how they
co-occur within the clickthrough data, while the latter exploits
the usage of the content. Although the work in [1]
and [14] proved the effectiveness of the clickthrough data
for query clustering, we did not utilize them in our solution
due to the following two reasons: 1) the clickthrough data
can be quite noisy and is search engine dependent; 2) it is
difficult to obtain the clickthrough data due to privacy and
legal issues.
CONCLUSION AND FUTURE WORK
This paper presented a novel solution for classifying Web
queries into a set of target categories, where the queries are
very short and there are no training data. In our solution,
an intermediate taxonomy is used to train classifiers bridging
the queries and target categories so that there is no need
to collect the training data. Experiments on the KDDCUP
2005 data set show that the bridging classifier approach is
promising. By combining the bridging classifier with the
winning solution of KDDCUP 2005, we made a further improvement
by 9.7% and 3.8% in terms of precision and F1
respectively compared with the best results of KDDCUP
2005. In the future, we plan to extend the bridging classifier
idea to other types of query processing tasks, including
query clustering. We will also conduct research on how to
leverage a group of intermediate taxonomies for query classification.
ACKNOWLEDGMENTS
Dou Shen and Qiang Yang are supported by a grant from
NEC (NECLC05/06.EG01). We thank the anonymous reviewers
for their useful comments.
REFERENCES
[1] D. Beeferman and A. Berger. Agglomerative clustering
of a search engine query log. In KDD '00: Proceedings
of the sixth ACM SIGKDD international conference
on Knowledge discovery and data mining, pages
407–416, 2000.
[2] S. M. Beitzel, E. C. Jensen, O. Frieder, D. Grossman,
D. D. Lewis, A. Chowdhury, and A. Kolcz. Automatic
web query classification using labeled and unlabeled
training data. In SIGIR '05: Proceedings of the 28th
annual international ACM SIGIR conference on
Research and development in information retrieval,
pages 581–582, 2005.
[3] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein.
Categorizing web queries according to geographical
locality. In CIKM '03: Proceedings of the twelfth
international conference on Information and
knowledge management, pages 325–333, 2003.
[4] I.-H. Kang and G. Kim. Query type classification for
web document retrieval. In SIGIR '03: Proceedings of
the 26th annual international ACM SIGIR conference
on Research and development in informaion retrieval,
pages 64–71, 2003.
[5] Z. T. Kardkovács, D. Tikk, and Z. Bánsághi. The ferrety algorithm for the KDD Cup 2005 problem. SIGKDD Explor. Newsl., 7(2):111–116, 2005.
[6] Y. Li, Z. Zheng, and H. K. Dai. KDD Cup-2005 report: facing a great challenge. SIGKDD Explor. Newsl., 7(2):91–99, 2005.
[7] A. McCallum and K. Nigam. A comparison of event
models for naive bayes text classification. In AAAI-98
Workshop on Learning for Text Categorization, 1998.
[8] G. Miller, R. Beckwith, C. Fellbaum, D. Gross, and
K. Miller. Introduction to wordnet: an on-line lexical
database. International Journal of Lexicography, 3(4):235–244, 1990.
[9] F. Peng, D. Schuurmans, and S. Wang. Augmenting naive bayes classifiers with statistical language models. Inf. Retr., 7(3-4):317–345, 2004.
[10] J. Platt. Probabilistic outputs for support vector
machines and comparisons to regularized likelihood
methods. In A. Smola, P. Bartlett, B. Scholkopf, and
D. Schuurmans, editors, Advances in Large Margin
Classifiers. MIT Press, 1999.
[11] D. Shen, R. Pan, J.-T. Sun, J. J. Pan, K. Wu, J. Yin,
and Q. Yang. Q2c@ust: our winning solution to query
classification in KDDCUP 2005. SIGKDD Explor. Newsl., 7(2):100–110, 2005.
[12] C. J. van Rijsbergen. Information Retrieval. Butterworths, London, second edition, 1979.
[13] D. Vogel, S. Bickel, P. Haider, R. Schimpfky,
P. Siemen, S. Bridges, and T. Scheffer. Classifying
search engine queries using the web as background
knowledge. SIGKDD Explor. Newsl., 7(2):117–122,
2005.
[14] J.-R. Wen, J.-Y. Nie, and H.-J. Zhang. Query
clustering using content words and user feedback. In
SIGIR '01: Proceedings of the 24th annual
international ACM SIGIR conference on Research and
development in information retrieval, pages 442–443,
2001.
[15] Y. Yang and J. O. Pedersen. A comparative study on
feature selection in text categorization. In ICML '97:
Proceedings of the Fourteenth International Conference
on Machine Learning, pages 412–420, 1997.
| Bridging classifier;Category Selection;Ensemble classifier;Bridging Classifier;Target categories;Similarity distribution;Mapping functions;Search engine;Matching approaches;Intermediate categories;Taxonomy;Query enrichment;KDDCUP 2005;Query classification;Category selection;Web Query Classification |
97 | Geographically Focused Collaborative Crawling | A collaborative crawler is a group of crawling nodes, in which each crawling node is responsible for a specific portion of the web. We study the problem of collecting geographically -aware pages using collaborative crawling strategies. We first propose several collaborative crawling strategies for the geographically focused crawling, whose goal is to collect web pages about specified geographic locations, by considering features like URL address of page, content of page, extended anchor text of link, and others. Later, we propose various evaluation criteria to qualify the performance of such crawling strategies. Finally, we experimentally study our crawling strategies by crawling the real web data showing that some of our crawling strategies greatly outperform the simple URL-hash based partition collaborative crawling, in which the crawling assignments are determined according to the hash-value computation over URLs. More precisely, features like URL address of page and extended anchor text of link are shown to yield the best overall performance for the geographically focused crawling. | INTRODUCTION
While most of the current search engines are effective for
pure keyword-oriented searches, these search engines are not
fully effective for geographic-oriented keyword searches. For
instance, queries like "restaurants in New York, NY" or
"good plumbers near 100 milam street, Houston, TX" or
"romantic hotels in Las Vegas, NV" are not properly man-aged
by traditional web search engines. Therefore, in recent
This work was done while the author was visiting Genieknows.com.
Copyright is held by the International World Wide Web Conference Committee
(IW3C2). Distribution of these papers is limited to classroom use,
and personal use by others.
WWW 2006, May 23–26, 2006, Edinburgh, Scotland.
ACM 1-59593-323-9/06/0005.
years, there has been a surge of interest within the search industry in search localization (e.g., Google Local, http://local.google.com, and Yahoo! Local, http://local.yahoo.com). The main aim of such search localization is to allow the user to perform the search according to his/her
keyword input as well as the geographic location of his/her
interest.
Due to the current size of the Web and its dynamical nature
, building a large scale search engine is challenging and
it is still an active area of research. For instance, the design of efficient crawling strategies and policies has been extensively studied in recent years (see [9] for an overview of the
field). While it is possible to build geographically sensitive
search engines using the full web data collected through a
standard web crawling, it would be more attractive to build such search engines over a more focused web data collection which is only relevant to the targeted geographic locations. Focusing on the collection of web pages which are relevant to the targeted geographic locations would reduce the overall processing time and effort for building such search engines. For instance, if we want to build a search
engine targeting those users in New York, NY, then we can
build it using a web collection that is only relevant to the city of New York, NY. Therefore, given intended geographic regions for crawling, we refer to the task of collecting web pages relevant to the intended geographic regions as geographically focused crawling.
The idea of focusing on a particular portion of the web
for crawling is not novel. For instance, the design of efficient
topic-oriented or domain-oriented crawling strategies
has been previously studied [8, 23, 24]. However, there has
been little previous work on incorporating the geographical
dimension of web pages to the crawling. In this paper,
we study various aspects of crawling when the geographical
dimension is considered.
While the basic idea behind the standard crawling is straightforward
, the collaborative crawling or parallel crawling is often
used due to the performance and scalability issues that
might arise during the real crawling of the web [12, 19].
In a collaborative or parallel crawler, the multiple crawling
nodes are run in parallel on a multiprocessor or in a distributed
manner to maximize the download speed and to further
improve the overall performance especially for the scalability
of crawling. Therefore, we study the geographically
focused crawling under the collaborative setting, in which
the targeted geographic regions are divided and then assigned
to each participating crawling node. More precisely,
in a geographically focused collaborative crawler, there will
be a set of geographically focused crawling nodes in which
each node is only responsible for collecting those web pages,
relevant to its assigned geographic regions. Furthermore,
there will be additional set of general crawling nodes which
aim to support other geographically focused crawling nodes
through the general crawling (download of pages which are
not geographically-aware). The main contributions of our
paper are follows:
1. We propose several geographically focused collaborative
crawling strategies whose goal is to collect web
pages about the specified geographic regions.
2. We propose several evaluation criteria for measuring
the performance of a geographically focused crawling
strategy.
3. We empirically study our proposed crawling strategies
by crawling the real web. More specifically, we collect
web pages pertinent to the top 100 US cities for each
crawling strategy.
4. We empirically study geographic locality.
That is, pages
which are geographically related are more likely to be
linked compared to those which are not.
The rest of the paper is organized as follows. In Section
2, we introduce some of the previous works related to
our geographically focused collaborative crawling. In Section
3, we describe the problem of geographically focused
collaborative crawling and then we propose several crawling
policies to deal with this type of crawling. In Section 4,
we present evaluation models to measure the performance
of a geographically focused collaborative crawling strategy.
In Section 5, we present results of our experiments with the
real web data. Finally, in Section 6, we present final remarks
about our work.
RELATED WORKS
A focused crawler is designed to only collect web pages
on a specified topic while traversing the web. The basic
idea of a focused crawler is to optimize the priority of the
unvisited URLs on the crawler frontier so that pages concerning
a particular topic are retrieved earlier. Bra et al. [4]
propose a focused web crawling method in the context of a
client-based real-time search engine. Its crawling strategy is
based on the intuition that relevant pages on the topic likely
contain links to other pages on the same topic. Thus, the
crawler follows more links from relevant pages which are estimated
by a binary classifier that uses keyword and regular
expression matchings. In spite of its reasonably acceptable
performance, it has an important drawback as a relevant
page on the topic might be hardly reachable when this page
is not pointed to by pages relevant to the topic.
Cho et al. [11] propose several strategies for prioritizing
unvisited URLs based on the pages downloaded so far.
In contrast to other focused crawlers in which a supervised
topic classifier is used to control the way that the crawler handles
the priority of pages to be downloaded, their strategies
are based on considering some simple properties such as
linkage or keyword information to define the priority of pages
to be downloaded. They conclude that determining the priority
of pages to be downloaded based on their PageRank
value yields the best overall crawling performance.
Chakrabarti et al. [8] propose another type of focused
crawler architecture which is composed of three components,
namely classifier, distiller and crawler. The classifier makes
the decision on the page relevancy to determine its future
link expansion. The distiller identifies those hub pages, as
defined in [20], pointing to many topic related pages to determine
the priority of pages to be visited. Finally, the crawling
module fetches pages using the list of pages provided by the
distiller. In the subsequent work, Chakrabarti et al. [7]
suggest that only a fraction of URLs extracted from a page
are worth following. They claim that a crawler can avoid
irrelevant links if the relevancy of links can be determined
by the local text surrounding it. They propose alternative
focused crawler architecture where documents are modeled
as tag trees using DOM (Document Object Model). In their
crawler, two classifiers are used, namely the "baseline" and
the "apprentice". The baseline classifier refers to the module
that navigates through the web to obtain the enriching
training data for the apprentice classifier. The apprentice
classifier, on the other hand, is trained over the data collected
through the baseline classifier and eventually guides
the overall crawling by determining the relevancy of links
using the contextual information around them.
Diligenti et al. [14] use the context graph to improve
the baseline best-first focused crawling method. In their
approach, there is a classifier which is trained through the
features extracted from the paths that lead to the relevant
pages. They claim that there is some chance that some off-topic
pages might potentially lead to highly relevant pages.
Therefore, in order to mitigate the difficulty of identifying
apparently off-topic pages, they propose the usage of context
graph to guide the crawling. More precisely, first a
context graph for seed pages is built using links to the pages
returned from a search engine. Next, the context graph is
used to train a set of classifiers to assign documents to different
categories using their estimated distance, based on
the number of links, to relevant pages on different categories
. Their experimental results reveal that the context
graph based focused crawler has a better performance and
achieves higher relevancy compared to an ordinary best-first
crawler.
Cho et al. [10] attempt to map and explore a full design
space for parallel and distributed crawlers. Their work
addresses issues of communication bandwidth, page quality
and the division of work between local crawlers. Later,
Chung et al. [12] study parallel or distributed crawling in
the context of topic-oriented crawling. Basically, in their
topic-oriented collaborative crawler, each crawling node is
responsible for a particular set of topics and the page is
assigned to the crawling node which is responsible for the
topic which the page is relevant to. To determine the topic of
page, a simple Naive-Bayes classifier is employed. Recently,
Exposto et al. [17] study distributed crawling by means of
the geographical partition of the web considering the multi-level
partitioning of the reduced IP web link graph. Note
that our IP-based collaborative crawling strategy is similar
to their approach in spirit as we consider the IP-addresses
related to the given web pages to distribute them among
participating crawling nodes.
Gravano and his collaborators study the geographically-aware
search problem in various works [15, 18, 5]. Particularly
, in [15], how to compute the geographical scope of web
resources is discussed. In their work, linkage and seman-288
tic information are used to assess the geographical scope of
web resources. Their basic idea is as follows. If a reasonable
number of links pertinent to one particular geographic location
point to a web resource and these links are smoothly
distributed across the location, then this location is treated
as one of the geographic scopes of the corresponding web
resource. Similarly, if a reasonable number of location references
is found within a web resource, and the location references
are smoothly distributed across the location, then this
location is treated as one of the geographical scopes of the
web resource. They also propose how to solve aliasing and
ambiguity. Recently,
Markowetz et al. [22] propose the design
and the initial implementation of a geographic search
engine prototype for Germany. Their prototype extracts
various geographic features from the crawled web dataset
consisting of pages whose domain name contains "de". A
geographic footprint, a set of relevant locations for a page, is
assigned to each page. Subsequently, the resulting footprint
is integrated into the query processor of the search engine.
GEOGRAPHICALLY FOCUSED COLLABORATIVE CRAWLING
Even though, in theory, the targeted geographic locations
of a geographically focused crawling can be any valid geographic
location, in our paper, a geographic location refers
to a city-state pair for the sake of simplicity. Therefore,
given a list of city-state pairs, the goal of our geographically
focused crawling is to collect web pages which are "relevant"
to the targeted city-state pairs. Thus, after splitting and
distributing the targeted city-state pairs to the participating
crawling nodes, each participating crawling node would
be responsible for the crawling of web pages relevant to its
assigned city-state pairs.
Example 1. Given {(New York, NY), (Houston, TX)} as the targeted city-state pairs and 3 crawling nodes {Cn_1, Cn_2, Cn_3}, one possible design of a geographically focused collaborative crawler is to assign (New York, NY) to Cn_1 and (Houston, TX) to Cn_2.
Particularly, for our experiments, we perform the geographically
focused crawling of pages targeting the top 100
US cities, which will be explained later in Section 5. We
use some general notations to denote the targeted city-state
pairs and crawling nodes as follows. Let TC = {(c_1, s_1), ..., (c_n, s_n)} denote the set of targeted city-state pairs for our crawling, where each (c_i, s_i) is a city-state pair. When it is clear in the context, we will simply denote (c_i, s_i) as c_i. Let CR = {Cn_1, ..., Cn_m} denote the set of participating crawling nodes for our crawling. The main challenges that have to be dealt with by a geographically focused collaborative crawler are the following:

* How to split and then distribute TC = {c_1, ..., c_n} among the participating crawling nodes CR = {Cn_1, ..., Cn_m}.

* Given a retrieved page p, based on what criteria we assign the extracted URLs from p to the participating crawling nodes.
[Figure 1: Exchange of the extracted URLs. a) All l URLs extracted from q are transferred to another crawling node (the worst scenario for Policy A). b) Page q is transferred to the m crawling nodes, but the URLs extracted from each copy of q are not transferred to other crawling nodes (the best scenario for Policy B).]
3.2 Assignment of the extracted URLs
When a crawling node extracts the URLs from a given
page, it has to decide whether to keep the URLs for itself
or transfer them to other participating crawling nodes for
further fetching of the URLs. Once the URL is assigned to
a particular crawling node, it may be added to the node's
pending queue. Given a retrieved page p, let pr(c_i|p) be
the probability that page p is about the city-state pair c_i.
Suppose that the targeted city-state pairs are given and they
are distributed over the participating crawling nodes. There
are mainly two possible policies for the exchange of URLs
between crawling nodes.
- Policy A: Given the retrieved page p, let c_i be the most probable city-state pair for p, i.e. arg max_{c_i ∈ TC} pr(c_i|p). We assign each extracted URL from page p to the crawling node Cn_j responsible for c_i.
- Policy B: Given the retrieved page p, let {c^p_1, . . . , c^p_k} ⊆ TC be the set of city-state pairs whose pr(c^p_i|p) ≠ 0. We assign each extracted URL from page p to EACH crawling node Cn_j responsible for some c^p_i ∈ TC.
Lemma 2. Let b be the bandwidth cost and let c be the
inter-communication cost between crawling nodes. If b > c,
then the Policy A is more cost effective than the Policy B.
Proof: Given an extracted URL q from page p, let m be the number of crawling nodes used by the Policy B (crawling nodes which are assigned to download q). Since the cost for the Policy A and B is equal when m = 1, we suppose m ≥ 2. Let l be the total number of URLs extracted from q. Let C(A) and C(B) be the sum of the total inter-communication cost plus the bandwidth cost for the Policy A and Policy B respectively. One can easily verify that the cost of downloading q and all URLs extracted from q is given as C(A) ≤ b + l(c + b), as shown in Figure 1a), and C(B) ≥ mb + lmb, as shown in Figure 1b). Therefore, it follows that C(A) < C(B) since m ≥ 2 and b > c.
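To make the comparison in Lemma 2 concrete, the following Python sketch evaluates the two bounds used in the proof; the function names and the uniform per-download cost model are our own illustrative assumptions, not part of the crawler itself.

    def cost_policy_a(b, c, l):
        # Worst case for Policy A (Figure 1a): q is downloaded once (b), and each of
        # its l extracted URLs is transferred to a remote node (c) and downloaded there (b).
        return b + l * (c + b)

    def cost_policy_b(b, c, l, m):
        # Best case for Policy B (Figure 1b): q is downloaded by m nodes (m*b), and each
        # node downloads the l URLs it extracted itself (l*m*b), with no URL transfer.
        return m * b + l * m * b

    if __name__ == "__main__":
        b, c = 10.0, 1.0                       # bandwidth cost > inter-communication cost
        for m in (2, 3, 5):
            for l in (0, 1, 10, 100):
                assert cost_policy_a(b, c, l) < cost_policy_b(b, c, l, m), (m, l)
        print("Policy A bound stays below Policy B bound for all tested m >= 2, b > c")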
The assignment of extracted URLs for each retrieved page
of all crawling collaboration strategies that we consider next
will be based on the Policy A.
3.3 Hash Based Collaboration
We consider the hash based collaboration, which is the approach
taken by most of collaborative crawlers, for the sake
of comparison of this basic approach to our geographically
focused collaboration strategies. The goal of hash based collaboration
is to implementing a distributed crawler partition
over the web by computing hash functions over URLs. When
a crawling node extracts a URL from the retrieved page, a
hash function is then computed over the URL. The URL is
assigned to the participating crawling node responsible for
the corresponding hash value of the URL. Since we are using
a uniform hash function for our experiments, we will have
a considerable data exchange between crawling nodes since
the uniform hash function will map most of the URLs extracted
from the retrieved page to remote crawling nodes.
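The hash based assignment can be sketched as follows; the use of MD5 and the modulo mapping are illustrative choices, not necessarily the ones used by the crawlers cited above.

    import hashlib

    def hash_assign(url: str, num_nodes: int) -> int:
        # A stable uniform hash (Python's built-in hash() is randomized across runs).
        digest = hashlib.md5(url.encode("utf-8")).hexdigest()
        return int(digest, 16) % num_nodes

    for u in ("http://www.houston-guide.com/guide/arts/framearts.html",
              "http://www.ny.com/museums/",
              "http://www.example.org/phoenix/hotels.html"):
        print(hash_assign(u, 6), u)   # most URLs land on a remote node, hence heavy exchange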
3.4 Geographically Focused Collaborations
We first divide up CR, the set of participating crawling
nodes, into geographically sensitive nodes and general nodes.
Even though any combination of geographically sensitive and general crawling nodes is allowed, the architecture of our crawler consists of five geographically sensitive and one general crawling node for our experiments. A geographically sensitive crawling node will be responsible for the download of pages pertinent to a subset of the targeted city-state pairs, while a general crawling node will be responsible for the download of pages which are not geographically-aware, supporting the geographically sensitive nodes.
Each collaboration policy considers a particular set of features
for the assessment of the geographical scope of page
(whether a page is pertinent to a particular city-state pair
or not). From the result of this assessment, each extracted
URL from the page will be assigned to the crawling node
that is responsible for the download of pages pertinent to
the corresponding city-state pair.
3.4.1 URL Based
The intuition behind the URL based collaboration is that
pages containing a targeted city-state pair in their URL address
might potentially guide the crawler toward other pages
about the city-state pair. More specifically, for each extracted
URL from the retrieved page p, we verify whether
the city-state pair c_i is found somewhere in the URL address of the extracted URL. If the city-state pair c_i is found, then we assign the corresponding URL to the crawling node which is responsible for the download of pages about c_i.
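A minimal Python sketch of this rule is given below; the URL normalization and the example city-to-node mapping are assumptions made for illustration only.

    from typing import Dict, Optional

    def url_based_node(url: str, city_to_node: Dict[str, int]) -> Optional[int]:
        # Normalize the URL so that multi-token city names split by '-', '_', '/' or '.' can match.
        u = url.lower().replace("-", " ").replace("_", " ").replace("/", " ").replace(".", " ")
        for city, node in city_to_node.items():
            if city in u:
                return node
        return None  # no targeted city-state pair found; caller may fall back to the general node

    city_to_node = {"new york": 1, "houston": 2}   # hypothetical partition of TC over nodes
    print(url_based_node("http://www.houston-guide.com/guide/arts/framearts.html", city_to_node))  # -> 2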
3.4.2 Extended Anchor Text Based
Given a link text l, the extended anchor text of l is defined as the set of prefix and suffix tokens of l of a certain size. It is known that extended anchor text provides valuable information to characterize the nature of the page pointed to by the link text. Therefore, for the extended anchor text based collaboration, our assumption is that pages associated with an extended anchor text in which a targeted city-state pair c_i is found will lead the crawler toward pages about c_i. More precisely, given a retrieved page p and an extended anchor text l found somewhere in p, we verify whether a city-state pair c_i ∈ TC is found as part of the extended anchor text l. When multiple city-state pairs are found, we choose the city-state pair that is the closest to the link text. Finally, we assign the URL associated with l to the crawling node that is responsible for the download of pages about c_i.
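The following sketch illustrates one possible reading of this rule; the tokenization, the window size of the extended anchor text and the tie-breaking by token distance are our assumptions rather than the exact implementation.

    from typing import List, Optional

    def closest_city(tokens: List[str], start: int, end: int,
                     cities: List[str], window: int = 5) -> Optional[str]:
        # Extended anchor text: the link text tokens[start:end] plus up to `window`
        # prefix and suffix tokens around it.
        lo, hi = max(0, start - window), min(len(tokens), end + window)
        best, best_dist = None, None
        for i in range(lo, hi):
            for city in cities:
                ctoks = city.split()
                if tokens[i:i + len(ctoks)] == ctoks:
                    # Distance of this occurrence from the link text itself.
                    dist = 0 if start <= i < end else min(abs(i - start), abs(i - (end - 1)))
                    if best_dist is None or dist < best_dist:
                        best, best_dist = city, dist
        return best

    page = "cheap flights from new york to houston downtown hotels".split()
    # Suppose the link text is the single token "hotels" (tokens[8:9]).
    print(closest_city(page, 8, 9, ["new york", "houston"]))   # -> "houston"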
3.4.3 Full Content Based
In [15], the location reference is used to assess the geographical scope of a page. Therefore, for the full content based collaboration, we perform a content analysis of the retrieved page to guide the crawler in the future link expansion. Let pr((c_i, s_i)|p) be the probability that page p is about city-state pair (c_i, s_i). Given TC and page p, we compute pr((c_i, s_i)|p) for (c_i, s_i) ∈ TC as follows:

pr((c_i, s_i)|p) = α · #((c_i, s_i), p) + (1 - α) · pr(s_i|c_i) · #(c_i, p)    (1)
where #((c_i, s_i), p) denotes the number of times that the city-state pair (c_i, s_i) is found as part of the content of p, #(c_i, p) denotes the number of times (independent of #((c_i, s_i), p)) that the city reference c_i is found as part of the content of p, and α denotes the weighting factor. For our experiments, α = 0.7 was used.
The probability pr(s_i|c_i) is calculated under two simplified assumptions: (1) pr(s_i|c_i) is dependent on the real population size of (c_i, s_i) (e.g., the population of Kansas City, Kansas is 500,000). We obtain the population size for each city from city-data.com (http://www.city-data.com). (2) pr(s_i|c_i) is dependent on the number of times that the state reference is found (independent of #((c_i, s_i), p)) as part of the content of p. In other words, our assumption for pr(s_i|c_i) can be written as

pr(s_i|c_i) ≈ β · S(s_i|c_i) + (1 - β) · S̃(s_i|p)    (2)
where S(s_i|c_i) is the normalized form of the population size of (c_i, s_i), S̃(s_i|p) is the normalized form of the number of appearances of the state reference s_i (independent of #((c_i, s_i), p)) within the content of p, and β denotes the weighting factor. For our experiments, β = 0.5 was used.
Therefore, pr((c_i, s_i)|p) is computed as

pr((c_i, s_i)|p) = α · #((c_i, s_i), p) + (1 - α) · (β · S(s_i|c_i) + (1 - β) · S̃(s_i|p)) · #(c_i, p)    (3)
Finally, given a retrieved page p, we assign all extracted URLs from p to the crawling node which is responsible for pages relevant to arg max_{(c_i, s_i) ∈ TC} pr((c_i, s_i)|p).
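The sketch below illustrates the scoring of Eqs. (1)-(3) in Python, using the weighting factors 0.7 and 0.5 reported above; the substring counting, the normalization of state references and the example population shares are simplified placeholders, and the symbols ALPHA/BETA correspond to the weighting factors as reconstructed here.

    ALPHA, BETA = 0.7, 0.5   # weighting factors reported in the text

    def count(text, phrase):
        return text.lower().count(phrase.lower())

    def score(content, city, state, pop_share, targeted):
        # pr((c_i, s_i)|p) following Eq. (3); pop_share plays the role of S(s_i|c_i).
        pair_hits = count(content, f"{city}, {state}")      # #((c_i, s_i), p)
        city_hits = count(content, city) - pair_hits        # #(c_i, p), independent of pair hits
        state_hits = count(content, state) - pair_hits      # state references, independent of pair hits
        total_state = sum(count(content, s) for _, s in targeted) or 1
        s_tilde = state_hits / total_state                  # S~(s_i|p), a simple normalization
        pr_state_given_city = BETA * pop_share + (1 - BETA) * s_tilde            # Eq. (2)
        return ALPHA * pair_hits + (1 - ALPHA) * pr_state_given_city * city_hits  # Eq. (3)

    targeted = [("Glendale", "AZ"), ("Glendale", "CA")]
    page = "Visit Glendale, AZ. Glendale offers many AZ attractions."
    scores = {(c, s): score(page, c, s, pop, targeted)
              for (c, s), pop in zip(targeted, (0.7, 0.3))}
    print(max(scores, key=scores.get), scores)   # the URLs of this page go to the top-scoring pair's node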
3.4.4 Classification Based
Chung et al. [12] show that the classification based collaboration
yields a good performance for the topic-oriented
collaborative crawling. Our classification based collaboration
for geographically focused crawling is motivated by their
work. In this type of collaboration, the classes for the classifier
are the partitions of targeted city-state pairs. We train
our classifier to determine pr(c_i|p), the probability that the retrieved page p is pertinent to the city-state pair c_i. Among
various possible classification methods, we chose the Naive-Bayes
classifier [25] due to its simplicity. To obtain training data, pages from the Open Directory Project (ODP, http://www.dmoz.org) were used. For each targeted city-state pair, we download all
pages under the corresponding city-state category which, in
turn, is the child category for the "REGIONAL" category
in the ODP. The number of pages downloaded for each city-state
pair varied from 500 to 2000. We also download a set of
randomly chosen pages which are not part of any city-state
category in the ODP. We download 2000 pages for this purpose
. Then, we train our Naive-Bayes classifier using these
training data. Our classifier determines whether a page p
is pertinent to either of the targeted city-state pairs or it is
not relevant to any city-state pair at all. Given the retrieved
page p, we assign all extracted URLs from p to the crawling
node which is responsible for the download of pages which
are pertinent to arg max
c
i
T C
pr(c
i
|p).
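As an illustration, the sketch below trains a Naive-Bayes text classifier with scikit-learn as a stand-in for the classifier described above; the tiny in-line training texts replace the ODP pages, and the class names are examples only.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_texts = [
        "broadway manhattan new york subway brooklyn",
        "houston texas nasa space center rodeo",
        "shopping cart login contact about privacy",     # stands in for the "no city-state pair" class
    ]
    train_labels = ["New York, NY", "Houston, TX", "none"]

    vec = CountVectorizer()
    X = vec.fit_transform(train_texts)
    clf = MultinomialNB().fit(X, train_labels)

    page = "tickets for a broadway show near manhattan"
    probs = clf.predict_proba(vec.transform([page]))[0]
    best = clf.classes_[probs.argmax()]
    print(best)   # extracted URLs would be sent to the node responsible for this class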
3.4.5 IP-Address Based
The IP-address of the web service indicates the geographic
location at which the web service is hosted. The IP-address
based collaboration exploits this information to control the
behavior of the crawler for further downloads. Given a retrieved
page p, we first determine the IP-address of the web
service from which the crawler downloaded p. With this IP-address
, we use the IP-address mapping tool to obtain the
corresponding city-state pair of the given IP, and then we assign
all extracted URLs of page p to the crawling node which
is responsible on the computed city-state pair. For the IP-address
mapping tool, freely available IP address mapping
tool, hostip.info(API)
5
is employed.
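The sketch below outlines the IP-address based assignment; the ip_to_city dictionary is a hypothetical placeholder for the hostip.info lookup, whose actual interface we do not reproduce here.

    from typing import Dict

    def ip_based_node(page_ip: str,
                      ip_to_city: Dict[str, str],
                      city_to_node: Dict[str, int],
                      general_node: int = 0) -> int:
        # Assign all URLs extracted from a page to the node responsible for the
        # city-state pair of the hosting IP; fall back to the general node otherwise.
        city = ip_to_city.get(page_ip)          # placeholder for the external IP-to-city lookup
        return city_to_node.get(city, general_node)

    ip_to_city = {"192.0.2.10": "Houston, TX"}          # illustrative mapping only
    city_to_node = {"Houston, TX": 2, "New York, NY": 1}
    print(ip_based_node("192.0.2.10", ip_to_city, city_to_node))    # -> 2
    print(ip_based_node("198.51.100.7", ip_to_city, city_to_node))  # -> 0 (general node)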
3.5 Normalization and Disambiguation of City Names
As indicated in [2, 15], problems of aliasing and ambiguity
arise when one wants to map the possible city-state reference
candidate to an unambiguous city-state pair. In this section,
we describe how we handle these issues.
Aliasing: Many times different names or abbreviations are used for the same city name. For example, Los Angeles can also be referred to as LA or L.A. Similar to [15], we used the web database of the United States Postal Service (USPS, http://www.usps.gov) to deal with aliasing. The service returns a list of variations of the corresponding city name given the zip code. Thus, we first obtained the list of representative zip codes for each city in the list using the US Zip Code Database product, purchased from ZIPWISE (http://www.zipwise.com), and then we obtained the list of possible names and abbreviations for each city from the USPS.
Ambiguity: When we deal with city names, we have
to deal with the ambiguity of the city name reference.
First, we can not guarantee whether the possible city
name reference actually refers to the city name. For
instance, New York might refer to New York as city
name or New York as part of the brand name "New
York Fries" or New York as state name. Second, a
city name can refer to cities in different states. For
example, four states, New York, Georgia, Oregon and California, have a city called Albany. For both cases, unless we fully analyze the context in which the reference was made, the city name reference might be inherently ambiguous. Note that for the full content based collaboration, the issue of ambiguity is already handled through the term pr(s_i|c_i) of Eq. 2. For the extended anchor text based and the URL based collaborations, we always treat the possible city name reference as the city that has the largest population size. For instance, Glendale, found in either the URL address of a page or the extended anchor text of a page, would be treated as the city name reference for Glendale, AZ. (Note that this simple approach hurts the overall crawling only minimally: in many cases, even the incorrect assessment of the state name reference New York instead of the correct city name reference New York would result in the assignment of all extracted URLs to the correct crawling node.)
EVALUATION MODELS
To assess the performance of each crawling collaboration
strategy, it is imperative to determine how many geographically-aware pages were downloaded for each strategy and whether the downloaded pages are actually pertinent to the targeted geographic locations. Note that while some previous works [2, 15, 18, 5] attempt to define precisely what a geographically-aware page is, determining whether a page is geographically-aware or not remains an open problem [2, 18]. For our particular application, we define the notion of geographical awareness of a page through geographic entities [21]. We refer to the address description of a physical organization or a person as a geographic entity. Since the targeted geographical city-state pairs for our experiments are the top 100 US cities, a geographic entity in the context of our experiments is further simplified to an address, following the standard US address format, for any of the top 100 US cities. In other words, a geographic entity in our context is a sequence of Street Number, Street Name, City Name and State Name, found as part of the content of a page. Next, we present various evaluation measures for our crawling strategies based on geographic entities. Additionally, we present traditional measures to quantify the performance of any collaborative crawling. Note that our evaluation measures are later used in our experiments.
Geo-coverage: When a page contains at least one geographic entity (i.e. address information), then the page is clearly a geographically aware page. Therefore, we define the geo-coverage of retrieved pages as the number of retrieved pages with at least one geographic entity pertinent to the targeted geographical locations (e.g., the top 100 US cities) over the total number of retrieved pages.
Geo-focus: Each crawling node of the geographically
focused collaborative crawler is responsible for a subset
of the targeted geographic locations. For instance,
suppose we have two geographically sensitive crawling
nodes Cn_1 and Cn_2, and the targeted city-state pairs as {(New York, NY), (Los Angeles, CA)}. Suppose Cn_1 is responsible for crawling pages pertinent to (New York, NY) while Cn_2 is responsible for crawling pages pertinent to (Los Angeles, CA). Therefore, if Cn_1 has downloaded a page about Los Angeles, CA, then this would clearly be a failure of the collaborative crawling approach.

[Figure 2: An example of the geo-centrality measure: a small link graph of pages 1-8 rooted at page 1; the legend distinguishes the root page, pages with a geo-entity, and pages without a geo-entity.]

To formalize this notion, we define the geo-focus of a crawling node as the number of retrieved pages that contain at least one geographic entity of the assigned city-state pairs of the crawling node.
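The two measures can be computed from per-page extraction results as in the following sketch; the page data structure is an assumption, and the normalization used when reporting geo-focus percentages is left open here.

    from typing import Dict, List, Set

    def geo_coverage(pages: List[Dict]) -> float:
        # Fraction of retrieved pages that contain at least one geographic entity.
        return sum(1 for p in pages if p["entities"]) / len(pages)

    def geo_focus(pages: List[Dict], assigned_pairs: Set[str]) -> int:
        # Number of retrieved pages containing a geographic entity of the node's
        # assigned city-state pairs.
        return sum(1 for p in pages if any(e in assigned_pairs for e in p["entities"]))

    node1_pages = [
        {"entities": ["New York, NY"]},
        {"entities": []},
        {"entities": ["Los Angeles, CA"]},
    ]
    print(geo_coverage(node1_pages))                 # 2/3 of the pages are geographically aware
    print(geo_focus(node1_pages, {"New York, NY"}))  # 1 page matches the node's assignment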
Geo-centrality: One of the most fundamental and frequently used measures for the analysis of network structures is the centrality measure, which addresses the question of how central a node is with respect to other nodes in the network. The most commonly used ones are the degree centrality, eigenvector centrality, closeness centrality and betweenness centrality [3]. Motivated by the closeness centrality and the betweenness centrality, Lee et al. [21] define novel centrality measures to assess how central a node is with respect to geographically-aware nodes (pages with geographic entities). A geodesic path is the shortest path, in terms of the number of edges traversed, between a specified pair of nodes. Geo-centrality measures are based on the geodesic paths from an arbitrary node to a geographically aware node.
Given two arbitrary nodes p_i, p_j, let GD(p_i, p_j) be the geodesic-path-based distance between p_i and p_j (the length of the geodesic path). Let w_{GD(p_i,p_j)} = 1/m^{GD(p_i,p_j)} for some m, and define γ(p_i, p_j) as

γ(p_i, p_j) = w_{GD(p_i,p_j)} if p_j is a geographically aware node, and γ(p_i, p_j) = 0 otherwise.

For any node p_i, let Φ_k(p_i) = {p_j | GD(p_i, p_j) < k} be the set of nodes whose geodesic distance from p_i is less than k. Given p_i, let GCt_k(p_i) be defined as

GCt_k(p_i) = Σ_{p_j ∈ Φ_k(p_i)} γ(p_i, p_j)
Intuitively, the geo-centrality measure captures how many links have to be followed by a user who starts navigating from page p_i in order to reach geographically-aware pages. Moreover, w_{GD(p_i,p_j)} is used to penalize each link that the user follows.
Example 3. Let us consider the graph structure of Figure 2. Suppose that the weights are given as w_0 = 1, w_1 = 0.1, w_2 = 0.01, i.e. each time a user navigates a link, we penalize it with 0.1. Given the root node 1 containing at least one geo-entity, we have Φ_2(node 1) = {1, . . . , 8}. Therefore, we have w_{GD(node 1,node 1)} = 1, w_{GD(node 1,node 2)} = 0.1, w_{GD(node 1,node 3)} = 0.1, w_{GD(node 1,node 4)} = 0.1, w_{GD(node 1,node 5)} = 0.01, w_{GD(node 1,node 6)} = 0.01, w_{GD(node 1,node 7)} = 0.01, w_{GD(node 1,node 8)} = 0.01. Finally, GCt_2(node 1) = 1.34.
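The computation of Example 3 can be reproduced with the following sketch, assuming m = 10 (so that w_d = 0.1^d), that the neighborhood includes nodes up to geodesic distance k, that the graph of Figure 2 is a tree with pages 2-4 one link and pages 5-8 two links from the root, and that all pages of the example count as geographically aware; these structural details are our reading of the figure, not given explicitly.

    from collections import deque

    def geo_centrality(adj, source, geo_aware, k=2, m=10):
        # GCt_k(source): sum of m**(-distance) over geographically aware nodes
        # within geodesic distance k of the source, computed via BFS.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            if dist[u] == k:
                continue
            for v in adj.get(u, []):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return sum(m ** (-d) for node, d in dist.items() if node in geo_aware)

    # Assumed shape of the Figure 2 tree: root 1 -> 2, 3, 4; pages 5..8 two links away.
    adj = {1: [2, 3, 4], 2: [5, 6], 3: [7], 4: [8]}
    print(geo_centrality(adj, 1, geo_aware=set(range(1, 9))))   # 1 + 3*0.1 + 4*0.01 = 1.34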
Overlap: The Overlap measure is first introduced in
[10]. In the collaborative crawling, it is possible that
different crawling nodes download the same page multiple
times. Multiple downloads of the same page are
clearly undesirable. Therefore, the overlap of retrieved pages is defined as (N - I)/N, where N denotes the total number of pages downloaded by the overall crawler and I denotes the number of unique pages downloaded by the overall crawler. Note that the hash based collaboration
approach does not have any overlap.
Diversity: In a crawling, it is possible that the crawling
is biased toward a certain domain name. For instance
, a crawler might find a crawler trap which is
an infinite loop within the web that dynamically produces
new pages trapping the crawler within this loop
[6]. To formalize this notion, we define the diversity as S/N, where S denotes the number of unique domain
names of downloaded pages by the overall crawler and
N denotes the total number of downloaded pages by
the overall crawler.
Communication overhead: In a collaborative crawling
, the participating crawling nodes need to exchange
URLs to coordinate the overall crawling work.
To
quantify how much communication is required for this
exchange, the communication overhead is defined in
terms of the exchanged URLs per downloaded page
[10].
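The three bookkeeping measures can be computed as in the sketch below; the input structures (URL lists and counters) are illustrative assumptions.

    from urllib.parse import urlparse

    def overlap(downloaded_urls):
        n = len(downloaded_urls)                 # N: total downloads (with duplicates)
        i = len(set(downloaded_urls))            # I: unique pages
        return (n - i) / n

    def diversity(downloaded_urls):
        n = len(downloaded_urls)
        s = len({urlparse(u).netloc for u in downloaded_urls})   # unique domain names
        return s / n

    def communication_overhead(exchanged_urls, downloaded_pages):
        # Exchanged URLs per downloaded page.
        return exchanged_urls / downloaded_pages

    urls = ["http://a.com/1", "http://a.com/2", "http://b.com/1", "http://a.com/1"]
    print(overlap(urls), diversity(urls), communication_overhead(90, 4))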
CASE STUDY
In this section, we present the results of experiments that
we conducted to study various aspects of the proposed geographically
focused collaborative crawling strategies.
5.1 Experiment Description
We built a geographically focused collaborative crawler that consists of one general crawling node, Cn_0, and five geographically sensitive crawling nodes, {Cn_1, . . . , Cn_5}, as described in Section 3.4. The targeted city-state pairs were the top 100 US cities by population size, whose list was obtained from city-data.com (www.city-data.com).
We partition the targeted city-state pairs according to their time zone to assign them to the geographically sensitive crawling nodes, as shown in Table 1. In other words, we have the following architecture design, as illustrated in Figure 3. Cn_0 is a general crawler targeting pages which are not geographically-aware. Cn_1 targets the Eastern time zone with 33 cities. Cn_2 targets the Pacific time zone with 22 cities. Cn_3 targets the Mountain time zone with 10 cities.
Time Zone | State Name | Cities
Central | AL | Birmingham, Montgomery, Mobile
Alaska | AK | Anchorage
Mountain | AR | Phoenix, Tucson, Mesa, Glendale, Scottsdale
Pacific | CA | Los Angeles, San Diego, San Jose, San Francisco, Long Beach, Fresno, Oakland, Santa Ana, Anaheim, Bakersfield, Stockton, Fremont, Glendale, Riverside, Modesto, Sacramento, Huntington Beach
Mountain | CO | Denver, Colorado Springs, Aurora
Eastern | DC | Washington
Eastern | FL | Hialeah
Eastern | GA | Atlanta, Augusta-Richmond County
Hawaii | HI | Honolulu
Mountain | ID | Boise
Central | IL | Chicago
Central | IN | Indianapolis, Fort Wayne
Central | IA | Des Moines
Central | KA | Wichita
Eastern | KE | Lexington-Fayette, Louisville
Central | LO | New Orleans, Baton Rouge, Shreveport
Eastern | MD | Baltimore
Eastern | MA | Boston
Eastern | MI | Detroit, Grand Rapids
Central | MN | Minneapolis, St. Paul
Central | MO | Kansas City, St. Louis
Central | NE | Omaha, Lincoln
Pacific | NV | Las Vegas
Eastern | NJ | Newark, Jersey City
Mountain | NM | Albuquerque
Eastern | NY | New York, Buffalo, Rochester, Yonkers
Eastern | NC | Charlotte, Raleigh, Greensboro, Durham, Winston-Salem
Eastern | OH | Columbus, Cleveland, Cincinnati, Toledo, Akron
Central | OK | Oklahoma City, Tulsa
Pacific | OR | Portland
Eastern | PA | Philadelphia, Pittsburgh
Central | TX | Houston, Dallas, San Antonio, Austin, El Paso, Fort Worth, Arlington, Corpus Christi, Plano, Garland, Lubbock, Irving
Eastern | VI | Virginia Beach, Norfolk, Chesapeake, Richmond, Arlington
Pacific | WA | Seattle, Spokane, Tacoma
Central | WI | Milwaukee, Madison
Table 1: Top 100 US cities and their time zone
[Figure 3: Architecture of our crawler. The crawling nodes connected to the web are Cn0: General, Cn1: Eastern (33 cities), Cn2: Pacific (22 cities), Cn3: Mountain (10 cities), Cn4: Central (33 cities), and Cn5: Hawaii & Alaska.]
Cn_4 targets the Central time zone with 33 cities. Finally, Cn_5 targets the Hawaii-Aleutian and Alaska time zones with two cities.
We developed our collaborative crawler by extending the open source crawler larbin (http://larbin.sourceforge.net/index-eng.html), written in C++. Each crawling node was set to crawl each domain name up to five levels of depth. The crawling nodes were deployed over 2 servers,
each of them with 3.2 GHz dual P4 processors, 1 GB of
RAM, and 600 GB of disk space. We ran our crawler for the
period of approximately 2 weeks to download approximately
12.8 million pages for each crawling strategy as shown in
Table 2. For each crawling process, the usable bandwidth
was limited to 3.2 mbps, so the total maximum bandwidth
used by our crawler was 19.2 mbps. For each crawling, we
used the category "Top: Regional: North America: United
States" of the ODP as the seed page of crawling. The IP
mapping tool used in our experiments did not return the corresponding city-state pairs for Alaska and Hawaii, so we ignored Alaska and Hawaii for our IP-address based collaborative crawling.

Type of collaboration | Download size
Hash Based | 12.872 m
URL Based | 12.872 m
Extended Anchor Text Based | 12.820 m
Simple Content Analysis Based | 12.878 m
Classification Based | 12.874 m
IP Address Based | 12.874 m
Table 2: Number of downloaded pages
5.2 Discussion
5.2.1 Quality Issue
As the first step toward the performance evaluation of our
crawling strategies, we built an extractor for the extraction
of geographic entities (addresses) from downloaded pages.
Our extractor, being gazetteer based, extracted those geographic entities using a dictionary of all possible city name references for the top 100 US cities, augmented by a list of all possible street abbreviations (e.g., street, avenue, av., blvd) and other pattern matching heuristics. Each extracted geographic entity candidate was further matched against the database of possible street names for each city that we built from the 2004 TIGER/Line files (http://www.census.gov/geo/www/tiger/tiger2004se/tgr2004se.html). Our extractor yielded 96% accuracy on 500 randomly chosen geographic entities.
We first analyze the geo-coverage of each crawling strategy
as shown in Table 3. The top performers for the geo-coverage
are the URL based and extended anchor text based
collaborative strategies whose portion of pages downloaded
with geographic entities was 7.25% and 7.88%, respectively,
strongly suggesting that URL address of page and extended
anchor text of link are important features to be considered
for the discovery of geographically-aware pages. The next
best performer with respect to geo-coverage was the full content
based collaborative strategy achieving geo-coverage of
4.89%. Finally, the worst performers in the group of geographically
focused collaborative policies were the classification
based and the IP-address based strategies. The poor
performance of the IP-address based collaborative policy
shows that the actual physical location of web service is not
necessarily associated with the geographical scopes of pages
served by the web service. The extremely poor performance of the classification based crawler is surprising since this kind of collaboration strategy has been shown to achieve good performance for topic-oriented crawling [12]. Finally, the worst performance is observed, as expected, with the URL-hash based collaborative policy, whose portion of pages with geographical entities out of all retrieved pages was less than 1%. In conclusion, the usage of even simple but intuitively sound geographically focused collaborative policies can improve
the performance of standard collaborative crawling by a factor
of 3 to 8 for the task of collecting geographically-aware
pages.
To check whether each geographically sensitive crawling node is actually downloading pages corresponding to its assigned city-state pairs, we used the geo-focus measure, as shown in Table 4.
Type of collaboration | Cn0 | Cn1 | Cn2 | Cn3 | Cn4 | Cn5 | Average | Average (without Cn0)
URL-Hash Based | 1.15% | 0.80% | 0.77% | 0.75% | 0.82% | 0.86% | 0.86% | 0.86%
URL Based | 3.04% | 7.39% | 9.89% | 9.37% | 7.30% | 13.10% | 7.25% | 8.63%
Extended Anchor Text Based | 5.29% | 6.73% | 9.78% | 9.99% | 6.01% | 12.24% | 7.88% | 8.58%
Full Content Based | 1.11% | 3.92% | 5.79% | 6.87% | 3.24% | 8.51% | 4.89% | 5.71%
Classification Based | 0.49% | 1.23% | 1.20% | 1.27% | 1.22% | 1.10% | 1.09% | 1.21%
IP-Address Based | 0.81% | 2.02% | 1.43% | 2.59% | 2.74% | 0.00% | 1.71% | 2.20%
Table 3: Geo-coverage of crawling strategies
Type of collaboration | Cn1 | Cn2 | Cn3 | Cn4 | Cn5 | Average
URL based | 91.7% | 89.0% | 82.8% | 94.3% | 97.6% | 91.1%
Extended anchor text based | 82.0% | 90.5% | 79.6% | 76.8% | 92.3% | 84.2%
Full content based | 75.2% | 77.4% | 75.1% | 63.5% | 84.9% | 75.2%
Classification based | 43.5% | 32.6% | 5.5% | 25.8% | 2.9% | 22.1%
IP-Address based | 59.6% | 63.6% | 55.6% | 80.0% | 0.0% | 51.8%
Table 4: Geo-focus of crawling strategies
Type of collaboration | Cn0 | Cn1 | Cn2 | Cn3 | Cn4 | Cn5 | Average
URL-hash based | 0.45 | 0.47 | 0.46 | 0.49 | 0.49 | 0.49 | 0.35
URL based | 0.39 | 0.2 | 0.18 | 0.16 | 0.24 | 0.07 | 0.18
Extended anchor text based | 0.39 | 0.31 | 0.22 | 0.13 | 0.32 | 0.05 | 0.16
Full content based | 0.49 | 0.35 | 0.31 | 0.29 | 0.39 | 0.14 | 0.19
Classification based | 0.52 | 0.45 | 0.45 | 0.46 | 0.46 | 0.45 | 0.26
IP-Address based | 0.46 | 0.25 | 0.31 | 0.19 | 0.32 | 0.00 | 0.27
Table 5: Number of unique geographic entities over the total number of geographic entities
Once again, the URL-based and the extended anchor text based strategies perform well with respect to this particular measure, achieving on average above 85%
of geo-focus. Once again, their relatively high performance strongly suggests that the city name reference within the URL address of a page or an extended anchor text is a good feature to be considered for the determination of the geographical scope of a page. The geo-focus value of 75.2% for the content based collaborative strategy also suggests that the locality phenomenon which occurs with the topic of a page occurs within the geographical dimension as well. It is reported in [13] that pages tend to reference (point to) other pages on the same general topic. The relatively high geo-focus value for the content based collaborative strategy indicates that pages with a similar geographical scope tend to reference each other. The IP-address based policy achieves 51.7% geo-focus while the classification based policy only achieves 22.7% geo-focus. The extremely poor geo-focus of the classification based policy seems to be due to the failure of the classifier to determine the correct geographical scope of a page.
In geographically focused crawling, it is possible that pages are biased toward certain geographic locations. For instance, when we download pages on Las Vegas, NV, it is possible that we have downloaded a large number of pages which are focused on a small number of casino hotels in Las Vegas, NV that heavily reference each other. In this case, the quality of the downloaded pages would not be that good since most of the pages would contain a large number of very similar geographic entities. To formalize this notion, we report the ratio between the number of unique geographic entities and the total number of geographic entities in the retrieved pages, as shown in Table 5. This ratio verifies whether each crawling policy covers a sufficient number of pages with different geographical scopes.
Type of collaboration | Geo-centrality
Hash based | 0.0222
URL based | 0.1754
Extended anchor text based | 0.1519
Full content based | 0.0994
Classification based | 0.0273
IP-address based | 0.0380
Table 6: Geo-centrality of crawling strategies
Type of collaboration | Overlap
Hash Based | None
URL Based | None
Extended Anchor Text Based | 0.08461
Full Content Based | 0.173239
Classification Based | 0.34599
IP-address based | None
Table 7: Overlap of crawling strategies
It is interesting to note that those geographically focused collaborative policies which show good performance on the previous measures, such as the URL based, the extended anchor text based and the full content based strategies, tend to discover pages with a less diverse geographical scope. On the other hand, the weaker crawling strategies, such as the IP-address based, the classification based and the URL-hash based strategies, are shown to collect pages with a more diverse geographical scope.
We finally study each crawling strategy in terms of the geo-centrality measure, as shown in Table 6. One may observe from Table 6 that the geo-centrality value provides an accurate view on the quality of the downloaded geographically-aware pages for each crawling strategy, since the geo-centrality value for each crawling strategy follows what we have obtained with respect to geo-coverage and geo-focus. The URL based and extended anchor text based strategies have the best geo-centrality values with 0.1754 and 0.1519 respectively, followed by the full content based strategy with 0.0994, followed by the IP-address based strategy with 0.0380, and finally the hash based strategy and the classification based strategy have similarly low geo-centrality values.
5.2.2 Performance Issue
In Table 7, we first show the overlap measure which reflects
the number of duplicated pages out of the downloaded
pages. Note that the hash based policy does not have any
duplicated page since its page assignment is completely independent of the other page assignments. For the same reason,
the overlap for the URL based and the IP based strategies
is none.

Type of collaboration | Diversity
Hash Based | 0.0814
URL Based | 0.0405
Extended Anchor Text Based | 0.0674
Full Content Based | 0.0688
Classification Based | 0.0564
IP-address based | 0.3887
Table 8: Diversity of crawling strategies

The overlap of the extended anchor text
based strategy is 0.08461, indicating that the extended anchor text of a page determines the geographical scope of the corresponding URL in an almost unique manner. In other words, there is a low probability that two completely different city name references are found within a URL address. Therefore, this would be another reason why the extended anchor text is a good feature to be used for the partition of the web within the geographical context. The overlap of the full content based and the classification based strategies is relatively high, with 0.173239 and 0.34599 respectively.
pages. The diversity values of geographically focused collaborative
crawling strategies suggest that most of the geographically
focused collaborative crawling strategies tend to
favor those pages which are found grouped under the same
domain names because of their crawling method. Especially,
the relatively low diversity value of the URL based strategy strongly emphasizes this tendency. Certainly, this matches the intuition, since a page like "http://www.houston-guide.com"
will eventually lead toward the download of its child page
"http://www.houston-guide.com/guide/arts/framearts.html"
which shares the same domain.
In Table 9, we present the communication-overhead of
each crawling strategy.
Cho and Garcia-Molina [10] report
that the communication overhead of the Hash-Based
with two processors is well above five. The communication overhead of the hash based policy that we observe is in line with what they obtained. The communication overhead of the geographically focused collaborative policies is relatively high due to the intensive exchange of URLs between crawling nodes.
In Table 10, we summarize the relative merits of the proposed geographically focused collaborative crawling strategies. In the table, "Good" means that the strategy is expected to perform relatively well for the measure, "Not Bad" means that the strategy is expected to perform acceptably for that particular measure, and "Bad" means that it may perform worse compared to most of the other collaboration strategies.
5.3 Geographic Locality
Many of the potential benefits of topic-oriented collaborative
crawling derive from the assumption of topic locality,
that pages tend to reference pages on the same topic [12,
13]. For instance, a classifier is used to determine whether
the child page is in the same topic as the parent page and
then guide the overall crawling [12]. Similarly, for geographically
focused collaborative crawling strategies we make the
assumption of geographic locality, that pages tend to reference
pages on the same geographic location. Therefore,
the performance of a geographically focused collaborative
crawling strategy is highly dependent on its way of exploiting
the geographic locality.

Type of collaboration | Communication overhead
URL-hash based | 13.89
URL based | 25.72
Extended anchor text based | 61.87
Full content text based | 46.69
Classification based | 58.38
IP-Address based | 0.15
Table 9: Communication-overhead

That is, whether the corresponding strategy is based on adequate features to determine
the geographical similarity of two pages which are possibly linked. We empirically study to what extent the idea of geographic locality holds. Recall that given the list of city-state pairs G = {~c_1, . . . , ~c_k} and a geographically focused crawling collaboration strategy (e.g., URL based collaboration), pr(~c_i|p_j) is the probability that page p_j is pertinent to city-state pair ~c_i according to that particular strategy. Let
gs(p, q), the geographic similarity between pages p and q, be

gs(p, q) = 1 if arg max_{~c_i ∈ G} pr(~c_i|p) = arg max_{~c_j ∈ G} pr(~c_j|q), and gs(p, q) = 0 otherwise.
In other words, our geographical similarity determines whether two pages are pertinent to the same city-state pair. Given Ω, the set of retrieved pages for the considered crawling strategy, let δ(Ω) and δ̃(Ω) be

δ(Ω) = |{(p_i, p_j) | p_i, p_j linked and gs(p_i, p_j) = 1}| / |{(p_i, p_j) | p_i, p_j linked}|

δ̃(Ω) = |{(p_i, p_j) | p_i, p_j not linked and gs(p_i, p_j) = 1}| / |{(p_i, p_j) | p_i, p_j not linked}|
Note that δ(Ω) corresponds to the probability that a pair of linked pages, chosen uniformly at random, is pertinent to the same city-state pair under the considered collaboration strategy, while δ̃(Ω) corresponds to the probability that a pair of unlinked pages, chosen uniformly at random, is pertinent to the same city-state pair under the considered collaboration strategy. Therefore, if geographic locality occurs, then we would expect to have a high δ(Ω) value compared to that of δ̃(Ω). We selected the URL based, the classification based, and the full content based collaboration strategies, and calculated both δ(Ω) and δ̃(Ω) for each collaboration strategy. In Table 11, we show the results of our computation. One may observe from Table 11 that those pages that share the same city-state pair in their URL address have a high likelihood of being linked. Those pages that share the same city-state pair in their content have some likelihood of being linked. Finally, those pages which are classified as sharing the same city-state pair are less likely to be linked.
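The following sketch shows how δ(Ω) and δ̃(Ω) can be estimated from a small sample; exhaustive pair enumeration is used here for clarity, whereas sampling of page pairs would be needed at web scale, and the toy data are illustrative only.

    from itertools import combinations

    def locality(pages, links):
        # pages: dict page_id -> most probable city-state pair; links: set of frozenset({p, q}).
        same_linked = same_unlinked = linked = unlinked = 0
        for p, q in combinations(pages, 2):
            same = pages[p] == pages[q]          # gs(p, q) = 1
            if frozenset((p, q)) in links:
                linked += 1
                same_linked += same
            else:
                unlinked += 1
                same_unlinked += same
        return same_linked / linked, same_unlinked / unlinked

    pages = {1: "Houston, TX", 2: "Houston, TX", 3: "New York, NY", 4: "Houston, TX"}
    links = {frozenset((1, 2)), frozenset((2, 4))}
    delta, delta_tilde = locality(pages, links)
    print(delta, delta_tilde)   # delta exceeds delta_tilde when geographic locality holds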
We may conclude the following:
- The geographical similarity of two web pages affects the likelihood of their being referenced. In other words, geographic locality, that pages tend to reference pages on the same geographic location, clearly occurs on the web.
- A geographically focused collaborative crawling strategy which properly exploits adequate features for determining the likelihood of two pages being in the same geographical scope would be expected to perform well for geographically focused crawling.
Type of collaboration | Geo-coverage | Geo-Focus | Geo-Connectivity | Overlap | Diversity | Communication
URL-Hash Based | Bad | Bad | Bad | Good | Good | Good
URL Based | Good | Good | Good | Good | Bad | Bad
Extended Anchor Text Based | Good | Good | Good | Good | Not Bad | Bad
Full Content Based | Not Bad | Not Bad | Not Bad | Not Bad | Not Bad | Bad
Classification Based | Bad | Bad | Bad | Bad | Not Bad | Bad
IP-Address | Bad | Bad | Bad | Good | Bad | Good
Table 10: Comparison of geographically focused collaborative crawling strategies
Type of collaboration | δ(Ω) | δ̃(Ω)
URL based | 0.41559 | 0.02582
Classification based | 0.044495 | 0.008923
Full content based | 0.26325 | 0.01157
Table 11: Geographic Locality
CONCLUSION
In this paper, we studied the problem of geographically
focused collaborative crawling by proposing several collaborative
crawling strategies for this particular type of crawling.
We also proposed various evaluation criteria to measure the
relative merits of each crawling strategy while empirically
studying the proposed crawling strategies with the download
of real web data. We conclude that the URL based and
the extended anchor text based crawling strategies have the
best overall performance. Finally, we empirically showed
geographic locality, that pages tend to reference pages on
the same geographical scope. For future research, it would be interesting to incorporate more sophisticated features (e.g., based on DOM structures) into the proposed crawling strategies.
ACKNOWLEDGMENT
We would like to thank Genieknows.com for allowing us to access its hardware, storage, and bandwidth resources
for our experimental studies.
REFERENCES
[1] C. C. Aggarwal, F. Al-Garawi, and P. S. Yu. Intelligent crawling on the world wide web with arbitrary predicates. In WWW, pages 96-105, 2001.
[2] E. Amitay, N. Har'El, R. Sivan, and A. Soffer. Web-a-where: geotagging web content. In SIGIR, pages 273-280, 2004.
[3] S. Borgatti. Centrality and network flow. Social Networks, 27(1):55-71, 2005.
[4] P. D. Bra, Y. K. Geert-Jan Houben, and R. Post. Information retrieval in distributed hypertexts. In RIAO, pages 481-491, 1994.
[5] O. Buyukkokten, J. Cho, H. Garcia-Molina, L. Gravano, and N. Shivakumar. Exploiting geographical location information of web pages. In WebDB (Informal Proceedings), pages 91-96, 1999.
[6] S. Chakrabarti. Mining the Web. Morgan Kaufmann Publishers, 2003.
[7] S. Chakrabarti, K. Punera, and M. Subramanyam. Accelerated focused crawling through online relevance feedback. In WWW, pages 148-159, 2002.
[8] S. Chakrabarti, M. van den Berg, and B. Dom. Focused crawling: A new approach to topic-specific web resource discovery. Computer Networks, 31(11-16):1623-1640, 1999.
[9] J. Cho. Crawling the Web: Discovery and Maintenance of Large-Scale Web Data. PhD thesis, Stanford, 2001.
[10] J. Cho and H. Garcia-Molina. Parallel crawlers. In WWW, pages 124-135, 2002.
[11] J. Cho, H. Garcia-Molina, and L. Page. Efficient crawling through url ordering. Computer Networks, 30(1-7):161-172, 1998.
[12] C. Chung and C. L. A. Clarke. Topic-oriented collaborative crawling. In CIKM, pages 34-42, 2002.
[13] B. D. Davison. Topical locality in the web. In SIGIR, pages 272-279, 2000.
[14] M. Diligenti, F. Coetzee, S. Lawrence, C. L. Giles, and M. Gori. Focused crawling using context graphs. In VLDB, pages 527-534, 2000.
[15] J. Ding, L. Gravano, and N. Shivakumar. Computing geographical scopes of web resources. In VLDB, pages 545-556, 2000.
[16] J. Edwards, K. S. McCurley, and J. A. Tomlin. An adaptive model for optimizing performance of an incremental web crawler. In WWW, pages 106-113, 2001.
[17] J. Exposto, J. Macedo, A. Pina, A. Alves, and J. Rufino. Geographical partition for distributed web crawling. In GIR, pages 55-60, 2005.
[18] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein. Categorizing web queries according to geographical locality. In CIKM, pages 325-333, 2003.
[19] A. Heydon and M. Najork. Mercator: A scalable, extensible web crawler. World Wide Web, 2(4):219-229, 1999.
[20] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. J. ACM, 46(5):604-632, 1999.
[21] H. C. Lee and R. Miller. Bringing geographical order to the web. Private communication, 2005.
[22] A. Markowetz, Y.-Y. Chen, T. Suel, X. Long, and B. Seeger. Design and implementation of a geographic search engine. In WebDB, pages 19-24, 2005.
[23] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. A machine learning approach to building domain-specific search engines. In IJCAI, pages 662-667, 1999.
[24] F. Menczer, G. Pant, P. Srinivasan, and M. E. Ruiz. Evaluating topic-driven web crawlers. In SIGIR, pages 241-249, 2001.
[25] T. Mitchell. Machine Learning. McGraw Hill, 1997.
| geographical nodes;crawling strategies;Collaborative crawler;Evaluation criteria;URL based;Geographic Locality;Normalization and Disambiguation of City Names;Search engine;Geographically focused crawling;Scalability;Geographic locality;Collaborative crawling;Anchor text;collaborative crawling;Problems of aliasing and ambiguity;Search localization;IP address based;Focused crawler;Geo-focus;geographic entities;Full Content based;Extracted URL;pattern matching;Crawling strategies;Geo-coverage;Hash based collaboration;geographically focused crawling;Quality issue |
98 | GraalBench: A 3D Graphics Benchmark Suite for Mobile Phones | In this paper we consider implementations of embedded 3D graphics and provide evidence indicating that 3D benchmarks employed for desktop computers are not suitable for mobile environments. Consequently, we present GraalBench, a set of 3D graphics workloads representative for contemporary and emerging mobile devices . In addition, we present detailed simulation results for a typical rasterization pipeline. The results show that the proposed benchmarks use only a part of the resources offered by current 3D graphics libraries. For instance, while each benchmark uses the texturing unit for more than 70% of the generated fragments, the alpha unit is employed for less than 13% of the fragments. The Fog unit was used for 84% of the fragments by one benchmark, but the other benchmarks did not use it at all. Our experiments on the proposed suite suggest that the texturing, depth and blending units should be implemented in hardware, while, for instance, the dithering unit may be omitted from a hardware implementation. Finally, we discuss the architectural implications of the obtained results for hardware implementations. | INTRODUCTION
In recent years, mobile computing devices have been used for a
broader spectrum of applications than mobile telephony or personal
digital assistance. Several companies expect that 3D graphics applications
will become an important workload of wireless devices.
For example, according to [10], the number of users of interactive
3D graphics applications (in particular games) is expected to increase
drastically in the future: it is predicted that the global wireless
games market will grow to 4 billion dollars in 2006. Because
current wireless devices do not have sufficient computational power
to support 3D graphics in real time and because present accelerators
consume too much power, several companies and universities have
started to develop a low-power 3D graphics accelerator. However,
to the best of our knowledge, there is no publicly available benchmark
suite that can be used to guide the architectural exploration of
such devices.
This paper presents GraalBench, a 3D graphics benchmark suite
suitable for 3D graphics on low-power, mobile systems, in particular
mobile phones. These benchmarks were collected to facilitate
our studies on low-power 3D graphics accelerators in the Graal
(GRAphics AcceLerator) project [5]. It includes several games as
well as virtual reality applications such as 3D museum guides. Applications
were selected on the basis of several criteria. For example, CAD/CAM applications, such as those contained in the Viewperf package [18], were excluded because it is unlikely that they will be
offered on mobile devices. Other characteristics we considered are
resolution and polygon count.
A second goal of this paper is to provide a detailed quantitative
workload characterization of the collected benchmarks. For each
rasterization unit, we determine if it is used by the benchmark, and
collect several statistics such as the number of fragments that bypass
the unit, fragments that are processed by the unit and pass the
test, and fragments that are processed but fail the test. Such statistics
can be used to guide the development of mobile 3D graphics
architectures. For example, a unit that is rarely used might not be
supported by a low-power accelerator or it might be implemented
using less resources. Furthermore, if many fragments are discarded
before the final tests, the pixel pipeline of the last stages might be
narrower than the width of earlier stages.
This paper is organized as follows. Previous work on 3D graphics
benchmarking is described in Section 2. In this section we also
give reasons why current 3D graphics benchmarks are not appropriate
for mobile environments. Section 3 first explains how the
benchmarks were obtained, describes our tracing environment, the
simulator we used to collect the statistics and, after that, describes
the components of the proposed benchmark suite and presents some
general characteristics of the workloads. Section 4 provides a workload
characterization of the benchmarks and discusses architectural
implications. Conclusions and directions for future work are given
in Section 5.
RELATED WORK
To the best of our knowledge, 3D graphics benchmarks specifically targeted at low-power architectures have not been proposed.
Furthermore, existing benchmarks cannot be considered to be suited
for embedded 3D graphics architectures. For example, consider
SPEC's Viewperf [18], a well-known benchmark suite used to evaluate
3D graphics accelerator cards employed in desktop computers.
These benchmarks are unsuitable for low-power graphics because
of the following:
The Viewperf benchmarks are designed for high-resolution
output devices, but the displays of current wireless systems
have a limited resolution. Specifically, by default the Viewperf
package is running at resolutions above SVGA (800×600 pixels), while common display resolutions for mobile phones are QCIF (176×144) and QVGA (320×240).
The Viewperf benchmarks use a large number of polygons in
order to obtain high picture quality (most benchmarks have
more than 20,000 triangles per frame [11]). Translated to
a mobile platform, most rendered polygons will be smaller
than one pixel so their contribution to the generated images
will be small or even invisible. Specifically, the polygon
count of the Viewperf benchmarks DRV, DX, ProCDRS, and
MedMCAD is too high for mobile devices.
Some benchmarks of Viewperf are CAD/CAM applications
and use wire-frame rendering modes. It is unlikely that such
applications will be offered on mobile platforms.
Except Viewperf, there are no publicly-available, portable 3D graphics
benchmark suites. Although there are several benchmarking
suites [3, 4] based on the DirectX API, they are not suitable for our
study since DirectX implementations are available only on Windows
systems.
There have been several studies related to 3D graphics workload
characterization (e.g., [11, 6]). Most related to our investigation is
the study of Mitra and Chiueh [11], since they also considered dynamic
, polygonal 3D graphics workloads. Dynamic means that the
workloads consist of several consecutive image frames rather than
individual images, which allows to study techniques that exploit the
coherence between consecutive frames. Polygonal means that the
basic primitives are polygons, which are supported by all existing
3D chips. The main differences between that study and our workload
characterization are that Mitra and Chiueh considered high-end
applications (Viewperf, among others) and measured different
statistics.
Recently, a number of mobile 3D graphics accelerators [1, 16]
have been presented. In both works particular benchmarks were
employed to evaluate the accelerators. However, little information
is provided about the benchmarks and they have not been made
publicly available.
Another reason for the limited availability of mobile 3D graphics
benchmarks is that until recently there was no generally accepted
API for 3D graphics on mobile phones. Recently, due to
high interest in embedded 3D graphics, APIs suitable for mobile
3D graphics such as OpenGL ES [8], Java mobile 3D Graphics
API (JSR-184) [7], and Mobile GL [20] have appeared. Currently,
however, there are no 3D benchmarks written using these APIs. So,
we have used OpenGL applications. Furthermore, our benchmarks
use only a part of the OpenGL functionality which is also supported
by OpenGL ES.
THE GraalBench BENCHMARK SET
In this section we describe the environment we used to create the
benchmarks, the components of our benchmark set and also some
general characteristics of the workloads.
3.1 Tracing Environment
Due to their interactive nature, 3D games are generally not repeatable
. In order to obtain a set of repeatable workloads, we traced
existing applications logging all OpenGL calls. Our tracing environment
consists of two components: a tracer and a trace player.
Our tracer is based on GLtrace from Hawksoft [15]. It intercepts
and logs OpenGL calls made by a running application, and then
calls the OpenGL function invoked by the application. No source
code is required provided the application links dynamically with
the OpenGL library, meaning that the executable only holds links
to the required functions which are bounded to the corresponding
functions at run-time. Statically linked applications, in which case
the required libraries are encapsulated in the executable image, cannot
be traced using this mechanism when the source code is not
available.
We improved GLtrace in two ways. First, GLtrace does not log
completely reproducible OpenGL calls (for example, textures are
not logged). We modified the GLtrace library so that all OpenGL
calls are completely reproducible. Second, the trace produced by
GLtrace is a text trace, which is rather slow. We improved its performance
by adding a binary logging mode that significantly reduces
the tracing overhead.
In addition, we developed a trace player that plays the obtained
traces. It can play recorded frames as fast as the OpenGL implementation
allows. It does not skip any frame so the workload generated
is always the same. The workload statistics were collected
using our own OpenGL simulator based on Mesa [14], which is a
public-domain implementation of OpenGL.
3.2 The Benchmarks
The proposed benchmark suite consists of the following components:
Q3L and Q3H
Quake III [9] or Q3, for short, is a popular interactive
3D game belonging to the shooter games category. A
screenshot of this game is depicted in Figure 1(a). Even
though it can be considered outdated for contemporary PC-class
graphics accelerators, it is an appropriate and demanding
application for low-power devices. Q3 has a flexible design
and permits many settings to be changed such as image
size and depth, texture quality, geometry detail, types of texture
filtering, etc. We used two profiles for this workload in
order to determine the implications of different image sizes and object complexity. The first profile, which will be referred to as Q3H, uses a relatively high image resolution and object detail. The second profile, Q3L, employs a low resolution and object detail. Q3 makes extensive use of blending operations in order to implement multiple texture passes.

[Figure 1: Screenshots of the GraalBench workloads: (a) Q3, (b) Tux, (c) AW, (d) ANL, (e) GRA, (f) DIN]
Tux Racer
(Tux) [19] This is a freely available game that runs on
Linux. The goal of this game is to drive a penguin down
a mountain terrain as quickly as possible, while collecting
herring. The image quality is higher than that of Q3. Tux
makes extensive use of automatic texture coordinate generation
functions. A screenshot can be seen in Figure 1(b).
AWadvs-04
(AW) [18] This test is part of the Viewperf 6.1.2 package
. In this test a fully textured human model is viewed from
different angles and distances. As remarked before, the other tests in the Viewperf package are not suitable for low-power accelerators, because they represent high-end applications or
are from an application domain not likely to be offered on
mobile platforms. A screenshot of AW is depicted in Figure
1(c).
ANL, GRA, and DIN
These three VRML scenes were chosen based
on their diversity and complexity. ANL is a virtual model of
Austrian National Library and consists of 10292 polygons,
GRA is a model of Graz University of Technology, Austria
and consists of 8859 polygons, and Dino (DIN) is a model
of a dinosaur consisting of 4300 polygons. In order to obtain
a workload similar to one that might be generated by a typical
user, we created "fly-by" scenes. Initially, we used VR-Web
[13] to navigate through the scenes, but we found that
the VRMLView [12] navigator produces less texture traffic
because it uses the
glBindTexture
mechanism. Screenshots
of ANL, GRA, and DIN are depicted in Figure 1(d),
(e), and (f), respectively.
GraalBench is the result of extensive searching on the World
Wide Web. The applications were selected on the basis of several
criteria. First, since the display resolution of contemporary mobile
phones is at most 320×240, we excluded applications with substantially higher resolution. Specifically, we used a maximum resolution of 640×480. Second, the applications should be relevant
for a mobile phone, i.e., it should be likely that it will be
offered on a mobile phone. CAD/CAM applications were excluded
for this reason. Third, the level of details of the applications should
not be too high, because otherwise, most rendered polygons will be
smaller than one pixel on the display of a mobile phone. Fourth,
and finally, the benchmarks should have different characteristics.
For example, several links to 3D games have recently been provided
on Mesa's website (www.mesa3d.org). However, these
games such as Doom, Heretic, and Quake II belong to the same
category as Quake III and Tux Racer, and therefore do not represent
benchmarks with substantially different characteristics. We,
therefore, decided not to include them.
Applications using the latest technologies (Vertex and Pixel Shaders) available on desktop 3D graphics accelerators were also not
included since these technologies are not supported by the embedded
3D graphics APIs mentioned in Section 2. We expect that more
3D graphics applications for low-power mobile devices will appear
when accelerators for these platforms will be introduced.
3.3 General Characteristics
Table 1 presents some general statistics of the workloads. The characteristics and statistics presented in this table are:
Image resolution
Currently, a low-power accelerator should be able to handle scenes with a typical resolution of 320×240 pixels. Since in the near future the typical resolution is expected to increase, we decided to use a resolution of 640×480. The Q3L benchmark uses a lower resolution (320×240) in order to study the impact of changing the resolution.
Frames
The total number of frames in each test.
Avg. triangles
The average number of triangles sent to the rasterizer
per frame.
Avg. processed triangles
The average number of triangles per frame
that remained after backface culling, i.e., the triangles that
remained after eliminating the triangles that are invisible because
they are facing backwards from the observer's viewpoint
.
Avg. area
The average number, per frame, of fragments/pixels after
scan conversion.
Texture size
The total size of all textures per workload. This quantity gives an indication of the amount of texture memory required.
Maximum triangles
The maximum number of triangles that were
sent for one frame. Because most 3D graphics accelerators
implement only rasterization, this statistic is an approximation
of the bandwidth required for geometry information,
since triangles need to be transferred from the CPU to the
accelerator via a system bus. We assume that triangles are
represented individually. Sharing vertices between adjacent triangles makes it possible to reduce the required bus bandwidth. This
quantity also determines the throughput required in order to
achieve real-time frame rates. We remark that the maximum
number rather than the average number of triangles per frame
determines the required bandwidth and throughput.
Maximum processed triangles per frame
The maximum number
of triangles that remained after backface culling over all frames.
Maximum area per frame
The maximum number of fragments
after scan conversion, over all frames.
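Backface culling, mentioned under "Avg. processed triangles" above, reduces to a sign test on the screen-space area of each triangle. The minimal sketch below (Python, with hypothetical vertex coordinates and an assumed counter-clockwise front-face winding) illustrates the idea; it is not code from the benchmark tools.

# Minimal backface-culling sketch: a triangle whose signed screen-space area
# is not positive faces away from the observer and can be discarded before
# rasterization. Vertices are hypothetical (x, y) screen coordinates.

def signed_area(v0, v1, v2):
    return (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])

def cull_backfaces(triangles):
    return [t for t in triangles if signed_area(*t) > 0]

tris = [((0, 0), (10, 0), (0, 10)),   # counter-clockwise: front-facing, kept
        ((0, 0), (0, 10), (10, 0))]   # clockwise: back-facing, culled
print(len(cull_backfaces(tris)), "of", len(tris), "triangles survive culling")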
Several observations can be drawn from Table 1. First, it can be
observed from the columns labeled "Received triangles" that the
scenes generated by Tux and Dino have a relatively low complexity
, that Q3, ANL, and Graz consist of medium complexity scenes,
and that AW produces the most complex scenes by far. Second,
backface culling is effective in eliminating invisible triangles. It
eliminates approximately 30% of all triangles in the Q3 benchmarks
, 24% in Graz, and more than half (55%) of all triangles in
AW. Backface culling is not enabled in the ANL and Dino workloads
. If we consider the largest number of triangles remaining
after backface culling (14236 for ANL) and assume that each triangle
is represented individually and requires 28 bytes (xyz coordinates
, 4 bytes each, rgb for color and alpha for transparency, 1 byte
each, and uvw texture coordinates, 4 bytes each) for each of its vertices
, the required bus bandwidth is approximately 1.2 MB/frame or 35.9 MB/s to render 30 frames per second. Finally, we remark
that the largest amount of texture memory is required by the Q3
and Tux benchmarks, and that the other benchmarks require a relatively
small amount of texture memory.
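The bus-bandwidth figure quoted above follows from simple arithmetic; the sketch below (Python) reproduces it using the per-vertex layout assumed in the text and MB = 10^6 bytes.

# Reproduces the bus-bandwidth estimate from the text: 14236 triangles in the
# largest ANL frame after culling, 3 individually represented vertices per
# triangle, 28 bytes per vertex, 30 frames per second.
BYTES_PER_VERTEX = 3 * 4 + 4 * 1 + 3 * 4      # xyz + rgba + uvw = 28 bytes
TRIANGLES_PER_FRAME = 14236
FPS = 30

bytes_per_frame = TRIANGLES_PER_FRAME * 3 * BYTES_PER_VERTEX
print("per frame: %.1f MB" % (bytes_per_frame / 1e6))            # ~1.2 MB/frame
print("at 30 fps: %.1f MB/s" % (bytes_per_frame * FPS / 1e6))    # ~35.9 MB/s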
Table 2: Stress variation and stress strength on various stages of the 3D graphics pipeline

Bench.   T&L Var.   T&L Str.   Rasterization Var.   Rasterization Str.
Q3L      med        med        med                  med
Q3H      med        med        med                  high
Tux      var        low        low                  med
AW       low        high       high                 low
ANL      high       med        med                  med
GRA      high       med        med                  low
DIN      low        med        med                  low
WORKLOAD CHARACTERIZATION
This section provides the detailed analysis of results we obtained
by running the proposed benchmark set. For each unit of a typical
rasterization pipeline we present the relevant characteristics followed
by the architectural implications.
4.1 Detailed Workload Statistics
One important aspect of 3D graphics benchmarking is to determine possible bottlenecks in a 3D graphics environment, since such an environment has a pipeline structure and different parts of the pipeline can be implemented on separate computing resources such as general-purpose processors or graphics accelerators. Balancing the load on these resources is an important decision.
Bottlenecks in the transform & lighting (T&L) part of the pipeline
can be generated by applications that have a large number of primitives
, i.e. substantial geometry computation load, where each primitive
has a small size, i.e. reduced impact on the rasterization part
of the pipeline, while bottlenecks in the rasterization part are usually generated by fill-intensive applications that use a small
number of primitives where each primitive covers a substantial part
of the scene. An easy way to determine if an application is for
instance rasterization intensive is to remove the rasterization part
from the graphics pipeline and determine the speed up.
The components of the GraalBench were also chosen to stress
various parts of the pipeline. For instance the AW and DIN components
generate an almost constant number of primitives while the
generated area varies substantially across frames, thus in these scenarios
the T&L part of the pipeline has a virtually constant load
while the rasterization part has a variable load. This behavior, depicted
in Figure 4 (c,d,g,h), can emphasize the role of the rasterization
part of the pipeline. The number of triangles received gives
an indication of the triangles that have to be transformed and lit,
while the number of triangles processed gives an indication of the
triangles that were sent to the rasterization stage after clipping and
culling. Other components, e.g. Tux, generate a variable number
of triangles while the generated area is almost constant, thus they
can be used to profile bottlenecks in the T&L part of the graphics
pipeline.
Another important aspect, besides the variation of the workload for a certain pipeline stage, is the stress strength of the various workloads. For convenience, Table 2 also presents a rough view of the stress variation and stress strength along the 3D graphics pipeline. The stress variation represents how much a workload varies from one frame to another, while the stress strength represents the load generated by each workload.
On the proposed benchmarks we determined that, for a software implementation, the most computationally intensive part of the graphics pipeline is the rasterization part.
Table 1: General statistics of the benchmarks

Bench.  Resolution  Frames  Textures (MB)  Received triangles (avg/max)  Processed triangles (avg/max)  Area (avg/max)
Q3L     320×240     1,379   12.84          4.5k / 9.7k                   3.25k / 6.8k                   422k / 1,327k
Q3H     640×480     1,379   12.84          4.6k / 9.8k                   3.36k / 6.97k                  1,678k / 5,284k
Tux     640×480     1,363   11.71          3k / 4.8k                     1.8k / 2.97k                   760k / 1,224k
AW      640×480     603     3.25           23k / 25.7k                   10.55k / 13.94k                63k / 307k
ANL     640×480     600     1.8            4.45k / 14.2k                 4.45k / 14.2k                  776k / 1,242k
GRA     640×480     599     2.1            4.9k / 10.8k                  3.6k / 6.9k                    245k / 325k
DIN     640×480     600     1.7            4.15k / 4.3k                  4.15k / 4.3k                   153k / 259k
Figure 2: Graphics pipeline for rasterization. The rasterizer receives primitives from the transform & lighting stage and processes them through Triangle Setup, Edge Walk, and Span Interpolation; fragments then pass through Texturing (Texture Unit and texture memory), Color Sum, Fog, Pixel Ownership, Scissor Test, Alpha Test, Stencil Test, Depth Test, Blending, Dithering, and LogicOp, together with a Clear unit and the color, depth, and stencil buffers in memory.
This is the reason why only the rasterizer stage was studied in more detail. The units
for which we further gathered results are depicted in Figure 2 and
described in the following:
Triangle Setup, EdgeWalk, and Span Interpolation.
These
units convert primitives to fragments. We used the same algorithm
as employed in the Mesa library rasterizer. We remark that the
number of processed triangles in Table 3 is smaller than the number
of processed triangles in Table 1 because some triangles were
small and discarded using supplementary tests, such as a small triangle
filter. The average number of processed triangles is the lowest for Tux, medium for the Q3 and VRML components, and substantially higher for AW, especially considering that the triangles of the AW and VRML components were generated in approximately half as many frames as in Q3 and Tux. The numbers
of generated spans and fragments also give an indication of
the processing power required at the EdgeWalk and Span Interpolation
units. The AW benchmark generates on average only 4 spans
per triangle and approximately 2 fragments per span. These results
show that the benchmark that could create a pipeline bottleneck in
these units is the AW benchmark since it has small triangles (small
impact on the rest of the pipeline) and it has the largest number of
triangles (that are processed at the Triangle Setup).
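As a quick cross-check of the observation that AW generates about 4 spans per triangle and 2 fragments per span, the ratios can be recomputed directly from Table 3; the short sketch below (Python, with the Table 3 values) does this.

# Per-triangle and per-span ratios derived from Table 3 (values in thousands).
stats = {
    #       triangles  spans     fragments
    "Q3L": (4147,      58837,    581887),
    "Q3H": (4430,      117253,   2306487),
    "Tux": (2425,      27928,    1037222),
    "AW":  (5537,      20288,    38044),
    "ANL": (2528,      66806,    466344),
    "GRA": (1992,      14419,    146604),
    "DIN": (2487,      23901,    91824),
}
for name, (tris, spans, frags) in stats.items():
    print("%s: %.1f spans/triangle, %.1f fragments/span"
          % (name, spans / tris, frags / spans))
# AW comes out at roughly 3.7 spans per triangle and 1.9 fragments per span,
# matching the "4 spans / 2 fragments" statement in the text.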
Clear Unit.
The clear unit is used to fill the Depth buffer and/or
the Color buffer with a default value. As can be seen in Table 4,
the Q3 benchmark uses only depth buffer clearing, except for one
initial color buffer clear. Q3 exploits the observation that all pixels
from a scene are going to be filled at least once so there is no need
to clear the color buffer. The other benchmarks have an equal number
of depth and color buffer clears. Although the clear function
is called a relatively small number of times, the number of cleared
pixels can be as high as 20% of the pixels generated by the rasterizer. This implies that this unit should be optimized for long write bursts to the graphics memory.
Texture Unit.
When enabled, this unit combines the color of the
incoming fragment with a texture sample color. Depending on the
texturing filter chosen, the texture color is obtained by a direct look-up
in a texture map or by a linear interpolation between several
colors (up to 8 colors in the case of trilinear interpolation).
The results obtained for the texture unit are depicted in Figure 3.
The Q3 and VRML benchmarks used the texture unit for all fragments, while the Tux and AW benchmarks used texturing for 75% and 90% of the fragments, respectively.
This unit is the most computationally intensive unit of a rasterizer and can easily become a pipeline bottleneck, thus it should be highly optimized for speed. Besides requiring high computational power, this unit also requires a large number of accesses to the texture memory. However, due to high spatial locality, even by adding a small texture cache, the traffic from the off-chip texture memory to the texture unit can be substantially reduced [2].
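To give an impression of why texturing is so costly per fragment, the sketch below shows a bilinear texture lookup (the 4-texel case; trilinear filtering repeats this on two mipmap levels and blends the results, which gives the up-to-8-texel cost mentioned above). It is a generic illustration with a hypothetical texture, not code from the benchmarks or from the Mesa rasterizer.

# Generic bilinear texture filtering sketch: 4 texel fetches plus 3 linear
# interpolations per fragment (trilinear filtering doubles the fetches and
# adds one more interpolation between two mipmap levels).

def lerp(a, b, t):
    return a + (b - a) * t

def bilinear_sample(texture, u, v):
    """texture: 2D list of grayscale texels; (u, v) in [0, 1]."""
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = lerp(texture[y0][x0], texture[y0][x1], fx)
    bottom = lerp(texture[y1][x0], texture[y1][x1], fx)
    return lerp(top, bottom, fy)

tex = [[0, 64], [128, 255]]              # hypothetical 2x2 texture
print(bilinear_sample(tex, 0.5, 0.5))    # 111.75, a blend of the 4 texels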
Fog Unit.
The Fog unit is used to modify the fragment color in order to simulate atmospheric effects such as haze, mist, or smoke. Only the Tux benchmark uses the fog unit, and it was enabled for 84% of the fragments. Of the three types of fog (linear, exponential, and squared exponential), only linear fog was used. The results suggest that for these low-end applications the fog unit is seldom used and that it might be implemented using slower components or off the critical path.
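For reference, linear fog blends the fragment color with a constant fog color using a factor derived from the fragment's depth; the sketch below is a generic OpenGL-style linear fog computation, not code from the Tux benchmark.

# Generic linear fog sketch: factor f = (end - z) / (end - start), clamped to
# [0, 1]; the final color is a blend of the fragment color and the fog color.

def linear_fog(frag_color, fog_color, z, start, end):
    f = max(0.0, min(1.0, (end - z) / (end - start)))
    return tuple(f * c + (1.0 - f) * g for c, g in zip(frag_color, fog_color))

print(linear_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), z=7.5, start=5.0, end=10.0))
# halfway into the fog range -> color halfway between red and gray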
Table 3: Triangle Setup, Edge Walk, and Span Interpolation units statistics

                      Q3L       Q3H         Tux         AW        ANL       GRA       DIN
Triangles processed   4,147k    4,430k      2,425k      5,537k    2,528k    1,992k    2,487k
Generated spans       58,837k   117,253k    27,928k     20,288k   66,806k   14,419k   23,901k
Generated fragments   581,887k  2,306,487k  1,037,222k  38,044k   466,344k  146,604k  91,824k
Table 4: Clear unit statistics

                     Q3L       Q3H       Tux       AW        ANL       GRA       DIN
Clear depth calls    5,470     5,470     1,363     603       601       600       602
Clear color calls    1         1         1,363     604       601       600       602
Clear depth pixels   105,821k  423,284k  418,714k  185,242k  184,320k  184,013k  189,934k
Clear color pixels   76,800    307,200   418,714k  185,549k  184,320k  184,013k  189,934k
Scissor Unit.
This unit is used to discard fragments that are
outside of a specified rectangular area. Only Q3 employed scissoring
. All incoming fragments were processed and passed the test,
so even in this case the test is redundant since no fragments were
rejected. Normally, the scissor unit is used to restrict the drawing
process to a certain rectangular region of the image space. In
Q3 this unit, besides being always enabled for the whole size of
the image space (to clip primitives outside it), in some cases it is
also used to clear the depth component of a specific region of the
image so that certain objects (interface objects) will always be in
front of other objects (normal scene). Even though it might be used
intensively by some applications, this unit performs simple computations
, such as comparisons, and performs no memory accesses so
it does not require substantial computational power.
Alpha Unit.
This unit discards fragments based on their alpha
color component. This unit was used only in Q3 and Tux. Furthermore
, Q3 used the alpha unit only for a very small number
of fragments (0.03%). The only comparison function used was
"Greater or Equal". However, this is not a significant property
since the other comparison functions (modes) do not require a substantial
amount of extra hardware to be implemented. The number
of passed fragments could not be determined since the texturing
unit of our graphics simulator is not yet complete, and the alpha test
depends on the alpha component that can be modified by the texture
processing unit. However, this unit is used significantly only
for the Tux benchmark so this is the only benchmark that could
have produced different results. Furthermore, the propagated error
for the results we obtained can be at most 7.8% since 92.2% of the
fragments generated by Tux bypassed this unit. We, therefore, assumed
that all fragments passed the alpha test. This corresponds
to the worst case. Since this unit is seldom used, it could be
implemented using a more conservative strategy toward allocated
resources.
Depth Unit.
This unit discards a fragment based on a comparison
between its depth value and the depth value stored in the depth
buffer in the fragment's corresponding position. This unit was used
intensively by all benchmarks, as can be seen in Table 5. While the Tux, AW, and VRML benchmarks write almost all fragments that
passed the depth test to the depth buffer, the Q3 benchmark writes
to the depth buffer only 36% of the fragments that passed the test.
This is expected since Q3 uses multiple steps to apply textures to
primitives and so it does not need to write to the depth buffer at each
step. This unit should definitely be implemented in an aggressive
manner with respect to throughput (processing power) and latency,
since for instance the depth buffer read/write operations used at this
unit are quite expensive.
Blending Unit.
This unit combines the color of the incoming
fragment with the color stored at the corresponding position in the
framebuffer. As depicted in Figure 3, this unit is used only by the
Q3 and Tux benchmarks. The AW and VRML benchmarks do not
use this unit since they use only single textured primitives and all
blending operations are performed at the texturing stage. Q3, on the
other hand, uses a variety of blending modes, while Tux employs
only a very common blending mode (source factor = incoming pixel alpha and destination factor = 1 - incoming pixel alpha). An explanation of why Tux
manages to use only this mode is that Tux uses the alpha test instead
of multiple blending modes. Alpha tests are supposed to be
less computationally intensive than blending operations since there
is only one comparison per fragment, while the blending unit performs
up to 8 multiplications and 4 additions per fragment. Based
on its usage and computational power required, the implementation
of this unit should be tuned toward performance.
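The common mode used by Tux corresponds to standard alpha blending; the sketch below illustrates the per-fragment cost (a generic illustration assuming RGBA components in [0, 1], not code from the benchmarks).

# Generic alpha blending sketch for the mode (src = alpha, dst = 1 - alpha):
# out = src_color * alpha + dst_color * (1 - alpha), applied per RGBA channel,
# i.e. up to 8 multiplications and 4 additions per fragment as noted above.

def blend_src_alpha(src_rgba, dst_rgba):
    alpha = src_rgba[3]
    return tuple(s * alpha + d * (1.0 - alpha) for s, d in zip(src_rgba, dst_rgba))

incoming = (1.0, 0.0, 0.0, 0.25)      # hypothetical translucent red fragment
framebuffer = (0.0, 0.0, 1.0, 1.0)    # hypothetical opaque blue pixel
print(blend_src_alpha(incoming, framebuffer))   # (0.25, 0.0, 0.75, 0.8125)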
Unused Units.
The LogicOp, Stencil and Color Sum units are
not used by any benchmark. The dithering unit is used only by the
AW benchmark (for all fragments that passed the blending stage).
Since these units are expected to be hardly used, their implementation
could be tuned toward low-power efficiency.
4.2 Architectural Implications Based on Unit Usage
In this section the usage of each unit for the selected benchmarks
is presented. The statistics are gathered separately for each benchmark
. Figure 3 breaks down the number of fragments received by
each unit into fragments that bypassed the unit, fragments that were
processed by the unit and passed the test, and fragments that failed
the test. All values are normalized to the number of fragments generated
by the Span Interpolation unit.
From Figure 3 it can be seen that the Q3 benchmark is quite scalable
and the results obtained for the low resolution profile (Q3L)
are similar to the results obtained for the high resolution profile (Q3H). The Q3 benchmark can be characterized as an application that uses textures for most of its primitives. The Tux component also uses textures for more than 70% of its primitives, and it also uses the fog unit. The AW component does not use the scissor test as the other components do, and it has no pixels rejected at the depth test.
Table 5: Depth unit statistics

                                 Q3L       Q3H         Tux         AW       ANL       GRA       DIN
Incoming frags.                  581,887k  2,306,487k  1,037,222k  38,044k  466,344k  146,604k  91,824k
Processed frags.                 578,345k  2,292,357k  512,618k    38,044k  466,344k  146,604k  91,824k
Passed frags.                    461,045k  1,822,735k  473,738k    35,037k  281,684k  137,109k  73,268k
Frags. written to depth buffer   166,624k  666,633k    462,520k    35,037k  281,684k  137,109k  73,268k
Figure 3: Rasterization pipeline units usage. For each benchmark (Q3L, Q3H, Tux, AW, ANL, GRA, DIN) and each unit (Texture, Fog, Scissor, Alpha, Depth, Blending, Dithering), the fragments are broken down into passed, failed, and bypassed, normalized to the number of fragments generated by the Span Interpolation unit.
Another difference from the previous components is that AW
is also using the dithering mechanism in order to improve the image
quality on displays with a low color depth. Some architectural implications based on the unit usage are the following. Units such as Color Sum, LogicOp, and Stencil were not used, so they might not need to be implemented in hardware. Units such as Fog and Alpha were used less and can also be implemented outside the critical path. The Depth and Blending units should be hard-wired and tuned toward performance. The texture unit should definitely be the focus of a high-performance implementation since, due to the processing power required, it can easily become a bottleneck for the graphics pipeline.
CONCLUSIONS AND FUTURE WORK
Although high-end 3D graphics benchmarks have been available
for some time, there are no benchmark suites dedicated to embedded
3D graphics accelerators. In this paper we have described a set of relevant applications for the performance evaluation of embedded 3D graphics accelerators. One of the objectives of this paper was also to determine which features of 3D graphics implementations are used in relevant 3D graphics applications. We have identified a number of units of the 3D graphics pipeline that are used intensively, such as the texture and depth units, while, for instance, the stencil, fog, and dithering units are rarely used.
The OpenGL applications that were used to create the benchmarks
and the GLtrace tracer are accessible via the first author's website (http://ce.et.tudelft.nl/~tkg/). The benchmarks
(i.e. the traces) cannot be made public currently, because
they are of no use without the trace player and the trace player is
confidential at the moment. However, the Quake III (demo version)
and the AWadvs-04 components do not require the use of the trace
player in order to generate repeatable workloads. We hope to be
able to make the benchmark suite publicly available in the future.
As future work, we intend to extend the number of components
for this benchmark suite, and we also intend to extend the statistics
to include results from embedded graphics architectures that are
using a tile-based rendering mechanism.
REFERENCES
[1] T. Akenine-Moller and J. Strom, "Graphics for the
Masses: A Hardware Rasterization Architecture for
Mobile Phones", ACM Trans. on Graph.,vol 22, nr 3,
2003, pp. 801-808.
[2] I. Antochi, B.H.H. Juurlink, A. G. M. Cilio, and P. Liuha.
"Trading Efficiency for Energy in a Texture Cache
Architecture", Proc. Euromicro Conf. on
Massively-Parallel Computing Systems (MPCS'02), 2002,
Ischia, Italy, pp. 189-196.
[3] Futuremark Corporation, "3DMark01SE", Available at
http://www.futuremark.com/products/3dmark2001/
[4] Futuremark Corporation, "3DMark03", Available at
http://www.futuremark.com/products/3dmark03/
[5] D. Crisu, S.D. Cotofana, S. Vassiliadis, and P. Liuha,
"GRAAL -- A Development Framework for Embedded
Graphics Accelerators", Proc. Design, Automation and
Test in Europe (DATE 04), Paris, France, February 2004.
[6] J.C. Dunwoody and M.A. Linton. "Tracing Interactive 3D
Graphics Programs", Proc. ACM Symp. on Interactive 3D
Graphics, 1990.
Figure 4: Triangle and area statistics for the GraalBench components. Panels (a)-(l) plot, per frame, the number of received and processed triangles and the generated area for Q3L, Tux, DIN, AW, GRA, and ANL.
[7] JSR-184 Expert Group, "Mobile 3D Graphics API for Java 2 Micro Edition", Available at http://jcp.org/aboutJava/communityprocess/final/jsr184/index.html
[8] The Khronos Group, "OpenGL ES Overview", Available
at http://www.khronos.org/opengles/index.html
[9] Id Software Inc., "Quake III", Available at
http://www.idsoftware.com
[10] ARM Ltd., "ARM 3D Graphics Solutions", Available at
http://www.arm.com/miscPDFs/1643.pdf
[11] T. Mitra and T. Chiueh. "Dynamic 3D Graphics Workload
Characterization and the Architectural Implications",
Proc. 32nd ACM/IEEE Int. Symp. on Microarchitecture
(MICRO), 1999, pp. 62-71.
[12] Systems in Motion, "VRMLView", Available at
http://www.sim.no
[13] M. Pichler, G. Orasche, K. Andrews, E. Grossman, and
M. McCahill, "VRweb: a Multi-System VRML Viewer",
Proc. First Symp. on Virtual Reality Modeling Language,
1995, San Diego, California, United States, pp. 77-85.
[14] The Mesa Project, "The Mesa 3D Graphics Library",
Available at http://www.mesa3d.org
[15] Hawk Software, "GLTrace Programming Utility",
Available at http://www.hawksoft.com/gltrace/
[16] J. Sohn, R. Woo, and H.J. Yoo, "Optimization of Portable
System Architecture for Real-Time 3D Graphics", Proc.
IEEE Int. Symp. on Circuits and Systems (ISCAS 2002),
Volume: 1 , 26-29 May 2002 pp. I-769 - I-772 vol.1.
[17] SourceForge, "spyGLass: an OpenGL Call Tracer and
Debugging Tool", Available at
http://spyglass.sourceforge.net/
[18] SPEC, "SPECviewperf 6.1.2", Available at
http://www.specbench.org/gpc/opc.static/opcview.htm
[19] Sunspire Studios, "Tux Racer", Available at
http://tuxracer.sourceforge.net/
[20] Portable 3D Research Group at Korea Advanced Institute
of Science and Technology, "MobileGL - The Standard
for Embedded 3D Graphics", Available at http://ssl.kaist.
ac.kr/projects/portable3D.html/main mgl defnition.htm
[21] Stanford University, "GLSim & GLTrace", Available at
http://graphics.stanford.edu/courses/cs448a-01-fall/glsim.html
[22] Yonsei University, 3D Graphics Accelerator Group,
http://msl.yonsei.ac.kr/3d/
| mechanism;workload characterization;API;Mobile devices;embedded 3D graphics;accelerators;3D graphics benchmarking;real-time;bottlenecks;rasterization;Graalbench;architecture;embedded systems;workload;benchmark;3D graphics;pipeline;Mobile environments;3D graphics applications;mobile phones;triangles;openGL;unit;GraalBench;performance;measurement;statistics;OpenGL;transform and lighting;embedded 3D graphics architectures;3D graphics benchmarks |
99 | Handoff Trigger Table for Integrated 3G/WLAN Networks | Vertical handoff is a switching process between heterogeneous wireless networks in a hybrid 3G/WLAN network. Vertical handoffs from WLAN to the 3G network often fail due to the abrupt degradation of the WLAN signal strength in the transition areas. In this paper, a Handoff Trigger Table is introduced to improve the performance of vertical handoff. Based on this table, a proactive handoff scheme is proposed. Simulation results show that with the proposed scheme, the vertical handoff decisions will be more efficient so that the dropping probability can be decreased dramatically. | INTRODUCTION
With the emergence of different wireless technologies, which
are developed for different purposes, the integration of these
wireless networks has attracted much attention from both
academia and industry. Among them, the integration of 3G
cellular networks and wireless local access networks (WLAN)
has become a very active area in the development toward the
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
IWCMC'06, July 3-6, 2006, Vancouver, British Columbia, Canada.
Copyright 2006 ACM 1-59593-306-9/06/0007 ...$5.00.
next generation wireless networks. WLAN access technology
can offer high-speed data connections in a small coverage area with relatively low cost. On the other hand, cellular networks can offer connectivity over several square kilometers but with a relatively low data rate. Taking advantage of both networks will bring great benefits to both service providers and users.
One of the desired features of such a heterogeneous wireless
network is to support seamless global roaming or vertical
handoff. Traditionally, handoff is performed within the
same wireless system, which is called horizontal handoff.
In contrast, a vertical handoff takes place between different
wireless networks [2]. In an integrated 3G/WLAN network, there are two directions of vertical handoff: one is from WLANs to 3G networks and the other is from 3G networks to WLANs. In the first direction, the objective for handoff
is to maintain the connectivity, i.e., switching to the cellular
network before the WLAN link breaks while trying to stay in
the WLAN as long as possible because of its relatively high
bandwidth and low cost. Since a WLAN has smaller coverage
and is usually covered by a 3G network, when the mobile
terminal (MT) steps out of the WLAN area, the decay of the signal from the WLAN should be accurately detected. A timely handoff decision should be made, and the MT should switch the connection to the appropriate 3G network successfully. In the second direction, the objective of handoff is usually to improve QoS and acquire higher bandwidth at lower cost. In this paper, we will focus on the first direction, which is the handoff from the WLAN to the 3G network.
For a WLAN, signal power is limited, which causes the signal strength to be quite easily influenced by physical obstructions and blockages. For example, if the MT passes some blocks or moves into an elevator, there will be an abrupt drop in its received WLAN signal strength. In this case, the MT may not have enough time to finish the WLAN-to-3G vertical handoff procedure before the link to the WLAN breaks. Therefore, how to effectively detect the signal decay to trigger the handoff becomes a very important issue.
In this paper, we propose to maintain a Handoff Trigger
Table (HTT) at the Access Point (AP) to record some
location information on transition areas in which vertical handoffs occur. With the information in the HTT, a proactive handoff scheme is proposed that enables the MTs to start the vertical handoff procedure early enough to complete it, so that the handoff call dropping probability can be decreased dramatically.
The rest of this paper is organized as follows. In Section 2,
some related work is discussed. In Section 3, we propose a
Handoff Trigger Table to assist making handoff decisions.
The handoff schem e based on this table is also presented.
Section 4 gives simulation results to demonstrate the better
performance of the proposed scheme in comparison with the
traditional one. Section 5 concludes the paper.
RELATED WORK
In traditional handoff schemes, the received signal strength
(RSS) has been used as an indicator for making handoff decisions
. Some of the traditional approaches are as follows [5]:
RSS: handoff takes place if the RSS of a candidate point of attachment is larger than the RSS of the current point of attachment (RSS_new > RSS_current);
RSS plus threshold: handoff is made if the RSS of a candidate point of attachment is larger than that of the current point of attachment and the latter is less than a certain pre-defined threshold T (RSS_new > RSS_current and RSS_current < T);
RSS plus hysteresis: a handoff takes place if the RSS of the candidate point of attachment is larger than the RSS of the current one with a pre-defined hysteresis margin H (RSS_new > RSS_current + H);
Algorithm plus dwell timer: sometimes a dwell timer can be added to the above algorithms. This timer is started when one of the above conditions happens, and the handoff takes place if the condition is met for the entire dwell timer interval.
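A minimal sketch of these classic RSS-based triggers is given below (Python); the threshold, hysteresis margin, and dwell length are illustrative values, not taken from the paper.

# Sketch of the traditional RSS-based handoff triggers listed above.
# Threshold, hysteresis margin, and dwell length are illustrative only.

def trigger_threshold(rss_new, rss_current, T=-80.0):
    """RSS plus threshold: candidate stronger and current below threshold T."""
    return rss_new > rss_current and rss_current < T

def trigger_hysteresis(rss_new, rss_current, H=5.0):
    """RSS plus hysteresis: candidate stronger by at least margin H (dB)."""
    return rss_new > rss_current + H

def trigger_with_dwell(samples, condition, dwell=3):
    """Condition must hold for the last `dwell` (rss_new, rss_current) samples."""
    return len(samples) >= dwell and all(condition(n, c) for n, c in samples[-dwell:])

history = [(-78, -86), (-77, -87), (-76, -88)]   # (candidate, current) in dBm
print(trigger_with_dwell(history, trigger_threshold))   # True for these samples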
For the vertical handoff process, it may not be very reliable
to make handoff decisions based only on the RSS of the point
of attachment (e.g., AP of the WLAN) and the candidate
point of attachment (e.g., base station of the 3G network)
because of the asymmetric nature of the handoff problem
[6].
As mentioned before, the handoff from WLAN to 3G network
is expected to be efficient and effective. Some methods
have been proposed to achieve this goal. One of them
is the Fast Fourier Transform (FFT)-based decay detection
[1]. This approach tries to estimate the signal decay, and
will trigger the handoff after the signal is confirmed to be
decreased to a certain threshold. However, this approach
has high calculation complexity with the need of frequent
sampling, and suffers from estimation errors. In [2], handoff
triggering nodes are used to notify the mobile terminal to
start the handoff. These special nodes are data stations installed
in WLAN/cellular transition regions where vertical
handoffs occur. When an MT moves close to it, the handoff
triggering node will send a handoff trigger command to trigger
the link layer handoff. Using handoff trigger nodes can be effective at triggering handoffs, but if they are needed in many places within a WLAN, it will be costly to set up many trigger nodes. In addition, if new blocks appear
in the WLAN, it is hard to determine where the additional
trigger nodes should be installed. Therefore, this approach
is not very flexible. Fuzzy-logic-based handoff algorithms [3][4] have been proposed to assist in making handoff decisions. These algorithms decrease the handoff delay and the number of unnecessary handoffs by changing the RSS averaging window according to the MT speed. It is worth mentioning that some fuzzy-logic-based algorithms are complex and may not be easy to implement in practical systems.
HANDOFF TRIGGER TABLE FOR 3G/WLAN NETWORKS
Recently, WLAN has been expected to provide user location
information [6]-[10], which is helpful for making vertical
handoffs. We propose to use a Handoff Trigger Table (HTT)
to store such location information at the AP and utilize it to
trigger the WLAN-to-3G vertical handoff explicitly. Based
on this table, a proactive handoff scheme is proposed to assist MTs in handing off at the right places at the right time.
With this scheme, handoff decisions can be more efficient
and handoffs are more likely to succeed compared with the
traditional schemes which trigger the handoff mainly based
on the received signal strength at the MTs.
3.1 Handoff Trigger Table
A typical integrated 3G/WLAN network is shown in Fig. 1.
The HTT is normally implemented at the APs of the WLAN and used to record user location information that is helpful for making handoff decisions, as explained later. An
example of the HTT is given in Table 1.
Table 1: An example of a Handoff Trigger Table

X_1 + D > x > X_1 - D    Y_1 + D > y > Y_1 - D
X_2 + D > x > X_2 - D    Y_2 + D > y > Y_2 - D
...
X_n + D > x > X_n - D    Y_n + D > y > Y_n - D
In the HTT, the information of the locations where an MT needs to handoff is given. We define Black Holes (BHs) as the small areas in which the received signal strength at an MT decreases abruptly and the link to the AP breaks in a very short time. In such BHs, if the MT does not switch the connection to the 3G network, it will not be able to maintain connectivity to its correspondent node. Each BH is proposed to be covered by a slightly larger area, namely, a proactive area. When MTs move into a proactive area, the AP will send a message to notify the MT to start a proactive handoff (detailed in Section 3.2), regardless of the current received signal strength at the MT. In Table 1, (X_i, Y_i), i = 1, 2, ..., n, are the location coordinates of the centers of the BHs, and D is the distance between the center of the black hole and the proactive area edges. (X_n + D > x > X_n - D, Y_n + D > y > Y_n - D) is an example of a possible way to describe
the proactive area. Note that other types of proactive areas
such as circles can also be adopted. Fig. 1 illustrates the
above two concepts, wherein A, B and C are three BHs
within the WLAN, and A', B', C' are the corresponding
proactive areas.
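The proactive-area test implied by Table 1 is a simple per-entry bounding-box check on the MT's reported position; a minimal sketch follows (Python, with hypothetical black-hole coordinates).

# Sketch of the AP-side check: is the MT's reported position (x, y) inside
# the proactive area of any recorded black hole? Coordinates are hypothetical.

def in_proactive_area(x, y, htt, D):
    """htt: list of black-hole centers (Xi, Yi); D: proactive distance."""
    return any(abs(x - Xi) < D and abs(y - Yi) < D for Xi, Yi in htt)

htt = [(20.0, 35.0), (70.0, 10.0), (55.0, 80.0)]   # example BH centers (meters)
print(in_proactive_area(20.3, 34.8, htt, D=0.5))   # True: inside the first area
print(in_proactive_area(40.0, 40.0, htt, D=0.5))   # False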
Figure 1: An example of a WLAN with Black Holes.
The HTT will be initialized when the AP is installed in the WLAN, and then it will be dynamically updated. In the initial stage, MTs will handoff in a traditional way, i.e., handoff when the received signal strength decreases to a threshold.
In addition, the MT sends out a handoff notification to the
AP, and the AP will record the coordinates of this handoff
location into the HTT. Usually a BH is not a single point but a small area. Therefore, for each BH, the HTT will store many coordinates of points where vertical handoffs occur, and the AP will try to merge these coordinates to form a corresponding proactive area and put it into the HTT. After the initialization stage, the HTT will only contain the description of proactive areas instead of individual coordinates of BHs. Meanwhile, the AP will frequently check the HTT to decide whether any MT enters the proactive areas. When a new BH appears, the AP will be able to record its proactive area into the HTT after some vertical handoffs take place near this BH, similar to the initial stage. When a BH disappears for some reason (e.g., restructuring in the WLAN), no vertical handoffs will occur in the corresponding proactive area. As a result, the AP will remove the entry of this proactive area from the HTT after some predefined time, which can be decided by the system administrator. With the above methods, the HTT is maintained dynamically and can adapt to changes in the environment.
3.2 Proactive Handoff Scheme
Based on the Handoff Trigger Table, we propose a proactive
handoff scheme to achieve better handoff performance.
In the traditional scheme, the handoff decision mechanism
is that a handoff will be triggered when the current RSS is
lower than a threshold (THm) or RSS from the candidate
network is higher than a threshold (THw). The procedure
is illustrated in Fig. 2.
In the proposed scheme, we divide the handoff into two
stages, a proactive handoff stage and a handoff stage. In the
proactive handoff stage, when an MT moves into a proactive
area, the AP will send it a proactive handoff message.
Following that, the MT will send out a binding update message
to the AP and start this network layer handoff. The
procedure is given in Fig. 3.
When an MT enters the WLAN, the RSSs in sampling
intervals will be measured and their averages are computed.
At the same time, the AP will check if the MT is in any
proactive area. If yes, it will send a control message Pre-Handoff-CMD to the MT. After receiving this message, the
MT will start to send the cellular network a message to request
a connection. When the signal decreases to a certain threshold R_t, the link-layer handoff starts, and the MT will switch the connection to the cellular network. When the received signal strength decreases to R_t but the MT still has not received the Pre-Handoff-CMD, it will send a message to inform the AP to add this location coordinate to the HTT. The threshold R_t is a design parameter that can be set by the administrator to best enable the vertical handoff procedure under the WLAN-specific physical situation.
Figure 2: Traditional handoff scheme for hybrid networks (RSS measurements are averaged and the handoff is executed once RSS < R_t).
Figure 3: Proactive handoff scheme for hybrid networks (RSS measurements are averaged and checked against the Handoff Trigger Table; a pre-handoff is triggered when the MT enters a proactive area, and the handoff is executed once RSS < R_t).
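Putting the pieces of this subsection together, the two-stage behavior can be sketched as follows (a Python pseudo-implementation; the message names other than Pre-Handoff-CMD, the callback parameters, and the coordinates are hypothetical, while R_t is taken from Table 2).

# Sketch of the proactive handoff scheme. AP side: watch MT positions against
# the HTT and issue Pre-Handoff-CMD. MT side: pre-establish the 3G connection
# on Pre-Handoff-CMD, complete the handoff when RSS drops below R_t, and report
# the location to the AP if no Pre-Handoff-CMD was received in time.

R_T = -85.0   # dBm, link-layer handoff threshold from Table 2

def in_proactive_area(x, y, htt, D):
    return any(abs(x - Xi) < D and abs(y - Yi) < D for Xi, Yi in htt)

def ap_step(mt_position, htt, D, send_to_mt):
    if in_proactive_area(mt_position[0], mt_position[1], htt, D):
        send_to_mt("Pre-Handoff-CMD")

def mt_step(rss, position, got_pre_handoff_cmd, send_to_ap, send_to_3g):
    if got_pre_handoff_cmd:
        send_to_3g("connection-request")            # proactive (network-layer) stage
    if rss < R_T:
        send_to_3g("handoff-execute")               # link-layer handoff stage
        if not got_pre_handoff_cmd:
            send_to_ap(("add-to-HTT", position))    # report a new BH candidate

if __name__ == "__main__":
    htt = [(20.0, 35.0)]                            # hypothetical BH center
    ap_step((20.2, 35.1), htt, D=0.5, send_to_mt=print)
    mt_step(rss=-87.0, position=(20.2, 35.1), got_pre_handoff_cmd=True,
            send_to_ap=print, send_to_3g=print)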
PERFORMANCE EVALUATION
In this section, we compare the performance of the proposed
vertical handoff scheme with HTT and the traditional
handoff scheme by simulation. Consider the simplified network
model for 3G/WLAN networks shown in Fig. 1. We
assume that the 3G network covers the WLAN area. The
RSS of the WLAN signal at an MH is assumed to be a function of the distance d between the MH and the AP [11]: RSS(d) = P_T - L - 10 n log(d) + f(μ, σ) dBm, where P_T is the transmitted power, L is a constant signal power loss, n is the path loss exponent, and f(μ, σ) represents shadow fading modeled as a zero-mean Gaussian random variable with standard deviation σ. In the WLAN, there are a number of BHs, in which the RSS decreases to almost zero immediately and the MT has to handoff to the 3G network.
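A minimal sketch of this propagation model is shown below (Python), using P_T = 100 mW, n = 3.3, and σ = 7 dB from Table 2; the constant loss L is not specified in the paper, so the value used here is a placeholder.

import math
import random

# Sketch of RSS(d) = P_T - L - 10*n*log10(d) + f(0, sigma); P_T, n and sigma
# follow Table 2, while the constant loss L is a placeholder value.
P_T_DBM = 10 * math.log10(100)   # 100 mW -> 20 dBm
L_DB = 40.0                      # placeholder constant loss
N_PATH = 3.3
SIGMA = 7.0

def rss(d):
    """Received WLAN signal strength (dBm) at distance d meters from the AP."""
    return P_T_DBM - L_DB - 10 * N_PATH * math.log10(d) + random.gauss(0.0, SIGMA)

for d in (5, 20, 50):
    print("d = %3d m: RSS = %.1f dBm" % (d, rss(d)))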
The popular Random Waypoint mobility model [12] is
used to simulate the mobility of the MTs in the WLAN.
At the beginning, the MTs' positions in the WLAN are uniformly
distributed. Each MT will move at a random speed V, which is uniformly distributed in [Vmin, Vmax], and in a random direction. After a random time Tm, the MT will stop and stay for a random time Ts, with both Tm and Ts uniformly distributed in [0, 2s]. The MT will then continue the movement as described above. Any MT that moves out of the WLAN will be eliminated, and a new MT will be generated in the WLAN with a randomly chosen location. Some
other parameters used in the simulations are given in Table
2.
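The mobility pattern described above can be sketched in a few lines; the code below is a simplified illustration (Python, using the 100 m x 100 m area, the 0-2 m/s speed range, and the [0, 2 s] time range from Table 2), not the simulator used to produce the results.

import math
import random

# Simplified sketch of the mobility model: each epoch the MT moves at a speed
# uniform in [0, 2] m/s in a random direction for a time Tm, then pauses for
# Ts, with Tm and Ts uniform in [0, 2] s; MTs that leave the 100 m x 100 m
# WLAN are replaced at a random location (values from Table 2).
AREA, V_MIN, V_MAX, T_MAX = 100.0, 0.0, 2.0, 2.0

def epoch(pos):
    speed = random.uniform(V_MIN, V_MAX)
    angle = random.uniform(0.0, 2.0 * math.pi)
    tm, ts = random.uniform(0.0, T_MAX), random.uniform(0.0, T_MAX)
    x = pos[0] + speed * math.cos(angle) * tm
    y = pos[1] + speed * math.sin(angle) * tm
    if not (0.0 <= x <= AREA and 0.0 <= y <= AREA):
        x, y = random.uniform(0.0, AREA), random.uniform(0.0, AREA)  # replaced MT
    return (x, y), ts   # new position and the pause before the next epoch

pos = (random.uniform(0.0, AREA), random.uniform(0.0, AREA))
for _ in range(10):
    pos, pause = epoch(pos)
print("position after 10 epochs: (%.1f, %.1f)" % pos)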
Table 2: Parameters for the numerical examples

Parameter        Value
P_T              100 mW
R_t              -85 dBm
n                3.3
σ                7 dB
Handoff time     2 s
Velocity range   0-2 m/sec
Number of BHs    3
WLAN area        100 m × 100 m
We define the transition time as the time from when an
MT starts handoff to the moment it moves into a BH, and
handoff time as the time from when an MT sends out the
binding update message to the moment it receives the first packet from the Base Station. A call is dropped only when the transition time is less than the handoff time.
In Figs. 4, 5 and 6, we present the dropping probability
of the proposed scheme and the traditional handoff scheme
based on different proactive distance, D, and two values of
user mobility rate, V, respectively.
Figure 4: Effect of user mobility on the dropping probability (traditional method vs. HTT method with D = 0.4 and D = 0.5).
Fig. 4 illustrates the effect of user mobility on the connection dropping probability. In this figure, the dropping probabilities for proactive
distance D = 0.4 and D = 0.5, are given. In these results
, handoff time is 2s. From the figure, we can see that
the dropping probability has been reduced with the explicit handoff trigger in the proposed scheme, because mobile terminals
can have more time to execute the handoff. When
the proactive distance increases, the dropping probability
decreases. This is due to the fact that when the proactive
distance is larger, the time for the MT to execute the handoff
will be longer. We can also see that when the MTs move
faster, the dropping probability will increase as well, because less time is available for the MT to execute the vertical handoff given the same D. Compared with the HTT
scheme, the traditional scheme is more sensitive to the user
mobility rate, and the dropping probability increases rapidly
as the mobility increases.
Figure 5: Effect of proactive distance on the dropping probability (traditional method vs. HTT method, for V = 0.5 and V = 1).
Fig. 5 shows that as the proactive distance increases, the dropping probability of the HTT scheme decreases very quickly; it is quite sensitive to the increase in distance. When the user mobility rate increases, the dropping probability will also increase, which is consistent with the results in Fig. 4. Fig. 6 shows the impact of the handoff time (in a practical value
range) on the dropping probability. With a given transition
time, if the handoff time increases, the dropping probability
will be higher, as expected.
Fig. 7 shows the impact of the distance between the AP and the BH on the dropping probability. The signal strength decays with the distance to the AP in WLANs. In the proposed scheme, MTs start the network layer handoff in advance according to the parameter D and regardless of the RSS, so the performance of the proposed scheme is independent of the distance d between the BH and the AP. However, for a given RSS threshold, the performance of the traditional handoff scheme relies heavily on d, and it has better performance for BHs that are far from the AP. This is because the signal strength is relatively high near the AP, and the MT will
normally find that its RSS is above the threshold. As the
MT enters the BH, the signal strength degrades so abruptly
that the MT does not have enough time to do handoff, which
leads to high dropping probability. In contrast, the RSS of
an MT moving around a BH that is far from the AP will
be relatively low and may be close to the threshold, hence
a handoff decision may be easily triggered before the MT
enters the BH. In this case, the MT gets longer time to conduct
the handoff procedure so a relatively lower dropping
probability can be achieved.
Figure 6: Effect of the vertical handoff time on the dropping probability (traditional method vs. HTT method).
Figure 7: Effect of the distance between the AP and the BH on the dropping probability (traditional method vs. HTT method with D = 0.4 and D = 0.5).
From Fig. 7, we can also see that for the proposed scheme, the proactive distance D should be set properly. A similar requirement applies to the setting of the threshold in the traditional scheme. If D or the threshold is set too high, there will be many unnecessary handoffs, although the dropping probability will decrease. On the other hand, if D or the threshold is too low, the dropping probability will increase. However, the dependence of the traditional handoff scheme on d cannot be eliminated by simply adjusting the threshold, which makes the selection of a proper threshold in the traditional scheme even more difficult.
We further study the performance of the two schemes in the WLAN with different numbers of BHs. The locations of the BHs are set to be uniformly distributed within the WLAN. From Fig. 8, it can be seen that for both schemes the dropping probability is not sensitive to the number of BHs when their locations are uniformly distributed. The case of BHs with a non-uniform distribution is left as future work.
Figure 8: Effect of the number of BHs on the dropping probability (traditional method vs. HTT method with D = 0.4).
CONCLUSIONS
In this paper, vertical handoffs from WLAN to the 3G cellular network have been investigated. To support making proper handoff decisions, a Handoff Trigger Table (HTT) has been proposed to be implemented at the AP of the WLAN to record the location information of BHs. Based on this table, a proactive handoff scheme has been proposed. Simulation results have been given to show that with the information in the HTT, the vertical handoff decisions can be made more efficiently and the dropping probability can be decreased significantly. Possible future work includes making the proactive distance D a variable for different environments and further investigating the unnecessary handoff events in different pre-handoff schemes. The HTT scheme can also be used in system discovery to help reduce MH power consumption.
ACKNOWLEDGEMENTS
This work has been supported jointly by the Natural Sciences
and Engineering Research Council (NSERC) of Canada
under Strategic Grant No. 257682 and Research in Motion
(RIM).
REFERENCES
[1] Q. Zhang, C. Guo, Z. Guo and W. Zhu, "Efficient mobility management for vertical handoff between WWAN and WLAN", IEEE Communications Magazine, Vol. 41, Issue 11, Nov. 2003, pp. 102-108.
[2] W.-T. Chen, J.-C. Liu and H.-K. Huang, "An adaptive scheme for vertical handoff in wireless overlay networks", Proceedings of the International Conference on Parallel and Distributed Systems 2004, 7-9 July 2004, pp. 541-548.
[3] P. Khadivi, T.D. Todd and D. Zhao, "Handoff trigger nodes for hybrid IEEE 802.11 WLAN/cellular networks", Proc. QSHINE 2004, pp. 164-170.
[4] A. Majlesi and B.H. Khalaj, "An adaptive fuzzy logic based handoff algorithm for interworking between WLANs and mobile networks", Proc. PIMRC'02, Sept. 2002, pp. 2446-2451.
[5] K. Pahlavan, et al., "Handoff in hybrid mobile data networks", IEEE Personal Communications, Vol. 7, No. 2, April 2000, pp. 34-47.
[6] J.-Z. Sun, J. Sauvola and D. Howie, "Features in future: 4G visions from a technical perspective", Proc. GLOBECOM'01, Nov. 2001, pp. 3533-3537.
[7] P. Bahl and V. Padmanabhan, "RADAR: An in-building RF-based user location and tracking system", Proc. IEEE INFOCOM'00, Tel-Aviv, Israel, 2000, pp. 775-784.
[8] P. Krishnan, A.S. Krishnakumar, W.-H. Ju, C. Mallows, and S. Ganu, "A system for LEASE: location estimation assisted by stationary emitters for indoor RF wireless networks", Proc. IEEE INFOCOM'04, Hong Kong, 2004, pp. 1001-1011.
[9] K.-I. Itoh, S. Watanabe, J.-S. Shih, and T. Sato, "Performance of handoff algorithm based on distance and RSSI measurements", IEEE Trans. Vehic. Technol., Vol. 51, No. 6, Nov. 2002, pp. 1460-1468.
[10] J. Makela, M. Ylianttila and K. Pahlavan, "Handoff decision in multi-service networks", Proc. PIMRC'00, Sept. 2000, pp. 655-659.
[11] A.H. Zahran and B. Liang, "Performance evaluation framework for vertical handoff algorithms in heterogeneous networks", Proc. ICC 2005, May 2005, pp. 173-178.
[12] C. Bettstetter, H. Hartenstein and X. Perez-Costa, "Stochastic properties of the random waypoint mobility model", Wireless Networks, Vol. 10, No. 5, Sept. 2004, pp. 555-567.
| WLAN;integrated networks;vertical handoff;cellular network;3G;Wireless communications;Handoff trigger table;wireles networks |