# Zine Making/Selling or giving copies to people
Most people who make zines aren't doing it as a profit-making
enterprise, and if you're hoping to make a living from your zine you
will probably be disappointed. Most people who sell their zines price
them to roughly cover the cost of making and distributing them, so that
the zine-making process doesn't bankrupt them. A lot of smaller zines
(e.g. ones made from a single sheet of A4) are given away; it's much
less hassle not having to collect money. And a lot of zines are swapped
for other zines rather than bought. It's up to you what you do, but
don't expect much of a profit margin on your zine; that's usually not
how it works.
## Record shops / book shops
![](Elliott_Bay_Books_-_author_reading_01A.jpg "Elliott_Bay_Books_-_author_reading_01A.jpg")
Lots of independent record shops / book shops are happy to carry zines.
Some of them have stands for them and have a sale-or-return procedure in
place to let you sell them. But even if they don\'t, you can often ask
your friendly independent record/book-shop person if they wouldn\'t mind
having them on the counter or wherever.
Sale-or-return is the typical agreement that shops do: they\'ll collect
the money on your behalf, and you can come back in a month (or whatever)
and collect your money (hopefully) or your unsold zines. Some shops have
a time-limit, for example if you don\'t collect them within 6 months
they assume you\'ve forgotten about them and get rid of them. Some shops
take a cut of the profit, and some don\'t.
## Distros
People who like zines and who really enjoy photocopying often set
themselves up to distribute other people\'s zines - these are often
called \"distro\"s. There are some large and well-established distros
and millions of tiny one-person distros, but typically it\'s just a case
of if they like it, they\'ll be happy to carry it for you. If you find
distros which have stuff you like, or your stuff fits in with them, just
ask\...
Remember that it\'s typically all done for the love of it so it\'s not
quite the same as approaching some publishing company. You don\'t need
to be particularly formal, just send them a zine and/or have a
conversation with them.
## Zine fairs
People who like zines and who really enjoy herding
cats often set up a
\"zine fair\" or \"zine symposium\" or some such, in their area or for
their particular scene. Have a look around to see if there\'s anything
suitable.
## Carrying zines around
If you\'ve got a nice load of zines freshly made and you want to take
them to gigs, fairs, etc, you probably want to keep them looking nice,
so it\'s a good idea to find a sturdy box to carry them around in. If
you\'ve only got a few then anything will do, but if you get to the
point of having a lot to carry around then finding an old suitcase,
camera-case, or some such case with a handle or strap can be really
helpful.
# Acoustics/Fundamentals of Acoustics
![](Acoustics_Fundamentals_of_Acoustics.jpg "Acoustics_Fundamentals_of_Acoustics.jpg")
## Introduction
Sound is an oscillation of pressure transmitted through a gas, liquid,
or solid in the form of a traveling wave, and can
be generated by any localized pressure variation in a medium. An easy
way to understand how sound propagates is to consider that space can be
divided into thin layers. The vibration (the successive compression and
relaxation) of these layers, at a certain velocity, enables the sound to
propagate, hence producing a wave. The speed of sound depends on the
compressibility and density of the medium.
In this chapter, we will only consider the propagation of sound waves in
an area without any acoustic source, in a
homogeneous fluid.
## Equation of waves
Sound waves consist of the propagation of a scalar quantity, the
acoustic over-pressure. The propagation of sound waves in a stationary medium
(e.g. still air or water) is governed by the following equation (see
wave equation):
```{=html}
<div class="center">
```
$\nabla ^2 p - \frac{1}{{c_0 ^2 }}\frac{{\partial ^2 p}}{{\partial t^2 }} = 0$
```{=html}
</div>
```
This equation is obtained using the conservation equations (mass,
momentum and energy) and the thermodynamic equations of state of an
ideal gas (or of an ideally compressible solid or liquid), supposing
that the pressure variations are small, and neglecting viscosity and
thermal conduction, which would give other terms, accounting for sound
attenuation.
In the propagation equation of sound waves, $c_0$ is the propagation
velocity of the sound wave (which has nothing to do with the vibration
velocity of the air layers). This propagation velocity has the following
expression:
```{=html}
<div class="center">
```
$c_0 = \frac{1}{{\sqrt {\rho _0 \chi _s } }}$
```{=html}
</div>
```
where $\rho _0$ is the density and $\chi _s$ is the (adiabatic)
compressibility coefficient of the propagation medium.
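As a quick numerical check of this expression, here is a minimal sketch; the density and compressibility values below are assumed, illustrative textbook figures, not taken from this page:

```python
# Sketch: propagation speed c0 = 1 / sqrt(rho0 * chi_s) for two media.
from math import sqrt

def sound_speed(rho0, chi_s):
    """Speed of sound from density (kg/m^3) and adiabatic compressibility (1/Pa)."""
    return 1.0 / sqrt(rho0 * chi_s)

print(round(sound_speed(1.2, 7.0e-6), 1))      # air at ~20 degC: ~345 m/s
print(round(sound_speed(1000.0, 4.5e-10), 1))  # water: ~1490 m/s
```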
## Helmholtz equation
Since the velocity field $\underline v$ for acoustic waves is
irrotational we can define an acoustic potential $\Phi$ by:
```{=html}
<div class="center">
```
$\underline v = \text{grad }\Phi$
```{=html}
</div>
```
Using the propagation equation of the previous paragraph, it is easy to
obtain the new equation:
```{=html}
<div class="center">
```
$\nabla ^2 \Phi - \frac{1}{{c_0 ^2 }}\frac{{\partial ^2 \Phi }}{{\partial t^2 }} = 0$
```{=html}
</div>
```
Applying the Fourier Transform, we get the widely used Helmholtz
equation:
```{=html}
<div class="center">
```
$\nabla ^2 \hat \Phi + k^2 \hat \Phi = 0$
```{=html}
</div>
```
where $k$ is the wave number associated with $\Phi$. Using this equation
is often the easiest way to solve acoustical problems.
## Acoustic intensity and decibel
The acoustic intensity represents the acoustic energy flux associated
with the wave propagation:
```{=html}
<div class="center">
```
$\underline i (t) = p\underline v$
```{=html}
</div>
```
We can then define the average intensity:
```{=html}
<div class="center">
```
$\underline I = \langle \underline i \rangle$
```{=html}
</div>
```
However, acoustic intensity does not give a good idea of the sound
level, since the sensitivity of our ears is logarithmic. Therefore we
define decibels, either using acoustic over-pressure or acoustic average
intensity:
```{=html}
<div class="center">
```
$p^{\rm dB} = 20\log \left(\frac{p}{{p_\mathrm{ref} }}\right)$ ;
$L_I = 10\log \left(\frac{I}{{I_\mathrm{ref} }}\right)$
```{=html}
</div>
```
where $p_\mathrm{ref} = 2 \times 10^{ - 5}~{\rm Pa}$ for air, or
$p_\mathrm{ref} = 10^{ - 6}~{\rm Pa}$ for any other medium, and
$I_\mathrm{ref} = 10^{ - 12}~{\rm W/m}^2$.
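A short sketch of these two definitions; the example pressure and intensity values are assumed:

```python
# Sketch: sound pressure level and intensity level from the reference values above.
from math import log10

P_REF_AIR = 2e-5      # Pa
I_REF = 1e-12         # W/m^2

def spl(p_rms):
    """Sound pressure level in dB for an rms pressure in Pa."""
    return 20 * log10(p_rms / P_REF_AIR)

def intensity_level(i):
    """Intensity level in dB for an average intensity in W/m^2."""
    return 10 * log10(i / I_REF)

print(round(spl(1.0), 1))              # 1 Pa rms     -> ~94 dB
print(round(intensity_level(1e-4), 1)) # 1e-4 W/m^2   -> 80 dB
```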
## Solving the wave equation
### Plane waves
If we study the propagation of a sound wave, far from the acoustic
source, it can be considered as a plane 1D wave. If the direction of
propagation is along the x axis, the solution is:
```{=html}
<div class="center">
```
$\Phi (x,t) = f\left(t - \frac{x}{{c_0 }}\right) + g\left(t + \frac{x}{{c_0 }}\right)$
```{=html}
</div>
```
where f and g can be any function. f describes the wave motion toward
increasing x, whereas g describes the motion toward decreasing x.
The momentum equation provides a relation between $p$ and $\underline v$
which leads to the expression of the specific impedance, defined as
follows:
```{=html}
<div class="center">
```
$\frac{p}{v} = Z = \pm \rho _0 c_0$
```{=html}
</div>
```
And still in the case of a plane wave, we get the following expression
for the acoustic intensity:
```{=html}
<div class="center">
```
$\underline i = \pm \frac{{p^2 }}{{\rho _0 c_0 }}\underline {e_x }$
```{=html}
</div>
```
### Spherical waves
More generally, the waves propagate in any direction and are spherical
waves. In these cases, the solution for the acoustic potential $\Phi$
is:
```{=html}
<div class="center">
```
$\Phi (r,t) = \frac{1}{r}f\left(t - \frac{r}{{c_0 }}\right) + \frac{1}{r}g\left(t + \frac{r}{{c_0 }}\right)$
```{=html}
</div>
```
The fact that the potential decreases as the inverse of the distance to
the source is just a consequence of the conservation of energy. For
spherical waves, we can also easily calculate the specific impedance as
well as the acoustic intensity.
## Boundary conditions
Concerning the boundary conditions which are used for solving the wave
equation, we can distinguish two situations. If the medium is not
absorptive, the boundary conditions are established using the usual
equations for mechanics. But in the situation of an absorptive material,
it is simpler to use the concept of acoustic impedance.
### Non-absorptive material
In that case, we obtain explicit boundary conditions on both the
stresses and the velocities at the interface. These conditions depend on
whether the media are solids, inviscid fluids, or viscous fluids.
### Absorptive material
Here, we use the acoustic impedance as the boundary condition. This
impedance, which is often obtained from experimental measurements,
depends on the material, the fluid and the frequency of the sound wave.
# Acoustics/Fundamentals of Room Acoustics
![](Acoustics_fundamentals_of_room_acoustics.JPG "Acoustics_fundamentals_of_room_acoustics.JPG")
## Introduction
Three theories are used to understand room acoustics:
1. The modal theory
2. The geometric theory
3. The theory of Sabine
## The modal theory
This theory starts from the homogeneous Helmholtz equation
$\nabla ^2 \hat \Phi + k^2 \hat \Phi = 0$. Considering the simple
geometry of a rectangular parallelepiped with dimensions (L1, L2, L3),
the solution of this problem has separated variables:
```{=html}
<center>
```
$P(x,y,z)=X(x)Y(y)Z(z)$
```{=html}
</center>
```
Hence each of the functions X, Y and Z has the form:
```{=html}
<center>
```
$X(x) = Ae^{ - ikx} + Be^{ikx}$
```{=html}
</center>
```
With the boundary condition $\frac{\partial P}{\partial x} = 0$ for
$x=0$ and $x=L1$ (and likewise in the other directions), the expression
of the pressure is:
```{=html}
<center>
```
$P\left( {x,y,z} \right) = C\cos \left( \frac{m\pi x}{L1} \right)\cos \left( \frac{n\pi y}{L2} \right)\cos \left( \frac{p\pi z}{L3} \right)$
```{=html}
</center>
```
```{=html}
<center>
```
$k^2 = \left( {\frac{{m\pi }}{{L1}}} \right)^2 + \left( {\frac{{n\pi }}{{L2}}} \right)^2 + \left( {\frac{{p\pi }}{{L3}}} \right)^2$
```{=html}
</center>
```
where $m$, $n$, $p$ are non-negative integers.
It is a three-dimensional standing wave. Acoustic modes appear with
their modal frequencies and their modal shapes. For a non-homogeneous
problem, i.e. one with an acoustic source $Q$ at $r_0$, the final
pressure at $r$ is the sum of the contributions of all the modes
described above.
The modal density $\frac{{dN}}{{df}}$ is the number of modal frequencies
contained in a range of 1 Hz. It depends on the frequency $f$, the
volume of the room $V$ and the speed of sound $c_0$:
```{=html}
<center>
```
$\frac{{dN}}{{df}} \simeq \frac{{4\pi V}}{{c_0^3 }}f^2$
```{=html}
</center>
```
The modal density grows as the square of the frequency, so it increases
rapidly with frequency. Above a certain frequency, the modes can no
longer be distinguished from one another and the modal theory is no
longer relevant.
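As an illustration of the two formulas above, here is a small sketch computing the lowest modal frequencies of a rectangular room and the modal-density estimate; the room dimensions are an assumed example, not taken from the text:

```python
# Sketch: modal frequencies f = (c0/2) * sqrt((m/L1)^2 + (n/L2)^2 + (p/L3)^2)
# for a rectangular room, plus the modal-density estimate dN/df ~ 4*pi*V*f^2/c0^3.
from math import pi, sqrt

c0 = 340.0                  # m/s
L1, L2, L3 = 5.0, 4.0, 3.0  # m (assumed example room)

def modal_frequency(m, n, p):
    return (c0 / 2.0) * sqrt((m / L1) ** 2 + (n / L2) ** 2 + (p / L3) ** 2)

modes = sorted(
    (modal_frequency(m, n, p), (m, n, p))
    for m in range(4) for n in range(4) for p in range(4)
    if (m, n, p) != (0, 0, 0)
)
for f, mnp in modes[:5]:
    print(mnp, round(f, 1), "Hz")   # lowest modes: (1,0,0) at 34 Hz, (0,1,0) at 42.5 Hz, ...

def modal_density(f):
    V = L1 * L2 * L3
    return 4 * pi * V * f ** 2 / c0 ** 3

print(round(modal_density(500.0), 1), "modes per Hz at 500 Hz")  # ~4.8
```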
## The geometric theory
For rooms of large volume or with a complex geometry, the theory of
geometrical acoustics can be applied. The waves are modelled as rays
carrying acoustical energy. This energy decreases at each reflection of
the rays on the walls of the room, because of the absorption of the
walls.
The problem is that this theory requires a great deal of computation,
which is why the simpler theory of Sabine is often chosen instead.
## The theory of Sabine
### Description of the theory
This theory uses the hypothesis of a diffuse field: the acoustical
field is homogeneous and isotropic. In order to obtain such a field, the
room has to be sufficiently reverberant and the frequencies have to be
high enough to avoid the effects of predominant modes.
The variation of the acoustical energy E in the room can be written as:
```{=html}
<center>
```
$\frac{{dE}}{{dt}} = W_s - W_{abs}$
```{=html}
</center>
```
where $W_s$ and $W_{abs}$ are respectively the power generated by the
acoustical source and the power absorbed by the walls.
The absorbed power is related to the acoustic energy density e in the room:
```{=html}
<center>
```
$W_{abs} = \frac{{ec_0 }}{4}a$
```{=html}
</center>
```
where a is the equivalent absorption area, defined as the sum of the
products of the absorption coefficient and the area of each material in
the room:
```{=html}
<center>
```
$a = \sum\limits_i {\alpha _i S_i }$
```{=html}
</center>
```
The final equation is: $V\frac{{de}}{{dt}} = W_s - \frac{{ec_0 }}{4}a$
In the steady state ($W_{abs} = W_s$), the energy density settles at:
$e_{sat} = 4\frac{{W_s }}{{ac_0 }}$
### Reverberation time
With this theory in place, the reverberation time can be defined. It is
the time needed for the energy level to decrease by 60 dB. It depends on
the volume of the room V and the equivalent absorption area a:
```{=html}
<center>
```
$T_{60} = \frac{{0.16V}}{a}$ (Sabine formula)
```{=html}
</center>
```
This reverberation time is the fundamental parameter in room acoustics;
through the equivalent absorption area and the absorption coefficients,
it depends on frequency. It is used for several measurements (a
numerical sketch follows the list below):
- Measurement of an absorption coefficient of a material
- Measurement of the power of a source
- Measurement of the transmission of a wall
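A minimal sketch of the Sabine formula above; the room surfaces and absorption coefficients are assumed example values:

```python
# Sketch: Sabine reverberation time T60 = 0.16 * V / a, with the equivalent
# absorption area a = sum(alpha_i * S_i).
def equivalent_absorption_area(surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    return sum(S * alpha for S, alpha in surfaces)

def t60_sabine(volume_m3, surfaces):
    return 0.16 * volume_m3 / equivalent_absorption_area(surfaces)

# Assumed example: 5 m x 4 m x 3 m room, plastered walls/ceiling, carpeted floor.
room = [(2 * (5 * 3 + 4 * 3), 0.02),  # walls
        (5 * 4, 0.02),                # ceiling
        (5 * 4, 0.30)]                # carpeted floor
print(round(t60_sabine(5 * 4 * 3, room), 2), "s")   # ~1.3 s
```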
# Acoustics/Fundamentals of Psychoacoustics
![](Acoustics_Psychoacoustics.jpg "Acoustics_Psychoacoustics.jpg")
According to the famous principle enunciated by Gustav Theodor Fechner,
perceived sensation does not follow a linear law, but a logarithmic
one. The perception of light intensity, or the sensation of weight,
follows this law as well. This observation legitimates the use of
logarithmic scales in the field of acoustics. An 80 dB ($10^{-4}$ W/m²)
sound seems to be twice as loud as a 70 dB ($10^{-5}$ W/m²) sound,
although there is a factor of 10 between the two acoustic intensities.
This is quite a naïve law, but it led to a new way of thinking about
acoustics: trying to describe auditory sensations. That is the aim of
psychoacoustics. As the neurophysiological mechanisms of human hearing
have not been successfully modeled, the only way of dealing with
psychoacoustics is to find metrics that best describe the different
aspects of sound.
## Perception of sound
The study of sound perception is limited by the complexity of the
mechanisms of the human ear. The figure below represents the domain of
perception together with the pain and hearing thresholds. The pain
threshold is roughly frequency-independent (around 120 dB across the
audible bandwidth). The hearing threshold, on the other hand, like all
the equal-loudness curves, is frequency-dependent. In the center are
typical frequency and loudness ranges for human voice and music.
![](Audible.JPG "Audible.JPG")
## Phons and sones
### Phons
Two sounds of equal intensity do not have the same apparent loudness,
because of the frequency sensitivity of the human ear. An 80 dB tone at
100 Hz does not sound as loud as an 80 dB tone at 3 kHz. A new unit, the
phon, is used to describe the loudness of a harmonic sound. X phons
means "as loud as X dB at 1000 Hz". Another tool is also used: the
equal-loudness curves, a.k.a. Fletcher curves.
![](Curve_isofoniche.svg "Curve_isofoniche.svg")
### Sones
Another scale commonly used is the sone, based upon the rule of thumb
for loudness. This rule states that the sound intensity must be
increased by a factor of 10 to be perceived as twice as loud. On the
decibel (or phon) scale, this corresponds to a 10 dB (or 10 phon)
increase. The sone scale's purpose is to translate those scales into a
linear one.
$\log (S) = 0.03(L_{ph} - 40)$
where S is the sone level, and $L_{ph}$ the phon level. The conversion
table is as follows:
Phons Sones
------- -------
100 64
90 32
80 16
70 8
60 4
50 2
40 1
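A small sketch of this conversion; it approximately reproduces the table above, since $10^{0.3}\approx 2$:

```python
# Sketch of the phon/sone relation given above: log10(S) = 0.03 * (L_ph - 40),
# i.e. roughly a doubling of loudness for every +10 phons.
def phon_to_sone(l_ph):
    return 10 ** (0.03 * (l_ph - 40))

for l_ph in (40, 50, 60, 70, 80, 90, 100):
    print(l_ph, "phons ->", round(phon_to_sone(l_ph), 1), "sones")
```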
## Metrics
We will now present five psychoacoustic parameters that provide a way to
predict the subjective human sensation.
### dB A
The measurement of noise perception with the sone or phon scale is not
easy. A widely used method is to apply a weighting to the sound pressure
level according to its frequency content: for each frequency of the
spectrum, a level correction is made. Different kinds of weightings
(dB A, dB B, dB C) exist in order to approximate the human ear at
different sound intensities, but the most commonly used is the dB A
filter. Its curve is made to match the equal-loudness curve of the ear
for 40 phons, and as a consequence it is a good approximation of the
phon scale.
```{=html}
<center>
```
![](dba.JPG "dba.JPG")
```{=html}
</center>
```
*Example : for a harmonic 40 dB sound, at 200 Hz, the correction is -10
dB, so this sound is 30 dB A.*
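The exact weighting curve is not given on this page; a common choice is the standard IEC 61672 A-weighting, sketched below (the closed-form constants follow the usual published curve and are an assumption of this sketch, not part of the text):

```python
# Sketch: A-weighting correction in dB at frequency f (Hz), IEC 61672 form.
from math import log10, sqrt

def a_weighting(f):
    ra = (12194.0 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194.0 ** 2)
    )
    return 20 * log10(ra) + 2.0   # offset so that A(1000 Hz) ~ 0 dB

print(round(a_weighting(200.0), 1))   # about -10.9 dB: a 40 dB tone at 200 Hz is ~29 dB(A)
print(round(a_weighting(1000.0), 1))  # ~0 dB by construction
```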
### Loudness
It measures the sound strength. Loudness can be measured in sone, and is
a dominant metric in psychoacoustics.
### Tonality
As the human ear is very sensitive to pure harmonic tones, this metric
is a very important one. It measures the prominence of pure tones in the
noise spectrum. A broadband sound, for example, has a very low tonality.
### Roughness
It describes the human perception of temporal variations of sounds. This
metric is measured in asper.
### Sharpness
Sharpness is linked to the spectral characteristics of the sound. A
high-frequency signal has a high value of sharpness. This metric is
measured in acum.
### Masking effect
A sinusoidal sound can be masked by white noise in a narrow bandwidth
around it. White noise is a random signal with a flat power spectral
density. In other words, the signal's power spectral density has equal
power in any band of a given bandwidth, at any centre frequency. If the
intensity of the white noise is high enough, the sinusoidal sound will
not be heard. For example, in a noisy environment (in the street, in a
workshop), a great effort has to be made in order to make out what
someone is saying.
# Acoustics/Sound Speed
![](Acoustics_Sound_Speed.jpg "Acoustics_Sound_Speed.jpg")
The **speed of sound** *c* (from Latin *celeritas*, \"velocity\") varies
depending on the medium through which the sound waves pass. It is
usually quoted in describing properties of substances (e.g. Sodium\'s
Speed of Sound is listed under *Other
Properties*). In conventional use and in the scientific literature, the
terms sound velocity and sound speed *c* are interchangeable. The
velocity of sound *c* should not be confused with the sound particle
velocity *v*, which is the velocity of the individual particles of the
medium.
More commonly the term refers to the speed of sound in air. The speed
varies depending on atmospheric conditions; the most important factor is
the temperature. Humidity has very little effect on the speed of sound,
while the static sound pressure (air pressure) has none. Sound travels
slower at higher altitudes (elevation if you are on solid earth),
primarily as a result of the decrease in temperature. An
approximate speed (in metres per second) can be calculated from:
$$c_{\mathrm{air}} = (331{.}5 + (0{.}6 \cdot \theta)) \ \mathrm{m/s}\,$$
where $\theta\,$ (theta) is the temperature in degrees Celsius.
## Details
A more accurate expression for the speed of sound is
$$c = \sqrt {\kappa \cdot R\cdot T}$$ where
- *R* is the gas constant (287.05 J/(kg·K) for air). It is derived by
dividing the universal gas constant $R$ (J/(mol·K)) by the molar
mass of air (kg/mol), as is common practice in aerodynamics.
- *κ* (kappa) is the adiabatic index (1.402 for air), sometimes noted
as *γ* (gamma).
- *T* is the absolute temperature in kelvins.
In the standard atmosphere:
*T*~0~ is 273.15 K (= 0 °C = 32 °F), giving a value of 331.5 m/s (=
1087.6 ft/s = 1193 km/h = 741.5 mph = 643.9 knots).\
*T*~20~ is 293.15 K (= 20 °C = 68 °F), giving a value of 343.4 m/s (=
1126.6 ft/s = 1236 km/h = 768.2 mph = 667.1 knots).\
*T*~25~ is 298.15 K (= 25 °C = 77 °F), giving a value of 346.3 m/s (=
1136.2 ft/s = 1246 km/h = 774.7 mph = 672.7 knots).
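A brief sketch comparing the linear approximation given earlier with this ideal-gas expression, using the constants quoted in this section:

```python
# Sketch: speed of sound in air from the linear approximation and from c = sqrt(kappa*R*T).
from math import sqrt

R_AIR = 287.05   # J/(kg*K)
KAPPA = 1.402

def c_linear(theta_c):
    return 331.5 + 0.6 * theta_c

def c_ideal_gas(theta_c):
    return sqrt(KAPPA * R_AIR * (theta_c + 273.15))

for theta in (0, 20, 25):
    print(theta, "degC:", round(c_linear(theta), 1), "m/s vs",
          round(c_ideal_gas(theta), 1), "m/s")   # ~331.5, ~343.5, ~346.4 m/s
```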
In fact, assuming an ideal gas, the speed of sound *c* depends on
temperature only, **not on the pressure**. Air is almost an ideal gas.
The temperature of the air varies with altitude, giving the following
variations in the speed of sound using the standard atmosphere - *actual
conditions may vary*. Any qualification of the speed of sound being \"at
sea level\" is also irrelevant. Speed of sound varies with altitude
(height) only because of the changing temperature!

| **Altitude** | **Temperature** | **m/s** | **km/h** | **mph** | **knots** |
|---|---|---|---|---|---|
| Sea level | 15 °C (59 °F) | 340 | 1225 | 761 | 661 |
| 11,000 m--20,000 m (cruising altitude of commercial jets, and first supersonic flight) | −57 °C (−70 °F) | 295 | 1062 | 660 | 573 |
| 29,000 m (flight of X-43A) | −48 °C (−53 °F) | 301 | 1083 | 673 | 585 |

In a **Non-Dispersive Medium** -- Sound speed is independent of
frequency. Therefore the speed of energy transport and sound propagation
are the same. In the audio frequency range, air is a non-dispersive
medium. We should also note that air contains CO~2~, which is a
dispersive medium and introduces dispersion into air at ultrasonic
frequencies (\~28 kHz).\
In a **Dispersive Medium** -- Sound speed is a function of frequency.
The spatial and temporal distribution of a propagating disturbance will
continually change. Each frequency component propagates at its own phase
speed, while the energy of the disturbance propagates at the group
velocity. Water is an example of a dispersive medium.
In general, the speed of sound *c* is given by
$$c = \sqrt{\frac{C}{\rho}}$$ where
: *C* is a coefficient of stiffness
: $\rho$ is the density
Thus the speed of sound increases with the stiffness of the material,
and decreases with the density.
In a fluid the only non-zero stiffness is to volumetric deformation (a
fluid does not sustain shear forces).
Hence the speed of sound in a fluid is given by
$$c = \sqrt {\frac{K}{\rho}}$$ where
: *K* is the adiabatic bulk modulus
For a gas, *K* is approximately given by
$$K=\kappa \cdot p$$
where
: κ is the adiabatic index, sometimes called γ.
: *p* is the pressure.
Thus, for a gas the speed of sound can be calculated using:
$$c = \sqrt{\frac{\kappa \cdot p}{\rho}}$$ which, using the ideal gas
law, is identical to:
$c = \sqrt {\kappa \cdot R\cdot T}$
(Newton famously considered the speed of sound before most of the
development of thermodynamics and so incorrectly used isothermal
calculations instead of adiabatic. His result was too low by a factor of
$\sqrt{\kappa}$ (the κ under the square root was missing) but was
otherwise correct.)
In a solid, there is a non-zero stiffness both for volumetric and shear
deformations. Hence, in a solid it is possible to generate sound waves
with different velocities dependent on the deformation mode.
In a solid rod (with thickness much smaller than the wavelength) the
speed of sound is given by:
$$c = \sqrt{\frac{E}{\rho}}$$
where
: *E* is Young\'s modulus
: $\rho$ (rho) is density
Thus in steel the speed of sound is approximately 5100 m/s.
In a solid with lateral dimensions much larger than the wavelength, the
sound velocity is higher. It is found by replacing Young\'s modulus with
the plane wave modulus, which can be expressed in terms of the Young\'s
modulus and Poisson\'s ratio as:
$$M = E \frac{1-\nu}{1-\nu-2\nu^2}$$
For air, see density of air.
The speed of sound in water is of interest to those mapping the ocean
floor. In saltwater, sound travels at about 1500 m/s and in freshwater
1435 m/s. These speeds vary due to pressure, depth, temperature,
salinity and other factors.
For general equations of state, if classical mechanics is used, the
speed of sound $c$ is given by
$$c^2=\frac{\partial p}{\partial\rho}$$ where the differentiation is
taken along an adiabatic (constant-entropy) change.
If relativistic effects are important, the speed of sound $S$ is given
by:
$$S^2=c^2 \left. \frac{\partial p}{\partial e} \right|_{\rm adiabatic}$$
(Note that $e= \rho (c^2+e^C) \,$ is the relativistic internal energy
density).
This formula differs from the classical case in that $\rho$ has been
replaced by $e/c^2 \,$.
## Speed of sound in air
**Impact of temperature** (speeds computed from the approximation
$c_{\mathrm{air}} = 331.5 + 0.6\,\theta$ given above)

| θ in °C | c in m/s |
|---|---|
| −10 | 325.5 |
| −5 | 328.5 |
| 0 | 331.5 |
| +5 | 334.5 |
| +10 | 337.5 |
| +15 | 340.5 |
| +20 | 343.5 |
| +25 | 346.5 |
| +30 | 349.5 |

**Mach number** is the ratio of the object\'s speed to the speed of
sound in air (medium).
## Sound in solids
In solids, the velocity of sound depends on the *stiffness* and
*density* of the material rather than on its temperature. Solid
materials, such as steel, conduct sound much faster than air.
## Experimental methods
In air, a range of different methods exist for measuring the speed of sound.
### Single-shot timing methods
The simplest concept is the measurement made using two microphones and a
fast recording device such as a digital storage scope. This method uses
the following idea.
If a sound source and two microphones are arranged in a straight line,
with the sound source at one end, then the following can be measured:
1. The distance between the microphones (x)
2. The time delay between the signal reaching the different microphones
(t)
Then v = x/t
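A trivial sketch of this calculation; the separation and delay values are assumed example readings:

```python
# Sketch of the two-microphone measurement: v = x / t.
def speed_from_timing(mic_separation_m, delay_s):
    return mic_separation_m / delay_s

# e.g. microphones 1.0 m apart, measured delay of 2.9 ms on the storage scope
print(round(speed_from_timing(1.0, 2.9e-3), 1), "m/s")   # ~344.8 m/s
```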
An older method is to create a sound at one end of a field with an
object that can be seen to move when it creates the sound. When the
observer sees the sound-creating device act they start a stopwatch and
when the observer hears the sound they stop their stopwatch. Again using
v = x/t you can calculate the speed of sound. A separation of at least
200 m between the two experimental parties is required for good results
with this method.
### Other methods
In these methods the time measurement has been replaced by a measurement
of the inverse of time (frequency).
Kundt\'s tube is an example of an experiment which can be used to
measure the speed of sound in a small volume, it has the advantage of
being able to measure the speed of sound in any gas. This method uses a
powder to make the nodes and antinodes visible to the human eye. This is
an example of a compact experimental setup.
A tuning fork can be held near the mouth of a long pipe which is dipping
into a barrel of water. In this system the pipe can be brought to
resonance if the length of the air column in the pipe is equal to
$(2n+1)\lambda/4$, where n is a non-negative integer. As the antinodal
point for the pipe at the open end is slightly outside the mouth of the
pipe, it is best to find two or more points of resonance and then
measure half a wavelength between these.
Here it is the case that v = fλ
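A sketch of the resonance-tube calculation; the fork frequency and the two resonance lengths below are assumed example readings:

```python
# Sketch of the resonance-tube method: successive resonances of the air column are
# half a wavelength apart, so lambda = 2 * (L2 - L1) and v = f * lambda.
def speed_from_resonances(fork_frequency_hz, l1_m, l2_m):
    wavelength = 2.0 * (l2_m - l1_m)
    return fork_frequency_hz * wavelength

# e.g. 512 Hz tuning fork, resonances at air-column lengths of 0.165 m and 0.500 m
print(round(speed_from_resonances(512.0, 0.165, 0.500), 1), "m/s")   # ~343 m/s
```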
## External links
- Calculation: Speed of sound in air and the
temperature
- The speed of sound, the temperature, and \... **not** the air
pressure
- Properties Of The U.S. Standard Atmosphere
1976
# Acoustics/Flow-induced Oscillations of a Helmholtz Resonator
![](Acoustics_flow_induced.JPG "Acoustics_flow_induced.JPG")
## Introduction
The importance of flow excited acoustic resonance lies in the large
number of applications in which it occurs. Sound production in organ
pipes, compressors, transonic wind tunnels, and open sunroofs are only a
few examples of the many applications in which flow excited resonance of
Helmholtz resonators can be found.\[4\] An instability of the fluid
motion coupled with an acoustic resonance of the cavity produce large
pressure fluctuations that are felt as increased sound pressure levels.
Passengers of road vehicles with open sunroofs often experience
discomfort, fatigue, and dizziness from self-sustained oscillations
inside the car cabin. This phenomenon is caused by the coupling of
acoustic and hydrodynamic flow inside a cavity which creates strong
pressure oscillations in the passenger compartment in the 10 to 50 Hz
frequency range. Some effects experienced by vehicles with open sunroofs
when buffeting include: dizziness, temporary hearing reduction,
discomfort, driver fatigue, and in extreme cases nausea. The importance
of reducing interior noise levels inside the car cabin lies primarily
in reducing driver fatigue and improving the transmission of sound from
entertainment and communication devices.
This Wikibook page aims to theoretically and graphically explain the
mechanisms involved in the flow-excited acoustic resonance of Helmholtz
resonators. The interaction between fluid motion and acoustic resonance
will be explained to provide a thorough explanation of the behavior of
self-oscillatory Helmholtz resonator systems. As an application example,
a description of the mechanisms involved in sunroof buffeting phenomena
will be developed at the end of the page.
## Feedback loop analysis
As mentioned before, the self-sustained oscillation of a Helmholtz
resonator is in many cases a continuous interaction of hydrodynamic and
acoustic mechanisms. In the frequency domain, the flow excitation and
the acoustic behavior can be represented as transfer functions. The flow
can be decomposed into two volume velocities.
: qr: flow associated with acoustic response of cavity
: qo: flow associated with excitation
## Acoustical characteristics of the resonator
### Lumped parameter model
The lumped parameter model of a Helmholtz resonator consists of a
rigid-walled volume open to the environment through a small opening at
one end. The dimensions of the resonator in this model are much smaller
than the acoustic wavelength, which allows us to treat the system as a
lumped system.
Figure 2 shows a sketch of a Helmholtz resonator on the left, the
mechanical analog in the middle, and the electric-circuit analog on the
right-hand side. As shown in the Helmholtz resonator drawing, the
air mass flowing through an inflow of volume velocity includes the mass
inside the neck (Mo) and an end-correction mass (Mend). Viscous losses
at the edges of the neck length are included as well as the radiation
resistance of the tube. The electric-circuit analog shows the resonator
modeled as a forced harmonic oscillator. \[1\] \[2\]\[3\]
```{=html}
<center>
```
***Figure 2***
```{=html}
</center>
```
V: cavity volume
$\rho$: ambient density
c: speed of sound
S: cross-section area of orifice
K: stiffness
$M_a$: acoustic mass
$C_a$: acoustic compliance
The equivalent stiffness K is related to the potential energy of the
flow compressed inside the cavity. For a rigid wall cavity it is
approximately:
```{=html}
<center>
```
$K = \left(\frac{\rho c^2}{V}\right)S^2$
```{=html}
</center>
```
The equation that describes the Helmholtz resonator is the following:
```{=html}
<center>
```
$S \hat{P}_e =\frac{\hat{q}_e}{j\omega S}(-\omega ^2 M + j\omega R + K)$
```{=html}
</center>
```
$\hat{P}_e$: excitation pressure
M: total mass (mass inside neck Mo plus end correction, Mend)
R: total resistance (radiation loss plus viscous loss)
From the electrical-circuit we know the following:
```{=html}
<center>
```
$M_a = \frac{L \rho}{S}$
```{=html}
</center>
```
```{=html}
<center>
```
$C_a = \frac{V}{\rho c^2}$
```{=html}
</center>
```
```{=html}
<center>
```
$L' = L + 1.7\,r$ (where r is the radius of the orifice)
```{=html}
</center>
```
The main cavity resonance parameters are the resonance frequency and the
quality factor, which can be estimated using the parameters explained
above (assuming free-field radiation, no viscous losses or leaks, and
negligible wall compliance effects):
```{=html}
<center>
```
$\omega_r^2 = \frac{1}{M_a C_a}$
```{=html}
</center>
```
```{=html}
<center>
```
$f_r = \frac{c}{2 \pi} \sqrt{\frac{S}{L' V}}$
```{=html}
</center>
```
The sharpness of the resonance peak is measured by the quality factor Q
of the Helmholtz resonator as follows:
```{=html}
<center>
```
$Q = 2 \pi \sqrt{V (\frac{L'} {S})^3}$
```{=html}
</center>
```
$f_r$: resonance frequency in Hz
$\omega_r$: resonance frequency in radians
: L: length of neck
: L\': corrected length of neck
From the equations above, the following can be deduced:
- The greater the volume of the resonator, the lower the resonance
frequencies.
- If the length of the neck is increased, the resonance frequency
decreases.
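As a quick illustration of the resonance-frequency formula above, here is a small sketch; the neck and cavity dimensions are an assumed, bottle-like example:

```python
# Sketch: resonance frequency f_r = (c / (2*pi)) * sqrt(S / (L' * V)) with the
# end correction L' = L + 1.7*r.
from math import pi, sqrt

def helmholtz_fr(c, neck_length, neck_radius, volume):
    S = pi * neck_radius ** 2           # cross-section of the orifice
    L_eff = neck_length + 1.7 * neck_radius
    return (c / (2 * pi)) * sqrt(S / (L_eff * volume))

# e.g. neck 5 cm long with 1 cm radius, cavity of 1 litre
print(round(helmholtz_fr(340.0, 0.05, 0.01, 1e-3), 1), "Hz")   # ~117 Hz
```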
### Production of self-sustained oscillations
The acoustic field interacts with the unstable hydrodynamic flow above
the open section of the cavity, where the grazing flow is continuous.
The flow in this section separates from the wall at a point where the
acoustic and hydrodynamic flows are strongly coupled. \[5\]
The separation of the boundary layer at the leading edge of the cavity
(front part of opening from incoming flow) produces strong vortices in
the main stream. As observed in Figure 3, a shear layer crosses the
cavity orifice and vortices start to form due to instabilities in the
layer at the leading edge.
```{=html}
<center>
```
***Figure 3***
```{=html}
</center>
```
In Figure 3, L is the length of the inner cavity region, d denotes the
diameter or length of the cavity opening, D represents the height of the
cavity, and $\delta$ describes the gradient length in the grazing
velocity profile (the boundary layer thickness).
The velocity in this region is characterized to be unsteady and the
perturbations in this region will lead to self-sustained oscillations
inside the cavity. Vortices will continually form in the opening region
due to the instability of the shear layer at the leading edge of the
opening.
## Applications to Sunroof Buffeting
### How are vortices formed during buffeting?
In order to understand the generation and convection of vortices from
the shear layer along the sunroof opening, the animation below has been
developed. At a certain range of flow velocities, self-sustained
oscillations inside the open cavity (sunroof) will be predominant.
During this period of time, vortices are shed at the leading edge of
the opening and are convected along the length of the cavity opening as
the pressure inside the cabin decreases and increases. Flow
visualization experimentation is one method that helps obtain a
qualitative understanding of vortex formation and convection.
The animation below shows, in the middle, a side view of a car cabin
with the sunroof open. As the air starts to flow at a certain mean
velocity Uo, air mass will enter and leave the cabin as the pressure
decreases and increases again. At the right hand side of the animation,
a legend shows a range of colors to determine the pressure magnitude
inside the car cabin. At the top of the animation, a plot of circulation
and acoustic cavity pressure versus time for one period of oscillation
is shown. The symbol x moving along the acoustic cavity pressure plot is
synchronized with pressure fluctuations inside the car cabin and with
the legend on the right. For example, whenever the x symbol is located
at the point where t=0 (when the acoustic cavity pressure is minimum)
the color of the car cabin will match that of the minimum pressure in
the legend (blue).
```{=html}
<center>
```
![](theplot.gif "theplot.gif")
```{=html}
</center>
```
The perturbations in the shear layer propagate with a velocity of the
order of $U_o/2$, half the mean inflow velocity. \[5\] After the
pressure inside the cavity reaches a minimum (blue color) the air mass
position in the neck of the cavity reaches its maximum outward position.
At this point, a vortex is shed at the leading edge of the sunroof
opening (front part of sunroof in the direction of inflow velocity). As
the pressure inside the cavity increases (progressively to red color)
and the air mass at the cavity entrance is moved inwards, the vortex is
displaced into the neck of the cavity. The maximum downward displacement
of the vortex is achieved when the pressure inside the cabin is also
maximum and the air mass in the neck of the Helmholtz resonator (sunroof
opening) reaches its maximum downward displacement. For the rest of the
remaining half cycle, the pressure cavity falls and the air below the
neck of the resonator is moved upwards. The vortex continues displacing
towards the downstream edge of the sunroof where it is convected upwards
and outside the neck of the resonator. At this point the air below the
neck reaches its maximum upwards displacement.\[4\] And the process
starts once again.
### How to identify buffeting
Flow induced tests performed over a range of flow velocities are helpful
to determine the change in sound pressure levels (SPL) inside the car
cabin as inflow velocity is increased. The following animation shows
typical auto spectra results from a car cabin with the sunroof open at
various inflow velocities. At the top right hand corner of the
animation, it is possible to see the inflow velocity and resonance
frequency corresponding to the plot shown at that instant of time.
```{=html}
<center>
```
![](curve.gif "curve.gif")
```{=html}
</center>
```
It is observed in the animation that the SPL increases gradually with
increasing inflow velocity. Initially, the levels are below 80 dB and no
major peaks are observed. As velocity is increased, the SPL increases
throughout the frequency range until a definite peak is observed around
100 Hz, with an amplitude of about 120 dB. This is the resonance
frequency of the cavity at which buffeting occurs. As observed in the
animation, when the velocity is increased further, the peak decreases
and disappears.
In this way, sound pressure level plots versus frequency are helpful in
determining increased sound pressure levels inside the car cabin to find
ways to minimize them. Some of the methods used to minimize the
increased SPL caused by buffeting include notched deflectors, mass
injection, and spoilers.
# Useful websites
This link: 1 takes you to the website of EXA
Corporation, a developer of PowerFlow for Computational Fluid Dynamics
(CFD) analysis.
This link:
2 is a
small news article about the current use of CFD software to model
sunroof buffeting.
This link:
3
is a small industry brochure that shows the current use of CFD for
sunroof buffeting.
# References
1. Acoustics: An introduction to its Physical Principles and
Applications ; Pierce, Allan D., Acoustical Society of America,
1989.
2. Prediction and Control of the Interior Pressure Fluctuations in a
Flow-excited Helmholtz resonator ; Mongeau, Luc, and Hyungseok
Kook., Ray W. Herrick Laboratories, Purdue University, 1997.
3. Influence of leakage on the flow-induced response of vehicles with
open sunroofs ; Mongeau, Luc, and Jin-Seok Hong., Ray W. Herrick
Laboratories, Purdue University.
4. Fluid dynamics of a flow excited resonance, part I: Experiment ;
P.A. Nelson, Halliwell and Doak.; 1991.
5. An Introduction to Acoustics ; Rienstra, S.W., A. Hirschberg.,
Report IWDE 99--02, Eindhoven University of Technology, 1999.
# Acoustics/Active Control
![](Acoustics_active_control.JPG "Acoustics_active_control.JPG")
## Introduction
The principle of active noise control is to create destructive
interference using a secondary noise source. Thus, any noise can
theoretically be made to disappear. But as we will see in the following
sections, only low-frequency noise can be reduced in usual applications,
since the number of secondary sources required increases very quickly
with frequency. Moreover, predictable noises are much easier to control
than unpredictable ones. The reduction can reach up to 20 dB in the best
cases. But since good reduction can only be achieved at low frequencies,
the perceived improvement is not necessarily as good as the theoretical
reduction. This is due to psychoacoustic considerations, which will be
discussed later on.
## Fundamentals of active control of noise
### Control of a monopole by another monopole
Even for the free-space propagation of an acoustic wave created by a
point source, it is difficult to reduce noise over a large area using
active noise control, as we will see in this section.
In the case of an acoustic wave created by a monopole source, the
Helmholtz equation becomes:
```{=html}
<center>
```
$\Delta p + k^2 p = - j\omega \rho _0 q$
```{=html}
</center>
```
where q is the volume velocity of the noise source.
The solution of this equation at any point M is:
```{=html}
<center>
```
$p_p (M) = \frac{{j\omega \rho _0 q_p }}{{4\pi }}\frac{{e^{ - jkr_p } }}{{r_p }}$
```{=html}
</center>
```
where the subscript p refers to the primary source.
Let us introduce a secondary source in order to perform active noise
control. The acoustic pressure at that same point M is now:
```{=html}
<center>
```
${\rm{p(M) = }}\frac{{{\rm{j}}\omega \rho _{\rm{0}} q_p }}{{4\pi }}\frac{{e^{ - jkr_p } }}{{r_p }} + \frac{{{\rm{j}}\omega \rho _{\rm{0}} q_s }}{{4\pi }}\frac{{e^{ - jkr_s } }}{{r_s }}$
```{=html}
</center>
```
It is now obvious that if we choose
$q_s = - q_p \frac{{r_s }}{{r_p }}e^{ - jk(r_p - r_s )}$ there is no
longer any noise at point M. This is the simplest example of active
noise control. But it is also obvious that if the pressure is zero at M,
there is no reason why it should also be zero at any other point N. This
solution only reduces noise in one very small area.
However, it is possible to reduce noise in a larger area far from the
source, as we will now see. In fact, the acoustic pressure far from the
primary source can be approximated by:
```{=html}
<center>
```
$p(M) = \frac{{j\omega \rho _0 }}{{4\pi }}\frac{{e^{ - jkr_p } }}{{r_p }}(q_p + q_s e^{ - jkD\cos \theta } )$
```{=html}
</center>
```
```{=html}
<center>
```
*Figure: control of a monopole by another monopole.*
```{=html}
</center>
```
As shown above, we can adjust the secondary source in order to get no
noise at M. In that case, the acoustic pressure at any other point N of
the space remains low if the primary and secondary sources are close
enough. More precisely, it is possible to have a pressure close to zero
in the whole space if the point M is equidistant from the two sources
and if $D < \lambda /6$, where D is the distance between the primary and
secondary sources. As we will see later on, it is easier to perform
active noise control with more than one source controlling the primary
source, but it is of course much more expensive.
A commonly accepted estimate of the number of secondary sources
necessary to reduce noise in a sphere of radius R, at a frequency f,
is:
```{=html}
<center>
```
$N = \frac{{36\pi R^2 f^2 }}{{c^2 }}$
```{=html}
</center>
```
This means that if you want no noise in a sphere one metre in diameter
at frequencies up to 340 Hz, you will need about 30 secondary sources (a
sketch of this calculation follows). This is the reason why active noise
control works better at low frequencies.
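A minimal sketch of that estimate:

```python
# Sketch: number of secondary sources N = 36*pi*R^2*f^2 / c^2 needed to silence an
# R-radius sphere at frequency f (values reproduce the example above).
from math import pi, ceil

def n_secondary_sources(radius_m, frequency_hz, c=340.0):
    return 36 * pi * radius_m ** 2 * frequency_hz ** 2 / c ** 2

print(ceil(n_secondary_sources(0.5, 340.0)))   # ~29 sources for a 1 m diameter sphere at 340 Hz
print(ceil(n_secondary_sources(0.5, 1000.0)))  # grows quickly with frequency (~245)
```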
### Active control for waves propagation in ducts and enclosures
This section assumes that the reader knows the basics of modal
propagation theory, which will not be explained in this article.
#### Ducts
For an infinite and straight duct with a constant section, the pressure
in areas without sources can be written as an infinite sum of
propagation modes:
```{=html}
<center>
```
$p(x,y,z,\omega ) = \sum\limits_{n = 1}^N {a_n (\omega )\phi _n (x,y)e^{ - jk_n z} }$
```{=html}
</center>
```
where $\phi_n$ are the eigenfunctions of the Helmholtz equation and
$a_n$ are the amplitudes of the modes.
The eigenfunctions can be obtained either analytically, for some
specific duct shapes, or numerically. By placing pressure sensors in the
duct and using the previous equation, we get a relation between the
pressure matrix P (pressure at the various frequencies) and the matrix A
of the modal amplitudes. Furthermore, for linear sources, there is a
relation between the matrix A and the matrix U of the signals sent to
the secondary sources: $A_s = KU$, and hence
$A = A_p + A_s = A_p + KU$.
Our purpose is to get $A=0$, which means $A_p + KU = 0$. This is
possible whenever the rank of the matrix K is at least equal to the
number of propagation modes in the duct.
Thus, it is theoretically possible to have no noise in the duct over a
very large area not too close to the primary sources, provided there are
more secondary sources than propagation modes in the duct. Therefore,
active noise control is more appropriate at low frequencies: the lower
the frequency, the fewer propagation modes there are in the duct.
Experiments show that it is in fact possible to reduce the noise by over
60 dB.
#### Enclosures
The principle is rather similar to the one described above, except that
the resonance phenomenon has a major influence on the acoustic pressure
in the cavity. In fact, every mode that is not resonant in the frequency
range of interest can be neglected. In a cavity or enclosure, the number
of these modes rises very quickly as the frequency rises, so once again
low frequencies are more appropriate. Above a critical frequency, the
acoustic field can be considered diffuse. In that case, active noise
control is still possible, but it is theoretically much more complicated
to set up.
### Active control and psychoacoustics
As we have seen, it is possible to reduce noise with a finite number of
secondary sources. Unfortunately, our perception of sound does not
depend only on the acoustic pressure (or on the decibel level). In fact,
it sometimes happens that even though the number of decibels has been
reduced, the perceived result is not really better than without active
control.
## Active control systems
Since the noise that has to be reduced can never be predicted exactly, a
system for active noise control requires an adaptive algorithm. We have
to consider two different ways of setting up the system, depending on
whether or not it is possible to detect the noise from the primary
source before it reaches the secondary sources. If this is possible, a
feedforward technique is used (for an aircraft engine, for example); if
not, a feedback technique is preferred.
### Feedforward
In the case of feedforward control, two sensors and one secondary source
are required. The sensors measure the sound pressure at the primary
source (detector) and at the place where we want the noise to be reduced
(control sensor). Furthermore, we should have an idea of what the noise
from the primary source will become as it reaches the control sensor.
Thus we approximately know what correction should be made before the
sound wave reaches the control sensor (hence \"forward\"). The control
sensor only corrects any residual error. The feedforward technique makes
it possible to reduce one specific noise (an aircraft engine, for
example) without reducing every other sound (conversations, ...). The
main issue with this technique is that the location of the primary
source has to be known, and we have to be sure that this sound will be
detected beforehand. Therefore, portable systems based on feedforward
are impossible, since they would require sensors all around the head.
```{=html}
<center>
```
*Figure: feedforward system.*
```{=html}
</center>
```
### Feedback
In this case, we do not know exactly where the sound comes from; hence
there is only one sensor. The sensor and the secondary source are very
close to each other and the correction is done in real time: as soon as
the sensor picks up the signal, it is processed by a filter which sends
the corrected signal to the secondary source. The main issue with
feedback is that every sound is reduced, so that it is even
theoretically impossible to hold a normal conversation.
```{=html}
<center>
```
*Figure: feedback system.*
```{=html}
</center>
```
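The adaptive filtering algorithm itself is not detailed above; a common choice in practice is an FIR filter adapted with the LMS rule. The sketch below is a deliberately simplified illustration (real ANC systems typically use filtered-x LMS to account for the secondary acoustic path); all signals and parameters are assumed:

```python
# Minimal LMS sketch: adapt an FIR filter so its output cancels the disturbance
# measured at the error sensor, using the detected reference signal.
import numpy as np

def lms_cancel(reference, disturbance, n_taps=32, mu=0.01):
    w = np.zeros(n_taps)            # adaptive filter weights
    buf = np.zeros(n_taps)          # delay line holding recent reference samples
    errors = np.empty(len(reference))
    for i, x in enumerate(reference):
        buf = np.roll(buf, 1)
        buf[0] = x
        y = w @ buf                 # anti-noise sent to the secondary source
        e = disturbance[i] - y      # residual measured at the error sensor
        w += mu * e * buf           # LMS weight update
        errors[i] = e
    return errors

fs = 8000
t = np.arange(fs) / fs
ref = np.sin(2 * np.pi * 100 * t)                 # detected primary noise (100 Hz)
dist = 0.8 * np.sin(2 * np.pi * 100 * t - 0.3)    # what arrives at the error sensor
res = lms_cancel(ref, dist)
print(np.std(dist[-1000:]), "->", np.std(res[-1000:]))  # residual shrinks as the filter adapts
```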
## Applications
### Noise cancelling headphone
Passive headphones become ineffective when the frequency gets too low.
As we have just seen, active noise-cancelling headphones require the
feedback technique, since the primary sources can be located all around
the head. This active noise control is not really efficient at high
frequencies, since it is limited by the Larsen (feedback) effect. Noise
can be reduced by up to 30 dB in the frequency range between 30 Hz and
500 Hz.
### Active control for cars
Noise reduction inside cars can have a significant impact on the comfort
of the driver. There are three major sources of noise in a car: the
motor, the contact of tires on the road, and the aerodynamic noise
created by the air flow around the car. In this section, active control
for each of those sources will be briefly discussed.
#### Motor noise
This noise is rather predictable since it is a consequence of the
rotation of the pistons in the engine, although its frequency is not
exactly the engine's rotational speed. The frequency of this noise lies
between 20 Hz and 200 Hz, which means that active control is
theoretically possible. The following pictures show the result of active
control, at both low and high engine speed.
```{=html}
<center>
```
*Figure: active control result at low engine speed.*
```{=html}
</center>
```
Even though these results show a significant reduction of the acoustic
pressure, the perception inside the car is not really better with this
active control system, mainly for the psychoacoustic reasons mentioned
above. Moreover, such a system is rather expensive and is therefore not
used in commercial cars.
#### Tires noise
This noise is created by the contact between the tires and the road. It
is a broadband noise which is rather unpredictable, since the mechanisms
are very complex. For example, the different types of roads can have a
significant impact on the resulting noise. Furthermore, there is a
cavity around the tires, which generates a resonance phenomenon; its
first resonance frequency is usually around 200 Hz. Considering the
multiple causes of this noise and its unpredictability, even the low
frequencies become hard to reduce. And since this noise is broadband,
reducing the low frequencies is not enough to reduce the overall noise.
In fact, an active control system would mainly be useful in the case of
an unfortunate amplification of a specific mode.
#### Aerodynamic noise
This noise is a consequence of the interaction between the air flow
around the car and its various appendages, such as the rear-view
mirrors. Once again, it is an unpredictable broadband noise, which makes
it difficult to reduce with an active control system. However, this
solution can become interesting if an annoying, predictable resonance
appears.
### Active control for aeronautics
The noise of aircraft propellers is highly predictable, since its
frequency is almost exactly the rotational frequency multiplied by the
number of blades. Usually this frequency is of the order of a few
hundred Hz. Hence, an active control system using the feedforward
technique provides very satisfying noise reduction. The main issues are
the cost and the weight of such a system. The fan noise of aircraft
engines can be reduced in the same manner.
## Further reading
- \"Active Noise
Control\"
at Dirac delta.
# Acoustics/Rotor Stator interactions
![](Acoustics_RotorStatorInteractions_2.JPG "Acoustics_RotorStatorInteractions_2.JPG")
An important issue for the aeronautical industry is the reduction of
aircraft noise, which requires studying the characteristics of
turbomachinery noise. The rotor/stator interaction is a predominant part
of the noise emission. We present here an introduction to the theory of
this interaction, whose applications are numerous. For example, the
design of air-conditioning ventilators requires a full understanding of
this interaction.
## Noise emission of a Rotor-Stator mechanism
A rotor wake induces a fluctuating vane loading on the downstream stator
blades, which is directly linked to the noise emission.
We consider a rotor with B blades (rotating at speed $\Omega$) and a
stator with V blades, in a single rotor/stator configuration. The source
frequencies are multiples of $B \Omega$, that is to say $mB \Omega$. For
the moment we do not have access to the source levels $F_{m}$. The noise
frequencies are also $mB \Omega$, independent of the number of stator
blades. Nevertheless, this number V plays a predominant role in the
noise levels ($P_{m}$) and directivity, as will be discussed later.
*Example*
*For an airplane air-conditioning ventilator, reasonable data are:*
*$B=13$ and $\Omega = 12000$ rpm*
*The blade passing frequency is then 2600 Hz, so we only have to include
the first two multiples (2600 Hz and 5200 Hz), because the sensitivity
of the human ear drops off at higher frequencies. We have to study the
harmonics m=1 and m=2.*
## Optimization of the number of blades
As the source levels can\'t be easily modified, we have to focus on the
interaction between those levels and the noise levels.
The transfer function ${{F_m } \over {P_m }}$ contains the following
part:
```{=html}
<center>
```
$\sum\limits_{s = - \infty }^{s = + \infty } {e^{ - {{i(mB - sV)\pi } \over 2}} J_{mB - sV} } (mBM)$
```{=html}
</center>
```
where M is the Mach number and $J_{mB - sV}$ is the Bessel function of
order mB − sV. In order to minimize the influence of the transfer
function, the goal is to reduce the value of this Bessel function. To do
so, the order of the Bessel function must be larger than its argument.
*Back to the example :*
*For m=1, with a Mach number M=0.3, the argument of the Bessel function
is about 4. We have to avoid having |mB-sV| smaller than 4. If V=10, we
have 13-1×10=3, so there will be a noisy mode. If V=19, the minimum of
|mB-sV| is 6, and the noise emission will be limited.*
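A small sketch automating the check in the example above:

```python
# Sketch: for each candidate stator blade count V, find the smallest |mB - sV| over the
# first harmonics; small (or zero) orders compared with the argument mBM indicate noisy modes.
def min_order(B, V, harmonics=(1, 2), s_range=range(-6, 7)):
    return min(abs(m * B - s * V) for m in harmonics for s in s_range)

B = 13
for V in (10, 13, 19):
    print("V =", V, "-> min |mB - sV| =", min_order(B, V))
# V=10 -> 3 (noisy mode), V=13 -> 0 (worst case), V=19 -> 6 (quieter on the axis)
```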
*Remark :*
*The case that must be strictly avoided is when mB-sV can be zero, which
makes the order of the Bessel function 0. As a consequence, B and V must
be chosen with no common factors (coprime).*
## Determination of source levels
The minimization of the transfer function ${{F_m } \over {P_m }}$ is a
great step in the process of reducing the noise emission. Nevertheless,
to be highly efficient, we also have to predict the source levels
$F_{m}$. This will lead us to choose to minimize the Bessel functions
for the most significant values of m. For example, if the source level
for m=1 is much higher than for m=2, we will not consider the Bessel
functions of order 2B-sV. The determination of the source levels is
given by the Sears theory, which will not be discussed here.
## Directivity
This analysis was made for a specific direction: the axis of the
rotor/stator. The results hold when the noise reduction is sought in
this direction. When the noise to be reduced is perpendicular to the
axis, the results are very different, as the following figures show:
For B=13 and V=13, which is the worst case, we see that the sound level
is very high on the axis (for $\theta = 0$).
```{=html}
<center>
```
![](Acoustics_1313.JPG "Acoustics_1313.JPG")
```{=html}
</center>
```
For B=13 and V=19, the sound level is very low on the axis but high
perpendicular to the axis (for $\theta = \pi/2$).
```{=html}
<center>
```
![](Acoustics_1319.jpg "Acoustics_1319.jpg")
```{=html}
</center>
```
## Further reading
This module discusses rotor/stator interaction, the predominant part of
the noise emission of turbomachinery. See Acoustics/Noise from Cooling
Fans for a discussion of
other noise sources.
## External references
- Prediction of rotor wake-stator interaction noise by P. Sijtsma and
J.B.H.M.
Schulten
# Acoustics/Car Mufflers
![](Acoustics_car_muflers.JPG "Acoustics_car_muflers.JPG")
## Introduction
A car muffler is a component of the exhaust system of a car. The exhaust
system has mainly 3 functions:
1. Getting the hot and noxious gas from the engine away from the
vehicle
2. Reducing exhaust emissions
3. Attenuating the noise output from the engine
The last specified function is the function of the car muffler. It is
necessary because the gas coming from the combustion in the pistons of
the engine would generate an extremely loud noise if it were sent
directly into the ambient air surrounding the engine through the exhaust
valves. There are two techniques used to dampen the noise: absorption and
reflection. Each technique has its advantages and disadvantages.
!Muffler type \"Cherry
bomb\"\|right{width="150"}
## The absorber muffler
The muffler is composed of a tube covered by sound absorbing material.
The tube is perforated so that some part of the sound wave goes through
the perforation to the absorbing material. The absorbing material is
usually made of fiberglass or steel wool. The dampening material is
protected from the surroundings by a supplementary casing made of bent
sheet metal.
The advantage of this method is low back pressure with a relatively
simple design. The drawback is a lower sound-damping ability compared to
the other technique, especially at low frequency.
Mufflers using the absorption technique are usually fitted to sports
vehicles because their low back pressure improves engine
performance. A trick to improve their
muffling ability consists of lining up several \"straight\" mufflers.
## The reflector muffler
Principle: sound wave reflection is used to create a maximum amount of
destructive interference
!Destructive
interference\|right{width="400"}
### Definition of destructive interference
Let\'s consider the noise a person would hear when a car drives past.
This sound would physically correspond to the pressure variation of the
air which would make his ear-drum vibrate. The curve A1 of the graph 1
could represent this sound. The pressure amplitude is a function of the
time at a certain fixed place. If another sound wave A2 is produced at
the same time, the pressure of the two waves will add. If the amplitude
of A1 is exactly the opposite of the amplitude of A2, then the sum will be
zero, which corresponds physically to the atmospheric pressure. The
listener would thus hear nothing although there are two radiating sound
sources; A2 is then said to interfere destructively with A1.
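As a tiny numerical illustration of this cancellation (a sketch only, using an arbitrary 440 Hz tone that is not part of the original text), adding a wave to its exact inverse yields zero pressure variation:

```python
# Minimal illustration of destructive interference: a wave plus its inverted
# copy sums to zero (atmospheric pressure, i.e. silence for the listener).
import numpy as np

t = np.linspace(0.0, 0.01, 1000)          # 10 ms of time
a1 = np.sin(2 * np.pi * 440 * t)          # wave A1: 440 Hz tone
a2 = -a1                                  # wave A2: same amplitude, opposite sign
print(np.max(np.abs(a1 + a2)))            # -> 0.0, the two waves cancel
```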
!Wave
reflection\|right{width="250"}
### Definition of the reflection
Sound is a traveling wave, i.e. its position changes as a function of
time. As long as the wave travels in the same medium, there is no
change of speed or amplitude. When the wave reaches a boundary between
two media which have different impedances, the speed and the pressure
amplitude change (and so does the angle if the wave does not propagate
perpendicularly to the boundary). Figure 1 shows two media A and B
and the three waves: incident, transmitted and reflected.
### Example
If plane sound waves are propagating across a tube and the section of
the tube changes, the impedance of the tube will change. Part of the
incident waves will be transmitted through the discontinuity and the
other part will be reflected.
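For plane waves, the split between reflected and transmitted waves at such a section change can be estimated from the duct impedances. The short sketch below is illustrative only, with arbitrary cross-section areas and the standard textbook relations (not taken from this module):

```python
# Sketch of plane-wave reflection at a sudden change of tube cross-section.
# Uses the standard impedance relations R = (Z2 - Z1)/(Z2 + Z1), T = 1 + R,
# with the duct acoustic impedance Z = rho*c/S; the numerical areas are
# arbitrary assumptions for illustration.
rho, c = 1.2, 343.0           # air density [kg/m^3] and speed of sound [m/s]
S1, S2 = 0.002, 0.010         # cross-section areas before/after the step [m^2]

Z1, Z2 = rho * c / S1, rho * c / S2
R = (Z2 - Z1) / (Z2 + Z1)     # pressure reflection coefficient
T = 1.0 + R                   # pressure transmission coefficient
print(f"R = {R:+.2f}, T = {T:.2f}")   # part reflected, part transmitted
```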
Mufflers using the reflection technique are the most commonly used
because they dampen the noise much better than the absorber muffler.
However they induce a higher back pressure,
lowering the performance of the engine (an engine would be most
efficient or powerful without the use of a muffler).
!Schema{width="400"}
The upper right image represents a car muffler\'s typical architecture.
It is composed of three tubes. There are three chambers separated by plates;
the parts of the tubes located in the middle chamber are perforated. Small
amounts of pressure \"escape\" from the tubes through the perforations
and cancel one another out.
Some mufflers using the reflection principle may incorporate cavities
which dampen noise. These cavities are called Helmholtz
Resonators
in acoustics. This feature is usually only found on up-market mufflers.
![](Muffler_resonator.png "Muffler_resonator.png"){width="300"}
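For orientation, the resonance frequency of such a cavity can be estimated from the classical Helmholtz formula $f = \frac{c}{2\pi}\sqrt{S/(L_{eff} V)}$. The sketch below uses invented dimensions and a simple end correction, so it is only an illustration of the calculation, not data for a real muffler:

```python
# Rough estimate of a Helmholtz resonator's resonance frequency,
# f = (c / 2*pi) * sqrt(S / (L_eff * V)); the cavity volume, neck area and
# neck length below are invented numbers, purely for illustration.
import math

c = 343.0                     # speed of sound in air [m/s]
V = 2.0e-3                    # cavity volume [m^3]
S = 8.0e-4                    # neck cross-section area [m^2]
L = 0.05                      # neck length [m]
L_eff = L + 1.7 * math.sqrt(S / math.pi)   # simple end correction (assumed)

f = c / (2 * math.pi) * math.sqrt(S / (L_eff * V))
print(f"Resonance frequency ~ {f:.0f} Hz")   # the frequency the cavity attenuates
```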
## Back pressure
Car engines are four-stroke engines. Out of these four strokes, only one
produces power: the one in which the explosion occurs and pushes the
piston back. The other three strokes are a necessary evil that do not
produce energy; on the contrary, they consume it. During the exhaust stroke,
the remaining gas from the explosion is expelled from the cylinder. The
higher the pressure behind the exhaust valves (i.e. the back pressure), the
greater the effort needed to expel the gas from the cylinder. So a
low back pressure is preferable in order to obtain higher engine
horsepower.
## Muffler modeling by transfer matrix method
This method is easy to implement on a computer to obtain theoretical values for
the transmission loss of a muffler. The transmission loss is a value
in dB that corresponds to the ability of the muffler to dampen the noise.
### Example
!Muffler working with waves
reflections{width="500"}
P stands for pressure \[Pa\] and U stands for volume velocity \[m³/s\]
$\begin{bmatrix} P1 \\ U1 \end{bmatrix}$=$\begin{bmatrix} T1 \end{bmatrix}
\begin{bmatrix} P2 \\ U2 \end{bmatrix}$ and
$\begin{bmatrix} P2 \\ U2 \end{bmatrix}$=$\begin{bmatrix} T2 \end{bmatrix}
\begin{bmatrix} P3 \\ U3 \end{bmatrix}$ and
$\begin{bmatrix} P3 \\ U3 \end{bmatrix}$=$\begin{bmatrix} T3 \end{bmatrix}
\begin{bmatrix} P4 \\ U4 \end{bmatrix}$
So, finally: $\begin{bmatrix} P1 \\ U1 \end{bmatrix}$=
$\begin{bmatrix} T1 \end{bmatrix}
\begin{bmatrix} T2 \end{bmatrix}
\begin{bmatrix} T3 \end{bmatrix}
\begin{bmatrix} P4 \\ U4 \end{bmatrix}$
with
$\begin{bmatrix} T_i \end{bmatrix}$=$\begin{bmatrix} cos (k L_i) & j sin (k L_i) \frac{\rho c}{S_i} \\ j sin (k L_i) \frac{S_i}{\rho c} & cos (k L_i) \end{bmatrix}$
$S_i$ stands for the cross-section area, k is the wave number, $\rho$ is the medium density, and c is the speed of sound in the medium.
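The short Python sketch below strings these matrices together for the simplest possible case, a single expansion chamber between two identical pipes, and evaluates the transmission loss as $TL = 20\log_{10}\left(\tfrac{1}{2}\left|T_{11} + T_{12}/Y + T_{21}Y + T_{22}\right|\right)$ with $Y = \rho c / S_{pipe}$. The dimensions are assumptions chosen only to illustrate the method; the Matlab code referenced in the Results section remains the original implementation.

```python
# Minimal sketch of the transfer-matrix method for a single expansion chamber
# (straight pipe -> larger chamber -> straight pipe). Dimensions are assumed
# for illustration only; the matrix is the [T_i] given above.
import numpy as np

rho, c = 1.2, 343.0                    # air density and speed of sound
S_pipe, S_cham = 2.0e-3, 2.0e-2        # inlet/outlet pipe and chamber areas [m^2]
L_cham = 0.30                          # chamber length [m]

freqs = np.linspace(50.0, 3000.0, 500)
TL = []
for f in freqs:
    k = 2 * np.pi * f / c              # wave number
    kL = k * L_cham
    # Four-pole matrix of the chamber, as defined above (P, U variables)
    T = np.array([[np.cos(kL), 1j * np.sin(kL) * rho * c / S_cham],
                  [1j * np.sin(kL) * S_cham / (rho * c), np.cos(kL)]])
    Y = rho * c / S_pipe               # characteristic impedance of the end pipes
    tl = 20 * np.log10(0.5 * abs(T[0, 0] + T[0, 1] / Y + T[1, 0] * Y + T[1, 1]))
    TL.append(tl)

print(f"Peak transmission loss: {max(TL):.1f} dB")
```

With these assumed dimensions the peak is a little over 14 dB, and the loss drops to zero whenever $kL$ is a multiple of $\pi$, which is the behaviour described in the Comments below.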
### Results
!Schema{width="400"}
Matlab code for the graph above is available at <https://commons.wikimedia.org/wiki/File:Transmission_loss.png#Source_code>.
### Comments
The higher the transmission loss, the better the muffler.
The transmission loss depends on the frequency. The sound frequency of a
car engine is approximately between 50 and 3000 Hz. At resonance
frequencies, the transmission loss is zero. These frequencies correspond
to the lower peaks on the graph.
The transmission loss is independent of the applied pressure or velocity
at the input.
The exhaust gas temperature (about 600 °F) affects the air
properties: the speed of sound is higher and the mass density is lower.
The elementary transfer matrix depends on the element which is modelled.
For instance, the transfer matrix of a Helmholtz resonator is
$\begin{bmatrix} 1 & 0 \\ \frac{1}{Z} & 1 \end{bmatrix}$ with
$\ Z = j \rho ( \frac{\omega L_i}{S_i} - \frac{c^2}{\omega V})$
The transmission loss and the insertion loss are different quantities. The
transmission loss is 10 times the logarithm of the ratio of the incident
to the transmitted sound power. The insertion loss is 10 times the logarithm
of the ratio of the sound power radiated without and with the muffler.
## Links
- General Information about Filter Design &
Implementation
- More information about the Transfer Matrix
Method
- General information about car
mufflers
# Acoustics/Sonic Boom
![](Acoustics_sonic_boom.JPG "Acoustics_sonic_boom.JPG")
!Warplane passing the sound
barrier.
A **sonic boom** is the audible component of a shock wave in air. The
term is commonly used to refer to the air shocks caused by the
supersonic flight of military aircraft or passenger transports such as
Concorde (Mach 2.2, no longer flying) and the Space Shuttle (Mach 27).
Sonic booms generate enormous amounts of sound energy, sounding much
like an explosion; typically the shock front may approach 100 megawatts
per square meter, and may exceed 200 decibels.
!When an aircraft is near the sound barrier, an unusual cloud sometimes forms in its wake. A Prandtl-Glauert singularity results from a drop in pressure due to shock wave formation. This pressure change causes a sharp drop in temperature, which in humid conditions leads the water vapor in the air to condense into droplets and form the cloud.{width="200"}
## Cause of sonic booms
As an object moves through the air it creates a series of pressure waves
in front and behind it, similar to the bow and stern waves created by a
boat. These waves travel at the speed of sound, and as the speed of the
aircraft increases the waves are forced together or \'compressed\'
because they cannot \"get out of the way\" of each other, eventually
merging into a single shock wave at the speed of sound. This critical
speed is known as Mach 1 and is 1,225 km/h (761 mph) at sea level.
In smooth flight, the shock wave starts at the nose of the aircraft and
ends at the tail. There is a sudden increase in pressure at the nose,
decreasing steadily to a negative pressure at the tail, where it
suddenly returns to normal. This \"overpressure profile\" is known as
the N-wave due to its shape. We experience the \"boom\" when there is a
sudden increase in pressure, so the N-wave causes two booms, one when
the initial pressure rise from the nose hits, and another when the tail
passes and the pressure suddenly returns to normal. This leads to a
distinctive \"double boom\" from supersonic aircraft. When maneuvering,
the pressure distribution changes into different forms, with a
characteristic U-wave shape. Since the boom is being generated
continually as long as the aircraft is supersonic, it traces out a path
on the ground following the aircraft\'s flight path, known as the **boom
carpet**.
!A cage around the engine reflects any shock waves; a spike behind the engine converts them into thrust.
!To generate lift, a supersonic airplane has to produce at least two shock waves: one over-pressure downward wave and one under-pressure upward wave. The Whitcomb area rule states that air displacement can be reused without generating additional shock waves; in this case the fuselage reuses some of the displacement of the wings.
A sonic boom or \"tunnel boom\" can also be caused by high-speed trains
in tunnels (e.g. the Japanese Shinkansen). In order to reduce the sonic
boom effect, a special shape of the train car and a widened opening of
the tunnel entrance is necessary. When a high speed train enters a
tunnel, the sonic boom effect occurs at the tunnel exit. In contrast to
the (super)sonic boom of an aircraft, this \"tunnel boom\" is caused by
a rapid change of subsonic flow (due to the sudden narrowing of the
surrounding space) rather than by a shock wave. Close to the tunnel exit
this phenomenon can cause disturbances to residents.
## Characteristics
The power, or volume, of the shock wave is dependent on the quantity of
air that is being accelerated, and thus the size and weight of the
aircraft. As the aircraft increases speed the shocks grow \"tighter\"
around the craft, and do not become much \"louder\". At very high speeds
and altitudes the cone does not intersect the ground, and no boom will
be heard. The \"length\" of the boom from front to back is dependent on
the length of the aircraft, although to a factor of 3:2 not 1:1. Longer
aircraft therefore \"spread out\" their booms more than smaller ones,
which leads to a less powerful boom.
The nose shockwave compresses and pulls the air along with the aircraft
so that the aircraft behind its shockwave sees subsonic airflow.
However, this means that several smaller shock waves can, and usually
do, form at other points on the aircraft, primarily any convex points or
curves, the leading wing edge and especially the inlet to engines. These
secondary shockwaves are caused by the subsonic air behind the main
shockwave being forced to go supersonic again by the shape of the
aircraft (for example due to the air\'s acceleration over the top of a
curved wing).
The later shock waves are somewhat faster than the first one, so they
catch up and merge with the main shockwave at some distance away from the
aircraft to create a much more defined N-wave shape. This maximizes both
the magnitude and the \"rise time\" of the shock, which makes it seem
louder. On most designs the characteristic distance is about 40,000 ft,
meaning that below this altitude the sonic boom will be \"softer\".
However the drag at this altitude or below makes supersonic travel
particularly inefficient, which poses a serious problem.
## Abatement
In the late 1950s when SST designs were being actively pursued, it was
thought that although the boom would be very large, they could avoid
problems by flying higher. This premise was proven false when the North
American B-70 *Valkyrie* started flying and it was found that the boom
was a very real problem even at 70,000 ft (21,000m). It was during these
tests that the N-wave was first characterized.
Richard Seebass and his colleague Albert George at Cornell University
studied the problem extensively, and eventually defined a \"figure of
merit\", **FM**, to characterize the sonic boom levels of different
aircraft. FM is proportional to the aircraft weight divided by the
three-halves of the aircraft length, FM = W/(3/2·L) = 2W/3L. The lower
this value, the less boom the aircraft generates, with figures of about
1 or lower being considered acceptable. Using this calculation they
found FM\'s of about 1.4 for Concorde, and 1.9 for the Boeing 2707. This
eventually doomed most SST projects as public resentment, somewhat blown
out of proportion, mixed with politics eventually resulted in laws that
made any such aircraft impractical (flying only over water for
instance).
Seebass-George also worked on the problem from another angle, examining
ways to reduce the \"peaks\" of the N-wave and therefore smooth out the
shock into something less annoying. Their theory suggested that body
shaping might be able to use the secondary shocks to either \"spread
out\" the N-wave, or interfere with each other to the same end. Ideally
this would raise the characteristic altitude from 40,000 ft to 60,000 ft,
which is where most SST designs fly. The design required some fairly
sophisticated shaping in order to achieve the dual needs of reducing the
shock and still leaving an aerodynamically efficient shape, and
therefore had to wait for the advent of computer-aided design before
being able to be built.
This remained untested for decades, until DARPA started the **Quiet
Supersonic Platform** project and funded the **Shaped Sonic Boom
Demonstration** aircraft to test it. The SSBD used an F-5 Freedom Fighter
modified with a new body shape, and was tested over a two-year period in
what has become the most extensive study on the sonic boom to date.
After measuring the 1,300 recordings, some taken inside the shock wave
by a chase plane, the SSBD demonstrated a reduction in boom by about
one-third. Although one-third is not a huge reduction, it could reduce
Concorde below the FM = 1 limit for instance.
There are theoretical designs that do not appear to create sonic booms
at all, such as the Busemann\'s Biplane. Nobody has been able to suggest
a practical implementation of this concept, as yet.
## Perception and noise
The sound of a sonic boom depends largely on the distance between the
observer and the aircraft producing the sonic boom. A sonic boom is
usually heard as a deep double \"boom\" as the aircraft is usually some
distance away. However, as those who have witnessed landings of space
shuttles have heard, when the aircraft is nearby the sonic boom is a
sharper \"bang\" or \"crack\". The sound is much like the \"aerial
bombs\" used at firework displays.
In 1964, NASA and the FAA began the Oklahoma City sonic boom tests,
which caused eight sonic booms per day over a period of six months.
Valuable data was gathered from the experiment, but 15,000 complaints
were generated and ultimately entangled the government in a class action
lawsuit, which it lost on appeal in 1969.
In late October 2005, Israel began using nighttime sonic boom raids
against civilian populations in the Gaza Strip as a
method of psychological warfare. The practice was condemned by the
United Nations. A senior Israeli army intelligence source said the
tactic was intended to break civilian support for armed Palestinian
groups.
## Media
These videos include jets achieving supersonic speeds.
First supersonic flight
:   Chuck Yeager broke the sound barrier on October 14, 1947 in the Bell X-1.
F-14 Tomcat sonic boom flyby (with audio)
:   F-14 Tomcat flies at Mach 1 over the water, creating a sonic boom as it passes.
F-14A Tomcat supersonic flyby
:   Supersonic F-14A Tomcat flying by the USS Theodore Roosevelt CVN-71 in 1986 for the tiger cruise.
Shuttle passes sound barrier
:   Space shuttle Columbia crosses the sound barrier at 45 seconds after liftoff.
## External links
- NASA opens new chapter in supersonic
flight
- \"Sonic
Boom,\" a
tutorial from the \"Sonic Boom, Sound Barrier, and Condensation
Clouds\"
(or \"Sonic Boom, Sound Barrier, and Prandtl-Glauert Condensation
Clouds\") collection of tutorials by Mark S. Cramer, Ph.D. at
<http://FluidMech.net> (Tutorials, Sound Barrier).
- decibel chart including sonic
booms
# Acoustics/Sonar
**SONAR** (**so**und **n**avigation **a**nd **r**anging) is a technique
that uses sound propagation under water to navigate or to detect other
vessels. There are two kinds of sonar: active and passive.
## History
The French physicist Paul Langevin, working with a Russian émigré
electrical engineer, Constantin Chilowski, invented the first active
sonar-type device for detecting submarines in 1915. Although
piezoelectric transducers later superseded the electrostatic transducers
they used, their work influenced the future of sonar designs. In 1916,
under the British Board of Inventions and Research, Canadian physicist
Robert Boyle took on the project, which subsequently passed to the
**Anti-** (or **Allied**) **Submarine Detection Investigation
Committee**, producing a prototype for testing in mid-1917, hence the
British acronym **ASDIC**.
By 1918, both the U.S. and Britain had built active systems. The UK
tested what they still called ASDIC on *HMS Antrim* in 1920, and started
production of units in 1922. The 6th Destroyer Flotilla had
ASDIC-equipped vessels in 1923. An anti-submarine school, *HMS Osprey*,
and a training flotilla of four vessels were established on Portland in
1924.
The U.S. Sonar QB set arrived in 1931. By the outbreak of World War II,
the Royal Navy had five sets for different surface ship classes, and
others for submarines. The greatest advantage came when it was linked to
the Squid anti-submarine weapon.
## Active sonar
Active sonar creates a pulse of sound, often called a \"ping\", and then
listens for reflections of the pulse. To measure the distance to an
object, one measures the time from emission of a pulse to reception. To
measure the bearing, one uses several hydrophones, and measures the
relative arrival time to each in a process called beamforming.
The pulse may be at constant frequency or a chirp of changing frequency.
For a chirp, the receiver correlates the frequency of the reflections to
the known chirp. The resultant processing gain allows the receiver to
derive the same information as if a much shorter pulse of the same total
energy were emitted. In practice, the chirp signal is sent over a longer
time interval; therefore the instantaneous emitted power will be
reduced, which simplifies the design of the transmitter. In general,
long-distance active sonars use lower frequencies. The lowest have a
bass \"BAH-WONG\" sound.
The most useful small sonar looks roughly like a waterproof flashlight.
One points the head into the water, presses a button, and reads a
distance. Another variant is a \"fishfinder\" that shows a small display
with shoals of fish. Some civilian sonars approach active military
sonars in capability, with quite exotic three-dimensional displays of
the area near the boat. However, these sonars are not designed for
stealth.
When active sonar is used to measure the distance to the bottom, it is
known as echo sounding.
Active sonar is also used to measure distance through water between two
sonar transponders. A transponder is a device that can transmit and
receive signals but when it receives a specific interrogation signal it
responds by transmitting a specific reply signal. To measure distance,
one transponder transmits an interrogation signal and measures the time
between this transmission and the receipt of the other transponder\'s
reply. The time difference, scaled by the speed of sound through water
and divided by two, is the distance between the two transponders. This
technique, when used with multiple transponders, can calculate the
relative positions of static and moving objects in water.
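A minimal numerical sketch of this ranging calculation follows; the travel time and the sound speed are example values, not measurements:

```python
# Sketch of the transponder ranging described above: half the round-trip
# travel time, scaled by the speed of sound in water (values assumed).
c_water = 1500.0               # speed of sound in sea water [m/s] (approximate)
t_round_trip = 2.4             # interrogation-to-reply time [s] (example value)
t_turnaround = 0.0             # reply delay inside the transponder, ignored here

distance = c_water * (t_round_trip - t_turnaround) / 2.0
print(f"Distance between transponders: {distance:.0f} m")   # -> 1800 m
```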
### Analysis of active sonar data
Active sonar data is obtained by measuring detected sound for a short
period of time after the issuing of a ping; this time period is selected
so as to ensure that the ping\'s reflection will be detected. The
distance to the seabed (or other acoustically reflective object) can be
calculated from the elapsed time between the ping and the detection of
its reflection. Other properties can also be detected from the shape of
the ping\'s reflection:
- When collecting data on the seabed, some of the reflected sound will
typically reflect off the air-water interface, and then reflect off
the seabed a second time. The size of this second echo provides
information about the acoustic hardness of the seabed.
- The roughness of a seabed affects the variance in reflection time.
For a smooth seabed, all of the reflected sound will take much the
same path, resulting in a sharp spike in the data. For a rougher
seabed, sound will be reflected back over a larger area of seabed,
and some sound may bounce between seabed features before reflecting
to the surface. A less sharp spike in the data therefore indicates a
rougher seabed.
### Sonar and marine animals
Some marine animals, such as whales and dolphins, use echolocation
systems similar to active sonar to locate predators and prey. It is
feared that sonar transmitters could confuse these animals and cause
them to lose their way, perhaps preventing them from feeding and mating.
A recent article on the BBC Web site (see below) reports findings
published in the journal *Nature* to the effect that military sonar may
be inducing some whales to experience decompression sickness (and
resultant beachings).
High-powered sonar transmitters may indirectly harm marine animals,
although scientific evidence suggests that a confluence of factors must
first be present. In the Bahamas in 2000, a trial by the United States
Navy of a 230 decibel transmitter in the frequency range 3 -- 7 kHz
resulted in the beaching of sixteen whales, seven of which were found
dead. The Navy accepted blame in a report published in the Boston Globe
on 1 January 2002. However, at low powers, sonar can protect marine
mammals against collisions with ships.
A kind of sonar called mid-frequency sonar has been correlated with mass
cetacean strandings throughout the world's oceans, and has therefore
been singled out by environmentalists as causing the death of marine
mammals. International press coverage of these events can be found at
this active sonar news clipping Web
site. A lawsuit was filed in Santa Monica, California on 19 October 2005
contending that the U.S. Navy has conducted sonar exercises in violation
of several environmental laws, including the National Environmental
Policy Act, the Marine Mammal Protection Act, and the Endangered Species
Act.
## Passive sonar
Passive sonar listens without transmitting. It is usually employed in
military settings, although a few are used in science applications.
### Speed of sound
Sonar operation is affected by sound speed. Sound speed is slower in
fresh water than in sea water. In all water sound velocity is affected
by density (or the mass per unit of volume). Density is affected by
temperature, dissolved molecules (usually salinity), and pressure. The
speed of sound (in feet per second) is approximately equal to 4388 +
(11.25 × temperature (in °F)) + (0.0182 × depth (in feet)) + salinity (in
parts-per-thousand). This is an empirically derived approximation
equation that is reasonably accurate for normal temperatures,
concentrations of salinity and the range of most ocean depths. Ocean
temperature varies with depth, but at between 30 and 100 metres there is
often a marked change, called the thermocline, dividing the warmer
surface water from the cold, still waters that make up the rest of the
ocean. This can frustrate sonar, for a sound originating on one side of
the thermocline tends to be bent, or refracted, off the thermocline. The
thermocline may be present in shallower coastal waters, however, wave
action will often mix the water column and eliminate the thermocline.
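The empirical approximation quoted above translates directly into code; the sketch below simply evaluates it for example inputs (the input values are arbitrary illustrations, not survey data):

```python
# The empirical sound-speed approximation quoted above, in feet per second:
# c = 4388 + 11.25*T(degF) + 0.0182*depth(ft) + salinity(ppt).
def sound_speed_fps(temp_f, depth_ft, salinity_ppt):
    return 4388 + 11.25 * temp_f + 0.0182 * depth_ft + salinity_ppt

print(sound_speed_fps(60.0, 300.0, 35.0))   # ~5103 ft/s for these example inputs
```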
Water pressure also affects sound propagation. Increased pressure
increases the density of the water and raises the sound velocity.
Increases in sound velocity cause the sound waves to refract away from
the area of higher velocity. The mathematical model of refraction is
called Snell\'s law.
Sound waves that are radiated down into the ocean bend back up to the
surface in great arcs due to the effect of pressure on sound. The ocean
must be at least 6000 feet (1850 meters) deep, or the sound waves will
echo off the bottom instead of refracting back upwards. Under the right
conditions these waves will then be focused near the surface and
refracted back down and repeat another arc. Each arc is called a
convergence zone. Where an arc intersects the surface a CZ annulus is
formed. The diameter of the CZ depends on the temperature and salinity
of the water. In the North Atlantic, for example, CZs are found
approximately every 33 nautical miles (61 km), depending on the season,
forming a pattern of concentric circles around the sound source. Sounds
that can be detected for only a few miles in a direct line can therefore
also be detected hundreds of miles away. Typically the first, second and
third CZ are fairly useful; further out than that the signal is too
weak, and thermal conditions are too unstable, reducing the reliability
of the signals. The signal is naturally attenuated by distance, but
modern sonar systems are very sensitive.
### Identifying sound sources
Military sonar has a wide variety of techniques for identifying a
detected sound. For example, U.S. vessels usually operate 60 Hz
alternating current power systems. If transformers are mounted without
proper vibration insulation from the hull, or flooded, the 60 Hz sound
from the windings and generators can be emitted from the submarine or
ship, helping to identify its nationality. In contrast, most European
submarines have 50 Hz power systems. Intermittent noises (such as a
wrench being dropped) may also be detectable to sonar.
Passive sonar systems may have large sonic databases, however most
classification is performed manually by the sonar operator. A computer
system frequently uses these databases to identify classes of ships,
actions (i.e., the speed of a ship, or the type of weapon released), and
even particular ships. Publications for classification of sounds are
provided by and continually updated by the U.S. Office of Naval
Intelligence.
### Sonar in warfare
Modern naval warfare makes extensive use of sonar. The two types
described before are both used, but from different platforms, i.e.,
types of water-borne vessels.
Active sonar is extremely useful, since it gives the exact position of
an object. Active sonar works the same way as radar: a signal is
emitted. The sound wave then travels in many directions from the
emitting object. When it hits an object, the sound wave is then
reflected in many other directions. Some of the energy will travel back
to the emitting source. The echo will enable the sonar system or
technician to calculate, with many factors such as the frequency, the
energy of the received signal, the depth, the water temperature, etc.,
the position of the reflecting object. Using active sonar is somewhat
hazardous however, since it does not allow the sonar to identify the
target, and any vessel around the emitting sonar will detect the
emission. Having heard the signal, it is easy to identify the type of
sonar (usually with its frequency) and its position (with the sound
wave\'s energy). Moreover, active sonar, similar to radar, allows the
user to detect objects at a certain range but also enables other
platforms to detect the active sonar at a far greater range.
Since active sonar does not allow an exact identification and is very
noisy, this type of detection is used by fast platforms (planes,
helicopters) and by noisy platforms (most surface ships) but rarely by
submarines. When active sonar is used by surface ships or submarines, it
is typically activated very briefly at intermittent periods, to reduce
the risk of detection by an enemy\'s passive sonar. As such, active
sonar is normally considered a backup to passive sonar. In aircraft,
active sonar is used in the form of disposable sonobuoys that are
dropped in the aircraft\'s patrol area or in the vicinity of possible
enemy sonar contacts.
Passive sonar has fewer drawbacks. Most importantly, it is silent.
Generally, it has a much greater range than active sonar, and allows an
identification of the target. Since any motorized object makes some
noise, it may be detected eventually. It simply depends on the amount of
noise emitted and the amount of noise in the area, as well as the
technology used. To simplify, passive sonar \"sees\" around the ship
using it. On a submarine, the nose mounted passive sonar detects in
directions of about 270°, centered on the ship\'s alignment, the
hull-mounted array of about 160° on each side, and the towed array of a
full 360°. The no-see areas are due to the ship\'s own interference.
Once a signal is detected in a certain direction (which means that
something makes sound in that direction, this is called broadband
detection) it is possible to zoom in and analyze the signal received
(narrowband analysis). This is generally done using a Fourier transform
to show the different frequencies making up the sound. Since every
engine makes a specific noise, it is easy to identify the object.
Another use of the passive sonar is to determine the target\'s
trajectory. This process is called Target Motion Analysis (TMA), and the
resultant \"solution\" is the target\'s range, course, and speed. TMA is
done by marking from which direction the sound comes at different times,
and comparing the motion with that of the operator\'s own ship. Changes
in relative motion are analyzed using standard geometrical techniques
along with some assumptions about limiting cases.
Passive sonar is stealthy and very useful. However, it requires
high-tech components (band pass filters, receivers) and is costly. It is
generally deployed on expensive ships in the form of arrays to enhance
the detection. Surface ships use it to good effect; it is even better
used by submarines, and it is also used by airplanes and helicopters,
mostly for a \"surprise effect\", since submarines can hide under thermal
layers. If a submarine captain believes he is alone, he may bring his
boat closer to the surface and be easier to detect, or go deeper and
faster, and thus make more sound.
In the United States Navy, a special badge known as the Integrated
Undersea Surveillance System Badge is awarded to those who have been
trained and qualified in sonar operation and warfare.
In World War II, the Americans used the term **SONAR** for their system.
The British still called their system **ASDIC**. In 1948, with the
formation of NATO, standardization of signals led to the dropping of
ASDIC in favor of sonar.
# Acoustics/Interior Sound Transmission
## Introduction to NVH
Noise is characterized by frequency (20 Hz--20 kHz), level (dB) and
quality. Noise may be undesirable in some cases, i.e. road NVH yet may
be desirable in other cases, i.e. powerful sounding engine.
Vibration is defined as the motion sensed by the body, mainly in
0.5 Hz - 50 Hz range. It is characterized by frequency, level and
direction.
Harshness is defined as rough, grating or discordant sensation.
Sound quality is defined, according to the Oxford English Dictionary, as
\"that distinctive quality of a sound other than its pitch or loudness\".
In general, ground vehicle NVH and sound quality design fall into the
following categories:
1. Powertrain NVH and SQ: Interior Idle NVH, Acceleration NVH, Deceleration NVH, Cruising NVH, Sound Quality Character, Diesel Combustion Noise, Engine Start-Up/Shut-Down
2. Wind Noise: Motorway Speed Wind Noise (80-130 kph), High Speed Wind Noise (\>130 kph), Open Glazing Wind Noise
3. Road NVH: Road Noise, Road Vibration, Impact Noise
4. Operational Sound Quality: Closure Open/Shut Sound Quality, Customer Operated Feature Sound Quality, Audible Warning Sounds
5. Squeaks and Rattles
## Noise generation
## Structural vibration response
## Structural acoustic response
## Sound propagation
## Applications to vehicle interior sound transmission
## Useful websites
## References
# Acoustics/Anechoic and reverberation rooms
![](Acoustics_anechoic_reverberation.JPG "Acoustics_anechoic_reverberation.JPG")
## Introduction
Acoustic experiments often require measurements to be made in rooms with
special characteristics. Two types of rooms can be distinguished:
anechoic rooms and reverberation rooms.
## Anechoic room
The principle of this room is to simulate a free field. In a free space,
the acoustic waves are propagated from the source to infinity. In a
room, the reflections of the sound on the walls produce a wave which is
propagated in the opposite direction and comes back to the source. In
anechoic rooms, the walls are very absorbent in order to eliminate these
reflections. The sound seems to die down rapidly. The materials used on
the walls are rockwool, glasswool or foams, which are materials that
absorb sound in relatively wide frequency bands. Cavities are dug into the
wool so that the long wavelengths corresponding to low frequencies are
absorbed too. Ideally the sound pressure level of a point source
decreases by about 6 dB per doubling of distance.
Anechoic rooms are used in the following experiments:
```{=html}
<center>
```
Intensimetry: measurement of the acoustic power of a source.
```{=html}
</center>
```
```{=html}
<center>
```
Study of the source directivity.
```{=html}
</center>
```
## Reverberation room
The walls of a reverberation room mostly consist of concrete and are
covered with reflecting paint. Alternative designs consist of sandwich
panels with a metal surface. The sound reflects off the walls many times
before dying down. It gives an impression similar to the sound in a
cathedral. Ideally all sound energy is absorbed by the air. Because of all
these reflections, a lot of plane waves with different directions of
propagation interfere in each point of the room. Considering all the
waves is very complicated so the acoustic field is simplified by the
diffuse field hypothesis: the field is homogeneous and isotropic. Then
the pressure level is uniform in the room. The validity of this hypothesis
increases with frequency, resulting in a lower limiting
frequency for each reverberation room, above which the density of standing
waves is sufficient.
Several conditions are required for this approximation: the absorption
coefficient of the walls must be very low (α\<0.2), and the room must have
geometrical irregularities (non-parallel walls, diffusing objects) to
avoid pressure nodes of the resonance modes.
With this hypothesis, the theory of Sabine can be applied. It deals with
the reverberation time, which is the time required for the sound level to
decrease by 60 dB. T depends on the volume of the room V, the absorption
coefficients αi and the areas Si of the different materials in the room:
$$T_{60} = \frac{0.161\, V}{\sum_i \alpha_i S_i}$$
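As a small illustration (with invented room dimensions and absorption coefficients, not data for any real test room), Sabine's formula can be evaluated directly:

```python
# Sabine's reverberation time for a room, T60 = 0.161 * V / sum(alpha_i * S_i).
# Room volume and absorption coefficients below are illustrative guesses.
V = 200.0                                   # room volume [m^3]
surfaces = [                                # (area [m^2], absorption coefficient)
    (100.0, 0.02),                          # concrete walls
    (50.0, 0.03),                           # ceiling
    (50.0, 0.10),                           # floor
]

A = sum(S * alpha for S, alpha in surfaces)  # equivalent absorption area [m^2]
T60 = 0.161 * V / A
print(f"Reverberation time T60 ~ {T60:.1f} s")
```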
Reverberation rooms are used in the following experiments:
```{=html}
<center>
```
measurement of the ability of a material to absorb a sound
```{=html}
</center>
```
```{=html}
<center>
```
measurement of the ability of a partition to transmit a sound
```{=html}
</center>
```
```{=html}
<center>
```
Intensimetry
```{=html}
</center>
```
```{=html}
<center>
```
measurement of sound power
```{=html}
</center>
```
# Acoustics/Basic Room Acoustic Treatments
![](Acoustics_basic_room_acoustic_treatments.JPG "Acoustics_basic_room_acoustic_treatments.JPG")
## Introduction
Many people use one or two rooms in their living space as \"theatrical\"
rooms where theater or music room activities commence. It is a common
misconception that adding speakers to the room will enhance the quality
of the room acoustics. There are other simple things that can be done to
improve the room\'s acoustics and produce sound that is similar to
\"theater\" sound. This page will take you through some simple
background knowledge on acoustics and then explain some solutions that
will help improve sound quality in a room.
## Room sound combinations
The sound you hear in a room is a combination of direct sound and
indirect sound. Direct sound will come directly from your speakers while
the other sound you hear is reflected off of various objects in the
room.
![](sound_lady.jpg "sound_lady.jpg")
The direct sound comes straight from the TV to the listener, as shown by
the heavy black arrow. All of the other sound is reflected
off surfaces before it reaches the listener.
## Good and bad reflected sound
Have you ever listened to speakers outside? You might have noticed that
the sound is thin and dull. This occurs because outdoors there are almost
no reflections; when sound is reflected, it sounds fuller and louder than
it would in an open space. So reflected sound can add fullness, or
spaciousness. The bad part of reflected sound occurs when the reflections
amplify some notes while cancelling out others, making the sound distorted.
It can also affect tonal quality and create an echo-like effect. There are
three ways a room surface can treat sound: pure reflection, absorption,
and diffusion. Each is important in creating a \"theater\"-type
acoustic room.
![](sound.jpg "sound.jpg")
### Reflected sound
Reflected sound waves, good and bad, affect the sound you hear, where it
comes from, and the quality of the sound when it gets to you. The bad
news when it comes to reflected sound is standing waves.
These waves are created when sound is reflected back and forth between
any two parallel surfaces in your room, ceiling and floor or wall to
wall.
Standing waves can distort sounds of about 300 Hz and below. These include
the lower mid frequency and bass ranges. Standing waves tend to collect
near the walls and in corners of a room, these collecting standing waves
are called room resonance modes.
#### Finding your room resonance modes
First, specify room dimensions (length, width, and height). **Then
follow this example:**
![](equationandexample.jpg "equationandexample.jpg")![](Resmodepic.jpg "Resmodepic.jpg")![](exampletable.jpg "exampletable.jpg")
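For readers who prefer a computational version, the sketch below lists the lowest resonance modes of a rectangular room from the standard formula $f = \frac{c}{2}\sqrt{(n_x/L)^2 + (n_y/W)^2 + (n_z/H)^2}$; the room dimensions are arbitrary examples and may differ from those in the example image above:

```python
# Sketch of the room-mode frequencies for a rectangular room,
# f = (c/2) * sqrt((nx/L)^2 + (ny/W)^2 + (nz/H)^2); dimensions are examples.
import itertools, math

c = 343.0                      # speed of sound [m/s]
L, W, H = 5.0, 4.0, 2.5        # room length, width, height [m]

modes = []
for nx, ny, nz in itertools.product(range(3), repeat=3):
    if (nx, ny, nz) == (0, 0, 0):
        continue
    f = c / 2 * math.sqrt((nx / L) ** 2 + (ny / W) ** 2 + (nz / H) ** 2)
    modes.append((round(f, 1), (nx, ny, nz)))

for f, n in sorted(modes)[:5]:             # the five lowest resonance modes
    print(f"{f:6.1f} Hz  mode {n}")
```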
#### Working with room resonance modes to increase sound quality
##### There are some room dimensions that produce the largest amount of standing waves.
1. Cube
2. Room with 2 out of the three dimensions equal
3. Rooms with dimensions that are multiples of each other
##### Move chairs or sofas away from the walls or corners to reduce standing wave effects
### Absorbed sound
The sound that humans hear is actually a form of acoustic energy.
Different materials absorb different amounts of this energy at different
frequencies. When considering room acoustics, there should be a good mix
of high frequency absorbing materials and low frequency absorbing
materials. A table with information on how different common household
materials absorb sound can be found
here.
### Diffused sound
Using devices that diffuse sound is a fairly new way of increasing
acoustic performance in a room. It is a means to create sound that
appears to be \"live\". They can replace echo-like reflections without
absorbing too much sound.
Some ways of determining where diffusive items should be placed were
found on this website:
<http://www.crutchfieldadvisor.com/S-hpU9sw2hgbG/learningcenter/home/speakers_roomacoustics.html?page=4>.
1. If you have carpet or drapes already in your room, use diffusion to
control side wall reflections.
2. A bookcase filled with odd-sized books makes an effective diffuser.
3. Use absorptive material on room surfaces between your listening
position and your front speakers, and treat the back wall with
diffusive material to re-distribute the reflections.
## How to find overall trouble spots in a room
Every surface in a room does not have to be treated in order to have
good room acoustics. Here is a simple method of finding trouble spots in
a room.
1. Grab a friend to hold a mirror along the wall near a certain speaker
at speaker height.
2. The listener sits in a spot of normal viewing.
3. The friend then moves slowly toward the listening position (stay
along the wall).
4. Mark each spot on the wall where the listener can see any of the
room speakers in the mirror.
5. Congratulations! These are the trouble spots in the room that need
an absorptive material in place. Don\'t forget that diffusive
material can also be placed in those positions.
## References
- Acoustic Room Treatment
Articles
- Room Acoustics: Acoustic
Treatments
- Home Improvement: Acoustic
Treatments
- Crutchfield
Advisor
# Acoustics/Human Vocal Fold
![](Acoustics_human_vocal_fold.JPG "Acoustics_human_vocal_fold.JPG")
## Physiology of vocal fold
The human vocal fold is a set of lip-like tissues located inside the
larynx, and is the source of
sound for humans and many animals.
The larynx is located at the top of the trachea. It is mainly composed
of cartilages and muscles, and the largest cartilage, the thyroid, is well
known as the \"Adam\'s apple.\"
The organ has two main functions: to act as the last protector of the
airway, and to act as a sound source for voice. This page focuses on the
latter function.
Links on Physiology:
- Discover The
Larynx
## Voice production
Although the science behind sound production for a vocal fold is
complex, it can be thought of as similar to a brass player\'s lips, or a
whistle made out of grass. Basically, vocal folds (or lips or a pair of
grass) make a constriction to the airflow, and as the air is forced
through the narrow opening, the vocal folds oscillate. This causes a
periodical change in the air pressure, which is perceived as sound.
Vocal Folds Video
When the airflow is introduced to the vocal folds, it forces open the
two vocal folds, which are nearly closed initially. Due to the stiffness
of the folds, they will then try to close the opening again, and the
airflow will again try to force the folds open, and so on. This creates an
oscillation of the vocal folds, which in turn, as stated above,
creates sound. However, this is a damped oscillation, meaning it will
eventually achieve an equilibrium position and stop oscillating. So how
are we able to \"sustain\" sound?
As it will be shown later, the answer seems to be in the changing shape
of vocal folds. In the opening and the closing stages of the
oscillation, the vocal folds have different shapes. This affects the
pressure in the opening, and creates the extra pressure needed to push
the vocal folds open and sustain oscillation. This part is explained in
more detail in the \"Model\" section.
This flow-induced oscillation, as with many fluid mechanics problems, is
not an easy problem to model. Numerous attempts to model the oscillation
of vocal folds have been made, ranging from a single mass-spring-damper
system to finite element models. In this page I would like to use my
single-mass model to explain the basic physics behind the oscillation of
a vocal fold.
Information on vocal fold models: National Center for Voice and
Speech
## Model
Figure 1: Schematic of the single-mass model.
The most simple way of simulating the motion of vocal folds is to use a
single mass-spring-damper system as shown above. The mass represents one
vocal fold, and the second vocal fold is assumed to be symmetry about
the axis of symmetry. Position 3 represents a location immediately past
the exit (end of the mass), and position 2 represents the glottis (the
region between the two vocal folds).
### The pressure force
The major driving force behind the oscillation of vocal folds is the
pressure in the glottis. The Bernoulli\'s equation from fluid mechanics
states that:
$P_1 + \frac{1}{2}\rho U^2 + \rho gh = Constant$ \-\-\-\--EQN 1
Neglecting potential difference and applying EQN 1 to positions 2 and 3
of Figure 1,
$P_2 + \frac{1}{2}\rho U_2^2 = P_3 + \frac{1}{2}\rho U_3^2$ \-\-\-\--EQN
2
Note that the pressure and the velocity at position 3 cannot change.
This makes the right hand side of EQN 2 constant. Observation of EQN 2
reveals that in order to have an oscillating pressure at 2, we must have
an oscillating velocity at 2. The flow velocity inside the glottis can be
studied through the theories of the orifice flow.
The constriction of airflow at the vocal folds is much like an orifice
flow with one major difference: with vocal folds, the orifice profile is
continuously changing. The orifice profile for the vocal folds can open
or close, as well as change the shape of the opening. In Figure 1, the
profile is converging, but in another stage of oscillation it takes a
diverging shape.
The orifice flow is described by Blevins as:
$U = C\sqrt{\frac{2(P_1 - P_3)}{\rho}}$ \-\-\-\--EQN 3
Where the constant C is the orifice coefficient, governed by the shape
and the opening size of the orifice. This number is determined
experimentally, and it changes throughout the different stages of
oscillation.
Solving equations 2 and 3, the pressure force throughout the glottal
region can be determined.
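As a rough numerical sketch of how EQN 2 and EQN 3 combine (the lung pressure, the areas and the orifice coefficient below are invented values, and the flow is treated as quasi-steady), the glottal velocity and pressure can be estimated as follows:

```python
# Rough numerical sketch of EQN 2 and EQN 3: the glottal velocity from the
# orifice relation, then the glottal pressure from Bernoulli between points
# 2 and 3. Lung pressure, areas and the orifice coefficient C are assumed.
import math

rho = 1.2                      # air density [kg/m^3]
P1 = 800.0                     # subglottal (lung) pressure above ambient [Pa]
P3 = 0.0                       # pressure just past the exit, taken as ambient
C = 0.8                        # orifice coefficient (changes during the cycle)
A2, A3 = 0.05e-4, 2.0e-4       # glottal and supraglottal areas [m^2]

U2 = C * math.sqrt(2 * (P1 - P3) / rho)               # EQN 3: velocity in the glottis
U3 = U2 * A2 / A3                                     # continuity of volume flow
P2 = P3 + 0.5 * rho * U3 ** 2 - 0.5 * rho * U2 ** 2   # EQN 2 rearranged
print(f"U2 = {U2:.1f} m/s, P2 = {P2:.0f} Pa")         # P2 drops below ambient
```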
### The Collision Force
As the video of the vocal folds shows, vocal folds can completely close
during oscillation. When this happens, the Bernoulli equation fails.
Instead, the collision force becomes the dominating force. For this
analysis, Hertz collision model was applied.
$F_H = k_H \delta^{3/2} (1 + b_H \delta')$ \-\-\-\--EQN 4
where
$k_H = \frac{4}{3} \frac{E}{1 - \mu_H^2} \sqrt{r}$
Here $\delta$ is the penetration distance of the vocal fold past the line
of symmetry.
## Simulation of the model
The pressure and the collision forces were inserted into the equation of
motion, and the result was simulated.
Figure 2: Area opening and volumetric flow rate.
Figure 2 shows that an oscillating volumetric flow rate was achieved by
passing a constant airflow through the vocal folds. When simulating the
oscillation, it was found that the collision force limits the amplitude
of oscillation rather than driving it, which tells us that
the pressure force is what allows the sustained oscillation to occur.
## The acoustic output
This model showed that the changing profile of glottal opening causes an
oscillating volumetric flow rate through the vocal folds. This will in
turn cause an oscillating pressure past the vocal folds. This method of
producing sound is unusual, because in most other means of sound
production, air is compressed periodically by a solid such as a speaker
cone.
Past the vocal folds, the produced sound enters the vocal tract.
Basically this is the cavity in the mouth as well as the nasal cavity.
These cavities act as acoustic filters, modifying the character of the
sound. These are the characters that define the unique voice each person
produces.
## Related links
- FEA
Model
- Two Mass
Model
## References
1. Fundamentals of Acoustics; Kinsler et al, John Wiley & Sons, 2000
2. Acoustics: An introduction to its Physical Principles and
Applications; Pierce, Allan D., Acoustical Society of America, 1989.
3. Blevins, R.D. (1984). Applied Fluid Dynamics Handbook. Van Nostrand
Reinhold Co. 81-82.
4. Titze, I. R. (1994). Principles of Voice Production. Prentice-Hall,
Englewood Cliffs, NJ.
5. Lucero, J. C., and Koenig, L. L. (2005). Simulations of temporal
patterns of oral airflow in men and women using a two-mass model of
the vocal folds under dynamic control, Journal of the Acostical
Society of America 117, 1362-1372.
6. Titze, I.R. (1988). The physics of small-amplitude oscillation of
the vocal folds. Journal of the Acoustical Society of America 83,
1536--1552
# Acoustics/How an Acoustic Guitar Works
![](Acoustics_how_an_acoustic_guitar_works.JPG "Acoustics_how_an_acoustic_guitar_works.JPG")
What are the vibrations that contribute to sound production? First of
all, there are the strings. Any string that is under tension will
vibrate at a certain frequency. The weight and length of the string, the
tension in the string, and the compliance of the string determine the
frequency at which it vibrates. The guitar controls the length and
tension of six differently weighted strings to cover a very wide range
of frequencies. Second, there is the body of the guitar. The guitar body
is very important for the lower frequencies of the guitar. The air mass
just inside the sound hole oscillates, compressing and decompressing the
compliant air inside the body. In practice this concept is called a
Helmholtz resonator. Without this, it would be difficult to produce the
wonderful timbre of the guitar.
![](Acoustic_guitar-en.svg "Acoustic_guitar-en.svg"){width="320"}
## The strings
The strings of the guitar vary in linear density, length, and tension.
This gives the guitar a wide range of attainable frequencies. The larger
the linear density is, the slower the string vibrates. The same goes for
the length; the longer the string is the slower it vibrates. This causes
a low frequency. Inversely, if the strings are less dense and/or shorter
they create a higher frequency. The resonance frequencies of the strings
can be calculated by
$$f_1 = \frac{\sqrt{\frac{T}{\rho_l}}}{2 L}\quad \text{with}\quad T = \text{string tension},\ \rho_l = \text{linear density of string},\ L = \text{string length}.$$
The string length, $L$, in the equation is what changes when a player
presses on a string at a certain fret. This will shorten the string
which in turn increases the frequency it produces when plucked. The
spacing of these frets is important. The length from the nut to bridge
determines how much space goes between each fret. If the length is 25
inches, then the position of the first fret should be located
(25/17.817) inches from the nut. The second fret should then be located
((25 − 25/17.817)/17.817) inches from the
first fret. This results in the equation
$$d = \frac{L}{17.817} \quad \text{with}\quad d = \text{spacing between frets}, L = \text{length from previous fret to bridge}.$$
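A short sketch of this rule (assuming the 25-inch scale length from the example above) places the frets one after another:

```python
# Sketch of the fret-spacing rule described above: each fret sits 1/17.817 of
# the remaining distance to the bridge from the previous fret (25-inch scale).
scale = 25.0                       # nut-to-bridge length [inches]
remaining = scale
positions = []                     # distance of each fret from the nut
for fret in range(1, 13):          # first octave of frets
    d = remaining / 17.817         # spacing from the previous fret
    remaining -= d
    positions.append(scale - remaining)

print(f"1st fret at {positions[0]:.3f} in from the nut")    # 25/17.817 ~ 1.403 in
print(f"12th fret at {positions[11]:.3f} in from the nut")  # about half the scale
```

The twelfth fret lands at roughly half the scale length, which corresponds to the octave and is a useful sanity check for the rule.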
When a string is plucked, a disturbance is formed and travels in both
directions away from point where the string was plucked. These \"waves\"
travel at a speed that is related to the tension and linear density and
can be calculated by
$$c = \sqrt{\frac{T}{\rho_l}} \quad \text{with}\quad c = \text{wave speed},\ T = \text{string tension},\ \rho_l = \text{linear density}.$$
The waves travel until they reach the boundaries on each end where they
are reflected back. The link below displays how the waves propagate in a
string.
Plucked String @
www.phys.unsw.edu
The strings themselves do not produce very much sound because they are
so thin. They can\'t \"push\" the air that surrounds them very
effectively. This is why they are connected to the top plate of the
guitar body. They need to transfer the frequencies they are producing to
a large surface area which can create more intense pressure
disturbances.
## The body
The body of the guitar transfers the vibrations of the bridge to the air
that surrounds it. The top plate contributes to most of the pressure
disturbances, because the player dampens the back plate and the sides
are relatively stiff. This is why it is important to make the top plate
out of a light springy wood, like spruce. The more the top plate can
vibrate, the louder the sound it produces will be. It is also important
to keep the top plate flat, so a series of braces are located on the
inside to strengthen it. Without these braces the top plate would bend
and crack under the large stress created by the tension in the strings.
This would also affect the magnitude of the sound being transmitted. The
warped plate would not be able to \"push\" air very efficiently. A good
experiment to try, in order to see how important this part of the guitar
is in the amplification process, is as follows:
1. Start with an ordinary rubber band, a large bowl, adhesive tape, and
plastic wrap.
2. Stretch the rubber band and pluck it a few times to get a good sense
for how loud it is.
3. Stretch the plastic wrap over the bowl to form a sort of drum.
4. Tape down one end of the rubber band to the plastic wrap.
5. Stretch the rubber band and pluck it a few times.
6. The sound should be much louder than before.
## The air
The final part of the guitar is the air inside the body. This is very
important for the lower range of the instrument. The air just inside the
soundhole oscillates compressing and expanding the air inside the body.
This is just like blowing across the top of a bottle and listening to
the tone it produces. This forms what is called a Helmholtz resonator.
For more information on Helmholtz resonators go to Helmholtz
Resonance. This link
also shows the correlation to acoustic guitars in great detail. The
acoustic guitar makers often tune these resonators to have a resonance
frequency between F#2 and A2 (92.5 to 110.0 Hz). Having such a low
resonance frequency is what aids the amplification of the lower
frequency strings. To demonstrate the importance of the air in the
cavity, simply play an open A on the guitar (the fifth string - second
lowest note). Now, as the string is vibrating, place a piece of
cardboard over the soundhole. The sound level is reduced dramatically.
This is because you\'ve stopped the vibration of the air mass just
inside the soundhole, causing only the top plate to vibrate. Although
the top plate still vibrates and transmits sound, it isn\'t as effective
at transmitting lower frequency waves, thus the need for the Helmholtz
resonator.
# Acoustics/Basic Acoustics of the Marimba
![](Acoustics_basic_acoustics_marimba.JPG "Acoustics_basic_acoustics_marimba.JPG")
## Introduction
Like a xylophone, a marimba has octaves of wooden bars that are struck
with mallets to produce tones. Unlike the harsh sound of a xylophone, a
marimba produces a deep, rich tone. Marimbas are not uncommon and are
played in most high school bands.
Now, while all the trumpet and flute and clarinet players are busy
tuning up their instruments, the marimba player is back in the
percussion section with her feet up just relaxing. This is a bit
surprising, however, since the marimba is a melodic instrument that
needs to be in tune to sound good. So what gives? Why is the marimba
never tuned? How would you even go about tuning a marimba? To answer
these questions, the acoustics behind (or within) a marimba must be
understood.
## Components of sound
What gives the marimba its unique sound? It can be boiled down to two
components: the bars and the resonators. Typically, the bars are made of
rosewood (or some synthetic version of wood). They are cut to size
depending on what note is desired, then the tuning is refined by shaving
wood from the underside of the bar.
### Example
Rosewood bar, middle C, 1 cm thick
The equation that relates the length of the bar with the desired
frequency comes from the theory of modeling a bar that is free at both
ends. This theory yields the following equation:
$Length = \sqrt{\frac{3.011^2\cdot \pi \cdot t \cdot c}{8 \cdot \sqrt{12}\cdot f}}$
where t is the thickness of the bar, c is the speed of sound in the bar,
and f is the frequency of the note.
For rosewood, c = 5217 m/s. For middle C, f=262 Hz.
Therefore, to make a middle C key for a rosewood marimba, cut the bar to
be:
$Length = \sqrt{\frac{3.011^2\cdot \pi \cdot .01 \cdot 5217}{8 \cdot \sqrt{12}\cdot 262}}= .45 m = 45 cm$
\*\*\*
The resonators are made from metal (usually aluminum) and their lengths
also differ depending on the desired note. It is important to know that
each resonator is open at the top but closed by a stopper at the bottom
end.
### Example
Aluminum resonator, middle C
The equation that relates the length of the resonator with the desired
frequency comes from modeling the resonator as a pipe that is driven at
one end and closed at the other end. A \"driven\" pipe is one that has a
source of excitation (in this case, the vibrating key) at one end. This
model yields the following:
$Length = \frac {c}{4\cdot f}$
where c is the speed of sound in air and f is the frequency of the note.
For air, c = 343 m/s. For middle C, f = 262 Hz.
Therefore, to make a resonator for the middle C key, the resonator
length should be:
$Length = \frac {343}{4 \cdot 262} = .327m = 32.7 cm$
### Resonator shape
The shape of the resonator is an important factor in determining the
quality of sound that can be produced. The ideal shape is a sphere. This
is modeled by the Helmholtz resonator. However, mounting big, round,
beach ball-like resonators under the keys is typically impractical. The
worst choices for resonators are square or oval tubes. These shapes
amplify the non-harmonic pitches sometimes referred to as "junk
pitches". The round tube is typically chosen because it does the best
job (aside from the sphere) at amplifying the desired harmonic and not
much else.
As mentioned in the second example above, the resonator on a marimba can
be modeled by a closed pipe. This model can be used to predict what type
of sound (full and rich vs dull) the marimba will produce. Each pipe is
a \"quarter wave resonator\" that amplifies the sound waves produced by
of the bar. This means that in order to produce a full, rich sound, the
length of the resonator must exactly match one-quarter of the
wavelength. If the length is off, the marimba will produce a dull or
off-key sound for that note.
## Why would the marimba need tuning?
In the theoretical world where it is always 72 degrees with low
humidity, a marimba would not need tuning. But, since weather can be a
factor (especially for the marching band) marimbas do not always perform
the same way. Hot and cold weather can wreak havoc on all kinds of
percussion instruments, and the marimba is no exception. On hot days the
marimba tends to be sharp, and on cold days it tends to be flat.
This is the exact opposite of what happens to string instruments. Why?
The tone of a string instrument depends mainly on the tension in the
string, which decreases as the string expands with heat. The decrease in
tension leads to a flat note. Marimbas on the other hand produce sound
by moving air through the resonators. The speed at which this air is
moved is the speed of sound, which varies proportionately with
temperature! So, as the temperature increases, so does the speed of
sound. From the equation given in example 2 from above, you can see that
an increase in the speed of sound (c) means a longer pipe is needed to
resonate the same note. If the length of the resonator is not increased,
the note will sound sharp. Now, the heat can also cause the wooden bars
to expand, but the effect of this expansion is insignificant compared to
the effect of the change in the speed of sound.
## Tuning myths
It is a common myth among percussionists that the marimba can be tuned
by simply moving the resonators up or down (while the bars remain in the
same position.) The thought behind this is that by moving the resonators
down, for example, you are in effect lengthening them. While this may
sound like sound reasoning, it actually does not hold true in practice.
Judging by how the marimba is constructed (cutting bars and resonators
to specific lengths), it seems that there are really two options to
consider when looking to tune a marimba: shave some wood off the
underside of the bars, or change the length of the resonator. For
obvious reasons, shaving wood off the keys every time the weather
changes is not a practical solution. Therefore, the only option left is
to change the length of the resonator.
As mentioned above, each resonator is plugged by a stopper at the bottom
end. So, by simply shoving the stopper farther up the pipe, you can
shorten the resonator and sharpen the note. Conversely, pushing the
stopper down the pipe can flatten the note. Most marimbas do not come
with tunable resonators, so this process can be a little challenging.
(Broomsticks and hammers are common tools of the trade.)
### Example
Middle C Resonator lengthened by 1 cm
For ideal conditions, the length of the middle C (262 Hz) resonator
should be 32.7 cm as shown in example 2. Therefore, the change in
frequency for this resonator due to a change in length is given by:
$\Delta Frequency = 262 Hz - \frac {c}{4\cdot (.327 + \Delta L)}$
If the length is increased by 1 cm, the change in frequency will be:
$\Delta Frequency = 262\,Hz - \frac {343}{4\cdot (.327 + .01)} = 7.5\,Hz$
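A short Python sketch of the quarter-wave relations used in this example. The small difference from the 7.5 Hz quoted above comes from rounding the resonator length to 0.327 m before adding the extra centimetre.

```python
def resonator_length(frequency, c=343.0):
    """Quarter-wave length (m) of a pipe driven at one end and closed at the other."""
    return c / (4.0 * frequency)

def detuning(frequency, delta_length, c=343.0):
    """Frequency error (Hz) when the resonator is longer than ideal by delta_length (m)."""
    ideal = resonator_length(frequency, c)
    return frequency - c / (4.0 * (ideal + delta_length))

print(round(resonator_length(262.0), 4), "m")   # ~0.327 m for middle C
print(round(detuning(262.0, 0.01), 1), "Hz")    # ~7.8 Hz; the 7.5 Hz above uses the rounded 0.327 m
```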
The acoustics behind tuning a marimba go back to the design requirement
that each resonator be ¼ of the total wavelength of the desired note.
When marimbas get out of tune, this length is no longer exactly equal to
¼ the wavelength due to the lengthening or shortening of the resonator
as described above. Because the length has changed, resonance is no
longer achieved, and the tone can become muffled or off-key.
## Conclusions
Some marimba builders are now changing their designs to include tunable
resonators. Since any leak in the end-seal will cause major loss of
volume and richness of the tone, this is proving to be a very difficult
task. At least now, though, armed with the acoustic background of their
instruments, percussionists everywhere will have something to do
when the conductor says, "tune up!"
## Links and References
1. <http://www.gppercussion.com/html/resonators.html>
2. <http://www.mostlymarimba.com/>
3. <http://www.craftymusicteachers.com/bassmarimba/>
|
# Acoustics/Bessel Functions and the Kettledrum
![](Acoustics_bessel_ketteldrum.JPG "Acoustics_bessel_ketteldrum.JPG")
## Introduction
In class, we have begun to discuss the solutions of multidimensional
wave equations. A particularly interesting aspect of these
multidimensional solutions are those of bessel functions for circular
boundary conditions. The practical application of these solutions is the
kettledrum. This page will explore in qualitative and quantitative terms
how the kettledrum works. More specifically, the kettledrum will be
introduced as a circular membrane and its solution will be discussed
with visuals (e.g. visualizations of Bessel functions and video of
kettledrums) and audio (WAV files of kettledrums playing). In
addition, links to more information about this material, including
references, are included.
## The math behind the kettledrum: the brief version
When one looks at how a kettledrum produces sound, one should look no
farther than the drum head. The vibration of this circular membrane (and
the air in the drum enclosure) is what produces the sound in this
instrument. The mathematics behind this vibrating drum are relatively
simple. If one looks at a small element of the drum head, it looks
exactly like the mathematical model for a vibrating string. The
only difference is that there are two dimensions where there are forces
on the element, the two dimensions that are planar to the drum. As this
is the same situation, we have the same equation, except with another
spatial term in the other planar dimension. This allows us to model the
drum head using a Helmholtz equation. The next step (solved in detail
below) is to assume that the displacement of the drum head (in polar
coordinates) is a product of two separate functions for theta and r.
This allows us to turn the PDE into two ODEs which are readily solved
and applied to the situation of the kettledrum head. For more info, see
below.
## The math behind the kettledrum: the derivation
So starting with the trusty general Helmholtz equation:
$$\nabla^2\Psi+k^2\Psi=0.$$
Where $k$ is the wave number, the frequency of the forced oscillations
divided by the speed of sound in the membrane.
Since we are dealing with a circular object, it makes sense to work in
polar coordinates (in terms of radius and angle) instead of rectangular
coordinates. For polar coordinates the Laplacian term of the Helmholtz
relation ($\nabla^2$) becomes
$\frac{\partial^2 \Psi}{\partial r^2} + \frac{1}{r} \frac{\partial\Psi}{\partial r} +\frac{1}{r^2} \frac{\partial^2 \Psi}{\partial \theta^2}$
Using the method of separation of variables (see Reference 3 for more
info), we will assume a solution of the form
$$\Psi (r,\theta) = R(r) \Theta(\theta).$$
Substituting this result back into our trusty Helmholtz equation, then
multiplying through by $r^2/(R\Theta)$ gives
$$\frac{1}{R} \left(r^2\frac{d^2 R}{dr^2} + r \frac{dR}{dr}\right) + k^2 r^2 = -\frac{1}{\Theta} \frac{d^2 \Theta}{d\theta^2},$$
where we moved the $\theta$-dependent terms to the right hand side.
Since we separated the variables of the solution into two
one-dimensional functions, the partial derivatives become ordinary
derivatives. In order for the above equality to hold regardless of
changes in $r$ and $\theta$, both sides must be equal to some constant.
For simplicity, I will use $\lambda^2$ as this constant. This results in
the following two equations:
$$\frac{d^2 \Theta}{d\theta^2} = -\lambda^2 \Theta,$$
$$r^2\frac{d^2 R}{dr^2} + r \frac{dR}{dr} + (k^2 r^2 - \lambda^2) R = 0.$$
The first of these equations is readily seen as the standard second order
ordinary differential equation which has a harmonic solution of sines
and cosines with the frequency based on $\lambda$. The second equation
is what is known as Bessel\'s Equation. The solution to this equation is
cryptically called Bessel functions of order $\lambda$ of the first and
second kind. These functions, while sounding very intimidating, are
simply oscillatory functions of the radius times the wave number. Both
sets of functions diminish as $kr$ becomes large, but the Bessel
functions of the second kind are unbounded as $kr$ goes to zero.
!Bessel functions of the first kind.{width="320"}
Now that we have the general solution to this equation, we can model
an infinite-radius kettledrum head. However, since I have yet to see an
infinite kettledrum, we need to constrain this solution of a vibrating
membrane to a finite radius. We can do this by applying what we know
about our circular membrane: along the edges of the kettledrum, the drum
head is attached to the drum. This means that there can be no
displacement of the membrane at the termination at the radius of the
kettle drum. This boundary condition can be mathematically described as
the following:
$$R(a) = 0$$
where $a$ is the radius of the kettledrum. In addition to this
boundary condition, the displacement of the drum head at the center must
be finite. This second boundary condition removes the Bessel function of
the second kind from the solution. This reduces the $R$ part of our
solution to:
$$R(r) = AJ_{\lambda}(kr)$$
where $J_{\lambda}$ is a Bessel function of the first kind of order
$\lambda$. Applying our other boundary condition at the radius of the drum
requires that the wave number $k$ take the discrete values
$k_{\lambda n} = j_{\lambda n}/a$, where $j_{\lambda n}$ is the $n$-th zero
of $J_{\lambda}$ and can be looked up in tables. Combining all of these gives us our
solution to how a drum head behaves (which is the real part of the
following):
$$y_{\lambda n}(r,\theta,t) = A_{\lambda n} J_{\lambda}(k_{\lambda n} r)e^{j \lambda \theta+j \omega_{\lambda n} t}$$
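To make this concrete, here is a minimal Python sketch that evaluates the ideal-membrane mode frequencies from the zeros of $J_{\lambda}$ using SciPy. The 0.33 m head radius and 100 m/s transverse wave speed are illustrative assumptions, and the air loading of the kettle (discussed below) is ignored.

```python
import math
from scipy.special import jn_zeros

def membrane_mode_frequencies(radius, wave_speed, max_order=2, zeros_per_order=3):
    """Modal frequencies (Hz) of an ideal circular membrane fixed at its rim.

    f = j_mn * c / (2 * pi * a), where j_mn is the n-th zero of J_m.
    Air loading by the kettle is not modeled here.
    """
    modes = {}
    for m in range(max_order + 1):
        for n, j_mn in enumerate(jn_zeros(m, zeros_per_order), start=1):
            modes[(m, n)] = j_mn * wave_speed / (2.0 * math.pi * radius)
    return modes

# Assumed values: 0.33 m head radius, 100 m/s transverse wave speed in the membrane.
for (m, n), f in sorted(membrane_mode_frequencies(0.33, 100.0).items(), key=lambda kv: kv[1]):
    print(f"mode ({m},{n}): {f:6.1f} Hz")
```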
## The math behind the kettledrum: the entire drum
The above derivation is just for the drum head. An actual kettledrum has
one side of this circular membrane surrounded by an enclosed cavity.
This means that air is compressed in the cavity when the membrane is
vibrating, adding more complications to the solution. In mathematical
terms, this makes the partial differential equation non-homogeneous or
in simpler terms, the right side of the Helmholtz equation does not
equal zero. This result requires significantly more derivation, and will
not be done here. If the reader cares to know more, these results are
discussed in the two books under references 6 and 7.
## Sites of interest
As one can see from the derivation above, the kettledrum is very
interesting mathematically. However, it also has a rich historical music
tradition in various places of the world. As this page\'s emphasis is on
math, a few links are provided below that reference this rich
history.
- A discussion of Persian kettledrums: Kettle drums of Iran and other
countries
- A discussion of kettledrums in classical music: Kettle drum
Lit.
- A massive resource for kettledrum history, construction and
technique\" Vienna Symphonic
Library
## References
1. Eric W. Weisstein. \"Bessel Function of the First Kind.\" From
MathWorld---A Wolfram Web Resource.
<http://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html>
2. Eric W. Weisstein. \"Bessel Function of the Second Kind.\" From
MathWorld---A Wolfram Web Resource.
<http://mathworld.wolfram.com/BesselFunctionoftheSecondKind.html>
3. Eric W. Weisstein. \"Bessel Function.\" From MathWorld---A Wolfram
Web Resource. <http://mathworld.wolfram.com/BesselFunction.html>
4. Eric W. Weisstein et al. \"Separation of Variables.\" From
MathWorld---A Wolfram Web Resource.
<http://mathworld.wolfram.com/SeparationofVariables.html>
5. Eric W. Weisstein. \"Bessel Differential Equation.\" From
MathWorld---A Wolfram Web Resource.
<http://mathworld.wolfram.com/BesselDifferentialEquation.html>
6. Kinsler and Frey, \"Fundamentals of Acoustics\", fourth edition,
Wiley & Sons
7. Haberman, \"Applied Partial Differential Equations\", fourth
edition, Prentice Hall Press
|
# Acoustics/Acoustics in Violins
For a detailed anatomy of the violin, please refer to Atelierla
Bussiere.
![](Violin_front_view.jpg "Violin_front_view.jpg")![](backview.jpg "backview.jpg")
## How does a violin make sound?
### General concept
When a violinist bows a string, which can produce vibrations with
abundant harmonics, the vibrations of the strings are structurally
transmitted to the bridge and the body of the instrument through the
bridge. The bridge transmits the vibrational energy produced by the
strings to the body through its feet, further triggering the vibration
of body. The vibration of the body determines sound radiation and sound
quality, along with the resonance of the cavity.
![](Acoustics_in_violins_procedure.jpg "Acoustics_in_violins_procedure.jpg")
### String
The vibration pattern of the strings can easily be observed. To the
naked eye, the string appears to move back and forth in a parabolic
shape (see figure), which resembles the first mode of free vibration of
a stretched string. The vibration of strings was first investigated by
Hermann von Helmholtz, the famous mathematician and physicist of the
19th century. Surprisingly, he discovered that the string actually moves
in an inverse "V" shape rather than a parabola (see figure); what we see
is just the envelope of the motion of the string. To honor his findings,
the motion of bowed strings has been called "Helmholtz motion."
![](String.jpg "String.jpg")
![](Helmholtzmotion.jpg "Helmholtzmotion.jpg")
## Bridge
The primary role of the bridge is to transform the motion of vibrating
strings into periodic driving forces by its feet to the top plate of the
violin body. The configuration of the bridge can be referred to the
figure. The bridge stands on the belly between f holes, which have two
primary functions. One is to connect the air inside the body with
outside air, and the other one is to make the belly between f holes move
more easily than other parts of the body. The fundamental frequency of a
violin bridge was found to be around 3000 Hz when it is on a rigid
support, and it is an effective energy-transmitting medium to transmit
the energy from the string to body at frequencies from 1 kHz to 4 kHz,
which is in the range of keen sensitivity of human hearing. In order to
darken the sound of violin, the player attaches a mute on the bridge.
The mute is actually an additional mass which reduces the fundamental
frequency of the bridge. As a result, the sound at higher frequencies is
diminished since the force transferred to the body has been decreased.
On the other hand, the fundamental frequency of the bridge can be raised
by attaching an additional stiffness in the form of tiny wedges, and the
sound at higher frequencies will be amplified accordingly.
The sound post connects the flexible belly to the much stiffer back
plate. The sound post can prevent the collapse of the belly due to high
tension force in the string, and, at the same time, couples the
vibration of the plate. The bass bar under the belly extends beyond the
f holes and transmits the force of the bridge to a larger area of the
belly. As can be seen in the figure, the motion of the treble foot is
restricted by the sound post, while, conversely, the foot over bass bar
can move up and down more easily. As a result, the bridge tends to move
up and down, pivoting about the treble foot. The forces appearing at the
two feet remain equal and opposite up to 1 kHz. At higher frequencies,
the forces become uneven. The force on the soundpost foot predominates
at some frequencies, while the bass bar foot predominates at others.
![](crossview.jpg "crossview.jpg")
### Body
The body includes top plate, back plate, the sides, and the air inside,
all of which serve to transmit the vibration of the bridge into the
vibration of air surrounding the violin. For this reason, the violin
needs a relatively large surface area to push enough amount of air back
and forth. Thus, the top and back plates play important roles in the
mechanism. Violin makers have traditionally paid much attention to the
vibration of the top and back plates of the violin by listening to the
tap tones, or, recently, by observing the vibration mode shapes of the
body plates. The vibration modes of an assembled violin are, however,
much more complicated.
The vibration modes of top and back plates can easily be observed using a
technique first performed by Ernst Florens Friedrich Chladni
(1756--1827), who is often respectfully referred to as "the father of
acoustics." First, fine sand is uniformly sprinkled on the plate.
Then, the plate can be resonated, either by a powerful sound wave tuned
to the desired frequencies, by being bowed by a violin bow, or by being
excited mechanically or electromechanically at desired frequencies.
Consequently, the sand disperses due to the vibration of the plate.
Some of the sand falls off the plate, while some collects along the
nodal regions of the plate, which have relatively little movement.
Hence, the mode shapes of the plate can be visualized in this manner;
see the figures at the reference site, Violin Acoustics. The first
seven modes of the top and back plates of the violin are presented
there, with nodal lines depicted using black sand.
The air inside the body is also important, especially in the range of
lower frequencies. It behaves like the air inside a bottle when you blow
across the neck, a phenomenon known as Helmholtz resonance, and it has
its own modes of vibration. The air inside the body can communicate with
the outside air through the f holes, and the outside air serves as the
medium carrying waves from the violin.
See www.violinbridges.co.uk for more
articles on bridges and acoustics.
### Sound radiation
A complete description of sound radiation of a violin should include the
information about radiation intensity as functions both of frequency and
location. The sound radiation can be measured by a microphone connected
to a pressure level meter which is rotatably supported on a stand arm
around the violin, while the violin is fastened at the neck by a clip.
The force is introduced into the violin by using a miniature impact
hammer at the upper edge of the bridge in the direction of bowing. The
detail can be referred to Martin Schleske, master studio for
violinmaking.
The radiation intensity of different frequencies at different locations
can be represented by directional characteristics, or acoustic maps. The
directional characteristics of a violin can be shown in the figure in
the website of Martin
Schleske, where
the radial distance from the center point represents the absolute value
of the sound level (re 1Pa/N) in dB, and the angular coordinate of the
full circle indicates the measurement point around the instrument.
According to the directional characteristics of violins, the principal
radiation directions for the violin in the horizontal plane can be
established. For more detail about the principal radiation direction for
violins at different frequencies, please refer to reference (Meyer
1972).
## References and other links
- Violin Acoustics
- Paul Galluzzo\'s
Homepage
- Martin Schleske, master studio for
violinmaking
- Atelierla Bussiere
- Fletcher, N. H., and Rossing, T. D., *The Physics of Musical
    Instruments*, Springer-Verlag, 1991
- Meyer, J., \"Directivity of bowed stringed instruments and its
effect on orchestral sound in concert halls\", J. Acoustic. Soc.
Am., 51, 1972, pp. 1994--2009
|
# Acoustics/Microphone Technique
![](Acoustics_microphone_technique.JPG "Acoustics_microphone_technique.JPG")
## General technique
1. A microphone should be used whose frequency response will suit the
frequency range of the voice or instrument being recorded.
2. Vary microphone positions and distances until you achieve the
monitored sound that you desire.
3. In the case of poor room acoustics, place the microphone very close
to the loudest part of the instrument being recorded or isolate the
instrument.
4. Personal taste is the most important component of microphone
technique. Whatever sounds right to you, *is* right.
## Types of microphones
### Dynamic microphones
These are the most common general-purpose microphones. They do not
require power to operate. If you have a microphone that is used for live
performance, it is probably a dynamic mic.
They have the advantage that they can withstand very high sound pressure
levels (high volume) without damage or distortion, and tend to provide a
richer, more intense sound than other types. Traditionally, these mics
did not provide as good a response on the highest frequencies
(particularly above 10 kHz), but some recent models have come out that
attempt to overcome this limitation.
In the studio, dynamic mics are often used for high sound pressure level
instruments such as drums, guitar amps and brass instruments. Models
that are often used in recording include the Shure SM57 and the
Sennheiser MD421.
### Condenser microphones
These microphones are often the most expensive microphones a studio
owns. They require power to operate, either from a battery or phantom
power, provided using the mic cable from an external mixer or pre-amp.
These mics have a built-in pre-amplifier that uses the power. Some
vintage microphones have a tube amplifier, and are referred to as tube
condensers.
While they cannot withstand the very high sound pressure levels that
dynamic mics can, they provide a flatter frequency response, and often
the best response at the highest frequencies. Not as good at conveying
intensity, they are much better at providing a balanced accurate sound.
Condenser mics come with a variety of sizes of transducers. They are
usually grouped into smaller format condensers, which often are long
cylinders about the size of a nickel coin in diameter, and larger format
condensers, the transducers of which are often about an inch in diameter
or slightly larger.
In the studio, condenser mics are often used for instruments with a wide
frequency range, such as an acoustic piano, acoustic guitar, voice,
violin, cymbals, or an entire band or chorus. Close miking with
condensers is generally avoided on louder instruments. Models that are often used in
recording include the Shure SM81 (small format), AKG C414 (large format)
and Neumann U87 (large format).
### Ribbon microphones
Ribbon microphones are often used as an alternative to condenser
microphones. Some modern ribbon microphones do not require power, and
some do. The first ribbon microphones, developed at RCA in the 1930s,
required no power, were quite fragile and could be destroyed by just
blowing air through them. Modern ribbon mics are much more resilient,
and can be used with the same level of caution as condenser mics.
Ribbon microphones provide a warmer sound than a condenser mic, with a
less brittle top end. Some vocalists (including Paul McCartney) prefer
them to condenser mics. In the studio they are used on vocals, violins,
and even drums. Popular models for recording include the Royer R121 and
the AEA R84.
## Working distance
### Close miking
When miking at a distance of 1 inch to about 1 foot from the sound
source, it is considered close miking. This technique generally provides
a tight, present sound quality and does an effective job of isolating
the signal and excluding other sounds in the acoustic environment.
#### Bleed
Bleeding occurs when the signal is not properly isolated and the
microphone picks up another nearby instrument. This can make the mixdown
process difficult if there are multiple voices on one track. Use the
following methods to prevent leakage:
- Place the microphones closer to the instruments.
- Move the instruments farther apart.
- Put some sort of acoustic barrier between the instruments.
- Use directional microphones.
#### A B miking
The A-B miking distance rule (the 3:1 ratio) is a general rule of thumb
for close miking. To prevent phase anomalies and bleed, the microphones
should be placed at least three times as far apart as the distance
between the instrument and the microphone.
!A B Miking
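A trivial Python sketch of this rule of thumb; the 0.2 m working distance is an assumed example value.

```python
def minimum_mic_spacing(source_to_mic_distance):
    """Minimum spacing between two microphones under the 3:1 rule of thumb."""
    return 3.0 * source_to_mic_distance

# If each mic sits about 0.2 m from its instrument, keep the mics at least 0.6 m apart.
print(minimum_mic_spacing(0.2), "m")
```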
### Distant miking
Distant miking refers to the placement of microphones at a distance of 3
feet or more from the sound source. This technique allows the full range
and balance of the instrument to develop and it captures the room sound.
This tends to add a live, open feeling to the recorded sound, but
careful consideration needs to be given to the acoustic environment.
### Accent miking
Accent miking is a technique used for solo passages when miking an
ensemble. A soloist needs to stand out from an ensemble, but placing a
microphone too close will sound unnaturally present compared to the distant
miking technique used with the rest of the ensemble. Therefore, the
microphone should be placed just close enough to the soloist so that the
signal can be mixed effectively without sounding completely excluded
from the ensemble.
### Ambient miking
Ambient miking is placing the microphones at such a distance that the
room sound is more prominent than the direct signal. This technique is
used to capture audience sound or the natural reverberation of a room or
concert hall.
## Stereo and surround technique
### Stereo
Stereo miking is simply using two microphones to obtain a stereo
left-right image of the sound. A simple method is the use of a spaced
pair, which is placing two identical microphones several feet apart and
using the difference in time and amplitude to create the image. Great
care should be taken in the method as phase anomalies can occur due to
the signal delay. This risk of phase anomaly can be reduced by using the
X/Y method, where the two microphones are placed with the grills as
close together as possible without touching. There should be an angle of
90 to 135 degrees between the mics. This technique uses only amplitude,
not time, to create the image, so the chance of phase discrepancies is
unlikely.
!Spaced Pair !X/Y
Method
### Surround
To take advantage of 5.1 sound or some other surround setup, microphones
may be placed to capture the surround sound of a room. This technique
essentially stems from stereo technique with the addition of more
microphones. Because every acoustic environment is different, it is
difficult to define a general rule for surround miking, so placement
becomes dependent on experimentation. Careful attention must be paid to
the distance between microphones and potential phase anomalies.
## Placement for varying instruments
### Amplifiers
When miking an amplified speaker, such as for electric guitars, the mic
should be placed 2 to 12 inches from the speaker. Exact placement
becomes more critical at a distance of less than 4 inches. A brighter
sound is achieved when the mic faces directly into the center of the
speaker cone and a more mellow sound is produced when placed slightly
off-center. Placing off-center also reduces amplifier noise.
A bigger sound can often be achieved by using two mics. The first mic
should be a dynamic mic, placed as described in the previous paragraph.
Add to this a condenser mic placed at least 3 times further back
(remember the 3:1 rule), which will pick up the blended sound of all
speakers, as well as some room ambience. Run the mics into separate
channels and combine them to your taste.
### Brass instruments
High sound-pressure levels are produced by brass instruments due to the
directional characteristics of mid to mid-high frequencies. Therefore,
for brass instruments such as trumpets, trombones, and tubas,
microphones should face slightly off of the bell\'s center at a distance
of one foot or more to prevent overloading from wind blasts.
### Guitars
Technique for acoustic guitars is dependent on the desired sound.
Placing a microphone close to the sound hole will achieve the highest
output possible, but the sound may be bottom-heavy because of how the
sound hole resonates at low frequencies. Placing the mic slightly
off-center at 6 to 12 inches from the hole will provide a more balanced
pickup. Placing the mic closer to the bridge with the same working
distance will ensure that the full range of the instrument is captured.
A technique that some engineers use places a large-format condenser mic
12-18 inches away from the 12th fret of the guitar, and a small-format
condenser very close to the strings nearby. Combining the two signals
can produce a rich tone.
### Pianos
Ideally, microphones would be placed 4 to 6 feet from the piano to allow
the full range of the instrument to develop before it is captured. This
isn\'t always possible due to room noise, so the next best option is to
place the microphone just inside the open lid. This applies to both
grand and upright pianos.
### Percussion
One overhead microphone can be used for a drum set, although two are
preferable. If possible, each component of the drum set should be miked
individually at a distance of 1 to 2 inches as if they were their own
instrument. This also applies to other drums such as congas and bongos.
For large, tuned instruments such as xylophones, multiple mics can be
used as long as they are spaced according to the 3:1 rule. Typically,
dynamic mics are used for individual drum miking, while small-format
condensers are used for the overheads.
### Voice
Standard technique is to put the microphone directly in front of the
vocalist\'s mouth, although placing slightly off-center can alleviate
harsh consonant sounds (such as \"p\") and prevent overloading due to
excessive dynamic range.
### Woodwinds
A general rule for woodwinds is to place the microphone around the
middle of the instrument at a distance of 6 inches to 2 feet. The
microphone should be tilted slightly towards the bell or sound hole, but
not directly in front of it.
## Sound Propagation
It is important to understand how sound propagates due to the nature of
the acoustic environment so that microphone technique can be adjusted
accordingly. There are four basic ways that this occurs:
### Reflection
Sound waves are reflected by surfaces if the object is as large as the
wavelength of the sound. It is the cause of echo (simple delay),
reverberation (many reflections cause the sound to continue after the
source has stopped), and standing waves (the distance between two
parallel walls is such that the original and reflected waves are in phase
and reinforce one another).
### Absorption
Sound waves are absorbed by materials rather than reflected. This can
have both positive and negative effects depending on whether you desire
to reduce reverberation or retain a live sound.
### Diffraction
Objects that may be between sound sources and microphones must be
considered due to diffraction. Sound will be stopped by obstacles that
are larger than its wavelength. Therefore, higher frequencies will be
blocked more easily than lower frequencies.
### Refraction
Sound waves bend as they pass through mediums with varying density. Wind
or temperature changes can cause sound to seem like it is literally
moving in a different direction than expected.
## Sources
- Huber, Dave Miles, and Robert E. Runstein. *Modern Recording
Techniques*. Sixth Edition. Burlington: Elsevier, Inc., 2005.
- Shure, Inc. (2003). *Shure Product Literature.* Retrieved November
28, 2005, from
<http://www.shure.com/scripts/literature/literature.aspx>.
|
# Acoustics/Microphone Design and Operation
![](Acoustics_microphone_design_and_operation.JPG "Acoustics_microphone_design_and_operation.JPG")
## Introduction
Microphones are devices which convert pressure fluctuations into
electrical signals. There are two main methods of accomplishing this
task that are used in the mainstream entertainment industry. They are
known as dynamic microphones and condenser microphones. Piezoelectric
crystals can also be used as microphones but are not commonly used in
the entertainment industry. For further information on piezoelectric
transducers Click
Here.
## Dynamic microphones
This type of microphone converts pressure fluctuations into electrical
current. These microphones work by means of the principle known as
Faraday's Law. The principle states that when an electrical conductor is
moved through a magnetic field, an electrical current is induced within
the conductor. The magnetic field within the microphone is created using
permanent magnets and the conductor is produced in two common
arrangements.
!Figure 1: Sectional View of Moving-Coil Dynamic
Microphone{width="300"}
The first conductor arrangement is made of a coil of wire. The wire is
typically copper and is attached to a circular membrane or piston
usually made from lightweight plastic or occasionally aluminum. The
impinging pressure fluctuation on the piston causes it to move in the
magnetic field and thus creates the desired electrical current. Figure 1
provides a sectional view of a moving-coil microphone.
!Figure 2: Dynamic Ribbon
Microphone{width="300"}
The second conductor arrangement is a ribbon of metallic foil suspended
between magnets. The metallic ribbon is what moves in response to a
pressure fluctuation and in the same manner, an electrical current is
produced. Figure 2 provides a sectional view of a ribbon microphone. In
both configurations, dynamic microphones follow the same principles as
acoustical transducers. For further information about transducers Click
Here.
## Condenser microphones
This type of microphone converts pressure fluctuations into electrical
potentials through the use of changing an electrical capacitor. This is
why condenser microphones are also known as capacitor microphones. An
electrical capacitor is created when two charged electrical conductors
are placed at a finite distance from each other. The basic relation that
describes capacitors is:
$$Q = C V$$
where Q is the electrical charge of the capacitor's conductors, C is the
capacitance, and V is the electric potential between the capacitor's
conductors. If the electrical charge of the conductors is held at a
constant value, then the voltage between the conductors will be
inversely proportional to the capacitance. Also, the capacitance is
inversely proportional to the distance between the conductors. Condenser
microphones utilize these two concepts.
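A minimal Python sketch of these two relations for an idealized parallel-plate capsule. The diaphragm radius, gap, and bias voltage below are illustrative assumptions, not values for any particular microphone.

```python
import math

EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(plate_area, gap):
    """Parallel-plate capacitance (F): C = epsilon_0 * A / d."""
    return EPSILON_0 * plate_area / gap

def voltage_at_constant_charge(charge, plate_area, gap):
    """With Q held constant, V = Q / C, so the voltage tracks the gap width."""
    return charge / capacitance(plate_area, gap)

# Assumed capsule numbers: 12.7 mm diaphragm radius, 25 micron nominal gap, 60 V bias.
area = math.pi * 0.0127 ** 2
q = capacitance(area, 25e-6) * 60.0          # charge established by the bias voltage
# A 1 micron inward diaphragm motion raises C, so at constant charge V drops slightly:
print(round(voltage_at_constant_charge(q, area, 24e-6), 2), "V")   # ~57.6 V
```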
!Figure 3: Sectional View of Condenser
Microphone{width="600"}
The capacitor in a condenser microphone is made of two parts: the
diaphragm and the back plate. Figure 3 shows a section view of a
condenser microphone. The diaphragm is what moves due to impinging
pressure fluctuations and the back plate is held in a stationary
position. When the diaphragm moves closer to the back plate, the
capacitance increases and therefore a change in electric potential is
produced. The diaphragm is typically made of metallic coated Mylar. The
assembly that houses both the back plate and the diaphragm is commonly
referred to as a capsule.
To keep the diaphragm and back plate at a constant charge, an electric
potential must be presented to the capsule. There are various ways of
performing this operation. The first of which is by simply using a
battery to supply the needed DC potential to the capsule. A simplified
schematic of this technique is displayed in figure 4. The resistor
across the leads of the capsule is very high, on the order of 10
megaohms, to keep the charge on the capsule close to constant.
!Figure 4: Internal Battery Powered Condenser
Microphone{width="500"}
Another technique of providing a constant charge on the capacitor is to
supply a DC electric potential through the microphone cable that carries
the microphones output signal. Standard microphone cable is known as XLR
cable and is terminated by three pin connectors. Pin one connects to the
shield around the cable. The microphone signal is transmitted between
pins two and three. Figure 5 displays the layout of dynamic microphone
attached to a mixing console via XLR cable.
!Figure 5: Dynamic Microphone Connection to Mixing Console via XLR
Cable{width="700"}
**Phantom Supply/Powering** (Audio Engineering Society, DIN 45596): The
first and most popular method of providing a DC potential through a
microphone cable is to supply +48 V to both of the microphone output
leads, pins 2 and 3, and use the shield of the cable, pin 1, as the
ground to the circuit. Because pins 2 and 3 see the same potential, any
fluctuation of the microphone powering potential will not affect the
microphone signal seen by the attached audio equipment. This
configuration can be seen in figure 6. The +48 V will be stepped down at
the microphone using a transformer and provide the potential to the back
plate and diaphragm in a similar fashion as the battery solution. In
fact, 9, 12, 24, 48 or 52 V can be supplied, but 48 V is the most
frequent.
!Figure 6: Condenser Microphone Powering
Techniques{width="600"}
The second method of running the potential through the cable is to
supply 12 V between pins 2 and 3. This method is referred to as
**T-powering** (also known as Tonaderspeisung, AB powering; DIN 45595).
The main problem with T-powering is that potential fluctuation in the
powering of the capsule will be transmitted into an audio signal because
the audio equipment analyzing the microphone signal will not see a
difference between a potential change across pins 2 and 3 due to a
pressure fluctuation and one due to the power source electric potential
fluctuation.
Finally, the diaphragm and back plate can be manufactured from a
material that maintains a fixed charge. These microphones are termed
electrets. In early electret designs, the charge on the material tended
to become unstable over time. Recent advances in science and
manufacturing have allowed this problem to be eliminated in present
designs.
## Conclusion
Two branches of microphones exist in the entertainment industry. Dynamic
microphones are found in the moving-coil and ribbon configurations. The
movement of the conductor in dynamic microphones induces an electric
current which is then transformed into the reproduction of sound.
Condenser microphones utilize the properties of capacitors. Creating the
charge on the capsule of condenser microphones can be accomplished by
battery, phantom powering, T-powering, and by using fixed charge
materials in manufacturing.
## References
- Sound Recording Handbook. Woram, John M. 1989.
- Handbook of Recording Engineering Fourth Edition. Eargle, John.
2003.
## Microphone manufacturer links
- AKG
- Audio
Technica
- Audix
- While Bruel & Kjær produces
microphones for measurement purposes,
DPA is the equipment sold for
recording purposes
- Electrovoice
- Josephson Engineering
- Neumann
(currently a subsidiary of Sennheiser)
- Rode
- Schoeps
- Sennheiser
- Shure
- Wharfedale
|
# Acoustics/Acoustic Loudspeaker
![](Acoustics_loudspeakers.JPG "Acoustics_loudspeakers.JPG")
The purpose of the acoustic transducer is to convert electrical energy
into acoustic energy. Many variations of acoustic transducers exist,
although the most common is the moving coil-permanent magnet transducer.
The classic loudspeaker is of the moving coil-permanent magnet type.
The classic electrodynamic loudspeaker driver can be divided into three
key components:
1. The Magnet Motor Drive System
2. The Loudspeaker Cone System
3. The Loudspeaker Suspension
!Figure 1 Cut-away of a moving coil-permanent magnet
loudspeaker
## The Magnet Motor Drive System
The main purpose of the Magnet Motor Drive System is to establish a
symmetrical magnetic field in which the voice coil will operate. The
Magnet Motor Drive System is comprised of a front focusing plate,
permanent magnet, back plate, and a pole piece. In figure 2, the
assembled drive system is illustrated. In most cases, the back plate and
the pole piece are built into one piece called the yoke. The yoke and
the front focusing plate are normally made of a very soft cast iron.
Iron is a material that is used in conjunction with magnetic structures
because the iron is easily saturated when exposed to a magnetic field.
Notice in figure 2, that an air gap was intentionally left between the
front focusing plate and the yoke. The magnetic field is coupled through
the air gap. The magnetic field strength (B) of the air gap is typically
optimized for uniformity across the gap. \[1\]
```{=html}
<center>
```
Figure 2 Permanent Magnet Structure
```{=html}
</center>
```
When a coil of wire carrying a current is placed inside the
permanent magnetic field, a force is produced. $B$ is the magnetic field
strength, $l$ is the length of wire in the coil, and $I$ is the current
flowing through the coil. The electromagnetic force is given by the
Laplace expression:
```{=html}
<center>
```
$d\underline F = I\underline {dl} \times \underline B$
```{=html}
</center>
```
$\underline B$ and $\underline {dl}$ are orthogonal, so the force is
obtained by integration over the length of the wire ($R_e$ is the radius
of a turn, $n$ is the number of turns and $\underline e_x$ is the unit
vector along the axis of the coil):
```{=html}
<center>
```
$\underline F = 2\pi R_eBnI\underline e_x$
```{=html}
</center>
```
This force is directly proportional to the current flowing through the
coil.
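A minimal Python sketch of this force relation; the field strength, coil radius, turn count, and current below are illustrative assumptions.

```python
import math

def voice_coil_force(b_field, coil_radius, turns, current):
    """Axial force (N) on the voice coil: F = 2 * pi * Re * B * n * I."""
    return 2.0 * math.pi * coil_radius * b_field * turns * current

# Assumed values: 1.0 T gap field, 12.5 mm coil radius, 80 turns, 1 A signal current.
print(round(voice_coil_force(1.0, 0.0125, 80, 1.0), 1), "N")   # ~6.3 N
```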
```{=html}
<center>
```
![](Magnet2.gif "Magnet2.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 3 Voice Coil Mounted in Permanent Magnetic Structure
```{=html}
</center>
```
The coil is excited with the AC signal intended for sound reproduction.
When the changing magnetic field of the coil interacts with the permanent
magnetic field, the coil moves back and forth in order to reproduce the
input signal. The coil of a loudspeaker is known as the voice coil.
## The loudspeaker cone system
On a typical loudspeaker, the cone serves the purpose of creating a
larger radiating area allowing more air to be moved when excited by the
voice coil. The cone serves as a piston that is excited by the voice coil.
The cone then displaces air creating a sound wave. In an ideal
environment, the cone should be infinitely rigid and have zero mass, but
in reality neither is true. Cone materials vary from carbon fiber,
paper, bamboo, and just about any other material that can be shaped into
a stiff conical shape. The loudspeaker cone is a very critical part of
the loudspeaker. Since the cone is not infinitely rigid, it tends to
have different types of resonance modes form at different frequencies,
which in turn alters and colors the reproduction of the sound waves. The
shape of the cone directly influences the directivity and frequency
response of the loudspeaker. When the cone is attached to the voice
coil, a large gap above the voice coil is left exposed. This could be a
problem if foreign particles make their way into the air gap of the
voice coil and the permanent magnet structure. The solution to this
problem is to place what is known as a dust cap on the cone to cover the
air gap. Below a figure of the cone and dust cap are shown.
```{=html}
<center>
```
![](loud_cone.gif "loud_cone.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 6 Cone and Dust Cap attached to Voice Coil
```{=html}
</center>
```
The speed of the cone can be expressed with the equation of a mass-spring
system with a damping coefficient $\xi$:
```{=html}
<center>
```
$m\frac{{dv}}{{dt}} + \xi v + k\int {v \, dt} = Bli$
```{=html}
</center>
```
The current intensity $i$ and the speed $v$ can also be related by this
equation ($U$ is the voltage, $R$ the electrical resistance and $L_b$
the inductance) :
```{=html}
<center>
```
$L_b \frac{{di}}{{dt}} + Ri = U - Blv$
```{=html}
</center>
```
By using a harmonic solution, the expression of the speed is :
```{=html}
<center>
```
$v = \frac{{Bli}}{{\xi + j\left(m\omega - \frac{k}{\omega}\right)}}$
```{=html}
</center>
```
The electrical impedance can be determined as the ratio of the voltage
on the current intensity :
```{=html}
<center>
```
$Z = \frac{U}{i} = R + jL_b\omega + \frac{{B^2 l^2 }}{{\xi + j \left(m\omega - \frac{k}{\omega}\right)}}$
```{=html}
</center>
```
The frequency response of the loudspeaker is provided in Figure 7.
```{=html}
<center>
```
![](Electreson.gif "Electreson.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 7 Electrical impedance
```{=html}
</center>
```
A phenomenon of electrical resonance is observable around the frequency
of 100 Hz. In addition, the inductance of the coil makes the impedance
increase above roughly 400 Hz. So the frequency range where this
loudspeaker is useful is about 100 -- 4000 Hz.
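A minimal Python sketch of the impedance expression above. The parameter values are illustrative assumptions (not taken from the text), chosen so that the electrical resonance falls near 100 Hz and the inductive rise appears at higher frequencies.

```python
import math

def electrical_impedance(freq, R, L, Bl, m, k, xi):
    """Input impedance from the lumped model above:
    Z = R + j*L*w + (Bl)^2 / (xi + j*(m*w - k/w))."""
    w = 2.0 * math.pi * freq
    return complex(R, L * w) + (Bl ** 2) / complex(xi, m * w - k / w)

# Assumed driver values, chosen so the electrical resonance falls near 100 Hz:
R, L, Bl = 6.0, 2.0e-3, 7.5            # ohms, henries, tesla-metres
m, xi = 0.012, 1.5                     # moving mass (kg), damping (N*s/m)
k = m * (2.0 * math.pi * 100.0) ** 2   # stiffness (N/m) for a 100 Hz resonance
for f in (50, 100, 400, 1000, 4000):
    print(f, "Hz:", round(abs(electrical_impedance(f, R, L, Bl, m, k, xi)), 1), "ohms")
```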
## The loudspeaker suspension
Most moving coil loudspeakers have a two piece suspension system, also
known as a flexure system. The combination of the two flexures allows
the voice coil to maintain linear travel as the voice coil is energized
and provides a restoring force for the voice coil system. The two piece
system consists of a large flexible membrane surrounding the outside edge
of the cone, called the surround, and an additional flexure connected
directly to the voice coil, called the spider. The surround has another
purpose and that is to seal the loudspeaker when mounted in an
enclosure. Commonly, the surround is made of a variety of different
materials, such as, folded paper, cloth, rubber, and foam. Construction
of the spider consists of different woven cloth or synthetic materials
that are compressed to form a flexible membrane. The following two
figures illustrate where the suspension components are physically located
on the loudspeaker and how they function as the loudspeaker operates.
```{=html}
<center>
```
![](loud_suspension.gif "loud_suspension.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 8 Loudspeaker Suspension System
```{=html}
</center>
```
```{=html}
<center>
```
![](loudspk.gif "loudspk.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 9 Moving Loudspeaker
```{=html}
</center>
```
## Modeling the loudspeaker as a lumped system
Before implementing a loudspeaker into a specific application, a series
of parameters characterizing the loudspeaker must be extracted. The
equivalent circuit of the loudspeaker is key when developing enclosures.
The circuit models all aspects of the loudspeaker through an equivalent
electrical, mechanical, and acoustical circuit. Figure 9 shows how the
three equivalent circuits are connected. The electrical circuit is
comprised of the DC resistance of the voice coil, $R_e$, the imaginary
part of the voice coil inductance, $L_e$, and the real part of the voice
coil inductance, $R_{evc}$. The mechanical system has electrical
components that model different physical parameters of the loudspeaker.
In the mechanical circuit, $M_m$ is the electrical capacitance due to
the moving mass, $C_m$ is the electrical inductance due to the
compliance of the moving mass, and $R_m$ is the electrical resistance
due to the suspension system. In the acoustical equivalent circuit,
$M_a$ models the air mass and $R_a$ models the radiation impedance\[2\].
This equivalent circuit allows insight into what parameters change the
characteristics of the loudspeaker. Figure 10 shows the electrical input
impedance as a function of frequency developed using the equivalent
circuit of the loudspeaker.
```{=html}
<center>
```
![](Eq_circuit.gif "Eq_circuit.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 9 Loudspeaker Analogous Circuit
```{=html}
</center>
```
```{=html}
<center>
```
![](Freq_resp.gif "Freq_resp.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 10 Electrical Input Impedance
```{=html}
</center>
```
## The acoustical enclosure
### Function of the enclosure
The loudspeaker emits two waves: a front wave and a back wave. After
reflection off a wall, the back wave can combine with the front wave and
produce destructive interference. As a result, the sound pressure level
in the room is not uniform. At certain positions the interference is
constructive and the sound pressure level is higher; at other positions
the interference is destructive and the sound pressure level is lower.
```{=html}
<center>
```
![](louds_without_baffle.gif "louds_without_baffle.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 11 Loudspeaker without baffle producing destructive interferences
```{=html}
</center>
```
The solution is to put a baffle around the loudspeaker in order to
prevent the back wave from interfering with the front wave. The sound
pressure level is then more uniform in the room and the quality of the
loudspeaker is higher.
```{=html}
<center>
```
![](loudspeakers_baffled.gif "loudspeakers_baffled.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 12 Loudspeakers with infinite baffle and enclosure
```{=html}
</center>
```
### Loudspeaker-external fluid interaction
The external fluid exerts a pressure on the membrane of the loudspeaker
cone. This additional force can be evaluated as an additive mass and an
additive damping in the equation of vibration of the membrane.
```{=html}
<center>
```
$- (m + MS)\omega ^2 \xi + i\omega (C + RS)\xi + K\xi = fe^{i\omega t}$
```{=html}
</center>
```
When the fluid is air, this additive mass and additive damping are
negligible. For example, at a frequency of 1000 Hz, the additive mass
is about 3 g.
### Loudspeaker-internal fluid interaction
The volume of air in the enclosure constitutes an additive stiffness.
This is called the acoustic load. In low frequencies, this additive
stiffness can be four times the stiffness of the loudspeaker cone. The
internal air stiffness is very high because of the boundary conditions
inside the enclosure. The walls impose a condition of zero airspeed that
makes the stiffness increase.
```{=html}
<center>
```
Figure 13 Stiffness of the loudspeaker cone and stiffness of the
internal air
```{=html}
</center>
```
The stiffness of the internal air (in red) is four times higher than
the stiffness of the loudspeaker cone (in blue). That is why the design
of the enclosure is relevant in order to improve the quality of the
sound and avoid a decrease of the sound pressure level in the room at
some frequencies.
## References
1. The Loudspeaker Design Cookbook 5th Edition; Dickason, Vance., Audio
Amateur Press, 1997.
2. Beranek, L. L. Acoustics. 2nd ed. Acoustical Society of America,
Woodbridge, NY. 1993.
|
# Acoustics/Sealed Box Subwoofer Design
A sealed or closed box baffle is the most basic but often the cleanest
sounding sub-woofer box design. The sub-woofer box in its most simple
form, serves to isolate the back of the speaker from the front, much
like the theoretical infinite baffle. The sealed box provides simple
construction and controlled response for most sub-woofer applications.
The slow low end roll-off provides a clean transition into the extreme
frequency range. Unlike ported boxes, the cone excursion is reduced
below the resonant frequency of the box and driver due to the added
stiffness provided by the sealed box baffle.
Closed baffle boxes are typically constructed of a very rigid material
such as MDF (medium-density fiberboard) or plywood 0.75 to 1 inch thick.
Depending on the size of the box and material used, internal bracing may
be necessary to maintain a rigid box. A rigid box is important in order
to prevent unwanted box resonances.
As with any acoustics application, the box must be matched to the
loudspeaker driver for maximum performance. The following will outline
the procedure to tune the box or maximize the output of the sub-woofer
box and driver combination.
## Closed baffle circuit
The sealed box enclosure for sub-woofers can be modelled as a lumped
element system if the dimensions of the box are significantly shorter
than the shortest wavelength reproduced by the sub-woofer. Most
sub-woofer applications are crossed over around 80 to 100 Hz. A 100 Hz
wave in air has a wavelength of about 11 feet. Sub-woofers typically
have all dimensions much shorter than this wavelength, thus the lumped
element system analysis is accurate. Using this analysis, the following
circuit represents a sub-woofer enclosure system.
![](Circuit_schema.jpg "Circuit_schema.jpg")
where all of the following parameters are in the mechanical mobility
analog
: $V_e$ - voltage supply
: $R_e$ - electrical resistance
: $M_m$ - driver mass
: $C_m$ - driver compliance
: $R_m$ - resistance
:   $R_{Af}$ - front cone radiation resistance into the air
: $X_{Af}$ - front cone radiation reactance into the air
: $R_{Br}$ - rear cone radiation resistance into the box
: $X_{Br}$ - rear cone radiation reactance into the box
## Driver parameters
In order to tune a sealed box to a driver, the driver parameters must be
known. Some of the parameters are provided by the manufacturer, some are
found experimentally, and some are found from general tables. For ease
of calculations, all parameters will be represented in the SI units
meter/kilogram/second. The parameters that must be known to determine
the size of the box are as follows:
: $f_0$ - driver free-air resonance
: $C_{MS}$ - mechanical compliance of the driver
: $S_D$ - effective area of the driver
#### Resonance of the driver
The resonance of the driver is usually either provided by the
manufacturer or must be found experimentally. It is a good idea to
measure the resonance frequency even if it is provided by the
manufacturer to account for inconsistent manufacturing processes.
The following diagram shows the setup for finding resonance:
Where voltage $V_1$ is held constant and the variable frequency source
is varied until $V_2$ is a maximum. The frequency where $V_2$ is a
maximum is the resonance frequency for the driver.
#### Mechanical compliance
By definition compliance is the inverse of stiffness or what is commonly
referred to as the spring constant. The compliance of a driver can be
found by measuring the displacement of the cone when known masses are
placed on the cone while the driver is facing up. The compliance would
then be the displacement of the cone in meters divided by the added
weight in Newtons.
#### Effective area of the driver
The physical diameter of the driver does not lead to the effective area
of the driver. The effective diameter can be found using the following
diagram:
![](Effective_area.jpg "Effective_area.jpg")
From this diameter, the area is found from the basic area of a circle
equation.
## Acoustic compliance
From the known mechanical compliance of the cone, the acoustic
compliance can be found from the following equation:
$V_{AS} = \rho_0 C^2 C_{MS} S_D^2$
where $\rho_0$ is the air density and $C$ is the speed of sound at a given
temperature and pressure.
From the driver acoustic compliance, the box acoustic compliance is
found. This is where the final application of the sub-woofer is
considered. The acoustic compliance of the box will determine the
percent shift upwards of the resonant frequency. If a large shift is
desire for high SPL applications, then a large ratio of driver to box
acoustic compliance would be required. If a flat response is desired for
high fidelity applications, then a lower ratio of driver to box acoustic
compliance would be required. Specifically, the ratios can be found in
the following figure using line (b) as reference.
$C_{AS} = C_{AB}r$
$r$ - driver-to-box acoustic compliance ratio
![](Compliance.jpg "Compliance.jpg")
## Sealed box design
#### Volume of box
The volume of the sealed box can now be found from the box acoustic
compliance. The following equation is used to calculate the box volume:
$V_B = \gamma P_0 C_{AB}$
where $\gamma$ is the ratio of specific heats of air (about 1.4) and $P_0$
is the atmospheric pressure (equivalently, $V_B = \rho_0 C^2 C_{AB}$).
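A minimal Python sketch tying these steps together: the driver compliance is expressed as an equivalent air volume, and the box volume follows from the compliance ratio quoted above ($V_B = V_{AS}/r$, since volume is proportional to acoustic compliance). The driver compliance, effective diameter, and ratio below are illustrative assumptions.

```python
import math

RHO_0 = 1.18    # air density, kg/m^3
C_AIR = 345.0   # speed of sound in air, m/s

def equivalent_volume(c_ms, s_d):
    """V_AS = rho_0 * C^2 * C_MS * S_D^2, the driver compliance as an air volume (m^3)."""
    return RHO_0 * C_AIR ** 2 * c_ms * s_d ** 2

def box_volume(v_as, compliance_ratio):
    """With C_AS = r * C_AB and volume proportional to acoustic compliance, V_B = V_AS / r."""
    return v_as / compliance_ratio

# Assumed driver: C_MS = 0.4 mm/N, effective diameter 25 cm, compliance ratio r = 3.
s_d = math.pi * (0.25 / 2.0) ** 2
v_as = equivalent_volume(0.4e-3, s_d)
print(round(v_as * 1000, 1), "L equivalent volume ->",
      round(box_volume(v_as, 3.0) * 1000, 1), "L box")
```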
#### Box dimensions
From the calculated box volume, the dimensions of the box can then be
designed. There is no set formula for finding the dimensions of the box,
but there are general guidelines to be followed. If the driver was
mounted in the center of a square face, the waves generated by the cone
would reach the edges of the box at the same time, thus when combined
would create a strong diffracted wave in the listening space. In order
to best prevent this, the driver should either be mounted offset on a
square face, or the face should be rectangular.
In general, the face of the box in which the driver is mounted should not be a square.
# Acoustics/Bass-Reflex Enclosure Design
```{=html}
<div style="float:right;margin:0 0 1em 1em;">
```
![](Bassreflex-Gehäuse_(enclosure).png "Bassreflex-Gehäuse_(enclosure).png")
```{=html}
</div>
```
Bass-reflex enclosures improve the low-frequency response of loudspeaker
systems. Bass-reflex enclosures are also called \"vented-box design\" or
\"ported-cabinet design\". A bass-reflex enclosure includes a vent or
port between the cabinet and the ambient environment. This type of
design, as one may observe by looking at contemporary loudspeaker
products, is still widely used today. Although the construction of
bass-reflex enclosures is fairly simple, their design is not simple, and
requires proper tuning. This reference focuses on the technical details
of bass-reflex design. General loudspeaker information can be found
here.
## Effects of the Port on the Enclosure Response
Before discussing the bass-reflex enclosure, it is important to be
familiar with the simpler sealed enclosure system performance. As the
name suggests, the sealed enclosure system attaches the loudspeaker to a
sealed enclosure (except for a small air leak included to equalize the
ambient pressure inside). Ideally, the enclosure would act as an
acoustical compliance element, as the air inside the enclosure is
compressed and rarified. Often, however, an acoustic material is added
inside the box to reduce standing waves, dissipate heat, and other
reasons. This adds a resistive element to the acoustical lumped-element
model. A non-ideal model of the effect of the enclosure actually adds an
acoustical mass element to complete a series lumped-element circuit
given in Figure 1. For more on sealed enclosure design, see the Sealed
Box Subwoofer
Design
page.
In the case of a bass-reflex enclosure, a port is added to the
construction. Typically, the port is cylindrical and is flanged on the
end pointing outside the enclosure. In a bass-reflex enclosure, the
amount of acoustic material used is usually much less than in the sealed
enclosure case, often none at all. This allows air to flow freely
through the port. Instead, the larger losses come from the air leakage
in the enclosure. With this setup, a lumped-element acoustical circuit
has the form shown in the diagram below.
![](Vented_box_ckt.gif "Vented_box_ckt.gif")
In this figure, $Z_{RAD}$ represents the radiation impedance of the
outside environment on the loudspeaker diaphragm. The loading on the
rear of the diaphragm has changed when compared to the sealed enclosure
case. If one visualizes the movement of air within the enclosure, some
of the air is compressed and rarified by the compliance of the
enclosure, some leaks out of the enclosure, and some flows out of the
port. This explains the parallel combination of $M_{AP}$, $C_{AB}$, and
$R_{AL}$. A truly realistic model would incorporate a radiation
impedance of the port in series with $M_{AP}$, but for now it is
ignored. Finally, $M_{AB}$, the acoustical mass of the enclosure, is
included as discussed in the sealed enclosure case. The formulas which
calculate the enclosure parameters are listed in Appendix
B.
It is important to note the parallel combination of $M_{AP}$ and
$C_{AB}$. This forms a Helmholtz resonator (click here for more
information).
Physically, the port functions as the "neck" of the resonator and the
enclosure functions as the "cavity." In this case, the resonator is
driven from the piston directly on the cavity instead of the typical
Helmholtz case where it is driven at the "neck." However, the same
resonant behavior still occurs at the enclosure resonance frequency,
$f_{B}$. At this frequency, the impedance seen by the loudspeaker
diaphragm is large (see Figure 3 below). Thus, the load on the
loudspeaker reduces the velocity flowing through its mechanical
parameters, causing an anti-resonance condition where the displacement
of the diaphragm is a minimum. Instead, the majority of the volume
velocity is actually emitted by the port itself instead of the
loudspeaker. When this impedance is reflected to the electrical circuit,
it is proportional to $1/Z$, so the large acoustical impedance appears as a
minimum in the impedance seen by the voice coil. Figure 3 shows a plot of the impedance seen at the
terminals of the loudspeaker. In this example, $f_B$ was found to be
about 40 Hz, which corresponds to the null in the voice-coil impedance.
![](Za0_Zvc_plots.gif "Za0_Zvc_plots.gif")
## Quantitative Analysis of Port on Enclosure
The performance of the loudspeaker is first measured by its velocity
response, which can be found directly from the equivalent circuit of the
system. As the goal of most loudspeaker designs is to improve the bass
response (leaving high-frequency production to a tweeter), low frequency
approximations will be made as much as possible to simplify the
analysis. First, the inductance of the voice coil, $\it{L_E}$, can be
ignored as long as $\omega \ll R_E/L_E$. In a typical loudspeaker,
$\it{L_E}$ is of the order of 1 mH, while $\it{R_E}$ is typically
8$\Omega$, thus an upper frequency limit is approximately 1 kHz for this
approximation, which is certainly high enough for the frequency range of
interest.
Another approximation involves the radiation impedance, $\it{Z_{RAD}}$.
It can be shown \[1\] that this value is given by the following equation
(in acoustical ohms):
$Z_{RAD} = \frac{\rho_0c}{\pi a^2}\left[\left(1 - \frac{J_1(2ka)}{ka}\right) + j\frac{H_1(2ka)}{ka}\right]$
where $J_1(x)$ is the first-order Bessel function of the first kind and
$H_1(x)$ is the first-order Struve function. For small
values of *ka*,
```{=html}
<table align=center width=50% cellpadding=10>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$J_1(2ka) \approx ka$
```{=html}
</td>
```
```{=html}
<td>
```
and
```{=html}
</td>
```
```{=html}
<td>
```
$H_1(2ka) \approx \frac{8(ka)^2}{3\pi}$
```{=html}
</td>
```
```{=html}
<td>
```
$\Rightarrow Z_{RAD} \approx j\frac{8\rho_0\omega}{3\pi^2a} = jM_{A1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Hence, the low-frequency impedance on the loudspeaker is represented
with an acoustic mass $M_{A1}$ \[1\]. For a simple analysis, $R_E$,
$M_{MD}$, $C_{MS}$, and $R_{MS}$ (the transducer parameters, or
*Thiele-Small* parameters) are converted to their acoustical
equivalents. All conversions for all parameters are given in Appendix
A.
Then, the series masses, $M_{AD}$, $M_{A1}$, and $M_{AB}$, are lumped
together to create $M_{AC}$. This new circuit is shown below.
![](VB_LF_ckt.gif "VB_LF_ckt.gif")
Unlike sealed enclosure analysis, there are multiple sources of volume
velocity that radiate to the outside environment. Hence, the diaphragm
volume velocity, $U_D$, is not analyzed but rather
$U_0 = U_D + U_P + U_L$. This essentially draws a "bubble" around the
enclosure and treats the system as a source with volume velocity $U_0$.
This "lumped" approach will only be valid for low frequencies, but
previous approximations have already limited the analysis to such
frequencies anyway. It can be seen from the circuit that the volume
velocity flowing *into* the enclosure, $U_B = -U_0$, compresses the air
inside the enclosure. Thus, the circuit model of Figure 3 is valid and
the relationship relating input voltage, $V_{IN}$ to $U_0$ may be
computed.
In order to make the equations easier to understand, several parameters
are combined to form other parameter names. First, $\omega_B$ and
$\omega_S$, the enclosure and loudspeaker resonance frequencies,
respectively, are:
```{=html}
<table align=center width=40%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\omega_B = \frac{1}{\sqrt{M_{AP}C_{AB}}}$
```{=html}
</td>
```
```{=html}
<td>
```
$\omega_S = \frac{1}{\sqrt{M_{AC}C_{AS}}}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Based on the nature of the derivation, it is convenient to define the
parameters $\omega_0$ and *h*, the Helmholtz tuning ratio:
```{=html}
<table align=center width=25%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\omega_0 = \sqrt{\omega_B\omega_S}$
```{=html}
</td>
```
```{=html}
<td>
```
$h = \frac{\omega_B}{\omega_S}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
A parameter known as the *compliance ratio* or *volume ratio*, $\alpha$,
is given by:
$\alpha = \frac{C_{AS}}{C_{AB}} = \frac{V_{AS}}{V_{AB}}$
Other parameters are combined to form what are known as *quality
factors*:
```{=html}
<table align=center width=45%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$Q_L = R_{AL}\sqrt{\frac{C_{AB}}{M_{AP}}}$
```{=html}
</td>
```
```{=html}
<td>
```
$Q_{TS} = \frac{1}{R_{AE}+R_{AS}}\sqrt{\frac{M_{AC}}{C_{AS}}}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
This notation allows for a simpler expression for the resulting transfer
function \[1\]:
$\frac{U_0}{V_{IN}} = \frac{Bl}{S_DR_EM_{AC}}\,G(s) = \frac{Bl}{S_DR_EM_{AC}}\cdot\frac{s^3/\omega_0^4}{(s/\omega_0)^4+a_3(s/\omega_0)^3+a_2(s/\omega_0)^2+a_1(s/\omega_0)+1}$
where
```{=html}
<table align=center width=70%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$a_1 = \frac{1}{Q_L\sqrt{h}}+\frac{\sqrt{h}}{Q_{TS}}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2 = \frac{\alpha+1}{h}+h+\frac{1}{Q_L Q_{TS}}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_3 = \frac{1}{Q_{TS}\sqrt{h}}+\frac{\sqrt{h}}{Q_L}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
## Development of Low-Frequency Pressure Response
It can be shown \[2\] that for $ka < 1/2$, a loudspeaker behaves as a
spherical source. Here, *a* represents the radius of the loudspeaker.
For a 15" diameter loudspeaker in air, this low frequency limit is about
150 Hz. For smaller loudspeakers, this limit increases. This limit
dominates the limit which ignores $L_E$, and is consistent with the
limit that models $Z_{RAD}$ by $M_{A1}$.
Within this limit, the loudspeaker emits a volume velocity $U_0$, as
determined in the previous section. For a simple spherical source with
volume velocity $U_0$, the far-field pressure is given by \[1\]:
$p(r) = \frac{\rho_0sU_0e^{-jkr}}{4\pi r}$
It is possible to simply let $r = 1$ for this analysis without loss of
generality because distance is only a function of the surroundings, not
the loudspeaker. Also, because the transfer function magnitude is of
primary interest, the exponential term, which has a unity magnitude, is
omitted. Hence, the pressure response of the system is given by \[1\]:
$\frac{p}{V_{IN}} = \frac{\rho_0s}{4\pi}\frac{U_0}{V_{IN}} = \frac{\rho_0Bl}{4\pi S_DR_EM_{AC}}H(s)$
Where $H(s) = sG(s)$. In the following sections, design methods will
focus on $|H(s)|^2$ rather than $H(s)$, which is given by:
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$|H(s)|^2 = \frac{\Omega^8}{\Omega^8 + \left(a^2_3 - 2a_2\right)\Omega^6 + \left(a^2_2 + 2 - 2a_1a_3\right)\Omega^4 + \left(a^2_1 - 2a_2\right)\Omega^2 + 1}$
```{=html}
</td>
```
```{=html}
<td>
```
$\Omega = \frac{\omega}{\omega_0}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
This also implicitly ignores the constants in front of $|H(s)|$ since
they simply scale the response and do not affect the shape of the
frequency response curve.
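As a numerical sketch (the values of $Q_L$, $Q_{TS}$, *h*, and $\alpha$ below are hypothetical, chosen only for illustration), the coefficients $a_1$, $a_2$, $a_3$ and the magnitude-squared response can be evaluated directly from the expressions above:

```python
import numpy as np

def H2(Omega, a1, a2, a3):
    """Magnitude-squared response |H|^2 of the 4th-order vented-box
    high-pass function, evaluated at Omega = omega/omega_0."""
    num = Omega**8
    den = (Omega**8
           + (a3**2 - 2*a2) * Omega**6
           + (a2**2 + 2 - 2*a1*a3) * Omega**4
           + (a1**2 - 2*a2) * Omega**2
           + 1)
    return num / den

def coefficients(QL, QTS, h, alpha):
    """a1, a2, a3 from the quality factors, tuning ratio h and compliance ratio alpha."""
    a1 = 1/(QL*np.sqrt(h)) + np.sqrt(h)/QTS
    a2 = (alpha + 1)/h + h + 1/(QL*QTS)
    a3 = 1/(QTS*np.sqrt(h)) + np.sqrt(h)/QL
    return a1, a2, a3

# Hypothetical driver/enclosure values (illustrative only)
a1, a2, a3 = coefficients(QL=7.0, QTS=0.4, h=1.0, alpha=3.0)
Omega = np.logspace(-1, 1, 5)
print(10*np.log10(H2(Omega, a1, a2, a3)))   # response in dB relative to the passband
```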
## Alignments
A popular way to determine the ideal parameters has been through the use
of alignments. The concept of alignments is based upon filter theory.
Filter development is a method of selecting the poles (and possibly
zeros) of a transfer function to meet a particular design criterion. The
criteria are the desired properties of a magnitude-squared transfer
function, which in this case is $|H(s)|^2$. From any of the design
criteria, the poles (and possibly zeros) of $|H(s)|^2$ are found and
used to construct the numerator and denominator of the "optimal"
transfer function. Its coefficients are then matched to those of the
loudspeaker's $|H(s)|^2$ to compute the parameter values that
yield a design meeting the criteria.
There are many different types of filter designs, each of which has
trade-offs associated with it. However, this design is limited because
of the structure of $|H(s)|^2$. In particular, it has the structure of a
fourth-order high-pass filter with all zeros at *s* = 0. Therefore, only
those filter design methods which produce a low-pass filter with only
poles will be acceptable methods to use. From the traditional set of
algorithms, only Butterworth and Chebyshev low-pass filters have only
poles. In addition, another type of filter called a quasi-Butterworth
filter can also be used, which has similar properties to a Butterworth
filter. These three algorithms are fairly simple, thus they are the most
popular. When these low-pass filters are converted to high-pass filters,
the $s \rightarrow 1/s$ transformation produces $s^8$ in the numerator.
More details regarding filter theory and these relationships can be
found in numerous resources, including \[5\].
## Butterworth Alignment
The Butterworth algorithm is designed to have a *maximally flat* pass
band. Since the slope of a function corresponds to its derivatives, a
flat function will have derivatives equal to zero. Since as flat of a
pass band as possible is optimal, the ideal function will have as many
derivatives equal to zero as possible at *s* = 0. Of course, if all
derivatives were equal to zero, then the function would be a constant,
which performs no filtering.
Often, it is better to examine what is called the *loss function*. Loss
is the reciprocal of gain, thus
$\left|A(\Omega)\right|^2 = \frac{1}{\left|H(\Omega)\right|^2}$
The loss function can be used to achieve the desired properties, then
the desired gain function is recovered from the loss function.
Now, applying the desired Butterworth property of maximal pass-band
flatness, the loss function is simply a polynomial with derivatives
equal to zero at *s* = 0. At the same time, the original polynomial must
be of degree eight (yielding a fourth-order function). However,
derivatives one through seven can be equal to zero if \[3\]
$\left|A(\Omega)\right|^2 = 1 + \Omega^8$
With the high-pass transformation $\Omega \rightarrow 1/\Omega$,
$\left|H(\Omega)\right|^2 = \frac{\Omega^8}{\Omega^8 + 1}$
It is convenient to define $\Omega = \omega/\omega_{3dB}$, since
$\Omega = 1 \Rightarrow |H(s)|^2 = 0.5$ or -3 dB. This definition allows
the matching of coefficients for the $|H(s)|^2$ describing the
loudspeaker response when $\omega_{3dB} = \omega_0$. From this matching,
the following design equations are obtained \[1\]:
```{=html}
<table align=center cellspacing=20>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$a_1 = a_3 = \sqrt{4+2\sqrt{2}}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2 = 2+\sqrt{2}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
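A quick numerical check (a sketch, not part of the referenced derivation) confirms that these coefficients make all of the cross terms in the denominator of $|H(s)|^2$ vanish, leaving the maximally flat response $\Omega^8/(\Omega^8+1)$:

```python
import math

# Fourth-order Butterworth (B4) alignment coefficients
a1 = a3 = math.sqrt(4 + 2*math.sqrt(2))   # approx. 2.6131
a2 = 2 + math.sqrt(2)                     # approx. 3.4142

# With these values the denominator cross terms cancel:
print(a3**2 - 2*a2)          # coefficient of Omega^6 -> 0
print(a2**2 + 2 - 2*a1*a3)   # coefficient of Omega^4 -> 0
print(a1**2 - 2*a2)          # coefficient of Omega^2 -> 0
```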
## Quasi-Butterworth Alignment
The quasi-Butterworth alignments do not have as well-defined an
algorithm as the Butterworth alignment. The name
"quasi-Butterworth" comes from the fact that the transfer functions for
these responses appear similar to the Butterworth ones, with (in
general) the addition of terms in the denominator. This will be
illustrated below. While there are many types of quasi-Butterworth
alignments, the simplest and most popular is the 3rd order alignment
(QB3). The comparison of the QB3 magnitude-squared response against the
4th order Butterworth is shown below.
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\left|H_{QB3}(\omega)\right|^2 = \frac{(\omega/\omega_{3dB})^8}{(\omega/\omega_{3dB})^8 + B^2(\omega/\omega_{3dB})^2 + 1}$
```{=html}
</td>
```
```{=html}
<td>
```
$\left|H_{B4}(\omega)\right|^2 = \frac{(\omega/\omega_{3dB})^8}{(\omega/\omega_{3dB})^8 + 1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Notice that the case $B = 0$ is the Butterworth alignment. The reason
that this QB alignment is called 3rd order is due to the fact that as
*B* increases, the slope approaches 3 dec/dec instead of 4 dec/dec, as
in 4th order Butterworth. This phenomenon can be seen in Figure 5.
![](QB3_gradient.GIF "QB3_gradient.GIF")
Equating the system response $|H(s)|^2$ with $|H_{QB3}(s)|^2$, the
equations guiding the design can be found \[1\]:
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$B^2 = a^2_1 - 2a_2$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2^2 + 2 = 2a_1a_3$
```{=html}
</td>
```
```{=html}
<td>
```
$a_3 = \sqrt{2a_2}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2 > 2 + \sqrt{2}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
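Since the design equations above leave one degree of freedom, a small sketch can solve them for the remaining coefficients once a value of $a_2 > 2+\sqrt{2}$ is chosen (the value of $a_2$ used below is arbitrary, for illustration only):

```python
import math

def qb3_coefficients(a2):
    """Solve the QB3 design equations for a chosen a2 > 2 + sqrt(2)."""
    assert a2 > 2 + math.sqrt(2), "a2 must exceed the Butterworth value"
    a3 = math.sqrt(2 * a2)                 # from a3 = sqrt(2 a2)
    a1 = (a2**2 + 2) / (2 * a3)            # from a2^2 + 2 = 2 a1 a3
    B  = math.sqrt(a1**2 - 2 * a2)         # from B^2 = a1^2 - 2 a2
    return a1, a3, B

a1, a3, B = qb3_coefficients(a2=4.0)
print(a1, a3, B)
```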
## Chebyshev Alignment
The Chebyshev algorithm is an alternative to the Butterworth algorithm.
For the Chebyshev response, the maximally-flat passband restriction is
abandoned. Now, a *ripple*, or fluctuation, is allowed in the pass band.
This allows a steeper transition or roll-off to occur. In this type of
application, the low-frequency response of the loudspeaker can be
extended beyond what can be achieved by Butterworth-type filters. An
example plot of a Chebyshev high-pass response with 0.5 dB of ripple
against a Butterworth high-pass response for the same $\omega_{3dB}$ is
shown below.
![](Butt_vs_Cheb_HP.gif "Butt_vs_Cheb_HP.gif")
The Chebyshev response is defined by \[4\]:
$|\hat{H}(j\Omega)|^2 = \frac{1}{1 + \epsilon^2C_n^2(\Omega)}$
$C_n(\Omega)$ is called the *Chebyshev polynomial* and is defined by
\[4\]:
```{=html}
<table align=center>
```
```{=html}
<tr>
```
```{=html}
<td valign=center rowspan=2>
```
$C_n(\Omega) = \big\lbrace$
```{=html}
</td>
```
```{=html}
<td>
```
$\cos[n \cos^{-1}(\Omega)]$
```{=html}
</td>
```
```{=html}
<td>
```
$|\Omega| < 1$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\cosh[n \cosh^{-1}(\Omega)]$
```{=html}
</td>
```
```{=html}
<td>
```
$|\Omega| > 1$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Fortunately, Chebyshev polynomials satisfy a simple recursion formula
\[4\]:
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<td>
```
$C_0(x) = 1$
```{=html}
</td>
```
```{=html}
<td>
```
$C_1(x) = x$
```{=html}
</td>
```
```{=html}
<td>
```
$C_n(x) = 2xC_{n-1} - C_{n-2}$
```{=html}
</td>
```
```{=html}
</table>
```
For more information on Chebyshev polynomials, see the Wolfram
Mathworld: Chebyshev
Polynomials
page.
When applying the high-pass transformation to the 4th order form of
$|\hat{H}(j\Omega)|^2$, the desired response has the form \[1\]:
$|H(j\Omega)|^2 = \frac{1+\epsilon^2}{1 + \epsilon^2C_4^2(1/\Omega)}$
The parameter $\epsilon$ determines the ripple. In particular, the
magnitude of the ripple is $10\rm{log}[1+\epsilon^2]$ dB and can be
chosen by the designer, similar to *B* in the quasi-Butterworth case.
Using the recursion formula for $C_n(x)$,
$C_4\left(\frac{1}{\Omega}\right) = \frac{8}{\Omega^4} - \frac{8}{\Omega^2} + 1$
Applying this equation to $|H(j\Omega)|^2$ \[1\],
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td colspan=2>
```
$\Rightarrow |H(\Omega)|^2 = \frac{\frac{1 + \epsilon^2}{64\epsilon^2}\Omega^8}{\frac{1 + \epsilon^2}{64\epsilon^2}\Omega^8 - \frac{1}{4}\Omega^6 + \frac{5}{4}\Omega^4 - 2\Omega^2 + 1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\Omega = \frac{\omega}{\omega_n}$
```{=html}
</td>
```
```{=html}
<td>
```
$\omega_n = \frac{\omega_{3dB}}{2}\sqrt{2 + \sqrt{2 + 2\sqrt{2+\frac{1}{\epsilon^2}}}}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Thus, the design equations become \[1\]:
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\omega_0 = \omega_n\sqrt[8]{\frac{64\epsilon^2}{1+\epsilon^2}}$
```{=html}
</td>
```
```{=html}
<td>
```
$k = \tanh\left[\frac{1}{4}\sinh^{-1}\left(\frac{1}{\epsilon}\right)\right]$
```{=html}
<td>
```
$D = \frac{k^4 + 6k^2 + 1}{8}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$a_1 = \frac{k\sqrt{4 + 2\sqrt{2}}}{\sqrt[4]{D}},$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2 = \frac{1 + k^2(1+\sqrt{2})}{\sqrt{D}}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_3 = \frac{a_1}{\sqrt{D}}\left[1 - \frac{1 - k^2}{2\sqrt{2}}\right]$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
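These equations are straightforward to evaluate numerically. The sketch below (using an arbitrary ripple parameter and -3 dB target frequency, not values taken from the reference) computes the Chebyshev alignment coefficients:

```python
import math

def chebyshev_alignment(eps, f3dB):
    """Evaluate the Chebyshev (C4) alignment equations for ripple parameter eps
    and a -3 dB frequency f3dB [Hz]."""
    w3dB = 2 * math.pi * f3dB
    wn = (w3dB / 2) * math.sqrt(2 + math.sqrt(2 + 2*math.sqrt(2 + 1/eps**2)))
    w0 = wn * (64 * eps**2 / (1 + eps**2)) ** (1/8)
    k  = math.tanh(0.25 * math.asinh(1/eps))
    D  = (k**4 + 6*k**2 + 1) / 8
    a1 = k * math.sqrt(4 + 2*math.sqrt(2)) / D**0.25
    a2 = (1 + k**2 * (1 + math.sqrt(2))) / math.sqrt(D)
    a3 = (a1 / math.sqrt(D)) * (1 - (1 - k**2)/(2*math.sqrt(2)))
    ripple_dB = 10 * math.log10(1 + eps**2)
    return w0/(2*math.pi), a1, a2, a3, ripple_dB

print(chebyshev_alignment(eps=0.349, f3dB=40.0))   # eps ~ 0.349 gives about 0.5 dB of ripple
```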
## Choosing the Correct Alignment
With all the equations that have already been presented, the question
naturally arises, "Which one should I choose?" Notice that the
coefficients $a_1$, $a_2$, and $a_3$ are not simply related to the
parameters of the system response. Certain combinations of parameters
may indeed invalidate one or more of the alignments because they cannot
realize the necessary coefficients. With this in mind, general
guidelines have been developed to guide the selection of the appropriate
alignment. This is very useful if one is designing an enclosure to suit
a particular transducer that cannot be changed.
The general guideline for the Butterworth alignment focuses on $Q_L$ and
$Q_{TS}$. Since the three coefficients $a_1$, $a_2$, and $a_3$ are a
function of $Q_L$, $Q_{TS}$, *h*, and $\alpha$, fixing one of these
parameters yields three equations that uniquely determine the other
three. In the case where a particular transducer is already given,
$Q_{TS}$ is essentially fixed. If the desired parameters of the
enclosure are already known, then $Q_L$ is a better starting point.
In the case that the rigid requirements of the Butterworth alignment
cannot be satisfied, the quasi-Butterworth alignment is often applied
when $Q_{TS}$ is not large enough. The addition of another parameter,
*B*, allows more flexibility in the design.
For $Q_{TS}$ values that are too large for the Butterworth alignment,
the Chebyshev alignment is typically chosen. However, the steep
transition of the Chebyshev alignment may also be utilized to attempt to
extend the bass response of the loudspeaker in the case where the
transducer properties can be changed.
In addition to these three popular alignments, research continues in the
area of developing new algorithms that can manipulate the low-frequency
response of the bass-reflex enclosure. For example, a 5th order
quasi-Butterworth alignment has been developed \[6\]. Another example
\[7\] applies root-locus techniques to achieve results. In the modern
age of high-powered computing, other researchers have focused their
efforts in creating computerized optimization algorithms that can be
modified to achieve a flatter response with sharp roll-off or introduce
quasi-ripples which provide a boost in sub-bass frequencies \[8\].
## References
\[1\] Leach, W. Marshall, Jr. *Introduction to Electroacoustics and
Audio Amplifier Design*. 2nd ed. Kendall/Hunt, Dubuque, IA. 2001.
\[2\] Beranek, L. L. *Acoustics*. 2nd ed. Acoustical Society of America,
Woodbridge, NY. 1993.
\[3\] DeCarlo, Raymond A. "The Butterworth Approximation." Notes from
ECE 445. Purdue University. 2004.
\[4\] DeCarlo, Raymond A. "The Chebyshev Approximation." Notes from ECE
445. Purdue University. 2004.
\[5\] VanValkenburg, M. E. *Analog Filter Design*. Holt, Rinehart and
Winston, Inc. Chicago, IL. 1982.
\[6\] Kreutz, Joseph and Panzer, Joerg. \"Derivation of the
Quasi-Butterworth 5 Alignments.\" *Journal of the Audio Engineering
Society*. Vol. 42, No. 5, May 1994.
\[7\] Rutt, Thomas E. \"Root-Locus Technique for Vented-Box Loudspeaker
Design.\" *Journal of the Audio Engineering Society*. Vol. 33, No. 9,
September 1985.
\[8\] Simeonov, Lubomir B. and Shopova-Simeonova, Elena.
\"Passive-Radiator Loudspeaker System Design Software Including
Optimization Algorithm.\" *Journal of the Audio Engineering Society*.
Vol. 47, No. 4, April 1999.
## Appendix A: Equivalent Circuit Parameters
```{=html}
<table align=center border=2>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Name
```{=html}
</th>
```
```{=html}
<th>
```
Electrical Equivalent
```{=html}
</th>
```
```{=html}
<th>
```
Mechanical Equivalent
```{=html}
</th>
```
```{=html}
<th>
```
Acoustical Equivalent
```{=html}
</th>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Voice-Coil Resistance
```{=html}
</th>
```
```{=html}
<td>
```
$R_E$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{ME} = \frac{(Bl)^2}{R_E}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{AE} = \frac{(Bl)^2}{R_ES^2_D}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Driver (Speaker) Mass
```{=html}
</th>
```
```{=html}
<td>
```
See $C_{MEC}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{MD}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AD} = \frac{M_{MD}}{S^2_D}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Driver (Speaker) Suspension Compliance
```{=html}
</th>
```
```{=html}
<td>
```
$L_{CES} = (Bl)^2C_{MS}$
```{=html}
</td>
```
```{=html}
<td>
```
$C_{MS}$
```{=html}
</td>
```
```{=html}
<td>
```
$C_{AS} = S^2_DC_{MS}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Driver (Speaker) Suspension Resistance
```{=html}
</th>
```
```{=html}
<td>
```
$R_{ES} = \frac{(Bl)^2}{R_{MS}}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{MS}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{AS} = \frac{R_{MS}}{S^2_D}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Enclosure Compliance
```{=html}
</th>
```
```{=html}
<td>
```
$L_{CEB} = \frac{(Bl)^2C_{AB}}{S^2_D}$
```{=html}
</td>
```
```{=html}
<td>
```
$C_{MB} = \frac{C_{AB}}{S^2_D}$
```{=html}
</td>
```
```{=html}
<td>
```
$C_{AB}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Enclosure Air-Leak Losses
```{=html}
</th>
```
```{=html}
<td>
```
$R_{EL} = \frac{(Bl)^2}{S^2_DR_{AL}}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{ML} = S^2_DR_{AL}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{AL}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Acoustic Mass of Port
```{=html}
</th>
```
```{=html}
<td>
```
$C_{MEP} = \frac{S^2_DM_{AP}}{(Bl)^2}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{MP} = S^2_DM_{AP}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AP}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Enclosure Mass Load
```{=html}
</th>
```
```{=html}
<td>
```
See $C_{MEC}$
```{=html}
</td>
```
```{=html}
<td>
```
See $M_{MC}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AB}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Low-Frequency Radiation Mass Load
```{=html}
</th>
```
```{=html}
<td>
```
See $C_{MEC}$
```{=html}
</td>
```
```{=html}
<td>
```
See $M_{MC}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{A1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Combination Mass Load
```{=html}
</th>
```
```{=html}
<td>
```
$C_{MEC} = \frac{S^2_DM_{AC}}{(Bl)^2}$\
$= \frac{S^2_D(M_{AB} + M_{A1}) + M_{MD}}{(Bl)^2}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{MC} = S^2_D(M_{AB} + M_{A1}) + M_{MD}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AC} = M_{AD} + M_{AB} + M_{A1}$\
$= \frac{M_{MD}}{S^2_D} + M_{AB} + M_{A1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
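For convenience, the conversions in the table can be collected into a short script. The driver values below are hypothetical and serve only to illustrate the mechanical-to-acoustical conversions:

```python
# Convert mechanical (Thiele-Small) driver parameters to their acoustical
# equivalents, following the table above. Values are hypothetical.
Bl   = 15.0      # force factor [T*m]
R_E  = 6.5       # voice-coil resistance [ohm]
M_MD = 0.060     # driver moving mass [kg]
C_MS = 4.0e-4    # suspension compliance [m/N]
R_MS = 3.0       # suspension resistance [N*s/m]
S_D  = 0.05      # effective piston area [m^2]

R_AE = Bl**2 / (R_E * S_D**2)   # acoustical equivalent of voice-coil resistance
M_AD = M_MD / S_D**2            # acoustical driver mass
C_AS = S_D**2 * C_MS            # acoustical suspension compliance
R_AS = R_MS / S_D**2            # acoustical suspension resistance

print(R_AE, M_AD, C_AS, R_AS)
```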
## Appendix B: Enclosure Parameter Formulas
![](Vented_enclosure.gif "Vented_enclosure.gif")
Based on these dimensions \[1\],
```{=html}
<table align=center cellpadding=5>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$C_{AB} = \frac{V_{AB}}{\rho_0c^2_0}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AB} = \frac{B\rho_{eff}}{\pi a}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$B = \frac{d}{3}\left(\frac{S_D}{S_B}\right)^2\sqrt{\frac{\pi}{S_D}} + \frac{8}{3\pi}\left[1 - \frac{S_D}{S_B}\right]$
```{=html}
</td>
```
```{=html}
<td>
```
$\rho_0 \leq \rho_{eff} \leq \rho_0\left(1 - \frac{V_{fill}}{V_B}\right) + \rho_{fill}\frac{V_{fill}}{V_B}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td colspan=2>
```
$V_{AB} = V_B\left[1-\frac{V_{fill}}{V_B}\right]\left[1 + \frac{\gamma - 1}{1 + \gamma\left(\frac{V_B}{V_{fill}} - 1\right)\frac{\rho_0c_{air}}{\rho_{fill}c_{fill}}}\right]$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$V_B = hwd$ (inside enclosure volume)
```{=html}
</td>
```
```{=html}
<td>
```
$S_B = wh$ (inside area of the side the speaker is mounted on)
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$c_{air} =$specific heat of air at constant volume
```{=html}
</td>
```
```{=html}
<td>
```
$c_{fill} =$specific heat of filling at constant volume ($V_{filling}$)
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$\rho_0 =$mean density of air (about 1.3 kg/$\rm m^3$)
```{=html}
<td>
```
$\rho_{fill} =$ density of filling
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$\gamma =$ ratio of specific heats for air (1.4)
```{=html}
</td>
```
```{=html}
<td>
```
$c_0 =$ speed of sound in air (about 344 m/s)
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td colspan=2>
```
$\rho_{eff}$ = effective density of enclosure. If little or no filling
(acceptable assumption in a bass-reflex system but not for sealed
enclosures), $\rho_{eff} \approx \rho_0$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
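As a sketch (the box dimensions and driver area below are hypothetical), the enclosure compliance and acoustic mass for an unfilled box follow directly from these formulas:

```python
import math

# Enclosure parameters for an unfilled box (V_fill = 0, so V_AB = V_B and
# rho_eff ~ rho_0), per the formulas above. Dimensions are illustrative.
rho0, c0 = 1.3, 344.0         # air density [kg/m^3], speed of sound [m/s]
h, w, d  = 0.50, 0.40, 0.35   # inside box dimensions [m]
S_D      = 0.05               # effective driver area [m^2]
a        = math.sqrt(S_D / math.pi)   # effective driver radius [m]

V_B = h * w * d               # inside enclosure volume [m^3]
S_B = w * h                   # inside area of the side the driver is mounted on [m^2]

C_AB = V_B / (rho0 * c0**2)
B    = (d/3) * (S_D/S_B)**2 * math.sqrt(math.pi/S_D) + (8/(3*math.pi)) * (1 - S_D/S_B)
M_AB = B * rho0 / (math.pi * a)

print(C_AB, M_AB)
```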
# Acoustics/Polymer-Film Acoustic Filters
```{=html}
<center>
```
![](Acoustics_polymer_film_acoustic_filters.JPG "Acoustics_polymer_film_acoustic_filters.JPG")
```{=html}
</center>
```
## Introduction
Acoustic filters are used in many devices such as mufflers, noise
control materials (absorptive and reactive), and loudspeaker systems to
name a few. Although the waves in simple (single-medium) acoustic
filters usually travel in gases such as air and carbon monoxide (in the
case of automobile mufflers) or in materials such as fiberglass,
polyvinylidene fluoride (PVDF) film, or polyethylene (Saran Wrap), there
are also filters that couple two or three distinct media together to
achieve a desired acoustic response. General information about basic
acoustic filter design can be perused at the following wikibook page
Acoustic Filter Design &
Implementation.
The focus of this article will be on acoustic filters that use
multilayer air/polymer film-coupled media as the acoustic medium through
which sound waves propagate, concluding with an example of how
these filters can be used to detect and extract audio frequency
information from high-frequency \"carrier\" waves that carry an audio
signal. However, before getting into this specific type of acoustic
filter, we need to briefly discuss how sound waves interact with the
medium (or media) in which they travel and how these factors can play a role
when designing acoustic filters.
## Changes in Media Properties Due to Sound Wave Characteristics
As with any system being designed, the filter response characteristics
of an acoustic filter are tailored based on the frequency spectrum of
the input signal and the desired output. The input signal may be
infrasonic (frequencies below human hearing), sonic (frequencies within
human hearing range), or ultrasonic (frequencies above human hearing
range). In addition to the frequency content of the input signal, the
density, and, thus, the characteristic impedance of the medium (media)
being used in the acoustic filter must also be taken into account. In
general, the characteristic impedance $Z_0 \,$ for a particular medium
is expressed as\...
```{=html}
<center>
```
` `$Z_0 = \pm \rho_0 c \,$` `$(Pa \cdot s/m)$` `
```{=html}
</center>
```
where
```{=html}
<center>
```
` `$\rho_0 \,$` = (equilibrium) density of medium `$(kg/m^3)\,$\
` `$c \,$` = speed of sound in medium `$(m/s) \,$` `\
` `
```{=html}
</center>
```
The characteristic impedance is important because this value
simultaneously gives an idea of how fast or slow particles will travel
as well as how much mass is \"weighting down\" the particles in the
medium (per unit area or volume) when they are excited by a sound
source. The speed at which sound travels in the medium needs to be taken
into consideration because this factor can ultimately affect the time
response of the filter (i.e. the output of the filter may not radiate or
attenuate sound fast or slow enough if not designed properly). The
intensity $I_A \,$ of a sound wave is expressed as\...
```{=html}
<center>
```
` `$I_A = \frac{1}{T}\int_{0}^{T} pu\quad dt = \pm \frac{P^2}{2\rho_0c} \,$` `$(W/m^2) \,$`. `
```{=html}
</center>
```
$I_A \,$ is interpreted as the (time-averaged) rate of energy
transmission of a sound wave through a unit area normal to the direction
of propagation, and this parameter is also an important factor in
acoustic filter design because the characteristic properties of the
given medium can change relative to intensity of the sound wave
traveling through it. In other words, the reaction of the particles
(atoms or molecules) that make up the medium will respond differently
when the intensity of the sound wave is very high or very small relative
to the size of the control area (i.e. dimensions of the filter, in this
case). Other properties such as the elasticity and mean propagation
velocity (of a sound wave) can change in the acoustic medium as well,
but focusing on frequency, impedance, and/or intensity in the design
process usually takes care of these other parameters because most of
them will inevitably be dependent on the aforementioned properties of
the medium.
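For a concrete sense of scale, the characteristic impedance of air and the intensity of a plane wave of a given pressure amplitude can be computed as follows (the pressure amplitude is an arbitrary illustrative value):

```python
# Characteristic impedance and time-averaged intensity of a plane wave in air.
rho0 = 1.21      # equilibrium density of air [kg/m^3]
c    = 343.0     # speed of sound in air [m/s]
P    = 2.0       # pressure amplitude of the wave [Pa] (hypothetical)

Z0  = rho0 * c                 # characteristic impedance [Pa*s/m], roughly 415
I_A = P**2 / (2 * rho0 * c)    # time-averaged intensity [W/m^2]

print(Z0, I_A)
```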
## Why Coupled Acoustic Media in Acoustic Filters?
Media coupling is employed in acoustic
transducers to either increase or decrease the impedance of the
transducer, and thus control the intensity and speed of the signal
acting on the transducer while converting the incident wave, or initial
excitation sound wave, from one form of energy to another (e.g.
converting acoustic energy to electrical energy). Specifically, the
impedance of the transducer is augmented by inserting a solid structure
(not necessarily rigid) between the transducer and the initial
propagation medium (e.g. air). The reflective properties of the inserted
medium is exploited to either increase or decrease the intensity and
propagation speed of the incident sound wave. It is the ability to
alter, and to some extent, control, the impedance of a propagation
medium by (periodically) inserting (a) solid structure(s) such as thin,
flexible films in the original medium (air) and its ability to
concomitantly alter the frequency response of the original medium that
makes use of multilayer media in acoustic filters attractive. The
reflection factor and transmission factor $\hat{R} \,$ and $\hat{T} \,$,
respectively, between two media, expressed as\...
```{=html}
<center>
```
$\hat{R} = \frac{\text{pressure of reflected portion of incident wave}}{\text{pressure of incident wave}} = \frac{\rho c - Z_\mathrm{in}}{\rho c + Z_\mathrm{in}}$
```{=html}
</center>
```
and
```{=html}
<center>
```
$\hat{T} = \frac{\text{pressure of transmitted portion of incident wave}}{\text{pressure of incident wave}} = 1 + \hat{R}$,
```{=html}
</center>
```
are the tangible values that tell how much of the incident wave is being
reflected from and transmitted through the junction where the media
meet. Note that $Z_{in} \,$ is the (total) input impedance seen by the
incident sound wave upon just entering an air-solid acoustic media
layer. In the case of multiple air-columns as shown in Fig. 2,
$Z_\mathrm{in} \,$ is the aggregate impedance of each air-column layer
seen by the incident wave at the input. Below in Fig. 1, a simple
illustration explains what happens when an incident sound wave
propagating in medium (1) comes in contact with medium (2) at the
junction of both media (x=0), where the sound waves are represented
by vectors.
As mentioned above, an example of three such successive air-solid
acoustic media layers is shown in Fig. 2 and the electroacoustic
equivalent circuit for Fig. 2 is shown in Fig. 3 where
$L = \rho_s h_s \,$ = (density of solid material)(thickness of solid
material) = unit-area (or volume) mass, $Z = \rho c = \,$ characteristic
acoustic impedance of medium, and $\beta = k = \omega/c = \,$
wavenumber. Note that in the case of a multilayer, coupled acoustic
medium in an acoustic filter, the impedance of each air-solid section is
calculated by using the following general purpose impedance ratio
equation (also referred to as transfer matrices)\...
```{=html}
<center>
```
$\frac{Z_a}{Z_0} = \frac{\left( \frac{Z_b}{Z_0} \right) + j\ \tan(kd)}{1 + j\ \left( \frac{Z_b}{Z_0} \right) \tan(kd)} \,$
```{=html}
</center>
```
where $Z_b \,$ is the (known) impedance at the edge of the solid of an
air-solid layer (on the right) and $Z_a \,$ is the (unknown) impedance
at the edge of the air column of an air-solid layer.
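The impedance-ratio relation can be applied layer by layer, starting from a known termination and working back toward the source; the reflection and transmission factors then follow from the resulting input impedance. The sketch below uses arbitrary illustrative values (a single 5 mm air gap backed by a nearly rigid termination, evaluated at 40 kHz), not the dimensions from the cited paper:

```python
import cmath, math

def input_impedance(Zb, Z0, k, d):
    """Impedance Z_a seen at the air side of one air column of length d,
    terminated by Z_b, per the impedance-ratio (transfer-matrix) relation above."""
    t = cmath.tan(k * d)
    return Z0 * (Zb/Z0 + 1j*t) / (1 + 1j*(Zb/Z0)*t)

def reflection_transmission(rho_c, Zin):
    """Pressure reflection and transmission factors at the junction, as defined above."""
    R = (rho_c - Zin) / (rho_c + Zin)
    return R, 1 + R

rho0, c = 1.21, 343.0
Z0 = rho0 * c
f  = 40e3                          # ultrasonic carrier frequency [Hz]
k  = 2 * math.pi * f / c           # wavenumber in air [rad/m]

Za = input_impedance(Zb=100*Z0, Z0=Z0, k=k, d=5e-3)   # hypothetical single layer
R, T = reflection_transmission(rho0*c, Za)
print(abs(R), abs(T))
```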
## Effects of High-Intensity, Ultrasonic Waves in Acoustic Media in Audio Frequency Spectrum
When an ultrasonic wave is used as a carrier to transmit audio
frequencies, three audio effects are associated with extracting the
audio frequency information from the carrier wave: (a) beating effects,
(b) parametric array effects, and (c) radiation pressure.
Beating occurs when two ultrasonic waves with distinct frequencies
$f_1 \,$ and $f_2 \,$ propagate in the same direction, resulting in
amplitude variations which consequently make the audio signal
information go in and out of phase, or "beat", at a frequency of
$f_1 - f_2 \,$.
Parametric array 1
effects occur when the intensity of an ultrasonic wave is so high in a
particular medium that the high displacements of particles (atoms) per
wave cycle change the properties of that medium, influencing
parameters like elasticity, density, propagation velocity, etc. in a
non-linear fashion. The result of parametric array effects on
modulated, high-intensity, ultrasonic waves in a particular medium (or
coupled media) is the generation and propagation of audio frequency
waves (not necessarily present in the original audio information) that
are generated in a manner similar to the nonlinear process of amplitude
demodulation commonly inherent in diode circuits (when diodes are
forward biased).
Another audio effect that arises from high-intensity ultrasonic beams of
sound is a static (DC) pressure called radiation pressure. Radiation
pressure is similar to parametric array effects in that amplitude
variations in the signal give rise to audible frequencies via amplitude
demodulation. However, unlike parametric array effects, radiation
pressure fluctuations that generate audible signals from amplitude
demodulation can occur due to any low-frequency modulation and not just
from pressure fluctuations occurring at the modulation frequency
$\omega_M \,$ or beating frequency $f_1 - f_2 \,$.
## An Application of Coupled Media in Acoustic Filters
Figs. 1 - 3 are all from a research paper entitled New Type of
Acoustic Filter Using Periodic Polymer Layers for Measuring Audio
Signal Components Excited by Amplitude-Modulated High-Intensity
Ultrasonic
Waves
submitted to the Audio Engineering Society (AES) by Minoru Todo, Primary
Innovator at Measurement Specialties, Inc., in the October 2005 edition
of the AES Journal. Figs. 4 and 5 below, also from this paper, are
illustrations of test setups referred to in this paper. Specifically,
Fig. 4 is a test setup used to measure the transmission (of an incident
ultrasonic sound wave) through the acoustic filter described by Figs. 1
and 2. Fig. 5 is a block diagram of the test setup used for measuring
radiation pressure, one of the audio effects mentioned in the previous
section. It turns out that out of all of the audio effects mentioned in
the previous section that are caused by high-intensity ultrasonic waves
propagating in a medium, sound waves produced from radiated pressure are
the hardest to detect when microphones and preamplifiers are used in the
detection/receiver system. Although nonlinear noise artifacts occur due
to overloading of the preamplifier present in the detection/receiver
system, the bulk of the nonlinear noise comes from the inherent
nonlinear noise properties of microphones. This is true because all
microphones, even specialized measurement microphones designed for audio
spectrum measurements that have sensitivity well beyond the threshold of
hearing, have nonlinearities artifacts that (periodically) increase in
magnitude with respect to increase at ultrasonic frequencies. These
nonlinearities essentially mask the radiation pressure generated because
the magnitude of these nonlinearities are orders of magnitude greater
than the radiation pressure. The acoustic (low-pass) filter referred to
in this paper was designed in order to filter out the \"detrimental\"
ultrasonic wave that was inducing high nonlinear noise artifacts in the
measurement microphones. The high-intensity, ultrasonic wave was
producing radiation pressure (which is audible) within the initial
acoustic medium (i.e. air). By filtering out the ultrasonic wave, the
measurement microphone would only detect the audible radiation pressure
that the ultrasonic wave was producing in air. Acoustic filters like
these could possibly be used to detect/receive any high-intensity,
ultrasonic signal that may carry audio information which needs to be
extracted with an acceptable level of fidelity.
## References
\[1\] Minoru Todo, \"New Type of Acoustic Filter Using Periodic Polymer
Layers for Measuring Audio Signal Components Excited by
Amplitude-Modulated High-Intensity Ultrasonic Waves,\" Journal of Audio
Engineering Society, Vol. 53, pp. 930--41 (2005 October)
\[2\] Fundamentals of Acoustics; Kinsler et al, John Wiley & Sons, 2000
\[3\] ME 513 Course Notes, Dr. Luc Mongeau, Purdue University
\[4\]
<http://www.ieee-uffc.org/archive/uffc/trans/Toc/abs/02/t0270972.htm>
Created by Valdez L. Gant
# Acoustics/Noise in Hydraulic Systems
```{=html}
<center>
```
![](Acoustics_noise_in_hydraulic_systems.JPG "Acoustics_noise_in_hydraulic_systems.JPG")
```{=html}
</center>
```
## Noise in Hydraulic Systems
Hydraulic systems are among the most preferred means of power transmission in
industrial and mobile equipment due to their power density,
compactness, flexibility, fast response and efficiency. The field of
hydraulics and pneumatics is also known as \'Fluid Power Technology\'.
Fluid power systems have a wide range of applications, which include
industrial machinery, off-road vehicles, automotive systems, and aircraft. But
one of the main problems with hydraulic systems is the noise they
generate. The health and safety issues relating to noise have
been recognized for many years and legislation is now placing clear
demands on manufacturers to reduce noise levels \[1\]. Hence, noise
reduction in hydraulic systems demands a lot of attention from
industrial as well as academic researchers. Reducing the noise requires a good
understanding of how it is generated and propagated in a
hydraulic system.
## Sound in fluids
The speed of sound in fluids can be determined using the following
relation.
$c = \sqrt {\frac{K}{\rho}}$ where K - fluid bulk modulus, $\rho$- fluid
density, c - velocity of sound
Typical values of the bulk modulus range from **2×10^9^ to 2.5×10^9^ N/m^2^**. For a
particular oil with a density of **889 kg/m^3^**,
the speed of sound is $c = \sqrt {\frac{2\times10^9}{889}}= 1499.9\ \mathrm{m/s}$
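A one-line check of this relation (using the bulk modulus and density quoted above):

```python
import math

# Speed of sound in a hydraulic fluid from its bulk modulus and density.
K   = 2.0e9    # bulk modulus [N/m^2]
rho = 889.0    # fluid density [kg/m^3]

c = math.sqrt(K / rho)
print(f"c = {c:.1f} m/s")   # approx. 1499.9 m/s
```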
## Source of Noise
The main source of noise in hydraulic systems is the pump which supplies
the flow. Most of the pumps used are positive displacement pumps. Of the
positive displacement pumps, axial piston swash plate type is mostly
preferred due to their reliability and efficiency.
The noise generation in an axial piston pump can be classified under two
categories (i) fluidborne noise and (ii) Structureborne noise
## Fluidborne Noise (FBN)
Among the positive displacement pumps, highest levels of FBN are
generated by axial piston pumps and lowest levels by screw pumps and in
between these lie the external gear pump and vane pump \[1\]. The
discussion in this page is mainly focused on **axial piston swash plate
type pumps**. An axial piston pump has a fixed number of displacement
chambers arranged in a circular pattern separated from each other by an
angular pitch equal to $\phi = \frac {360}{n}$ where n is the number of
displacement chambers. As each chamber discharges a specific volume of
fluid, the discharge at the pump outlet is sum of all the discharge from
the individual chambers. The discontinuity in flow between adjacent
chambers results in a kinematic flow ripple. The amplitude of the
kinematic ripple can be theoretically determined given the size of the
pump and the number of displacement chambers. The kinematic ripple is
the main cause of the fluidborne noise, but it is only a
theoretical value. The actual **flow ripple** at the pump outlet is much
larger than the theoretical value because the **kinematic ripple** is
combined with a **compressibility component** which is due to the fluid
compressibility. These ripples (also referred as flow pulsations)
generated at the pump are transmitted through the pipe or flexible hose
connected to the pump and travel to all parts of the hydraulic circuit.
The pump is considered an ideal flow source. The pressure in the system
will be decided by resistance to the flow or otherwise known as system
load. The flow pulsations result in pressure pulsations. The pressure
pulsations are superimposed on the mean system pressure. Both the **flow
and pressure pulsations** easily travel to all part of the circuit and
affect the performance of the components like control valve and
actuators in the system and make the component vibrate, sometimes even
resonate. This vibration of system components adds to the noise
generated by the flow pulsations. The transmission of FBN in the circuit
is discussed under transmission below.
A typical axial piston pump with 9 pistons running at 1000 rpm can
produce a sound pressure level of more than 70 dB.
## Structureborne Noise (SBN)
In swash plate type pumps, the main sources of the structureborne noise
are the fluctuating forces and moments of the swash plate. These
fluctuating forces arise as a result of the varying pressure inside the
displacement chamber. As the displacing elements move from suction
stroke to discharge stroke, the pressure varies accordingly from few
bars to few hundred bars. These pressure changes are reflected on the
displacement elements (in this case, pistons) as forces and these force
are exerted on the swash plate causing the swash plate to vibrate. This
vibration of the swash plate is the main cause of **structureborne
noise**. There are other components in the system which also vibrate and
lead to structureborne noise, but the swash plate is the major contributor.
```{=html}
<center>
```
![](Pump_noise.png "Pump_noise.png")
```{=html}
</center>
```
```{=html}
<center>
```
**Fig. 1 shows an exploded view of axial piston pump. Also the flow
pulsations and the oscillating forces on the swash plate, which cause
FBN and SBN respectively are shown for one revolution of the pump.**
```{=html}
</center>
```
## Transmission
### FBN
The transmission of FBN is a complex phenomenon. Over the past few
decades, a considerable amount of research has gone into mathematical
modeling of pressure and flow transients in the circuit. This involves
the solution of wave equations, with piping treated as a distributed
parameter system known as a transmission line \[1\] & \[3\].
Let us consider a simple pump-pipe-loading valve circuit as shown in Fig.
2. The pressure and flow ripple at any location in the pipe can be
described by the relations:
```{=html}
<center>
```
$P = Ae^{-i k x} + Be^{+i k x}$ \...\...\...(1)
```{=html}
</center>
```
```{=html}
<center>
```
$Q = \frac {1}{Z_{0}}(Ae^{-i k x} - Be^{+i k x})$\.....(2)
```{=html}
</center>
```
where $A$ and $B$ are frequency-dependent complex
coefficients which are directly proportional to the pump (source) flow
ripple, but are also functions of the source impedance $Z_{s}$, the
characteristic impedance of the pipe $Z_{0}$ and the
termination impedance $Z_{t}$. These impedances, which usually vary
as the system operating pressure and flow rate change, can be
determined experimentally.
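Equations (1) and (2) are easy to evaluate once the coefficients and impedances are known. The sketch below uses hypothetical values for $A$, $B$ and $Z_{0}$ purely to illustrate the calculation; in practice these follow from the measured source, pipe and termination impedances:

```python
import numpy as np

def pipe_ripple(A, B, Z0, k, x):
    """Pressure and flow ripple at position x along the pipe (Eqs. 1 and 2)."""
    P = A * np.exp(-1j * k * x) + B * np.exp(+1j * k * x)
    Q = (A * np.exp(-1j * k * x) - B * np.exp(+1j * k * x)) / Z0
    return P, Q

c = 1500.0                      # speed of sound in the oil [m/s]
f = 150.0                       # 9 pistons x 1000 rpm / 60 = fundamental ripple frequency [Hz]
k = 2 * np.pi * f / c           # wavenumber [rad/m]

A, B = 2.0e5 + 0j, 0.5e5 + 0j   # hypothetical wave coefficients [Pa]
Z0 = 4.2e9                      # hypothetical characteristic impedance of the pipe [Pa*s/m^3]

P, Q = pipe_ripple(A, B, Z0, k, x=1.0)
print(abs(P), abs(Q))
```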
For complex systems with several system components, the pressure and
flow ripples are estimated using the transformation matrix approach. For
this, the system components can be treated as lumped impedances (a
throttle valve or accumulator), or distributed impedances (flexible hose
or silencer). Various software packages are available today to predict
the pressure pulsations.
### SBN
The transmission of SBN follows the classic source-path-noise model. The
vibrations of the swash plate, the main cause of SBN, are transferred to
the pump casing which encloses all the rotating group in the pump
including displacement chambers (also known as cylinder block), pistons,
and the swash plate. The pump case, apart from vibrating itself,
transfers the vibration down to the mount on which the pump is mounted.
The mount then passes the vibrations down to the main mounted structure
or the vehicle. Thus the SBN is transferred from the swash plate to the
main structure or vehicle via pumpcasing and mount.
Some of the machine structures, along the path of transmission, are good
at transmitting this vibrational energy and they even resonate and
reinforce it. By converting only a fraction of 1% of the pump
structureborne noise into sound, a member in the transmission path could
radiate more ABN than the pump itself \[4\].
## Airborne noise (ABN)
Both FBN and SBN impart high fatigue loads on the system components
and make them vibrate. All of these vibrations are radiated as
**airborne noise** and can be heard by a human operator. Also, the flow
and pressure pulsations can cause system components such as a control
valve to resonate. This vibration of the particular component again
radiates airborne noise.
## Noise reduction
The reduction of the noise radiated from the hydraulic system can be
approached in two ways.
\(i\) **Reduction at Source** - which is the reduction of noise at the
pump. A large amount of open literature is available on reduction
techniques, with some focusing on reducing FBN at the source and
others focusing on SBN. Reduction in FBN and SBN at the source has a
large influence on the ABN that is radiated. Even though a lot of
progress has been made in reducing the FBN and SBN separately, the
problem of noise in hydraulic systems is not fully solved and a lot remains
to be done. The reason is that FBN and SBN are interrelated, in the
sense that if one tries to reduce the FBN at the pump, it tends to
affect the SBN characteristics. Currently, one of the main research efforts in
noise reduction in pumps is a systematic approach to understanding the
coupling between FBN and SBN and targeting them simultaneously instead
of treating them as two separate sources. Such a unified approach
demands not only well trained researchers but also a sophisticated
computer-based mathematical model of the pump which can accurately
output the necessary results for optimization of the pump design. The
amplitude of fluid pulsations can be reduced, at the source, with the
use of a hydraulic attenuator \[5\].
\(ii\) **Reduction at Component level** - which focuses on the reduction
of noise from individual components like hoses, control valves, pump mounts
and fixtures. This can be accomplished by a suitable design modification
of the component so that it radiates the least amount of noise. Optimization
using computer-based models is one way to achieve this.
## Hydraulic System noise
```{=html}
<center>
```
![](Noise.png "Noise.png")
```{=html}
</center>
```
```{=html}
<center>
```
**Fig.3 Domain of hydraulic system noise generation and transmission
(Figure recreated from \[1\])**
```{=html}
</center>
```
## References
1\. *Designing Quieter Hydraulic Systems - Some Recent Developments and
Contributions*, Kevin Edge, 1999, Fluid Power: Forth JHPS International
Symposium.
2\. *Fundamentals of Acoustics* L.E. Kinsler, A.R. Frey, A.B.Coppens,
J.V. Sanders. Fourth Edition. John Wiley & Sons Inc.
3\. *Reduction of Axial Piston Pump Pressure Ripple* A.M. Harrison. PhD
thesis, University of Bath. 1997
4\. *Noise Control of Hydraulic Machinery* Stan Skaistis, 1988. MARCEL
DEKKER , INC.
5\. *Hydraulic Power System Analysis*, A. Akers, M. Gassman, & R. Smith,
Taylor & Francis, New York, 2006.
# Acoustics/Noise from Cooling Fans
```{=html}
<center>
```
![](Acoustics_noise_from_cooling_fans.JPG "Acoustics_noise_from_cooling_fans.JPG")
```{=html}
</center>
```
## Proposal
As electric/electronic devices get smaller and more functional, the noise of
the cooling device becomes important. This page will explain the origins of
noise generation from small axial cooling fans used in electronic goods
like desktop/laptop computers. The sources of fan noise include
aerodynamic noise as well as the operating sound of the fan itself. This
page focuses on the aerodynamic noise generation mechanisms.
## Introduction
If one opens a desktop computer, they may find three (or more) fans. For
example, a fan is typically found on the heat sink of the CPU, in the
back panel of the power supply unit, on the case ventilation hole, on
the graphics card, and even on the motherboard chipset if it is a recent
one. Computer noise which annoys many people is mostly due to cooling
fans, if the hard drive(s) is fairly quiet. When Intel Pentium
processors were first introduced, there was no need to have a fan on the
CPU; however, contemporary CPUs cannot function even for several seconds
without a cooling fan. As CPU densities increase, the heat transfer for
nominal operation requires increased airflow, which causes more and more
noise. The type of fan commonly used in desktop computers is the axial
fan, while laptop computers use centrifugal blowers. Several fan types are
shown here (pdf
format). Different
fan types have different characteristics of noise generation and
performance. The axial flow fan is mainly considered in this page.
## Noise Generation Mechanisms
The figure below shows a typical noise spectrum of a 120 **mm** diameter
electronic device cooling fan. One microphone was used at a point 1
**m** from the upstream side of the fan. The fan has 7 blades, 4 struts
for motor mounting and operates at 13 V. A certain amount of load is
applied. The blue plot is the background noise of the anechoic chamber, and the
green one is the measured sound spectrum when the fan is running.
```{=html}
<center>
```
![](Noisespectrum.gif "Noisespectrum.gif")
```{=html}
</center>
```
(\*BPF = Blade Passing Frequency)\
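For reference, the blade passing frequency of a fan is simply the number of blades times the shaft rotation frequency. The sketch below assumes a shaft speed of 2400 rpm, which is typical for a 120 mm fan but is not stated above, so it is an illustrative value only:

```python
# Blade passing frequency (BPF) of the example fan: 7 blades at the shaft
# rotation rate. The rotation speed is hypothetical.
n_blades = 7
rpm = 2400                      # hypothetical shaft speed

shaft_freq = rpm / 60.0         # rotating frequency [Hz]
bpf = n_blades * shaft_freq     # blade passing frequency [Hz]
print(shaft_freq, bpf)          # 40 Hz, 280 Hz (harmonics at 2xBPF, 3xBPF, ...)
```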
Each of the noise elements shown in this figure is caused by one or more of the
following generation mechanisms.
### Blade Thickness Noise - Monopole (But very weak)
Blade thickness noise is generated by the volume displacement of fluid. A
fan blade has finite thickness and volume. As the rotor rotates, each blade
displaces a volume of fluid, which produces pressure fluctuations in the
near field, and noise is generated. This noise is tonal at
the running frequency and generally very weak for cooling fans, because
their RPM is relatively low. Therefore, the thickness of the fan blades hardly
affects electronic cooling fan noise.\
(This kind of noise can become severe for high speed turbomachines like
helicopter rotors.)
### Tonal Noise by Aerodynamic Forces - Dipole
#### Uniform Inlet Flow (Negligible)
The sound generation due to uniform and steady aerodynamic force has a
characteristic very similar to the blade thickness noise. It is very
weak for low speed fans, and depends on fan RPM. Since some steady blade
forces are necessary for a fan to do its duty even in an ideal
condition, this kind of noise is impossible to avoid. It is known that
this noise can be reduced by increasing the number of blades.
#### Non-uniform Inlet Flow
Non-uniform (still steady) inlet flow causes non-uniform aerodynamic
forces on blades as their angular positions change. This generates noise
at blade passing frequency and its harmonics. It is one of the major
noise sources of electronic cooling fans.
#### Rotor-Casing interaction
If the fan blades are very close to a structure which is not symmetric,
unsteady interaction forces to blades are generated. Then the fan
experiences a similar running condition as lying in non-uniform flow
field. See Acoustics/Rotor Stator
Interactions for
details.
#### Impulsive Noise (Negligible)
This noise is caused by the interaction between a blade and the tip vortex
of the preceding blade; it is not severe for cooling fans.
#### Noise from Stall
#### Rotating Stall
: The definition and an aerodynamic description of **stall** can be found
in standard aerodynamics texts.
The noise due to stall is a complex phenomenon that occurs at low flow
rates. If the flow is locally disturbed for some reason, it can cause
stall on one of the blades. As a result, the upstream passage of this
blade is partially blocked and the mean flow is diverted away from it.
This increases the angle of attack on the adjacent blade on the upstream
side of the originally stalled blade, so the flow stalls there as well;
meanwhile, the blade on the other side of the first blade becomes
un-stalled because its flow angle is reduced.
```{=html}
<center>
```
![](Stall.gif "Stall.gif")
```{=html}
</center>
```
As this process repeats, the stall cell rotates around the blades at about
30\~50% of the running frequency, in the direction opposite to the blades.
This sequence of events causes unsteady blade forces and consequently
generates noise and vibration.
#### Non-uniform Rotor Geometry
Asymmetry of the rotor causes noise at the rotating frequency and its
harmonics (rather than at the blade passing frequency), even when the
inlet flow is uniform and steady.
#### Unsteady Flow Field
Unsteady flow causes random forces on the blades, which spreads the
discrete spectral peaks and makes them continuous. For low-frequency
variations, the spread spectral noise concentrates around the rotating
frequency and narrowband noise is generated. Stochastic velocity
fluctuations of the inlet flow generate a broadband noise spectrum. The
generation of these random noise components is covered in the following
sections.
### Random Noise by Unsteady Aerodynamic Forces
#### Turbulent Boundary Layer
Even with steady, uniform inlet flow, random force fluctuations exist on
the blades because of the turbulent boundary layer on each blade. Some
noise is generated directly by this turbulence, but the dominant
contribution comes from the boundary layer passing the blade trailing
edge: the trailing edges scatter the non-propagating near-field pressure
into a propagating sound field.
#### Incident Turbulence
Velocity fluctuations of the intake flow with a stochastic time history
generate random forces on the blades and a broadband noise spectrum.
#### Vortex Shedding
A vortex can separate from a blade, changing the circulating flow around
it. This causes non-uniform forces on the blades and hence noise. A
classical example of this phenomenon is the \'Karman vortex street\'.
The vortex shedding mechanism can occur in the laminar boundary layer of a
low-speed fan and also in the turbulent boundary layer of a high-frequency
fan.
#### Flow Separation
Flow separation causes the stall explained above. This phenomenon can
produce random noise, which spreads the discrete spectral peaks and turns
the noise into broadband noise.
#### Tip Vortex
Since cooling fans are ducted axial-flow machines, the annular gap between
the blade tips and the casing is an important parameter for noise
generation. As the fan rotates, a secondary flow passes through this
annular gap because of the pressure difference between the upstream and
downstream sides of the fan. This flow generates a tip vortex in the gap,
and the resulting broadband noise increases as the annular gap gets
bigger.
## Installation Effects
Once a fan is installed, unexpected noise problems can arise even if the
fan itself is acoustically well designed. These are called installation
effects, and two types are relevant to cooling fans.
### Effect of Inlet Flow Conditions
Any structure that affects the inlet flow of a fan can cause installation
effects. For example, Hoppe & Neise \[3\] showed that adding or removing a
bellmouth nozzle at the inlet flange of a 500 **mm** fan can change the
noise power by 50 **dB** (although this applies to a much larger and
noisier fan than a typical cooling fan).
### Acoustic Loading Effect
This effect appears in duct-system applications; some high-performance
graphics cards, for example, use a duct for direct exhaust.\
The sound power generated by a fan is not only a function of its impeller
speed and operating condition but also depends on the acoustic impedances
of the duct systems connected to its inlet and outlet. Therefore, the fan
and duct system should be matched not only for aerodynamic reasons but
also for acoustic ones.
## Closing Comment
Noise reduction of cooling fans is subject to some restrictions:\
1. Active noise control is not economically effective: 80 mm cooling fans
cost only 5\~10 US dollars, so active control is only applicable to
high-end electronic products.\
2. Suppressing a particular aerodynamic phenomenon to reduce noise can
seriously reduce the performance of the fan, and increasing the fan RPM to
recover performance is, of course, a much more dominant factor for noise.\
Other aspects of fan noise, such as active RPM control or noise
comparisons of the various bearings used in fans, are introduced at some
of the linked sites below. If blade passing noise is dominant, a muffler
can be beneficial.
## Links to Interesting Sites about Fan Noise
Some practical issues of PC noise are presented at the following sites:
- Cooling Fan Noise Comparison - Sleeve Bearing vs. Ball Bearing (pdf format)
- Brief explanation of fan noise origins and noise reduction suggestions
- Effect of sweep angle comparison
- Comparisons of noise from various 80mm fans
- Noise reduction of a specific desktop case
- Noise reduction of another specific desktop case
- Informal study of noise from a CPU cooling fan
- Informal study of noise from PC case fans
- Active fan speed optimizers for minimum noise from desktop computers
- Some general fan noise reduction techniques
- Brüel & Kjær - acoustic testing device company; various applications are presented in the \"applications\" tab
- \"Fan Noise Solutions\" by cpemma
- How To Build A Computer/Quiet PC
## References
\[1\] Neise, W., and Michel, U., \"Aerodynamic Noise of Turbomachines\"\
\[2\] Anderson, J., \"Fundamentals of Aerodynamics\", 3rd edition, 2001,
McGraw-Hill\
\[3\] Hoppe, G., and Neise, W., \"Vergleich verschiedener
Geräuschmessverfahren für Ventilatoren\", Forschungsbericht FLT 3/1/31/87,
Forschungsvereinigung für Luft- und Trocknungstechnik e. V.,
Frankfurt/Main, Germany
|
# Acoustics/Piezoelectric Transducers
```{=html}
<center>
```
![](Acoustics_piezoelectric_transducers.JPG "Acoustics_piezoelectric_transducers.JPG")
```{=html}
</center>
```
# Introduction
Piezoelectricity, from the Greek \"piezo\" (to press), means pressure
electricity. Certain crystalline substances generate electric charges
under mechanical stress and, conversely, experience a mechanical strain in
the presence of an electric field. In the piezoelectric effect, the
transducing material senses input mechanical vibrations and produces a
charge at the frequency of the vibration, while an applied AC voltage
causes the piezoelectric material to vibrate at the same frequency as the
input current.
Quartz is the best-known single-crystal material with piezoelectric
properties. Strong piezoelectric effects can be induced in materials with
an ABO3 (perovskite) crystalline structure, where \'A\' denotes a large
divalent metal ion such as lead and \'B\' denotes a smaller tetravalent
ion such as titanium or zirconium.
For any crystal to exhibit the piezoelectric effect, its structure must
have no center of symmetry. Either a tensile or a compressive stress
applied to the crystal alters the separation between positive and
negative charge sites in the cell, causing a net polarization at the
surface of the crystal. The polarization varies directly with the applied
stress and is direction dependent, so that compressive and tensile
stresses result in electric fields of opposite polarity.
# Vibrations & Displacements
Piezoelectric ceramics have non-centrosymmetric unit cells below the
Curie temperature and centrosymmetric unit cells above the Curie
temperature. Non-centrosymmetric structures provide a net electric
dipole moment. The dipoles are randomly oriented until a strong DC
electric field is applied causing permanent polarization and thus
piezoelectric properties.
When a polarized ceramic is subjected to stress, the crystal lattice
distorts, changing the total dipole moment of the ceramic. The change in
dipole moment due to an applied stress produces a net electric field that
varies linearly with stress.
# Dynamic Performance
The dynamic performance of a piezoelectric material relates to how it
behaves under alternating stresses near mechanical resonance. The parallel
combination of C2 with L1, C1, and R1 in the equivalent circuit below
controls the transducer\'s reactance, which is a function of frequency.
## Equivalent Electric Circuit
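A minimal numerical sketch of the equivalent circuit described above, assuming the common form in which a motional branch R1, L1, C1 in series sits in parallel with the static capacitance C2; all component values and the frequency range are invented placeholders, not measured transducer parameters.
```python
# Sketch of the impedance of the equivalent circuit described above:
# a motional branch (R1 + L1 + C1 in series) in parallel with the
# static capacitance C2.  Component values below are placeholders.
import numpy as np

def transducer_impedance(f, R1, L1, C1, C2):
    w = 2 * np.pi * np.asarray(f, dtype=float)
    z_motional = R1 + 1j * w * L1 + 1 / (1j * w * C1)   # series RLC branch
    z_static = 1 / (1j * w * C2)                         # static capacitance
    return (z_motional * z_static) / (z_motional + z_static)

if __name__ == "__main__":
    f = np.linspace(20e3, 60e3, 5)            # assumed frequency range [Hz]
    Z = transducer_impedance(f, R1=50.0, L1=10e-3, C1=2e-9, C2=20e-9)
    for fi, zi in zip(f, Z):
        print(f"{fi/1e3:6.1f} kHz  |Z| = {abs(zi):10.1f} ohm")
```
The magnitude of this impedance dips near the series (motional) resonance and peaks near anti-resonance, which is the shape discussed in the Frequency Response section below.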
## Frequency Response
The graph below shows the impedance of a piezoelectric transducer as a
function of frequency. The minimum value at fn corresponds to the
resonance while the maximum value at fm corresponds to anti-resonance.
# Resonant Devices
Non-resonant devices may be modeled by a capacitor, representing the
capacitance of the piezoelectric, with an impedance modeling the
mechanically vibrating system as a shunt in the circuit. In the
non-resonant case this impedance may itself be modeled as a capacitor,
which allows the circuit to reduce to a single capacitor replacing the
parallel combination.
For resonant devices, the impedance at resonance reduces to a resistance
together with the static capacitance. This is an undesirable effect. In
mechanically driven systems the static capacitance acts as a load on the
transducer and decreases the electrical output. In electrically driven
systems it shunts the driver, requiring a larger input current. The
adverse effect of the static capacitance at resonant operation may be
counteracted by using a shunt or series inductor resonating with the
static capacitance at the operating frequency.
# Applications
## Mechanical Measurement
Because of their dielectric leakage current, piezoelectrics are poorly
suited to applications where the force or pressure changes slowly. They
are, however, very well suited for highly dynamic measurements such as
those needed in blast gauges and accelerometers.
## Ultrasonic
High-intensity ultrasound applications use half-wavelength transducers
with resonant frequencies between 18 kHz and 45 kHz. Large blocks of
transducer material are needed to generate high intensities, which makes
manufacturing difficult and economically impractical. Also, since
half-wavelength transducers have their highest stress amplitude in the
center, the end sections act as inert masses. The end sections are
therefore often replaced with metal plates possessing a much higher
mechanical quality factor, giving the composite transducer a higher
mechanical quality factor than a single-piece transducer.
The overall electro-acoustic efficiency is:
` Qm0 = unloaded mechanical quality factor`\
` QE = electric quality factor`\
` QL = quality factor due to the acoustic load alone`
The second term on the right hand side is the dielectric loss and the
third term is the mechanical loss.
Efficiency is maximized when:
then:
The maximum ultrasonic efficiency is described by:
Applications of ultrasonic transducers include:
- Welding of plastics
- Atomization of liquids
- Ultrasonic drilling
- Ultrasonic cleaning
- Ultrasonic foils in the paper machine wet end for more uniform fibre distribution
- Ultrasound
- Non-destructive testing
- etc.
# More Information and Source of Information
MorganElectroCeramics
Ultra Technology
|
# Acoustics/Generation and Propagation of Thunder
**Thunder** is the sound made by lightning. Depending on the nature of
the lightning and distance of the listener, thunder can range from a
sharp, loud crack to a long, low rumble (brontide). The sudden increase
in pressure and temperature from lightning produces rapid expansion of
the air surrounding and within a bolt of lightning. In turn, this
expansion of air creates a sonic shock wave which produces the sound of
thunder, often referred to as a *clap*, *crack*, or *peal of thunder*.
A listener can estimate the distance to the lightning from the delay
between seeing the flash and hearing the thunder.
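As a simple illustration of that timing rule, the sketch below converts the delay between flash and thunder into an approximate distance, assuming a nominal speed of sound of about 343 m/s (the exact value varies with air temperature).
```python
# Rough estimate of lightning distance from the flash-to-thunder delay.
# Assumes a nominal speed of sound of ~343 m/s (about 3 s per kilometre).

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate value at 20 degrees C

def lightning_distance_km(delay_seconds: float) -> float:
    return delay_seconds * SPEED_OF_SOUND_M_PER_S / 1000.0

if __name__ == "__main__":
    for delay in (1.0, 3.0, 9.0):
        print(f"{delay:4.1f} s delay  ->  about {lightning_distance_km(delay):.1f} km away")
```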
!A lightning bolt.{width="350"}
## Etymology
The *d* in Modern English *thunder* (from earlier Old English *þunor*)
is epenthetic, and is now found as well in Modern Dutch *donder* (cp
Middle Dutch *donre*, and Old Norse *þorr*, Old Frisian *þuner*, Old
High German *donar* descended from Proto-Germanic \**þunraz*). In Latin
the term was *tonare* \"to thunder\". The name of the Germanic god Thor
comes from the Old Norse word for thunder.
***NOTE*: The text above is taken from the Wikipedia entry.**
|
# New Zealand History/Introduction
## Introduction to A Concise New Zealand History
This is a concise textbook on New Zealand history, designed so it can be
read by virtually anyone wanting to find out more about New Zealand
history.
The textbook covers the time span of human settlement in New Zealand. It
includes:
- The discovery and colonisation of New Zealand by Polynesians.
- Maori culture up to the year 1840.
- Discovery of New Zealand by Europeans.
- Early New Zealand economy and Missionaries in New Zealand.
- The Treaty of Waitangi.
- European colonisation, and conflict with the Maori people.
- Colonial, Twentieth Century and Modern Government.
- Important events in the twentieth century and recent times.
Find out how events in New Zealand\'s humble beginnings have shaped the
way the country is in the present day.
|
# New Zealand History/The Colonial Government
## The Colonial Government
!The Colonial New Zealand
flag
After New Zealand was annexed by Britain, it was initially set up as a
dependency of New South Wales. However, by 1841, New Zealand was made a
colony in its own right. As a colony, it inherited political practices
and institutions of government from the United Kingdom.
The United Kingdom Government started the first New Zealand Government
by appointing governors, being advised by appointed executive and
legislative councils.
In 1852, the British Parliament passed the New Zealand Constitution Act,
which provided for the elected House of Representatives and Legislative
Council. The General Assembly (the House and Council combined) first met
in 1854.
New Zealand was effectively self-governing in all domestic matters
except \'native policy\' by 1856. Control over native policy was passed
to the Colonial Government in the mid-1860s.
The first capital of the country was Russell, located in the Bay of
Islands, declared by Governor Hobson after New Zealand was formally
annexed. In September 1840, Hobson changed the capital to the shores of
the Waitematā Harbour where Auckland was founded. The seat of Government
was centralised in Wellington by 1865.
**Provincial Governments in New Zealand** !The boundaries of the former
New Zealand
provinces
From 1841 until 1876, provinces had their own provincial governments.
Originally, there were only three provinces, set up by the Royal
Charter:
- New Ulster (North Island north of Patea River)
- New Munster (North Island south of Patea River, plus the South
Island)
- New Leinster (Stewart Island)
In 1846, the provinces were reformed. The New Leinster province was
removed, and the two remaining provinces were enlarged and separated
from the Colonial Government. The reformed provinces were:
- New Ulster (All of North Island)
- New Munster (The South Island plus Stewart Island)
The provinces were reformed yet again by the New Zealand Constitution
Act 1852. In this constitution, the old provinces of New Ulster and New
Munster were abolished and six new provinces were set up:
- Auckland
- New Plymouth
- Wellington
- Nelson
- Canterbury
- Otago
Each province had its own legislature that elected its own Speaker and
Superintendent. Any male 21 years or older that owned freehold property
worth £50 a year could vote. Elections were held every four years.
Four new provinces were introduced between November 1858 and December
1873. Hawkes Bay broke away from Wellington, Marlborough from Nelson,
Westland from Canterbury, and Southland from Otago.
Not long after they had begun, provincial governments were a matter for
political debate in the General Assembly. Eventually, under the
premiership of Harry Atkinson, the Colonial Government passed the
Abolition of Provinces Act 1876, which wiped out the provincial
governments, replacing them with regions. Provinces finally ceased to
exist on the 1st of January 1877.
|
# New Zealand History/Famous New Zealanders
## Famous New Zealanders
!Sir Edmund Hillary in Poland,
2004.
**Edmund Hillary**
On the 29th of May 1953, New Zealander Edmund Hillary became the first
person to reach the summit of Mount Everest with Nepalese climber
Tenzing Norgay (the summit at the time was 29,028 feet above sea level).
He was knighted by Queen Elizabeth II on his return. Sir Edmund Hillary
was famous after news spread he had reached the summit, but he didn\'t
finish at Mount Everest. He led the New Zealand section of the
Trans-Antarctic expedition from 1955 to 1958.
In the 1960s, he returned to Nepal to build clinics, hospitals and
schools for the Nepalese people. He also convinced their government to
pass laws to protect their forests and the area around Mount Everest.
In the 1970s, several books were published by Hillary about his journey
up Mount Everest.
Edmund Hillary is one of the most famous New Zealanders, and appears on
the New Zealand five dollar note.
He died on the 11th of January 2008.
**Ernest Rutherford**
Ernest Rutherford was a nuclear physicist who became known as the
\'father\' of nuclear physics. His discovery of Rutherford scattering off
the atomic nucleus in the Geiger-Marsden experiment (gold foil experiment)
led to the Rutherford model of the atom, a forerunner of the Bohr model.
He was born in Brightwater, New Zealand, but lived in England for a
number of years.
He received the Commonwealth Order of Merit, the Nobel Prize in
Chemistry in 1908, and was a member of the Privy Council of the United
Kingdom and the Royal Society.
Rutherford appears on the New Zealand one hundred dollar note.
|
# Speech-Language Pathology/Stuttering/Print version
# Core Stuttering Behaviors
# Incidence and Prevalence of Stuttering
# Development of Childhood Stuttering
# Neurology of Stuttering
# Genetics of Stuttering
# Physiology, Psychology, and Personality of Stutterers
# Belief-Related Changes in Stuttering
# Stress-Related Changes in Stuttering
# Measurement of Stuttering
# Other Fluency Disorders
# Research I\'d Like to See
# Choosing a Speech-Language Pathologist
# Why Do Stutterers Avoid Speech Therapy?
# Stuttering Therapies for Pre-School Children
# Stuttering Therapies for School-Age Children
# Stuttering Therapies for Teenagers
# Stuttering Therapies for Mentally Retarded Individuals
# Fluency Shaping Stuttering Therapy
# Stuttering Modification Therapy
# Treating Speech-Related Fears and Anxieties
# Personal Construct Therapy: You Always Have Choices
# Treating Psychological Issues
# Improving Self-Awareness of Stuttering Behaviors
# Anti-Stuttering Devices
# Anti-Stuttering Medications
# Alternative Medicine Therapies for Stuttering
# How We Treat Stuttering
# What Worked for Me
# Practice Word Lists
# You\'re Not Alone: Join a Support Group
# Famous People Who Stutter
# Stuttering and Employment
# How to Handle Telephone Calls
# Public Perceptions of Stutterers
# How the Media Presents Stuttering
# Cultural and Ethnic Differences in Stuttering
# High School Science Projects
# Acting and Theater
# Spouses of People Who Stutter
# Stuttering in the Military
# Advice for Listeners
# Stuttering in Movies and Television
# My Life in Stuttering
# Recommended Books and Periodicals
|
# Transportation Economics/Decision Making
**Decision Making** is the process by which one alternative is selected
over another. Decision making generally occurs in the planning phases of
transportation projects, though last-minute decision making has been shown
to occur, sometimes successfully. Several procedures for making decisions
have been outlined in an effort to minimize inefficiencies or
redundancies. These are idealized (or normative) processes: they describe
how decisions might be made in an ideal world, and how they are described
in official documents. Real-world processes are not as orderly.
**Applied systems analysis** is the use of rigorous methods to assist in
determining optimal plans, designs and solutions to large scale problems
through the application of analytical methods. Applied systems analysis
focuses upon the use of methods, concepts and relationships between
problems and the range of techniques available. Any problem can have
multiple solutions. The optimal solution will depend upon technical
feasibility (engineering) and costs and valuation (economics). Applied
systems analysis is an attempt to move away from the engineering
practice of design detail and to integrate feasible engineering
solutions with desirable economic solutions. The systems designer faces
the same problem as the economist, \"efficient resource allocation\" for
a given objective function.
Systems analysis emerged during World War II, especially with the
deployment of radar in a coordinated way. It spread to other fields such
as fighter tactics, mission planning and weapons evaluation. Ultimately
the use of mathematical techniques in such problems came to be known as
operations research, alongside other statistical and econometric
techniques. Optimization applies to cases where the data are
under-determined (fewer observations than dependent variables), and
statistics to cases where the data are over-determined (more observations
than dependent variables). After World War II these techniques spread to
universities, and systems analysis saw further mathematical development
and application to a broad variety of problems.
It has been said of Systems Analysis, that it is:
- \"A coordinated set of procedures which addresses the fundamental
issues of design and management: that of specifying how men, money
and materials should be combined to achieve a higher purpose\" - De
Neufville
- \"\... primarily a methodology, a philosophical approach to solving
problems for and for planning innovative advances\" - Baker
- \"Professionals who endeavor to analyze systematically the choices
available to public and private agencies in making changes in the
transportation system and services in a particular region\" -
Manheim
- \"Systems analysis is not easy to write about: brief, one sentence
definitions frequently are trivial\" - Thomas
The most prominent decision-making process to emerge from systems
analysis is **rational
planning**, which will be
discussed next, followed by some critiques and alternatives.
![](TE-Systems-RationalPlanningAndDecisionMaking.png "TE-Systems-RationalPlanningAndDecisionMaking.png")
*How does one (rationally) decide what to do?*
The figure identifies three layers of abstraction. The first layer (top
row) describes the high level process, which we can summarize in six
steps. A second layer details many of the components of the first layer.
A third layer, identified by the blue box \"abstract into model or
framework\", depends on the problem at hand.
## Video
! Decision Making
## Overview data
The first step is observational: review and gather data about the system
under consideration. An understanding of the surrounding world is
required, including a specification of the system.
The problem (defined in the next step) lies within a larger system that
comprises:
1. Objectives - measure the effectiveness or performance
2. Environment - things which affect the system but are not affected by
it
3. Resources - factor inputs to do the work
4. Components - set of activities or tasks of the system
5. Management - sets goals, allocates resources and exercises control
over components
6. Model of how variables in 1-5 relate to each other
The detailed objectives are identified in the following step, and the
detailed model for analysis of the problem is specified in the step after
that.
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
For instance in the case of intercity transportation in California, data
about existing demand conditions, existing supply conditions, future
demand expectations, and proposed changes to supply would be important
inputs. Changes in technology and environmental conditions are important
considerations for long-term projects. We would also want to know the
certainty of the forecasts, not just a central tendency, and the
potential for alternative scenarios which may be widely different.
```{=html}
</div>
```
## Define the problem
The second step is to *define the problem* more narrowly, in a sense to
*identify needs*.
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
Rather than an amorphous issue (intercity transportation), we might be
interested in a more detailed question, e.g. how to serve existing and
future demands between two cities (say metropolitan Los Angeles and San
Francisco). The problem might be that demand is expected to grow and
outstrip supply.
```{=html}
</div>
```
## Formulate goal
The third step is to formulate a goal. For major transportation
projects, or projects with intense community interest, this may involve
the public. For instance
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
*To serve future passenger demand between Los Angeles and San Francisco,
quickly, safely, cleanly, and inexpensively.*
```{=html}
</div>
```
The goal will need to be testable; the process below \"formulate goal\" in
the flowchart suggests this in more detail.
The first aspect is to operationalize the goal. We need to measure the
adverbs in the goal (e.g. how do we measure \"quickly\", \"safely\",
\"cleanly\", or \"inexpensively\"?). Some are straightforward:
\"quickly\" is a measure of travel time or speed, but it needs to account
for the access and egress time, the waiting time, and the travel time
itself, and these may not be weighted the same.
The second step is identifying the decision criteria. Each adverb may
carry a certain value, but an alternative may be required not merely to
score the most points in one area but to reach at least a minimum
satisfactory level in all areas. So a very fast mode must still meet a
specific safety test, and going faster does not mean it is allowed to be
more dangerous (despite what a rational economist might think about
trade-offs).
The third step is to weight those criteria: for example, how important is
speed relative to safety? This is in many ways a value question, though
economics can try to express each of these aspects in monetary form,
enabling evaluation. For instance, many negative externalities have been
monetized, giving a value of time in delay, a value of pollution damages,
and a value of life.
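To make the monetization idea concrete, here is a minimal sketch that folds fare, travel time, and emissions into a single generalized cost per alternative; the unit values and the alternative figures are invented placeholders, not estimates from any study.
```python
# Minimal sketch: combine fare, travel time, and emissions into one
# "generalized cost" using assumed monetary unit values (placeholders).

VALUE_OF_TIME = 20.0       # assumed $/hour
POLLUTION_COST = 0.05      # assumed $/kg of emissions

def generalized_cost(fare, hours, emissions_kg):
    return fare + VALUE_OF_TIME * hours + POLLUTION_COST * emissions_kg

if __name__ == "__main__":
    alternatives = {
        "air":             generalized_cost(fare=120.0, hours=3.5, emissions_kg=180.0),
        "high-speed rail": generalized_cost(fare=80.0,  hours=4.0, emissions_kg=40.0),
        "auto":            generalized_cost(fare=60.0,  hours=6.5, emissions_kg=150.0),
    }
    for mode, cost in sorted(alternatives.items(), key=lambda kv: kv[1]):
        print(f"{mode:15s}  generalized cost ~ ${cost:6.1f}")
```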
## Generate alternatives
*Examining, evaluating, and recommending alternatives* is often the job
of professionals, engineers, planners, and economists. Final selection
is generally the job of elected or appointed officials for important
projects.
There are several sub-problems here; the first is to *generate
alternatives*. This may require significant creativity. Within major
alternatives, there may be many sub-alternatives, e.g. the main
alternative may be mode of travel, the sub-alternatives may be different
alignments. For network problems there may be many combinations of
alternative alignments. If the analyst is lucky, these are *separable
problems*, that is, the choice of one sub-alignment is independent of
the choice of alternative sub-alignments.
1. Algorithms-systematic search over available alternatives
1. Analytical
2. Exact numerical
3. Heuristic numerical
2. Generate alternatives selectively, evaluate subjectively
1. Fatal flaw analysis
2. Simple rating schemes
3. Delphi exercises
3. Generate alternatives judgmentally, evaluate scientifically using
system model
A critical issue is how many alternatives to consider. In principle, an
infinite number of more or less similar alternatives may be generated; not
all are practical, and some may be only minor variations. In practice, a
stopping rule is used so that only a reasonable number of alternatives are
considered. Major exemplars of the alternatives may be used first, with
fine-tuning awaiting a later step after the first set of alternatives is
analyzed. The process may be iterative, winnowing down and detailing
alternatives as more information is gained throughout the analysis.
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
Several major alternatives may be suggested, expand highways, expand air
travel, or construct new high-speed rail line, along with a no-build
alternative.
```{=html}
</div>
```
## Abstract into model or framework
\"All Models are Wrong, Some Models are Less Wrong than
Others\"---Anonymous
\"All Models are Wrong, Some Models are Useful\"---George Box [^1]
The term **Model** refers here to a *mathematical representation of a
system*, while a **Framework** is a *qualitative organizing principle
for analyzing a system*. The terms are sometimes used interchangeably.
### Framework Example: Porter's Diamond of Advantage
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
!Michael Porter\'s Diamond of Advantage
To illustrate the idea of a framework, consider Porter\'s *Diamond of
Advantage*
Michael Porter proposes four key determinants of competitiveness, which
he calls the \"Diamond of Advantage,\" based on cases from around the
world:
1. factor conditions, such as a specialized labor pool, specialized
infrastructure and sometimes selective disadvantages that drive
innovation;
2. home demand, or local customers who push companies to innovate,
especially if their tastes or needs anticipate global demand;
3. related and supporting industries, specifically internationally
competitive local supplier industries, creating a high quality,
supportive business infrastructure, and spurring innovation and
spin-off industries; and
4. industry strategy/rivalry, involving both intense local rivalry
among area industries that is more motivating than foreign
competition and as well as a local \"culture\" which influences
individual industries\' attitudes toward innovation and competition.
```{=html}
</div>
```
### Model Example: The Four-Step Urban Transportation Planning System
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
Within the rational planning framework, transportation forecasts have
traditionally followed the sequential four-step model or urban
transportation planning (UTP) procedure, first implemented on mainframe
computers in the 1950s at the Detroit Area Transportation Study and
Chicago Area Transportation Study (CATS).
Land use forecasting sets the stage for the process. Typically,
forecasts are made for the region as a whole, e.g., of population
growth. Such forecasts provide control totals for the local land use
analysis. Typically, the region is divided into zones and by trend or
regression analysis, the population and employment are determined for
each.
The four steps of the classical urban transportation planning system
model are:
- Trip
generation
determines the frequency of origins or destinations of trips in each
zone by trip purpose, as a function of land uses and household
demographics, and other socio-economic factors.
- Destination
choice
matches origins with destinations, often using a gravity model
function, equivalent to an entropy maximizing model. Older models
include the fratar model.
- Mode choice
computes the proportion of trips between each origin and destination
that use a particular transportation mode. This model is often of
the logit form, developed by Nobel Prize winner Daniel McFadden.
- Route
choice
allocates trips between an origin and destination by a particular
mode to a route. Often (for highway route assignment) Wardrop\'s
principle of user equilibrium is applied, wherein each traveler
chooses the shortest (travel time) path, subject to every other
driver doing the same. The difficulty is that travel times are a
function of demand, while demand is a function of travel time.
```{=html}
</div>
```
See **Modeling**
for a deeper discussion of modeling questions.
## Ascertain performance
This is either an output of the analytical model, or the result of
subjective judgment.
Sherden[^2] identifies a number of major techniques for technological
forecasting that can be used to ascertain the expected performance of
particular technologies, and that can also be applied within a technology
to ascertain the performance of individual projects. These are listed in
the following box:
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
\"Major techniques for technological forecasting [^3]
- Delphi method: a brain-storming session with a panel of experts.
- Nominal group process: a variant of the Delphi method with a group
leader.
- Case study method: an analysis of analogous developments in other
technologies.
- Trend analysis: the use of statistical analysis to extend past
trends into the future.
- S-curve: a form of trend analysis using an s-shaped curve to extend
  past trends into the future (a small numerical sketch follows this box).
- Correlation analysis: the projection of a new technology\'s development
  from past developments in similar technologies.
- Lead-user analysis: the analysis of leading-edge users of a new
  technology to predict how the technology will develop.
- Analytic hierarchy process: the projection of a new technology by
analyzing a hierarchy of forces influencing its development.
- Systems dynamics: the use of a detailed model to assess the dynamic
relationships among the major forces influencing the development of
the technology.
- Cross-impact analysis: the analysis of potentially interrelated
future events that may affect the future development of a
technology.
- Relevance trees: the breakdown of goals for a technology into more
detailed goals and then assigning probabilities that the technology
will achieve these detail goals.
- Scenario writing: the development of alternative future views on how
the new technology could be used.\"
```{=html}
</div>
```
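The following is a small numerical sketch of the S-curve technique from the box above: a logistic curve extended over future years. The saturation level, growth rate, and inflection year are assumed values chosen only for illustration.
```python
# Illustration of the "S-curve" technique from the box above: a logistic
# curve y(t) = K / (1 + exp(-r * (t - t0))) projected over assumed years.
# K (saturation), r (growth rate), and t0 (inflection year) are placeholders.
import math

def logistic(t, K, r, t0):
    return K / (1.0 + math.exp(-r * (t - t0)))

if __name__ == "__main__":
    K, r, t0 = 100.0, 0.4, 2015          # assumed parameters (percent adoption)
    for year in range(2005, 2031, 5):
        print(f"{year}: projected adoption ~ {logistic(year, K, r, t0):5.1f}%")
```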
## Rate alternatives
The performance of each of the alternatives is compared across decision
criteria, and weighted depending on the importance of those criteria.
The alternative with the highest ranking would be identified, and this
information would be brought forward to decision-makers.
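A minimal sketch of this weighted rating step; the alternatives, criterion scores, and weights below are invented placeholders rather than values from any actual study.
```python
# Minimal sketch of rating alternatives: multiply each criterion score by
# its weight and sum.  Alternatives, scores, and weights are placeholders.

weights = {"speed": 0.4, "safety": 0.3, "cost": 0.2, "emissions": 0.1}

scores = {                      # assumed 0-10 ratings per criterion
    "expand highway":  {"speed": 5, "safety": 6, "cost": 7, "emissions": 3},
    "expand air":      {"speed": 9, "safety": 8, "cost": 4, "emissions": 2},
    "high-speed rail": {"speed": 8, "safety": 9, "cost": 3, "emissions": 8},
}

def weighted_score(criterion_scores):
    return sum(weights[c] * s for c, s in criterion_scores.items())

ranking = sorted(scores.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for alternative, criterion_scores in ranking:
    print(f"{alternative:16s} {weighted_score(criterion_scores):.2f}")
```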
## Compute optimal decision
The analyst is generally not the decision maker. The actual influence of
the results of the analysis in actual decisions will depend on:
1. Determinacy of evaluation
2. Confidence in the results on the part of the decision maker
3. Consistency of rating among alternatives
## Implement alternatives
A decision is made. A project is constructed or a program implemented.
## Evaluate outcome
*Evaluating outcomes* of a project includes comparing outcome against
goals, but also against predictions, so that forecasting procedures can
be improved. Analysis and implementation experience lead to revisions in
systems definition, and may affect the values that underlie that
definition. The output from this \"last\" step is used as input to
earlier steps in subsequent analyses. See e.g. Parthasarathi, Pavithra
and David Levinson (2010) Post-Construction Evaluation of Traffic
Forecast Accuracy. *Transport
Policy*
## Relationship to other models
We need a tool to \"Identify Needs\" and \"Evaluate Options\". This may
be the transportation forecasting
model.
## Problem PRT: Skyweb Express
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
The Metropolitan Council of Governments (the region\'s main
transportation planning agency) is examining whether the Twin Cities
should build a new Personal Rapid Transit system in downtown
Minneapolis, and they have asked you to recommend how it should be
analyzed
1\. What kind of model should be used. Why?
2\. What data should be collected.
Form groups of 3 and take 15 minutes and think about what kinds of
models you want to run and what data you want to collect, what questions
you would ask, and how it should be collected. Each group should have a
note-taker, but all members of the group should be able to present
findings to the class.
```{=html}
</div>
```
## Thought Questions
- Is the \"rational planning\" process rational?
- Compare and contrast the rational planning process with the
  scientific method.
## Some Issues with Rational Planning
Nevertheless, some issues remain with the rational planning model:
### Problems of incomplete information
- Limited Computational Capacity
- Limited Solution Generating Capacity
- Limited input data
- Cost of Analysis
### Problems of incompatible desires
- Conflicting Goals
- Conflicting Evaluation Criteria
- Reliance on Experts (What about the People?)
## Alternative Planning Decision Making Paradigms: Are They Irrational?
No one really believes the rational planning process is a good
description of most decision making, as it is highly idealized.
Alternative normative and positive paradigms to the rational planning
process include:
Several strategies normatively address the problems associated with
incomplete information:
- Satisficing
- Decomposition of the problem hierarchically into
  Strategy/Tactics/Operations.
Other strategies describe how organizations and political systems work:
- Organizational Process
- Political Bargaining
Some do both:
- Incrementalism
The paper Montes de Oca, Norah and David Levinson (2006) \"Network
Expansion Decision-making in the Twin Cities,\" *Journal of the
Transportation Research Board: Transportation Research Record* #1981, pp.
1-11, describes the actual decision-making process for road construction
in the Twin Cities.
## References
[^1]: Box, G.E.P., Robustness in the strategy of scientific model
building, in Robustness in Statistics, R.L. Launer and G.N.
Wilkinson, Editors. 1979, Academic Press: New York.
[^2]: Sherden, William (1998) The Fortune Sellers, Wiley.
[^3]: Figure 6.4, p. 167 Major techniques for technological forecasting,
in Sherden, William (1998) The Fortune Sellers, Wiley.
|
# Transportation Economics/Modeling
*All forecasts are wrong; some forecasts are more wrong than others.* -
anonymous
**Modeling** is a means for representing reality in an abstracted way.
Your mental models are your **world view**: your outlook on life, and
the world. The world view is your *internal model of how the world
works*; it is employed every time you make a prediction: what do you
expect, what is a surprise. The expression "Where you stand depends on
where you sit" epitomizes this idea. Your world view is shaped by your
experience and your position.
When modeling, the issue of **Point of View** should be considered. It
must be clear who (and what) the results are for. If you are modeling
for personal pleasure, it will naturally reflect your own worldview, but
if you are working for an employer or client, their point of view must
also be considered. If the inputs or results deviate significantly from
their worldview, they may adapt their worldview, but more likely they will
dismiss the model.
Modeling can be conducted both for subjective **advocacy** and for
objective **analysis**. The same methods may be employed in either, and
the ethical modeler will produce the same results in either case, but
they may be used differently.
## Why Model?
There are a variety of reasons to model. Modeling helps
- gain insight into complex situations by understanding simpler
situations resembling them
- optimize the use of resources in building or maintaining systems
- operate systems, particularly by testing alternative operational
scenarios
- educate and provide experience for model-builders
- provide a platform for testing contending ideas and use in
negotiations.
Particular applications in transportation include:
- Forecasting traffic
- Testing scenarios (alternative land uses, networks, policies)
- Planning projects/corridor studies
- Regulating land use: Growth management/public facility adequacy
- Managing complexity, when eyeballs are insufficient, (different
people have different intuitions)
- Understanding travel behavior
- Influencing decisions
- Estimating traffic in the absence of data
## Developing Models
As an engineer, economist, or planner, you may be given a model to use.
But that model was not spontaneously generated; it came from other
engineers, economists, or planners who undertook a series of steps to
translate raw data into a tool that could be used to generate useful
information.
The first step, specification, tells us what depends on what (the number
of trips leaving a zone may depend on the number of households). The
estimation step tells us how strong these relationships are mathematically
(each household generates, say, two trips in the peak hour).
Implementation takes those relationships and puts them into a computer
program. Calibration compares the output of the computer program to other
data; if they don\'t match exactly, adjustments to the model may be made.
Validation compares the results to data at another point in time. Finally,
the model is applied to look at a project (e.g. how much traffic will use
Granary Road after Washington Avenue is closed to traffic). A toy sketch
of these steps appears after the list below.
- Specification:
- $y=f(X)\,\!$
- Estimation:
- $y=mX+b\,\!$; m=1, b=2
- Implementation
- If Z \> W, Then $y=mX+b\,\!$
- Calibration
- $y_{predicted}+k=y_{observed}\,\!$
- Validation
- $y_{predicted1990}+k=y_{observed1990}\,\!$
- Application
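Here is the toy sketch referred to above, walking through specification, estimation (with m = 1 and b = 2 as in the list), calibration, and application; the household counts and observed trip totals are invented placeholders.
```python
# Toy sketch of the model-development steps listed above, using the same
# linear form y = m*x + b with m = 1, b = 2 and a calibration offset k.
# The household counts and observed trips are invented placeholders.

def specify(m, b):
    """Specification + estimation: return the fitted relationship."""
    return lambda households: m * households + b

def calibrate(model, households, observed_trips):
    """Calibration: offset k so predictions match observations on average."""
    residuals = [obs - model(h) for h, obs in zip(households, observed_trips)]
    return sum(residuals) / len(residuals)

if __name__ == "__main__":
    model = specify(m=1, b=2)
    households = [100, 250, 400]         # assumed zone household counts
    observed = [105, 260, 395]           # assumed observed peak-hour trips
    k = calibrate(model, households, observed)
    # Application: predict trips for a new zone, including the calibration term.
    print(f"calibration offset k = {k:.1f}")
    print(f"predicted trips for a 300-household zone: {model(300) + k:.0f}")
```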
## Specification
When building a model system, numerous decisions must be made. These are
discussed below:
### Types of Models
There are numerous types of models, a short list is below. Each has
different applicability, multiple methods may be used in pursuit of the
same question, sometimes they are complementary, and sometimes
competitive techniques.
- Network analysis
- Linear Programming
- Nonlinear Programming
- Simulation
- Deterministic queuing
- Probabilistic queuing
- Regression
- Neural Nets
- Genetic Algorithm
- Cost/ Benefit Analysis
- Life-cycle costing
- System Dynamics
- Control Theory
- Difference Equations
- Differential Equations
- Probabilistic Risk Assessment
- Supply/Demand Equilibrium
- Game Theory
- Statistical Decision Theory
- Markov Models
- Cellular Automata
- Etc.
### Model Trade-offs
Building a model requires trading off time and resource constraints. One
could always be more detailed, more accurate, or more comprehensive if
resources were not constrained. However, the following must also be
considered:
- Money,
- Data,
- Computation,
- Labor,
- Ease of Use,
- Convincing (e.g. Graphic Displays),
- Extendable,
- Evidence of Model Benefits,
- Measuring Model Success
### Organization of Model System
- Hierarchy of Models
- Centralized vs. Decentralized (Optimization (Global) vs. Agent,
Local Optimization)
### Time
- Time Frame
- Static vs. Dynamic
- Real Time vs. Offline
- Short Term vs. Long Term (Partial vs. General Equilibrium)
- Proactive vs. Reactive (Predictive vs. Responsive)
### Space
- Scale/Detail
- Spatial Extent
- Boundaries (Boundary Effects)
- Macroscopic vs. Microscopic (Zones, Flows vs. Individuals, Vehicles)
### Process
- Stochastic vs. Deterministic
- Linear vs. Nonlinear
- Continuous vs. Discrete
- Numerical Simulation vs. Closed Form Solution
- Equilibrium vs. Disequilibrium
### Type
- Behavioral vs. Aggregate Model
- Physical vs. Mathematical Models
## Solution Techniques
When solving the model, the system as a whole must be understood.
Several questions arise:
- Does the solution exist?
- Is the solution unique?
- Is the solution feasible?
Solution techniques often trade-off accuracy vs. speed. Some solution
techniques may only guarantee a local optima, while others (such as
brute force techniques) can guarantee a global optimum, but may be much
slower.
## "Four-Step" Urban Transportation Planning Models
!A green transport
hierarchy
We want to answer a number of related questions (who, what, where, when,
why, and how):
- Who is traveling or what is being shipped?
- Where are the origin and destination of those trips, shipments?
- When do those trips begin and end (how long do they take, how far
away are the trip ends)?
- Why are the trips being made, what is their purpose?
- How are the trips getting there, what routes or modes are they
taking?
If we know the answers to those questions, we want to know what are the
answers to those questions a function of?
- Cost: Money, Time spent on the trip,
- Cost: Money and Time of alternatives.
- Benefit (utility) of trip (e.g. the activity at the destination)
- Benefit of alternatives
The reason for this is to understand what will happen under various
circumstances:
- How much "induced demand" will be generated if a roadway is
expanded?
- How many passengers will be lost if bus services are cut back?
- How many people will be "priced off" if tolls are implemented?
- How much traffic will a new development generate?
- How much demand will I lose if I raise shipping costs to my
customers?
In short, for urban passenger travel, we are trying to predict the
number of trips by:
- Origin Activity,
- Destination Activity,
- Origin Zone,
- Destination Zone,
- Mode,
- Time of Day, and
- Route.
This is clearly a multidimensional problem.
In practice, the mechanism to do this is to employ a \"four-step\" urban
transportation planning model; each step will be detailed in subsequent
modules, and a small numerical sketch follows the list below. These steps
are executed in turn, though there may be feedback between steps:
- Trip Generation - How many trips $T_{i}$ or $T_{j}$ are entering or
leaving zone $i$ or $j$
- Trip Distribution or Destination Choice - How many trips $T_{ij}$are
going from zone $i$ to zone $j$
- Mode Choice - How many trips $T_{ijm}$ from $i$ to $j$ are using
mode $m$
- Route Choice - Which links are trips $T_{ijmr}$ from $i$ to $j$ by
mode $m$ using route $r$
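Here is the small numerical sketch referred to above, covering the first two steps (trip generation taken as given productions and attractions, and a gravity-style trip distribution) for a toy two-zone example; all inputs and the friction exponent are invented placeholders.
```python
# Toy two-zone sketch of trip generation and gravity-model trip distribution.
# Productions, attractions, travel times, and the friction exponent are
# placeholders chosen only to show the mechanics of the calculation.

productions = {"A": 1000, "B": 600}             # trips produced per zone
attractions = {"A": 800, "B": 800}              # trips attracted per zone
travel_time = {("A", "A"): 5, ("A", "B"): 20,   # minutes between zones
               ("B", "A"): 20, ("B", "B"): 5}

def friction(t, beta=2.0):
    return t ** -beta                           # simple inverse-power decay

def gravity_distribution():
    trips = {}
    for i in productions:
        denom = sum(attractions[j] * friction(travel_time[(i, j)]) for j in attractions)
        for j in attractions:
            share = attractions[j] * friction(travel_time[(i, j)]) / denom
            trips[(i, j)] = productions[i] * share
    return trips

if __name__ == "__main__":
    for (i, j), t_ij in gravity_distribution().items():
        print(f"T[{i}->{j}] = {t_ij:7.1f}")
```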
## Thought Questions
- Is past behavior reflective of future behavior?
- Can the future be predicted?
- Is the future independent of decisions, or are prophecies
self-fulfilling?
- How do we know if forecasts were successful?
- Against what standard are they to be judged?
- What values are embedded in the planning process?
- What happens when values change?
## Additional Problems
- Homework
- Additional
Problems
## Key Terms
- Rational Planning
- Transportation planning model
- Matrix, Full Matrix, Vector Matrix, Scalar Matrix
- Trip table
- Travel time matrix
- Origin, Destination
- Purpose
- Network
- Zone (Traffic Analysis Zone or Transportation Analysis Zone, or TAZ)
- External station or external zone
- Centroid
- Node
- Link
- Turn
- Route
- Path
- Mode
## Video
- Models
## References
|
# Transportation Economics/Data
There are a variety of types of transportation data used in analysis.
Some are listed below:
- Infrastructure Status
- Traffic Counts
- Travel Behavior Inventory
- Land Use Inventory
- Truck/Freight Demand
- External/Internal Demand (by Vehicle Type)
- Special Generators
## Revealed Preference
Household travel surveys which ask people what they actually did are a
type of *Revealed Preference* survey data that have been obtained by
direct observation of the choice that individuals make with respect to
travel behavior. *Travel Cost Analysis* uses the prices of market goods
to evaluate the value of goods that are not traded in the market.
*Hedonic Pricing* uses the market price of a traded good and measures of
its component attributes to calculate value. There are other methods to
attain Revealed Preference information, but surveys are the most common
in travel behavior.
### Travel Behavior Inventory
While the Cleveland Regional Area Traffic Study in 1927 was the first
metropolitan planning attempt sponsored by the US federal government,
the lack of comprehensive survey methods and standards at that time
precluded the systematic collection of information such as travel time,
origin and destination, and traffic counts. The first US travel surveys
appeared in urban areas after the Federal-Aid Highway Act of 1944
permitted the spending of federal funds on urban highways.[^1] A new
home-interview origin-destination survey method was developed in which
households were asked about the number of trips, purpose, mode choice,
origin and destination of the trips conducted on a daily basis. In 1944,
the US Bureau of Public Roads printed the Manual of Procedures for Home
Interview Traffic Studies.[^2] This new procedure was first implemented
in several small to mid-size areas. Highway engineers and urban planners
made use of the new data collected after the 1944 Highway Act extended
federally sponsored planning to travel surveys as well as traffic
counts, highway capacity studies, pavement condition studies and
cost-benefit analysis.
Attributes of a household travel survey, or travel behavior inventory
include:
- Travel Diary of \~ 1% sample of population (all trips made on one
day) every 10 years
- Socioeconomic/demographic data of survey respondents
- Collection methodology:
- Phone,
- Mail,
- In-Person at Home,
- In-Person at Work,
- Roadside
Many such surveys are available online at: Metropolitan Travel Survey
Archive
### Thought Question
What are the advantages and disadvantages of Revealed Preference
surveys?
## Stated Preference
In contrast with revealed preference, *Stated preference* is a group of
techniques used to calculate the utility functions of transport options
based on the response of an individual decision-maker to certain given
options. The options generally are based on descriptions of the
transport scenario or are constructed by the researcher
- *Contingent valuation* is based on the assumption that the best way
to find out the value that an individual places on something is
known by asking.
- *Compensating variation* is the compensating payment that leaves an
individual as well off as before the economic change.
- *Equivalent variation* for a benefit is the minimum amount of money
one would have to be compensated to leave the person as well as they
would be after the change.
- *Conjoint analysis* refers to the application of the design of
experiments to obtain the preferences of the individual, breaking
the task into a list of choices or ratings that enable us to compute
the relative importance of each of the attributes studied
### Thought Question
What are the advantages and disadvantages of Stated Preference surveys?
## Pavement conditions
*adapted from Xie, Feng and David Levinson (2008) The Use of Road
Infrastructure Data for Urban Transportation Planning: Issues and
Opportunities. Published in Infrastructure Reporting and Asset
Management Edited by Adjo Amekudzi and Sue McNeil. pp- 93-98. Publisher:
American Society of Civil Engineers, Reston, Virginia.*[^3]
Road infrastructure represents the supply side of an urban
transportation system. Pavement condition is a critical indicator to the
quality of road infrastructure in terms of providing a smooth and
reliable driving environment on roads. A series of indices have been
developed to evaluate the pavement conditions of road segments in their
respective jurisdictions: Pavement Condition Index (PCI) is scored as a
perfect roadway (100 points) minus point deductions for "distresses"
that are observed; Present Serviceability Rating (PSR) is measured as
vertical movement per unit horizontal movement (e.g. millimeters of
vertical displacement per meter of horizontal displacement) as one
drives along the road; Surface Rating (SR) is calculated by reviewing
images of the roadway based on the frequency and severity of defects;
Pavement Quality Index (PQI) is calculated using the PSR and SR to
evaluate the general condition of the road. A high PQI (up to 4.5) means
a road will most likely not need maintenance soon, whereas a low PQI
means it can be selected for maintenance.[^4]
These indices of pavement quality are basic measures for road
maintenance and preservation, for which each county develops its own
performance standards to evaluate pavement conditions and make decisions
on maintenance and preservation projects. Typically, pavement
preservation projects are prioritized based on PCI of road segments: the
lower the PCI, the higher the likelihood of selection. Taking Washington
County, Minnesota as an example,[^5] the county has determined that a
reasonable standard to maintain is an average PCI of 72. Thus any road
segment with its PCI below 72 has a chance to be selected for
preservation. Dakota County, Minnesota on the other hand, scores its
preservation projects according to the measure of PQI: a road segment
will be allocated 17 points (out of a possible 100) if its PQI falls
lower than 3.1.
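As a small illustration of the threshold rule just described, the sketch below screens segments against an average-PCI standard of 72 and lists candidates worst-first; the segment names and PCI values are invented.
```python
# Sketch of PCI-based preservation screening: segments below the assumed
# threshold of 72 become candidates, prioritized worst-first.
# Segment IDs and PCI values are invented placeholders.

PCI_THRESHOLD = 72

segments = {"CSAH-10 seg 3": 65, "CSAH-15 seg 1": 80,
            "CR-74 seg 2": 58, "CSAH-18 seg 4": 71}

candidates = sorted(
    ((seg, pci) for seg, pci in segments.items() if pci < PCI_THRESHOLD),
    key=lambda item: item[1],
)

for seg, pci in candidates:
    print(f"{seg:15s} PCI = {pci}  -> candidate for preservation")
```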
The pavement data structure is incompatible with the link-node structure
of the planning road network used by the Metropolitan Council and other
planning agencies. Typically, the measures of PCI, PSR, and PQI are
stored in records indexed by "highway segment numbers" along each
highway route. Highway sections with the same highway segment number are
differentiated by their starting and ending stations. There is no exact
match between highway segments in the actual road network and links in
the planning network, as stationing is a position along the curved
centerline of a highway while a planning network is a simplified
structure consisting of only straight lines intersecting at points.
Historic pavement data is generally unavailable in electronic format,
although the information on pavement history such as pavement life and
the duration since last repaving are important to estimate the cost of a
preservation project, also affecting the decision whether a specific
project will get selected and how much funding will be allocated.
## Traffic flow
Traffic conditions reflect the travel demand loaded on a given road
infrastructure. Traffic flows on roads, together with road capacity, can
be used to calculate the volume/capacity (V/C) ratio, which is an
approximate indicator for the level of service of road infrastructure,
and is commonly adopted by the jurisdictions in their respective
decision making processes. The traffic flows on the planning road
network are predicted by the transportation planning model, but the
results have to be calibrated with actual traffic data.
Loop detectors are the primary technology currently employed in many US
metropolitan areas to collect actual traffic data. E.g. In the Twin
Cities of Minneapolis-St. Paul, about one thousand detector stations
have been buried on major highways, through which Mn/DOT\'s Traffic
Management Center collects, stores, and makes public traffic data every
30 seconds, including measured volume (flow) and occupancy, and
calculated speed data for each detector station. Although the estimates
of Annual Average Daily Traffic (AADT) for the planning road network are
readily available, loop detectors provide more accurate measures of
traffic volume, since they are collecting real-time data on a continuous
basis. It also allows for calibrating models to hourly rather than daily
conditions.
Due to the limited ability to convert the raw data collected by loop
detectors, however, most forecasting models rely on AADT data. The raw
data are stored at 30-second intervals in binary code; for planning uses
they have to be converted and aggregated into the desired measures, such
as a peak-hour average or an average for a particular month or season, in
a systematic way.
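To illustrate that aggregation step, the sketch below rolls assumed 30-second detector counts up to hourly flows and picks out the peak hour; decoding of the raw binary records is assumed to have already happened.
```python
# Minimal sketch of aggregating 30-second loop-detector volume counts into
# hourly flows (vehicles/hour).  Counts are invented; decoding the raw
# 30-second binary records is assumed to have happened already.
import random

random.seed(0)
# 24 hours * 120 thirty-second intervals per hour of assumed vehicle counts
counts_30s = [random.randint(0, 15) for _ in range(24 * 120)]

hourly_flow = [sum(counts_30s[h * 120:(h + 1) * 120]) for h in range(24)]

peak_hour = max(range(24), key=lambda h: hourly_flow[h])
print(f"peak hour: {peak_hour}:00, flow = {hourly_flow[peak_hour]} veh/h")
print(f"daily total = {sum(hourly_flow)} veh/day")
```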
Another issue in integrating loop detector data into a planning road
network is to match the detector stations with the links in planning
networks. Similar to the problem encountered in translating pavement
data, detectors are located along major highways and mapped on the
actual geometry of the network, while the planning road network is a
simplified structure with only straight lines.
## Sampling Issues (Statistics)
- Sample Size,
- Population of Interest
- Sampling Method,
- Error:
- Measurement,
- Sampling,
- Computational,
- Specification,
- Transfer,
- Aggregation
- Bias,
- Oversampling
- Extent of Collection
- Spatial
- Temporal
- Span of Data
- Cross-section,
- Time Series, and
- Panel
## Metadata
*Adapted from Levinson, D. and Zofka, Ewa. (2006) "The Metropolitan
Travel Survey Archive: A Case Study in Archiving" in Travel Survey
Methods: Quality and Future Directions, Proceedings of the 5th
International Conference on Travel Survey Methods (Peter Stopher and
Cheryl Stecher, editors)*[^6]
Metadata allows data to function together. Simply put, metadata is
information about information -- labeling, cataloging and descriptive
information structured to permit data to be processed. Ryssevik and
Musgrave (1999)[^7] argue that high quality metadata standards are
essential as metadata is the launch pad for any resource discovery, maps
complex data, bridges the gap between data producers and consumers, and
links data with its resultant reports and scientific studies produced
about it. To meet the increasing need for proper data formats and
encoding standards, the World Wide Web Consortium (W3C) has developed
the generic Resource Description Framework (RDF) (W3C 2002). RDF treats
metadata generally, providing a standard way to use Extensible Markup
Language (XML) to "represent metadata in the form of statements about
properties and relationships of items" (W3C 2002).[^8] Resources can be
almost any type of file, including, of course, travel surveys. RDF
delivers a detailed and unified data-description vocabulary.
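As a small illustration of the idea of statements about properties and
relationships of items, the sketch below describes a travel survey
dataset with a few RDF statements using the rdflib library and Dublin
Core terms; the URI and property values are hypothetical.

```python
# Minimal illustration of RDF statements about a travel survey dataset,
# using rdflib and Dublin Core terms. The URI and property values are
# hypothetical, chosen only to show the statement structure.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS

g = Graph()
survey = URIRef("http://example.org/surveys/household-travel-2000")  # hypothetical
g.add((survey, DCTERMS.title, Literal("Example Household Travel Survey, 2000")))
g.add((survey, DCTERMS.creator, Literal("Example Metropolitan Planning Organization")))
g.add((survey, DCTERMS.date, Literal("2000")))

# Serialize the statements as RDF/XML, the encoding discussed in the text.
print(g.serialize(format="xml"))
```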
Applying these tools specifically to databases, the Data Documentation
Initiative (DDI) for Document Type Definitions (DTD) applies metadata
standards used for documenting datasets. DDI was first developed by
European and North American data archives, libraries and official
statistics agencies. "The Data Documentation Initiative (DDI) is an
effort to establish an international XML-based standard for the content,
presentation, transport, and preservation of documentation for datasets
in the social and behavioral sciences" (Data Documentation Initiative
2004). As this international standardization effort gathers momentum, it
is expected that more and more datasets will be documented using DDI as
the primary metadata format. With DDI, searching data archives on the
Internet no longer depends on an archivist\'s skill at capturing the
information that is important to researchers; the data-description
standard provides sufficient detail, organized in a user-friendly manner.
## References
- Travel Survey Manual online
## Appendix: Typical Household Survey Questions
(source: Denver Regional Council of Governments 2001 )
1. What is/verify home address
2. Assigned survey day
3. Is your residence a single-family home, duplex/ townhouse,
apartment/condominium, mobile home, or other?
4. How many people live in this household?
5. How many overnight visitors from outside of the region stayed with
you on your survey day?
6. How many motor vehicles are available to your household?
7. In total, how many telephone lines come into your home?
8. How many of the lines are used for voice communication?
9. Has telephone service in your home been continuous for the past 12
months?
10. What was the combined income from all sources for all members of
your household for 1996?
### Vehicle Questions
1. Vehicle model year
2. Vehicle make
3. Vehicle model
4. Body type
5. Fuel type
6. Who owns or leases this vehicle?
7. Prior to the survey day, when was the last day it (the vehicle) was
used?
8. Odometer reading (mileage) at the start of the survey day
9. Odometer reading (mileage) at the end of the survey day
### Person Questions
1. Person\'s first name (used for identification purposes only during
the survey; not saved on final data files)
2. Relation to head of household
3. Age
4. Sex
5. Licensed to drive?
6. Student status (not a student, part time, full time)
7. Grade level
8. Employment status
9. Primary job description (nurse, sales, teacher)
10. Primary employer\'s name
11. Primary employer\'s address
12. Primary Employer\'s business type (hospital, retail, etc.)
13. Does your primary employer offer flextime?
14. If flextime offered (primary employer), type of deviation allowed at
    start of day
15. If flextime offered (primary employer), type of deviation allowed at
    end of day
16. Number of other jobs or employers
17. Do you have a transit pass?
18. Monthly cost \[of transit pass\] to you
19. Did you make trips on the survey day?
20. If trips were made, did you use E-470 on the travel day?
21. If trips were made, did you use the HOV lanes on the travel day?
22. Did you work at your main job on the survey day?
### Environment Questions
Using a 1 to 10 scale, with 10 the best, how would you describe the
walking and bicycling environment around your:
1. home
2. work
3. school
4. Was the person interviewed by the surveyor?
5. Based on responses and the survey, did the person appear to use the
activity diary?
### Travel Diary Questions
1. This place is my home, my regular workplace, or another place
2. What kind of place is this (bank, grocery, park etc.)?
3. Place address
4. At what time did you arrive at this place?
5. What did you do at this place (main activity)?
6. What else did you do at this place (up to eight secondary
activities)?
7. Was this your last place for the 24-hour day?
8. At what time did you leave this place to go to the next place?
### Travel Method
1. Travel method used to make this trip and related travel details
### Auto Trip Questions
1. Which household vehicle was used (if a household vehicle was used)?
2. Total number of people in the vehicle
3. Total number of household members in the vehicle
4. If more than one person was in the vehicle, "is this a formal
   carpool/vanpool"?
5. Were HOV lanes used on this trip?
6. Was E-470 used for this trip?
7. What was the parking cost paid at the end of this trip?
8. What period is covered by the parking cost paid?
9. Was the parking cost fully or partly reimbursed?
10. What was the parking location (cross streets, lot name, if
    applicable, and city)
### Transit Trip Questions
The following four questions were asked if the travel method was transit
1. What was the transit route number?
2. What was your wait time for transit?
3. What was the transit fare paid?
4. How did you pay the transit fare?
The following four questions were asked if the travel method was walk or
bicycle
1. What was the bicycle or walk time?
2. Was a bike path used?
3. Where did you store this vehicle?
4. Was a walk path used?
## References
[^1]: Weiner, Edward, (1997) Urban Transportation Planning in the
United States: An Historical
Overview.
U.S. Department of Transportation. Fifth Edition.
[^2]: US Department of Commerce (1944) Manual of Procedures for Home
Interview Traffic Studies. US Bureau of Public Roads. Washington, DC
[^3]: Xie, Feng and David Levinson (2008) The Use of Road Infrastructure
Data for Urban Transportation Planning: Issues and Opportunities.
Published in Infrastructure Reporting and Asset Management Edited by
Adjo Amekudzi and Sue McNeil. pp- 93-98. Publisher: American Society
of Civil Engineers, Reston, Virginia
[^4]: Hammerand, J. (2006). "Rating Minnesota's Roads." The Minnesota
Local Technical Assistance Program (LTAP): Technology Exchange,
vol.24, No.1, 3
[^5]: Washington County Transportation Division (2004). "Washington
County Annual Performance Report for 2004."
\<<http://www.co.washington.mn.us/client_files/documents/adm/PerfMeas-2004//ADM->
PM-04-Transportation.pdf\> (May 20, 2006)
[^6]: Levinson, D. and Zofka, Ewa. (2006) "The Metropolitan Travel
Survey Archive: A Case Study in
Archiving" in
*Travel Survey Methods: Quality and Future Directions, Proceedings
of 5th International Conference on Travel Survey Methods* (Peter
Stopher and Cheryl Stecher, editors)
[^7]: Ryssevik J, Musgrave S. (1999) The Social Science Dream Machine:
Resource discovery, analysis and delivery on the Web. Paper given at
the IASSIST Conference. Toronto, May.
[^8]: World Wide Web Consortium (2002) Metadata Activity Statement
# Transportation Economics/Land Use Forecasting
**Land use forecasting** undertakes to project the distribution and
intensity of trip generating activities in the urban area. In practice,
land use models are demand driven, using as inputs the aggregate
information on growth produced by an aggregate economic forecasting
activity. Land use estimates are inputs to the transportation planning
process.
The discussion of land use forecasting to follow begins with a review of
the Chicago Area Transportation Study (CATS) effort. CATS researchers
did interesting work, but did not produce a transferable forecasting
model, and researchers elsewhere worked to develop models. After
reviewing the CATS work, the discussion will turn to the first model to
be widely known and emulated: the Lowry model developed by Ira S. Lowry
when he was working for the Pittsburgh Regional Economic Study. Second
and third generation Lowry models are now available and widely used, as
well as interesting features incorporated in models that are not widely
used.
Today, the transportation planning activities attached to metropolitan
planning organizations are the loci for the care and feeding of regional
land use models. In the US, interest in and use of models is spotty, for
most agencies are concerned with short run planning and day-to-day
decisions. Interest is higher in Europe and elsewhere.
Even though there isn't much use of full blown land use modeling in the
US today, we need to understand the subject: the concepts and analytic
tools pretty much define how land use-transportation matters are thought
about and handled; there is a good bit of interest in the research
community where there have been important developments; and when the
next upturn in infrastructure development comes along the present models
will form the starting place for work.
## Land Use Analysis at the Chicago Area Transportation Study
In brief, the CATS analysis of the 1950s distributed growth "by mind and
hand." The product was maps developed with a rule-based process. The
rules by which land use was allocated were based on state-of-the-art
knowledge and concepts, and it is hard to fault CATS on those grounds.
The CATS took advantage of Colin Clark's extensive work
on the distribution of population densities around city centers.
Theories of city form were available, sector and concentric circle
concepts, in particular. Urban ecology notions were important at the
University of Chicago and University of Michigan. Sociologists and
demographers at the University of Chicago had begun its series of
neighborhood surveys with an ecological flavor. Douglas Carroll, the
CATS director, had studied with Amos Hawley, an urban ecologist at
Michigan.
!Stylized Urban Density
Gradient{width="350"}
Colin Clark studied the population densities of many cities, and he
found traces similar to those in the figure. Historic data show how the
density line has changed over the years. To project the future, one uses
changes in the parameters as a function of time to project the shape of
density in the future, say in 20 years. The city spreads glacier-like.
The area under the curve is given by population forecasts.
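Clark\'s result is conventionally written as a negative exponential
density gradient (the standard functional form from the literature,
supplied here for concreteness):

$D(x) = D_0 e^{ - bx}$

where *D(x)* is population density at distance *x* from the center,
*D~0~* is the central density, and *b* is the gradient. Projecting the
future then amounts to extrapolating how *D~0~* and *b* change over
time, with the integral of *D(x)* over the urban area constrained to
match the forecast population (the area under the curve noted above).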
The CATS did extensive land use and activity surveys, taking advantage
of the City work done by the Chicago Planning Commission. Hock's work
forecasting activities said what the land uses-activities were that
would be accommodated under the density curve. Existing land use data
were arrayed in cross section. Land uses were allocated in a manner
consistent with the existing pattern.
The study area was divided into transportation analysis zones: small
zones where there was a lot of activity, larger zones elsewhere. The
original CATS scheme reflected its Illinois State connections. Zones
extended well away from the city. The zones were defined to take
advantage of Census data at the block and minor civil division levels.
They also strove for homogeneous land use and urban ecology attributes.
The first land use forecasts at CATS arrayed developments using "by
hand" techniques, as stated. We do not fault the "by hand" technique --
the then state of computers and data systems forced it. It was a
rule-based land use allocation. Growth was the forcing function, as were
inputs from the economic study. Growth said that the population density
envelope would have to shift. The land uses implied by the mix of
activities were allocated from "Where is the land available?" and
"What's the use now?" considerations. Certain types of activities
allocate easily: steel mills, warehouses, etc.
Conceptually, the allocation rules seem important. There is a lot of
spatial autocorrelation in urban land uses; it is driven by historical
path dependence: this sort of thing got started here and seeds more of
the same. This autocorrelation was somewhat lost in the step from "by
hand" to analytic models.
The CATS procedure was not viewed with favor by the emerging Urban
Transportation Planning professional peer group, and in the late 1950s
there was interest in the development of analytic forecasting
procedures. At about the same time, similar interests emerged to meet
urban redevelopment and sewer planning needs, and interest in analytic
urban analysis emerged in political science, economics, and geography.
## Lowry Model
!Flowchart of Lowry
Model{width="600"}
Hard on the heels of the CATS work, several agencies and investigators
began to explore analytic forecasting techniques, and between 1956 and
the early 1960s a number of modeling techniques evolved. Irwin (1965)
provides a review of the status of emerging models. One of the models,
the Lowry model, was widely adopted.
Supported at first by local organizations and later by a Ford Foundation
grant to the RAND Corporation, Ira S. Lowry undertook a three-year study
in the Pittsburgh metropolitan area. (Work at RAND will be discussed
later.) The environment was data rich, and there were good professional
relationships available in the emerging emphasis on location and
regional economies in the Economics Department at the University of
Pittsburgh under the leadership of Edgar M. Hoover. The structure of the
Lowry model is shown on the flow chart.
The flow chart gives the logic of the Lowry model. It is demand driven.
First, the model responds to an increase in basic employment. It then
responds to the consequent impacts on service activities. As Lowry
treated his model and as the flow chart indicates, the model is solved
by iteration. But the structure of the model is such that iteration is
not necessary.
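The logic can be stated in a few lines. The sketch below is an aggregate
caricature of the Lowry iteration (the multipliers are hypothetical, and
a real Lowry model allocates population and service employment across
zones with gravity-type functions rather than working with regional
totals):

```python
# Aggregate caricature of the Lowry iteration: basic employment generates
# population, population generates service employment, which generates more
# population, and so on. Parameter values are hypothetical.
def lowry_aggregate(basic_employment, alpha=2.5, beta=0.3, tol=1e-6):
    """alpha: residents per job; beta: service jobs per resident."""
    total_employment = basic_employment
    while True:
        population = alpha * total_employment
        service_employment = beta * population
        new_total = basic_employment + service_employment
        if abs(new_total - total_employment) < tol:
            return population, service_employment, new_total
        total_employment = new_total

print(lowry_aggregate(10_000))
```

Because each round simply scales the previous increment by the product
of the two multipliers, the iteration is a geometric series with
closed-form total employment equal to basic employment divided by
(1 - alpha*beta), provided alpha*beta < 1; this is why, as noted above,
iteration is not strictly necessary.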
Although the language giving justification for the model specification
is an economic language and Lowry is an economist, the model is not an
economic model. Prices, markets, and the like do not enter.
A review of Lowry's publication will suggest reasons why his approach
has been widely adopted. The publication was the first full elaboration
of a model, data analysis and handling problems, and computations.
Lowry's writing is excellent. He is candid and discusses his reasoning
in a clear fashion. One can imagine an analyst elsewhere reading Lowry
and thinking, "Yes, I can do that."
The diffusion of innovations of the model is interesting. Lowry was not
involved in consulting, and his word of mouth contacts with
transportation professionals were quite limited. His interest was and is
in housing economics. Lowry did little or no "selling." We learn that
people will pay attention to good writing and an idea whose time has
come.
The model makes extensive use of gravity or interaction decaying with
distance functions. Use of "gravity model" ideas was common at the time
Lowry developed his model; indeed, the idea of the gravity model was at
least 100 years old at the time. It was under much refinement at the
time of Lowry's work; persons such as Alan Voorhees, Mort Schneider,
John Hamburg, Roger Creighton, and Walter Hansen made important
contributions. (See Carrothers 1956).
The Lowry Model provided a point of departure for work in a number of
places. Goldner (1971) traces its impact and the modifications made to
it. Steven Putnam at the University of Pennsylvania used it to develop
PLUM (Projective Land Use Model) and an incremental version, IPLUM. We
estimate that Lowry derivatives are used in most MPO studies, but most
of today's workers do not recognize the Lowry heritage; the derivatives
are one or two steps away from the mother logic.
## Penn-Jersey Model
!600 px\|Flowchart of Penn-Jersey land use forecasting
model
The P-J (Penn-Jersey, greater Philadelphia area) analysis had little
impact on planning practice. It will now be discussed, even so, because
it illustrates what planners might have done, given available knowledge
building blocks. It is an introduction to some of the work by
researchers who are not practicing planners.
The P-J study scoped widely for concepts and techniques. It scoped well
beyond the CATS and Lowry efforts, especially taking advantage of things
that had come along in the late 1950s. It was well funded and viewed by
the State and the Bureau of Public Roads as a research and a practical
planning effort. Its Director's background was in public administration,
and leading personnel were associated with the urban planning department
at the University of Pennsylvania. The P-J study was planning and policy
oriented.
The P-J study drew on several factors \"in the air\". First, there was a
lot of excitement about economic activity analysis and the applied math
that it used, at first, linear programming. T. J. Koopmans, the
developer of activity analysis, had worked in transportation. There was
pull for transportation (and communications) applications, and the tools
and interested professionals were available.
There was work on flows on networks, through nodes, and activity
location. Orden (1956) had suggested the use of conservation equations
when networks involved intermediate modes; flows from raw material
sources through manufacturing plants to market were treated by Beckmann
and Jacob Marschak (1955) and Goldman (1958) had treated commodity flows
and the management of empty vehicles.
Maximal flow and synthesis problems were also treated (Boldreff 1955,
Gomory and Hu 1962, Ford and Fulkerson 1956, Kalaba and Juncosa 1956,
Pollack 1964). Balinski (1960) considered the problem of fixed cost.
Finally, Cooper (1963) considered the problem of optimal location of
nodes. The problem of investment in link capacity was treated by
Garrison and Marble (1958) and the issue of the relationship between the
length of the planning time-unit and investment decisions was raised by
Quandt (1960) and Pearman (1974).
A second set of building blocks was evolving in location economics,
regional science, and geography. Edgar Dunn (1954) undertook an
extension of the classic von Thünen analysis of the location of rural
land uses. Also, there had been a good bit of work in Europe on the
interrelations of economic activity and transportation, especially
during the railroad deployment era, by German and Scandinavian
economists. That work was synthesized and augmented in the 1930's by
August Lösch, and his The Location of Economic Activities was translated
into English during the late 1940s. Edgar Hoover's work with the same
title was also published in the late 1940s. Dunn's analysis was mainly
graphical; static equilibrium was claimed by counting equations and
unknowns. There was no empirical work (unlike Garrison 1958). For its
time, Dunn's was a rather elegant work.
William Alonso's (1964) work soon followed. It was modeled closely on
Dunn's and also was a University of Pennsylvania product. Although
Alonso's book was not published until 1964, its content was fairly
widely known earlier, having been the subject of papers at professional
meetings and Committee on Urban Economics (CUE) seminars. Alonso's work
became much more widely known than Dunn's, perhaps because it focused on
"new" urban problems. It introduced the notion of bid rent and treated
the question of the amount of land consumed as a function of land rent.
Wingo (1961) was also available. It was different in style and thrust
from Alonso and Dunn's books and touched more on policy and planning
issues. Dunn's important, but little noted, book undertook analysis of
location rent, the rent referred to by Marshall as situation rent. Its
key equation was:
$R = Y\left( {P - c} \right) - Ytd$
where: *R* = rent per unit of land, *Y* = yield (output) per unit of
land, *P* = market price per unit of product, *c* = cost of production
per unit of product, *d* = distance to market, and *t* = unit
transportation cost.
In addition, there were also demand and supply schedules.
This formulation by Dunn is very useful, for it indicates how land rent
ties to transportation cost. Alonso's urban analysis starting point was
similar to Dunn's, though he gave more attention to market clearing by
actors bidding for space.
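To make the tie concrete, setting *R* = 0 in Dunn\'s equation gives the
distance at which rent is exhausted:

$Y\left( {P - c} \right) - Ytd = 0 \Rightarrow d^* = \frac{{P - c}}{t}$

With purely illustrative numbers (Y = 100 units per acre, P = 2 and
c = 1 dollars per unit, t = 0.05 dollars per unit-mile), rent is \$100
per acre at the market and declines linearly to zero at *d\** = 20
miles; a transport improvement that lowers *t* raises rents at every
location and pushes the margin outward.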
The question of exactly how rents tied to transportation was sharpened
by those who took advantage of the duality properties of linear
programming. First, there was a spatial price equilibrium perspective,
as in Henderson (1957, 1958) Next, Stevens (1961) merged rent and
transportation concepts in a simple, interesting paper. In addition,
Stevens showed some optimality characteristics and discussed
decentralized decision-making. This simple paper is worth studying for
its own sake and because the model in the P-J study took the analysis
into the urban area, a considerable step.
Stevens 1961 paper used the linear programming version of the
transportation, assignment, translocation of masses problem of Koopmans,
Hitchcock, and Kantorovich. His analysis provided an explicit link
between transportation and location rent. It was quite transparent, and
it can be extended simply. In response to the initiation of the P-J
study, Herbert and Stevens (1960) developed the core model of the P-J
Study. Note that this paper was published before the 1961 paper. Even
so, the 1961 paper came first in Stevens' thinking.
The Herbert-Stevens model was housing centered, and the overall study
had the view that the purpose of transportation investments and related
policy choices was to make Philadelphia a good place to live. Similar to
the 1961 Stevens paper, the model assumed that individual choices would
lead to overall optimization.
The P-J region was divided into *u* small areas recognizing *n*
household groups and *m* residential bundles. Each residential bundle
was defined by the house or apartment, the amenity level in the
neighborhood (parks, schools, etc.), and the trip set associated with
the site. There is an objective function:
$\max Z = \sum_{k = 1}^u {\sum_{i = 1}^n {\sum_{h = 1}^m {x_{ih}^k \left( {b_{ih} - c_{ih}^k } \right)} } } \quad x_{ih}^k \geq 0$
wherein *x~ih~^k^* is the number of households in group *i* selecting
residential bundle *h* in area *k*. The terms in parentheses are *b~ih~*
(the budget allocated by *i* to bundle *h*) and *c~ih~^k^*, the purchase
cost of *h* in area *k*. In short, the sum of the differences between
what households are willing to pay and what they have to pay is
maximized; a surplus is maximized. The equation says nothing about who gets the
surplus: it is divided between households and those who supply housing
in some unknown way. There is a constraint equation for each area
limiting the land use for housing to the land supply available.
$\sum_{i = 1}^n {\sum_{h = 1}^m {s_{ih} x_{ih}^k } } \leq L^k$
where: *s~ih~* = land used for bundle *h* by a household in group *i*,
and *L^k^* = land supply in area *k*
And there is a constraint equation for each household group assuring
that all folks can find housing.
$\sum_{k = 1}^u {\sum_{h = 1}^m {x_{ih}^k } } = N_i$
where: *N~i~* = number of households in group *i*
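A small numerical instance makes the structure of the primal clear. The
sketch below solves a toy Herbert-Stevens problem with scipy\'s linear
programming routine; all numbers (surpluses, land coefficients, land
supplies, household counts) are hypothetical and chosen only so the
problem is feasible.

```python
# Toy instance of the Herbert-Stevens primal solved with scipy.optimize.linprog.
# All numbers (surpluses, land coefficients, land supplies, household counts)
# are hypothetical.
import numpy as np
from scipy.optimize import linprog

n, m, u = 2, 2, 2                         # household groups, bundles, areas
rng = np.random.default_rng(0)
surplus = rng.random((n, m, u))           # b_ih - c_ih^k: willingness to pay minus cost
s = np.array([[1.0, 2.0], [1.5, 2.5]])    # s_ih: land used by group i choosing bundle h
L = np.array([500.0, 300.0])              # L^k: land supply in each area
N = np.array([200.0, 150.0])              # N_i: households in each group

# Decision variables x_ih^k flattened in (i, h, k) order; linprog minimizes,
# so the surplus is negated.
c = -surplus.ravel()

# Land constraints: sum_i sum_h s_ih * x_ih^k <= L^k, one row per area k.
A_ub = np.zeros((u, n * m * u))
for k in range(u):
    for i in range(n):
        for h in range(m):
            A_ub[k, (i * m + h) * u + k] = s[i, h]

# Household constraints: sum_k sum_h x_ih^k = N_i, one row per group i.
A_eq = np.zeros((n, n * m * u))
for i in range(n):
    for h in range(m):
        for k in range(u):
            A_eq[i, (i * m + h) * u + k] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=L, A_eq=A_eq, b_eq=N, bounds=(0, None))
print("maximized surplus:", -res.fun)
# The dual prices on the land constraints play the role of the area rents r^k.
```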
A policy variable is explicit, the land available in areas. Land can be
made available by changing zoning and land redevelopment. Another policy
variable is explicit when we write the dual of the maximization problem,
namely:
$\min Z' = \sum_{k = 1}^u {r^k L^k } - \sum_{i = 1}^n {v_i N_i }$
Subject to:
$s_{ih} r^k - v_i \geq b_{ih} - c_{ih}^k$
$r^k \geq 0$
The variables are *r^k^* (rent in area *k*) and *v~i~*, an unrestricted
subsidy variable specific to each household group. Common sense says
that a policy will be better for some than for others, and that is the
reasoning behind the subsidy variable. The subsidy variable is also a
policy variable because society may choose to subsidize housing budgets
for some groups. The constraint equations may force such policy actions.
It is apparent that the Herbert-Stevens scheme is a very interesting
one. It\'s also apparent that it is housing centered, and the tie to
transportation planning is weak. That question is answered when we
examine the overall scheme for study, the flow chart of a single
iteration of the model. How the scheme works requires little study. The
chart doesn't say much about transportation. Changes in the
transportation system are displayed on the chart as if they are a policy
matter.
The word "simulate" appears in boxes five, eight, and nine. The P-J
modelers would say, "We are making choices about transportation
improvements by examining the ways improvements work their way through
urban development. The measure of merit is the economic surplus created
in housing."
Academics paid attention to the P-J study. The Committee on Urban
Economics was active at the time. The committee was funded by the Ford
Foundation to assist in the development of the nascent urban economics
field. It often met in Philadelphia for review of the P-J work. Stevens
and Herbert were less involved as the study went along. Harris gave
intellectual leadership, and he published a fair amount about the study
(1961, 1962). However, the P-J influence on planning practice was nil.
The study didn't put transportation up front. There were unsolvable data
problems. Much was promised but never delivered. The Lowry model was
already available.
## Kain Model
!Figure - Causal arrow diagram illustrating Kain's econometric model
for transportation
demand{width="600"}
About 1960, the Ford Foundation made a grant to the RAND Corporation to
support work on urban transportation problems (Lowry's work was
supported in part by that grant). The work was housed in the logistics
division of RAND, where the economists were based. The head of
that division was then Charles Zwick, who had worked on transportation
topics previously.
The RAND work ranged from new technology and the cost of tunneling to
urban planning models and analyses with policy implications. Some of the
researchers at RAND were regular employees. Most, however, were imported
for short periods of time. The work was published in several formats:
first in the RAND P series and RM series and then in professional
publications or in book form. Often, a single piece of work is available
in differing forms at different places in the literature.
In spite of the diversity of topics and styles of work, one theme runs
through the RAND work -- the search for economic policy guides. We see
that theme in Kain (1962), which is discussed by de Neufville and
Stafford, and the figure is adapted from their book.
Kain's model dealt with direct and indirect affects. Suppose income
increases. The increase has a direct effect on travel time and indirect
affects through the use of land, auto ownership, and choice of mode.
Work supported at RAND also resulted in Meyer, Kain and Wohl (1964).
These parts of the work at RAND had considerable influence on subsequent
analysis (but not so much on practice as on policy). John Meyer became
President of the National Bureau of Economic Research and worked to
refocus its lines of work. Urban analysis Kain-style formed the core of
a several-year effort and yielded book length publications (see, e.g.,
G. Ingram, et al., The NBER Urban Simulation Model, Columbia Univ.
Press, 1972). After serving in the Air Force, Kain moved to Harvard,
first to redirect the Urban Planning Department. After a time, he
relocated to the Kennedy School, and he, along with José A.
Gómez-Ibáñez, John Meyer, and C. Ingram, led much work in an
economic-policy analysis style. Martin Wohl moved on from RAND,
eventually, to Carnegie-Mellon University, where he continued his style
of work (e.g. Wohl 1984).
## Policy Oriented Gaming
The notion that the impact of policy on urban development might be
simulated was the theme for a conference at Cornell in the early 1960s;
collegiums were formed, and several streams of work emerged. Several persons
developed rather simple (from today's view) simulation games. Land use
development was the outcome of gravitational type forces and the issue
faced was that of conflicts between developers and planners when
planners intervened in growth. CLUG and METROPOLIS are two rather well
known products from this stream of work (they were the SimCity of their
day); there must be twenty or thirty other similar planner vs. developer
in the political context games. There seems to have been little serious
attempt to analyze use of these games for policy formulation and
decision-making, except for work at the firm Environmetrics.
Peter House, one of the Cornell Conference veterans, established
Environmetrics early in the 1960s. It, too, started with relatively
simple gaming ideas. Over about a ten-year period, the comprehensiveness
of gaming devices was gradually improved and, unlike the other gaming
approaches, transportation played a role in their formulation.
Environmetrics' work moved into the Environmental Protection Agency and
was continued for a time at the EPA Washington Environmental Studies
Center.
A model known as River Basin was generalized to GEM (General
Environmental Assessment Model) and then birthed SEAS (Strategic
Environmental Assessment Model) and SOS (Son of SEAS). There was quite a
bit of development as the models were generalized, too much to be
discussed here.
The most interesting thing to be noted is change in the way the use of
the models evolved. Use shifted from a "playing games" stance to an
"evaluate the impact of federal policy" stance. The model (both
equations and data) is viewed as a generalized city or cities. It
responds to the question: What would be the impact of proposed policies
on cities?
An example of generalized question answering is LaBelle and Moses
(1983), who implement the UTP process on typical cities to assess the
impact of several policies. There is no mystery why this approach was
used: House had moved from the EPA to the DOE, and the study was
prepared for his office.
## University of North Carolina
A group at Chapel Hill, mainly under the leadership of Stuart Chapin,
began its work with simple analysis devices somewhat similar to those
used in games. Results include Chapin (1965), Chapin and H. C. Hightower
(1966) and Chapin and Weiss (1968). That group subsequently focused on
(1) the ways in which individuals make tradeoffs in selecting
residential property, (2) the roles of developers and developer
decisions in the urban development process, and (3) information about
choices obtained from survey research. Lansing and Muller (1964 and
1967) at the Survey Research Center worked in cooperation with the
Chapel Hill Group in developing some of this latter information.
The first work was on simple, probabilistic growth models. It quickly
moved from this style to game-like interviews to investigate preferences
for housing. Persons interviewed would be given "money" and a set of
housing attributes -- sidewalks, garage, numbers of rooms, lot size,
etc. How do they spend their money? This is an early version of the game
The Sims. The work also began to examine developer behavior, as
mentioned. (See: Kaiser 1972).
## Reviews and Surveys
In addition to reviews at CUE meetings and sessions at professional
meetings, there have been a number of organized efforts to review
progress in land use modeling. An early effort was the May 1965 issue of
the Journal of the American Institute of Planners edited by B. Harris.
The next major effort was a Highway Research Board Conference in June,
1967 (HRB 1968) and this was most constructive. This reference contains
a review paper by Lowry, comments by Chapin, Alonso, and others. Of
special interest is Appendix A, which listed several ways that analysis
devices had been adapted for use. Robinson (1972) gives the flavor of
urban redevelopment oriented modeling. And there have been critical
reviews (e.g. Brewer 1973, Lee 1974). Pack (1978) addresses agency
practice; it reviews four models and a number of case studies of
applications. (See also Zettel and Carll 1962 and Pack and Pack 1977).
The discussion above has been limited to models that most affected
practice (Lowry) and theory (P-J, etc.); a dozen more are noted in the
reviews. Several of those deal with retail and industry location, and
several were oriented to urban redevelopment projects where
transportation was not at issue.
## Discussion
Lowry-derived land use analysis tools reside in the MPOs. The MPOs also
have a considerable data capability including census tapes and programs,
land use information of varied quality, and survey experiences and
survey-based data. Although large model work continues, fine detail
analysis dominates agency and consultant work in the US. One reason is
the requirement for environmental impact statements. Energy, noise, and
air pollution have been of concern, and techniques special to the
analysis of these topics have been developed. Recently, interest has
increased in the uses of developer fees and/or other developer
transportation related actions. Perceived shortages for funds for
highways and transit are one motive for extracting resources or actions
from developers. There's also the long-standing ethic that those who
occasion costs should pay. Finally, there is a small amount of
theoretical or academic work. Small is the operative word. There are few
researchers and the literature is limited.
The discussion to follow will first emphasize the latter,
theory-oriented work. It will then turn to a renewed interest in
planning models in the international arena. Modern behavioral, academic,
or theory-based analyses of transportation and land use date from about
1965. By modern we mean analysis that derives aggregate results from
micro behavior. The first models were Herbert-Stevens in character. Similar
to the P-J model, they:
- Treated land as the constraining resource and land use choices given
land rent variations as the critical behavior.
- Imagined roles for policy makers.
- Emphasized residential land uses and ignored interdependencies in
land uses.
- Used closed system, comparative statics ways of thinking.
- And gave no special attention to transportation.
There have been three major developments subsequently:
1. Consideration of transportation activities and labor and capital
inputs in addition to land inputs,
2. Efforts to use dynamic, open system ways of thinking, and
3. Inquiry into how micro choice behavior yields macro results.
The Herbert-Stevens model was not a behavioral model in the sense that
it did not try to map from micro to macro behavior. It did assume
rational, maximizing behavior by locators. But that was attached to
macro behavior and policy by assuming some centralized authority that
provided subsidies. Wheaton (1974) and Anderson (1982) modified the
Herbert-Stevens approach in different, but fairly simple, ways to deal
with the artificiality of the Herbert-Stevens formulation.
An alternative to the P-J, Herbert-Stevens tradition was seeded when
Edwin S. Mills, who is known as the father of modern urban economics,
took on the problem of scoping more widely. Beginning with Mills (1972),
Mills has developed a line of work yielding more publications and follow
on work by others, especially his students.
Using a Manhattan geometry, Mills incorporated a transportation
component in his analysis. Homogeneous zones defined by the
transportation system were analyzed as positioned x integer steps away
from the central zone via the Manhattan geometry. Mills treated
congestion by assigning integer measures to levels of service, and he
considered the costs of increasing capacity. To organize flows, Mills
assumed a single export facility in the central node. He allowed
capital-land rent trade offs yielding the tallest buildings in the
central zones.
Stating this in a rather long but not difficult to understand linear
programming format, Mills' system minimizes land, capital, labor, and
congestion costs, subject to a series of constraints on the quantities
affecting the system. One set of these is the exogenously given vector
of export levels. Mills (1974a,b) permitted exports from non-central
zones, and other modifications shifted the ways congestion is measured
and allowed for more than one mode of transport.
With respect to activities, Mills introduced an input-output type
coefficient: *a~qrs~* denotes land input *q* per unit of output *r*
using production technique *s*. T.J. Kim (1979) has followed the Mills
tradition through the addition of articulating sectors. The work briefly
reviewed above adheres to a closed-form, comparative statics manner of
thinking. The discussion now turns to dynamics.
The literature gives rather varied statements on what consideration of
dynamics means. Most often, there is the comment that time is considered
in an explicit fashion, and analysis becomes dynamic when results are
run out over time. In that sense, the P-J model was a dynamic model.
Sometimes, dynamics are operationalized by allowing things that were
assumed static to change with time. Capital gets attention. Most of the
models of the type discussed previously assume that capital is
malleable, and one considers dynamics if capital is taken as durable yet
subject to ageing -- e.g., a building once built stays there but gets
older and less effective. On the people side, intra-urban migration is
considered. Sometimes too, there is an information context. Models
assume perfect information and foresight. Let's relax that assumption.
Anas (1978) is an example of a paper that is "dynamic" because it
considers durable capital and limited information about the future.
Residents were mobile; some housing stock was durable (outlying), but
central city housing stock was subject to obsolescence and abandonment.
Persons working in other traditions tend to emphasize feedbacks and
stability (or the lack of stability) when they think "dynamics," and
there is some literature reflecting those modes of thought. The best
known is Forrester (1968), which set off an enormous amount of critique
and some thoughtful follow-on extensions (e.g., Chen (ed.), 1972).
Robert Crosby in the University Research Office of the US DOT was very
much interested in the applications of dynamics to urban analysis, and
when the DOT program was active some work was sponsored (Kahn (ed)
1981). The funding for that work ended, and we doubt if any new work was
seeded.
The analyses discussed use land rent ideas. The direct relation between
transportation and land rent is assumed, e.g., as per Stevens. There is
some work that takes a less simple view of land rent. An interesting
example is Thrall (1987). Thrall introduces a consumption theory of land
rent that includes income effects; utility is broadly considered. Thrall
manages both to simplify the analytic treatment, making the theory
readily accessible, and to develop insights about policy and transportation.
## Requiem
Wachs[^1] summarizes Lee\'s (1973) \"Requiem for Large Scale Models.\"[^2]
John Landis[^3] has responded, identifying seven challenges facing
large-scale models:
1. Models - microbehavioral (actors and agents) \... Social
Benefit/Social Action
2. Simulation - multiple movies/scenarios
3. Respond to constraints and investments
4. Nonlinearity - path dependence in non-artifactual way (structure and
outcomes, network effects)
5. spatial vs. real autocorrelation, emergence - new dynamics,
threshold network effects
6. preference utility diversity and change over time
7. Useful beyond calibration periods. Embed innovators and norming
agents. Strategic and response function.
## References
- Alonso, William, Location and Land Use, Harvard Univ. Press, 1964.
- Anas, Alex, Dynamics of Urban Residential Growth, Journal of Urban
Economics, 5 , pp. 66--87, 1978
- Anderson, G.S. A Linear Program Model of Housing Equilibrium,
Journal of Urban Economics. 11, pp. 157--168, 1982
- Balinski, M. L. Fixed-Cost Transportation Problems Naval Research
Logistics Quarterly, 8, 41-54, 1960.
- Beckmann, M and T. Marschak, An Activity Analysis Approach to
Location Theory, Kyklos, 8, 128-143, 1955,
- Blunden, W. R. and J. A. Black. The Land-Use/Transportation System,
Pergamon Press, 1984 (Second Edition).
- Boldreff, A., Determination of the Maximal Steady State Flow of
Traffic Through a Railroad Network Operations Research , 3, 443-65,
1955.
- Boyce David E., LeBlanc Larry., and Chon K. \"Network Equilibrium
Models of Urban Location and Travel Choices: A Retrospective
Survey\" Journal of Regional Science, Vol. 28, No 2, 1988
- Brewer, Garry D. Politicians, Bureaucrats and the Consultant New
York: Basic Books, 1973
- Chen, Kan (ed), Urban Dynamics: Extensions and Reflections, San
Francisco Press, 1972
- Cooper, Leon, Location-Allocation Problems Operations Research, 11,
331-43, 1963.
- Dunn, Edgar S. Jr., The Location of Agricultural Production,
University of Florida Press 1954.
- Ford, L. R. and D. R. Fulkerson, "Algorithm for Finding Maximal
Network Flows" Canadian Journal of Math, 8, 392-404, 1956.
- Garrison, William L. and Duane F. Marble. Analysis of Highway
Networks: A Linear Programming Formulation Highway Research Board
Proceedings, 37, 1-14, 1958.
- Goldman, T.A. Efficient Transportation and Industrial Location
Papers, RSA, 4, 91-106, 1958
- Gomory, E. and T. C. Hu, An Application of Generalized Linear
Programming to Network Flows SIAM Journal, 10, 260--83, 1962.
- Harris, Britton, Linear Programming and the Projection of Land Uses,
P-J Paper #20.
- Harris, Britton, Some Problems in the Theory of Intraurban Location,
Operations Research 9 , pp. 695--721 1961.
- Harris, Britton, Experiments in the Projection of Transportation and
Land Use, Traffic Quarterly, April pp. 105--119. 1962.
- Herbert, J. D. and Benjamin Stevens, A Model for the Distribution of
Residential Activity in Urban Areas," Journal of Regional Science, 2
pp. 21-39 1960.
- Irwin, Richard D. "Review of Existing Land-Use Forecasting
Techniques," Highway Research Record No. 88, pp. 194--199. 1965.
- Isard, Walter et al., Methods of Regional Analysis: An Introduction
to Regional Science MIT Press 1960.
- Kahn, David (ed.) Essays in Social Systems Dynamics and
Transportation: Report of the Third Annual Workshop in Urban and
Regional Systems Analysis, DOT-TSC-RSPA-81-3. 1981.
- Kalaba, R. E. and M. L. Juncosa, Optimal Design and Utilization of
Transportation Networks Management Science, 3, 33-44, 1956.
- Kim T.J. Alternative Transportation Modes in a Land Use Model,
Journal. of Urban Economics, 6, pp. 197--216. 1979
- Kim T.J. A Combined Land Use-Transportation Model When Zonal Travel
Demand is Endogenously Given, Transportation Research, 17B,
pp. 449--462. 1983.
- LaBelle, S. J. and David O. Moses' Technology Assessment of
Productive Conservation in Urban Transportation, Argonne National
Laboratory, (ANL/ES 130) 1983.
- Lowry, Ira S. A Model of Metropolis RAND Memorandum 4025-RC, 1964.
- Meyer, John Robert, John Kain, and Martin Wohl The Urban
Transportation Problem. Cambridge: Harvard University Press, 1964.
- Mills, Edwin S. Markets and Efficient Resource Allocation in Urban
Areas, Swedish Journal of Economics 74, pp. 100--113, 1972.
- Mills, Edwin S. Sensitivity Analysis of Congestion and Structure in
an Efficient Urban Environment, in Transport and Urban
Environment, J. Rothenberg and I. Heggie (eds), Wiley, 1974
- Mills, Edwin S. Mathematical model for Urban Planning, in Urban and
    Social Economics in Market and Planned Economies, A. Brown ed.,
    Praeger, 1974
- Orden, Alex, The Transshipment Problem Management Science, 2,
227-85, 1956
- Pack, Janet Urban Models: Diffusion and Policy Application Regional
Science Research Institute, Monograph 7, 1978
- Pack, H. and Pack, Janet "Urban Land Use Models: The Determinants
    of Adoption and Use," Policy Sciences, 8, pp. 79--101. 1977.
- Pearman, A. D., Two Errors in Quandt's Model of Transportation and
Optimal Network Construction Journal of the Regional Science
Association, 14, 281-286, 1974.
- Pollack, Maurice, Message Route Control in a Large Teletype Network"
Journal of the ACM, 11, 104-16, 1964.
- Quandt, R. E, Models of Transportation and Optimal Network
Construction Journal of the Regional Science Association , 2, 27-45,
1960.
- Robinson Ira M. (ed.) Decision Making in Urban Planning Sage
Publications, 1972.
- Stevens, Benjamin. H. "Linear Programming and Location Rent,"
Journal of Regional Science, 3 , pp. 15--26. 1961.
- Thrall, Grant I. Land Use and Urban Form, Metheun, 1987
- Wheaton, W. C. Linear Programming and Location Equilibrium: The
Herbert-Stevens Model Revisited, Journal of Urban Economics 1,
pp. 278--28. 1974
- Zettel R. M. and R. R. Carll "Summary Review of Major Metropolitan
Area Transportation Studies in the United States," University of
California, Berkeley, ITTE, 1962.
[^1]: Wachs, Martin (1994) Keynote Address: Evolution and Objectives of
the Travel Model Improvement Program in Travel Model Improvement
Program Conference Proceedings August 14--17, 1994, edited by Shunk,
Gordon and Bass, Patricia <http://ntl.bts.gov/DOCS/443.html>
[^2]: Lee, Douglass (1973) \"Requiem for Large Scale Models.\" Journal
    of the American Institute of Planners. 39(3) 163--178
[^3]: from PRSCO conference June 2001
# Transportation Economics/Evaluation
A **benefit-cost analysis** (BCA)[^1] is often required in determining
whether a project should be approved and is useful for comparing similar
projects. It determines the stream of quantifiable economic benefits and
costs that are associated with a project or policy. If the benefits
exceed the costs, the project is worth doing; if the benefits fall short
of the costs, the project is not. Benefit-cost analysis is appropriate
where the technology is known and well understood or a minor change from
existing technologies is being performed. BCA is not appropriate when
the technology is new and untried because the effects of the technology
cannot be easily measured or predicted. However, just because something
is new in one place does not necessarily make it new, so benefit-cost
analysis would be appropriate, e.g., for a light-rail or commuter rail
line in a city without rail, or for any road project, but would not be
appropriate (at the time of this writing) for something truly radical
like teleportation.
The identification of the costs, and more particularly the benefits, is
the chief component of the "art" of Benefit-Cost Analysis. This
component of the analysis is different for every project. Furthermore,
care should be taken to avoid double counting; especially counting cost
savings in both the cost and the benefit columns. However, a number of
benefits and costs should be included at a minimum. In transportation
these costs should be separated for users, transportation agencies, and
the public at large. Consumer benefits are measured by consumer's
surplus. It is important to recognize that the demand curve is downward
sloping, so a project may produce benefits both to existing users
in terms of a reduction in cost and to new users by making travel
worthwhile where previously it was too expensive.
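A common way to operationalize this in transportation benefit-cost
practice is the "rule of half" (a standard convention, stated here for
concreteness rather than taken from the text): if a project lowers the
generalized cost from *p~0~* to *p~1~* and demand rises from *q~0~* to
*q~1~*, the change in consumer\'s surplus is approximately

$\Delta CS = \left( {p_0 - p_1 } \right)q_0 + \frac{1}{2}\left( {p_0 - p_1 } \right)\left( {q_1 - q_0 } \right) = \frac{1}{2}\left( {p_0 - p_1 } \right)\left( {q_0 + q_1 } \right)$

where the first term is the cost saving to existing users and the second
is the benefit to newly induced users.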
Agency benefits come from profits. But since most agencies are
non-profit, they receive no direct profits. Agency construction,
operating, maintenance, or demolition costs may be reduced (or
increased) by a new project; these cost savings (or increases) can
either be considered in the cost column, or the benefit column, but not
both.
Society is impacted by a transportation project through an increase or
reduction of negative and positive externalities. Negative
externalities, or social costs, include air and noise pollution and
accidents. Accidents can be considered either a social cost or a private
cost, or divided into two parts, but cannot be considered in total in
both columns.
If there are network externalities (i.e. the benefits to consumers are
themselves a function of the level of demand), then consumers' surplus
for each different demand level should be computed. Of course this is
easier said than done. In practice, positive network externalities are
ignored in Benefit Cost Analysis.
## Background
### Early Beginnings
When Benjamin Franklin was confronted with difficult decisions, he often
recorded the pros and cons on two separate columns and attempted to
assign weights to them. While not mathematically precise, this "moral or
prudential algebra", as he put it, allowed for careful consideration of
each "cost" and "benefit" as well as the determination of a course of
action that provided the greatest benefit. While Franklin was certainly
a proponent of this technique, he was not the first. Western
European governments, in particular, had been employing similar methods
for the construction of waterway and shipyard improvements.
Ekelund and Hebert (1999) credit the French as pioneers in the
development of benefit-cost analyses for government projects. The first
formal benefit-cost analysis in France occurred in 1708. Abbe de
Saint-Pierre attempted to measure and compare the incremental benefit of
road improvements (utility gained through reduced transport costs and
increased trade), with the additional construction and maintenance
costs. Over the next century, French economists and engineers applied
their analysis efforts to canals (Ekelund and Hebert, 1999). During this
time, The École Polytechnique had established itself as France's premier
educational institution, and in 1837 sought to create a new course in
"social arithmetic": "...the execution of public works will in many
cases tend to be handled by a system of concessions and private
enterprise. Therefore our engineers must henceforth be able to evaluate
the utility or inconvenience, whether local or general, or each
enterprise; consequently they must have true and precise knowledge of
the elements of such investments." (Ekelund and Hebert, 1999, p. 47).
The school also wanted to ensure their students were aware of the
effects of currencies, loans, insurance, amortization and how they
affected the probable benefits and costs to enterprises.
In the 1840s French engineer and economist Jules Dupuit (1844, tr. 1952)
published an article "On Measurement of the Utility of Public Works",
where he posited that benefits to society from public projects were not
the revenues taken in by the government (Aruna, 1980). Rather the
benefits were the difference between the public's willingness to pay and
the actual payments the public made (which he theorized would be
smaller). This "relative utility" concept was what Alfred Marshall would
later rename with the more familiar term, "consumer surplus" (Ekelund
and Hebert, 1999).
Vilfredo Pareto (1906) developed what became known as Pareto improvement
and Pareto efficiency (optimal) criteria. Simply put, a policy is a
Pareto improvement if it provides a benefit to at least one person
without making anyone else worse off (Boardman, 1996). A policy is
Pareto efficient (optimal) if no one else can be made better off without
making someone else worse off. British economists Kaldor and Hicks
(Hicks, 1941; Kaldor, 1939) expanded on this idea, stating that a
project should proceed if the losers could be compensated in some way.
It is important to note that the Kaldor-Hicks criteria states it is
sufficient if the winners could potentially compensate the project
losers. It does not require that they be compensated.
### Benefit-cost Analysis in the United States
Much of the early development of benefit-cost analysis in the United
States is rooted in water related infrastructure projects. The US Flood
Control Act of 1936 was the first instance of a systematic effort to
incorporate benefit-cost analysis to public decision-making. The act
stated that the federal government should engage in flood control
activities if "the benefits to whomsoever they may accrue \[be\] in
excess of the estimated costs," but did not provide guidance on how to
define benefits and costs (Aruna, 1980, Persky, 2001). Early Tennessee
Valley Authority (TVA) projects also employed basic forms of
benefit-cost analysis (US Army Corp of Engineers, 1999). Due to the lack
of clarity in measuring benefits and costs, many of the various public
agencies developed a wide variety of criteria. Not long after, attempts
were made to set uniform standards.
The U.S. Army Corp of Engineers "Green Book" was created in 1950 to
align practice with theory. Government economists used the Kaldor-Hicks
criteria as their theoretical foundation for the restructuring of
economic analysis. This report was amended and expanded in 1958 under
the title of "The Proposed Practices for Economic Analysis of River
Basin Projects" (Persky, 2001).
The Bureau of the Budget adopted similar criteria with 1952's Circular
A-47 - "Reports and Budget Estimates Relating to Federal Programs and
Projects for Conservation, Development, or Use of Water and Related Land
Resources".
### Modern Benefit-cost Analysis
During the 1960s and 1970s the more modern forms of benefit-cost
analysis were developed. Most analyses required evaluation of:
1. The present value of the benefits and costs of the proposed project
at the time they occurred
2. The present value of the benefits and costs of alternatives
occurring at various points in time (opportunity costs)
3. Determination of risky outcomes (sensitivity analysis)
4. The value of benefits and costs to people with different incomes
(distribution effects/equity issues) (Layard and Glaister, 1994)
### The Planning Programming Budgeting System (PPBS) - 1965
The Planning Programming Budgeting System (PPBS) developed by the
Johnson administration in 1965 was created as a means of identifying and
sorting priorities. This grew out of a system Robert McNamara created
for the Department of Defense a few years earlier (Gramlich, 1981). The
PPBS featured five main elements:
1. A careful specification of basic program objectives in each major
area of governmental activity.
2. An attempt to analyze the outputs of each governmental program.
3. An attempt to measure the costs of the program, not for one year but
over the next several years ("several" was not explicitly defined).
4. An attempt to compare alternative activities.
5. An attempt to establish common analytic techniques throughout the
government.
### Office of Management and Budget (OMB) -- 1977
Throughout the next few decades, the federal government continued to
demand improved benefit-cost analysis with the aim of encouraging
transparency and accountability. Approximately 12 years after the
adoption of the PPBS system, the Bureau of the Budget was renamed the
Office of Management and Budget (OMB). The OMB formally adopted a system
that attempts to incorporate benefit-cost logic into budgetary
decisions. This came from the Zero-Based Budgeting system set up by
Jimmy Carter when he was governor of Georgia (Gramlich, 1981).
### Recent Developments
Executive Order 12292, issued by President Reagan in 1981, required a
regulatory impact analysis (RIA) for every major governmental regulatory
initiative over \$100 million. The RIA is basically a benefit-cost
analysis that identifies how various groups are affected by the policy
and attempts to address issues of equity (Boardman, 1996).
According to Robert Dorfman (1997), most modern-day
benefit-cost analyses suffer from several deficiencies. The first is
their attempt "to measure the social value of all the consequences of a
governmental policy or undertaking by a sum of dollars and cents".
Specifically, Dorfman mentions the inherent difficulty in assigning
monetary values to human life, the worth of endangered species, clean
air, and noise pollution. The second shortcoming is that many
benefit-cost analyses exclude information most useful to decision
makers: the distribution of benefits and costs among various segments of
the population. Government officials need this sort of information and
are often forced to rely on other sources that provide it, namely,
self-seeking interest groups. Finally, benefit-cost reports are often
written as though the estimates are precise, and the readers are not
informed of the range and/or likelihood of error present.
The Clinton Administration sought proposals to address this problem in
revising Federal benefit-cost analyses. The proposal required numerical
estimates of benefits and costs to be made in the most appropriate unit
of measurement, and "specify the ranges of predictions and shall explain
the margins of error involved in the quantification methods and in the
estimates used" (Dorfman, 1997). Executive Order 12898 formally
established the concept of Environmental Justice with regards to the
development of new laws and policies, stating they must consider the
"fair treatment for people of all races, cultures, and incomes." The
order requires each federal agency to identify and address
"disproportionately high and adverse human health or environmental
effects of its programs, policies and activities on minority and
low-income populations."
### Probabilistic Benefit-Cost Analysis
Figure 1: Probability-density distribution of net present values,
approximated by a normal curve. (Source: Treasury Board of Canada,
Benefit-Cost Analysis Guide, 1998)

Figure 2: Probability distribution curves for the NPVs of projects A
and B. (Source: Treasury Board of Canada, Benefit-Cost Analysis Guide,
1998)

Figure 3: Probability distribution curves for the NPVs of projects A
and B, where project A has a narrower range of possible NPVs. (Source:
Treasury Board of Canada, Benefit-Cost Analysis Guide, 1998)
In recent years there has been a push for the integration of sensitivity
analyses of possible outcomes of public investment projects with open
discussions of the merits of assumptions used. This "risk analysis"
process has been suggested by Flyvbjerg (2003) in the spirit of
encouraging more transparency and public involvement in decision-making.
The Treasury Board of Canada's Benefit-Cost Analysis Guide recognizes
that implementation of a project has a probable range of benefits and
costs. It posits that the "effective sensitivity" of an outcome to a
particular variable is determined by four factors:
- the responsiveness of the Net Present Value (NPV) to changes in the
variable;
- the magnitude of the variable\'s range of plausible values;
- the volatility of the value of the variable (that is, the
probability that the value of the variable will move within that
range of plausible values); and
- the degree to which the range or volatility of the values of the
variable can be controlled.
It is helpful to think of the range of probable outcomes in a graphical
sense, as depicted in Figure 1 (probability versus NPV).
Once these probability curves are generated, a comparison of different
alternatives can also be performed by plotting each one on the same set
of ordinates. Consider for example, a comparison between alternative A
and B (Figure 2).
In Figure 2, the probability that any specified positive outcome will be
exceeded is always higher for project B than it is for project A. The
decision maker should, therefore, always prefer project B over project
A. In other cases, an alternative may have a much broader or narrower
range of NPVs compared to other alternatives (Figure 3).
Some decision-makers might be attracted by the possibility of a higher
return (despite the possibility of greater loss) and therefore might
choose project B. Risk-averse decision-makers will be attracted by the
possibility of lower loss and will therefore be inclined to choose
project A.
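To make the comparison concrete, here is a minimal Monte Carlo sketch (not from the Treasury Board guide; all project figures are hypothetical) that draws uncertain annual benefits for two projects, computes the NPV of each draw, and summarizes the resulting distributions, the same information Figures 1 to 3 present as probability curves.

```python
import random
import statistics

def npv(benefits, costs, rate):
    """Net present value of annual benefit/cost streams (years 1..n)."""
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs), start=1))

def simulate(mean_benefit, sd_benefit, annual_cost, capital, years=20,
             rate=0.05, draws=10_000):
    """Monte Carlo sample of NPVs with normally distributed annual benefits."""
    results = []
    for _ in range(draws):
        benefits = [random.gauss(mean_benefit, sd_benefit) for _ in range(years)]
        costs = [annual_cost] * years
        results.append(npv(benefits, costs, rate) - capital)
    return results

# Hypothetical projects: A (narrow range) and B (higher mean, wider range)
a = simulate(mean_benefit=12.0, sd_benefit=1.0, annual_cost=2.0, capital=80.0)
b = simulate(mean_benefit=14.0, sd_benefit=4.0, annual_cost=2.0, capital=80.0)

for name, sample in (("A", a), ("B", b)):
    print(name,
          "mean NPV:", round(statistics.mean(sample), 1),
          "std dev:", round(statistics.stdev(sample), 1),
          "P(NPV > 0):", round(sum(x > 0 for x in sample) / len(sample), 2))
```

Plotting histograms of the two samples would reproduce the kind of curves shown in Figures 2 and 3; the summary statistics alone already show whether one alternative dominates or simply trades a higher expected NPV for more dispersion.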
## Discount rate
Both the costs and benefits flowing from an investment are spread over
time. While some costs are one-time and borne up front, other benefits
or operating costs may be paid at some point in the future, and still
others received as a stream of payments collected over a long period of
time. Because of inflation, risk, and uncertainty, a dollar received now
is worth more than a dollar received at some time in the future.
Similarly, a dollar spent today is more onerous than a dollar spent
tomorrow. This reflects the concept of time preference that we observe
when people pay bills later rather than sooner. The existence of real
interest rates reflects this time preference. The appropriate discount
rate depends on what other opportunities are available for the capital.
If simply putting the money in a government insured bank account earned
10% per year, then at a minimum, no investment earning less than 10%
would be worthwhile. In general, projects are undertaken in descending
order of their rate of return, until the cost of raising capital exceeds
the benefit of using that capital. Applying this efficiency argument, no
project should be undertaken on cost-benefit grounds if another feasible
project with a higher rate of return is available.
Three alternative bases for setting the government’s test discount rate
have been proposed:
1. The social rate of time preference recognizes that a dollar\'s
consumption today will be more valued than a dollar\'s consumption
at some future time for, in the latter case, the dollar will be
subtracted from a higher income level. The amount of this difference
per dollar over a year gives the annual rate. By this method, a
project should not be undertaken unless its rate of return exceeds
the social rate of time preference.
2.  The opportunity cost of capital basis uses the rate of return on
    private sector investment: a government project should not be
    undertaken if it earns less than a comparable private sector
    investment. This rate is generally higher than the social rate of
    time preference.
3. The cost of funds basis uses the cost of government borrowing, which
for various reasons related to government insurance and its ability
to print money to back bonds, may not equal exactly the opportunity
cost of capital.
Typical estimates of social time preference rates are around 2 to 4
percent while estimates of the social opportunity costs are around 7 to
10 percent.
Generally, for Benefit-Cost studies an acceptable rate of return (the
government's test rate) will already have been established. An
alternative is to compute the analysis over a range of interest rates,
to see to what extent the analysis is sensitive to variations in this
factor. In the absence of knowing what this rate is, we can compute the
rate of return (internal rate of return) for which the project breaks
even, where the net present value is zero. Projects with high internal
rates of return are preferred to those with low rates.
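A minimal sketch of this kind of check, using a hypothetical cash-flow stream (the outlay, annual benefit, and horizon below are illustrative): compute the NPV over a range of discount rates, then locate the internal rate of return, the rate at which NPV crosses zero, by bisection.

```python
def npv(cashflows, rate):
    """Present value of a list of (year, amount) cash flows."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

# Hypothetical project: $100 outlay now, $12 net benefit in each of years 1-15
flows = [(0, -100.0)] + [(t, 12.0) for t in range(1, 16)]

for r in (0.02, 0.04, 0.07, 0.10):
    print(f"NPV at {r:.0%}: {npv(flows, r):8.2f}")

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(cashflows, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"Internal rate of return: {irr(flows):.2%}")
```

For this illustrative stream the NPV is positive at low rates and negative at 10%, and the break-even (internal) rate of return falls between 8% and 9%; whether the project passes then depends on the test rate chosen.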
## Determine a present value
The basic math underlying the idea of determining a present value is
explained using a simple compound interest rate problem as the starting
point. Suppose the sum of \$100 is invested at 7 percent for 2 years. At
the end of the first year the initial \$100 will have earned \$7
interest and the augmented sum (\$107) will earn a further 7 percent (or
\$7.49) in the second year. Thus at the end of 2 years the \$100
invested now will be worth \$114.49.
The discounting problem is simply the converse of this compound interest
problem. Thus, \$114.49 receivable in 2 years time, and discounted by 7
per cent, has a present value of \$100.
Present values can be calculated by the following equation:
\(1\) $P = \frac{F}{{\left( {1 + i} \right)^n }}
\,\!$
where:
- F = future money sum
- P = present value
- i = discount rate per time period (i.e. years) in decimal form (e.g.
0.07)
- n = number of time periods before the sum is received (or cost paid,
e.g. 2 years)
Illustrating our example with equations we have:
$P = \frac{F}{{\left( {1 + i} \right)^n }} = \frac{{114.49}}{{\left( {1 + 0.07} \right)^2 }} = 100.00
\,\!$
The present value, in year 0, of a stream of equal annual payments A
starting in year 1 is found by multiplying A by the reciprocal of the
capital recovery factor (the factor that converts a present sum into an
equivalent annual cost). That is, by:
\(2\)
$P = A\left[ {\frac{{\left( {1 + i} \right)^n - 1}}{{i\left( {1 + i} \right)^n }}} \right]
\,\!$
where:
- A = Annual Payment
For example, 12 annual payments of \$500, starting in year 1, have a
present value at the middle of year 0 of \$3971 when discounted at 7%:
$P = A\left[ {\frac{{\left( {1 + i} \right)^n - 1}}{{i\left( {1 + i} \right)^n }}} \right] = 500\left[ {\frac{{\left( {1 + 0.07} \right)^{12} - 1}}{{0.07\left( {1 + 0.07} \right)^{12} }}} \right] = 3971
\,\!$
The present value, in year 0, of m annual payments of A, starting in
year n + 1, can be calculated by combining discount factors for a
payment in year n and the factor for the present value of m annual
payments. For example: 12 annual mid-year payments of \$250 in years 5
to 16 have a present value in year 4 of \$1986 when discounted at 7%.
Therefore in year 0, 4 years earlier, they have a present value of
\$1515.
$P_{Y = 4} = A\left[ {\frac{{\left( {1 + i} \right)^n - 1}}{{i\left( {1 + i} \right)^n }}} \right] = 250\left[ {\frac{{\left( {1 + 0.07} \right)^{12} - 1}}{{0.07\left( {1 + 0.07} \right)^{12} }}} \right] = 1986
\,\!$
$P_{Y = 0} = \frac{F}{{\left( {1 + i} \right)^n }} = \frac{{P_{Y = 4} }}{{\left( {1 + i} \right)^n }} = \frac{{1986}}{{\left( {1 + 0.07} \right)^4 }} = 1515
\,\!$
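Equations (1) and (2) are easy to wrap in small helper functions; the sketch below reproduces the worked figures above (\$114.49 back to \$100.00, \$3971, \$1986, and \$1515).

```python
def present_value(future_sum, i, n):
    """Equation (1): present value of a single sum received n periods ahead."""
    return future_sum / (1 + i) ** n

def pv_of_annuity(payment, i, n):
    """Equation (2): present value, one period before the first payment,
    of n equal annual payments."""
    return payment * ((1 + i) ** n - 1) / (i * (1 + i) ** n)

print(round(present_value(114.49, 0.07, 2), 2))   # 100.0
print(round(pv_of_annuity(500, 0.07, 12)))        # 3971
p_year4 = pv_of_annuity(250, 0.07, 12)            # value in year 4
print(round(p_year4))                             # 1986
print(round(present_value(p_year4, 0.07, 4)))     # 1515 in year 0
```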
## Evaluation criterion
Three equivalent conditions can tell us whether a project is worthwhile:
1. The discounted present value of the benefits exceeds the discounted
present value of the costs
2. The present value of the net benefit must be positive.
3. The ratio of the present value of the benefits to the present value
of the costs must be greater than one.
However, that is not the entire story. More than one project may have a
positive net benefit. From a set of mutually exclusive projects, the one
selected should have the highest net present value. We might note that
if there are insufficient funds to carry out all projects with a
positive net present value, then the discount rate used in computing
present values does not reflect the true cost of capital. Rather, it is
too low.
There are problems with using the internal rate of return or the
benefit/cost ratio methods for project selection, though they provide
useful information. The ratio of benefits to costs depends on how
particular items (for instance, cost savings) are ascribed to either the
benefit or cost column. While this does not affect net present value, it
will change the ratio of benefits to costs (though it cannot move a
project from a ratio of greater than one to less than one).
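A small numerical sketch (hypothetical present values) of why the ratio, but not the net present value, depends on how a cost saving is classified:

```python
def summarize(benefits, costs):
    """Return (net present value, benefit/cost ratio) for lists of PVs."""
    b, c = sum(benefits), sum(costs)
    return b - c, b / c

# Hypothetical present values: $60 of time savings, $20 of operating-cost
# savings, $50 of capital cost. Count the saving either as a benefit or
# as an offset to cost.
as_benefit = summarize(benefits=[60, 20], costs=[50])
as_cost_offset = summarize(benefits=[60], costs=[50 - 20])

print("Saving counted as a benefit:     NPV = %.0f, B/C = %.2f" % as_benefit)
print("Saving counted as a cost offset: NPV = %.0f, B/C = %.2f" % as_cost_offset)
# NPV is 30 either way; the ratio is 1.60 in the first case and 2.00 in the second.
```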
## Examples
### Example 1: Benefit Cost Application
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
**Problem:**
This problem, adapted from Watkins
(1996), illustrates how a
Benefit Cost Analysis might be applied to a project such as a highway
widening. The improvement of the highway saves travel time and increases
safety (by bringing the road to modern standards). But there will almost
certainly be more total traffic than was carried by the old highway.
This example excludes external costs and benefits, though their addition
is a straightforward extension. The data for the "No Expansion" case can
be collected from off-the-shelf sources; the "Expansion" column, however,
requires forecasting and modeling. Assume there are 250 weekdays
(excluding holidays) each year and four rush hours per weekday.
**Table 1: Data** (rows: Peak Passenger Trips and Trip Time, Off-peak
Passenger Trips and Trip Time, Traffic Fatalities, for the "No
Expansion" and "Expansion" cases)
Note: the operating cost for a vehicle is unaffected by the project, and
is \$4.
**Table 2: Model Parameters** (rows: Peak Value of Time, Off-Peak Value
of Time, Value of Life)
What is the benefit-cost relationship?
```{=html}
</div>
```
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #02D4EE; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
**Solution:**
**Benefits**
Figure 1: Change in Consumers' Surplus
A 50 minute trip at \$0.15/minute is \$7.50, while a 30 minute trip is
only \$4.50. So for existing users, the expansion saves \$3.00/trip.
Similarly in the off-peak, the cost of the trip drops from \$3.50 to
\$2.50, saving \$1.00/trip.
Consumers' surplus increases both for the trips which would have been
taken without the project and for the trips which are stimulated by the
project (so-called "induced demand"), as illustrated above in Figure 1.
Our analysis is divided into Old and New Trips; the benefits are given
in Table 3.
**Table 3: Hourly Benefits** (rows: Peak and Off-peak, with benefits
shown separately for old and new trips)
Note: Old Trips: For trips which would have been taken anyway the
benefit of the project equals the value of the time saved multiplied by
the number of trips. New Trips: The project lowers the cost of a trip
and public responds by increasing the number of trips taken. The benefit
to new trips is equal to one half of the value of the time saved
multiplied by the increase in the number of trips. There are 1000 peak
hours per year; with 8760 hours per year, that leaves 7760 off-peak
hours per year. These numbers permit the calculation of annual benefits
(shown in Table 4).
**Table 4: Annual Travel Time Benefits** (rows: Peak, Off-peak, and
Total)
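The hourly and annual benefit calculations can be sketched as follows. The per-trip savings (\$3.00 peak, \$1.00 off-peak) and the 1000/7760 split of hours come from the text; the trip volumes are placeholders, not the Table 1 values.

```python
def hourly_benefit(old_trips, new_trips, cost_saving_per_trip):
    """Consumer-surplus gain per hour: full saving for existing trips,
    half the saving for trips induced by the lower cost (rule of half)."""
    existing = old_trips * cost_saving_per_trip
    induced = 0.5 * (new_trips - old_trips) * cost_saving_per_trip
    return existing + induced

# Savings per trip from the text: $3.00 in the peak, $1.00 off-peak.
# Trip volumes below are placeholders, not the Table 1 values.
peak = hourly_benefit(old_trips=5000, new_trips=6000, cost_saving_per_trip=3.00)
offpeak = hourly_benefit(old_trips=2000, new_trips=2200, cost_saving_per_trip=1.00)

annual = peak * 1000 + offpeak * 7760   # 1000 peak hours, 7760 off-peak hours
print("peak $/hr:", peak, " off-peak $/hr:", offpeak,
      " annual travel time benefit: $%.1fM" % (annual / 1e6))
```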
The safety benefits of the project are the product of the number of
lives saved multiplied by the value of life. Typical values of life are
on the order of \$3,000,000 in US transportation analyses. We need to
value life to determine how to trade off between safety investments and
other investments. While your life is invaluable to you (that is, I
could not pay you enough to allow me to kill you), you don't act that
way when considering chance of death rather than certainty. You take
risks that have small probabilities of very bad consequences. You do not
invest all of your resources in reducing risk, and neither does society.
If the project is expected to save one life per year, it has a safety
benefit of \$3,000,000. In a more complete analysis, we would need to
include safety benefits from non-fatal accidents.
The annual benefits of the project are given in Table 5. We assume that
this level of benefits continues at a constant rate over the life of the
project.
**Table 5: Total Annual Benefits** (rows by type of benefit: Time
Saving, Reduced Risk, and Total)
**Costs**
Highway costs consist of right-of-way, construction, and maintenance.
Right-of-way includes the cost of the land and buildings that must be
acquired prior to construction. It does not consider the opportunity
cost of the right-of-way serving a different purpose. Let the cost of
right-of-way be \$100 million, which must be paid before construction
starts. In principle, part of the right-of-way cost can be recouped if
the highway is not rebuilt in place (for instance, a new parallel route
is constructed and the old highway can be sold for development). Assume
that all of the right-of-way cost is recoverable at the end of the
thirty-year lifetime of the project. The \$1 billion construction cost
is spread uniformly over the first four years. Maintenance costs \$2
million per year once the highway is completed.
The schedule of benefits and costs for the project is given in Table 6.
**Table 6: Schedule Of Benefits And Costs (\$ millions)** (columns for
years 0, 1-4, 5-29, and 30: right-of-way purchase in year 0,
construction outlays in years 1-4, benefits and maintenance in years
5-30, and right-of-way recovery in year 30)
**Conversion to Present Value**
The benefits and costs are in constant value dollars. Assume the real
interest rate (excluding inflation) is 2%. The following equations
provide the present value of the streams of benefits and costs.
To compute the present value, as of year 4, of the benefits received in
years 5 through 30, we apply equation (2) from above.
$P = A\left[ {\frac{{\left( {1 + i} \right)^n - 1}}{{i\left( {1 + i} \right)^n }}} \right] = 139.72\left[ {\frac{{\left( {1 + 0.02} \right)^{26} - 1}}{{0.02\left( {1 + 0.02} \right)^{26} }}} \right] = 2811.31
\,\!$
To convert that year 4 value to a year 0 value, we apply equation (1):
$P = \frac{F}{{\left( {1 + i} \right)^n }} = \frac{{2811.31}}{{\left( {1 + 0.02} \right)^4 }} = 2597.21
\,\!$
The present value of right-of-way costs is computed as today's right of
way cost (\$100 M) minus the present value of the recovery of those
costs in Year 30, computed with equation (1):
$P = \frac{F}{{\left( {1 + i} \right)^n }} = \frac{{100}}{{\left( {1 + 0.02} \right)^{30} }} = 55.21
\,\!$
$100 - 55.21 = 44.79
\,\!$
The present value of the construction costs, a stream of \$250M outlays
over four years, is computed with equation (2):
$P = A\left[ {\frac{{\left( {1 + i} \right)^n - 1}}{{i\left( {1 + i} \right)^n }}} \right] = 250\left[ {\frac{{\left( {1 + 0.02} \right)^4 - 1}}{{0.02\left( {1 + 0.02} \right)^4 }}} \right] = 951.93
\,\!$
Maintenance costs are similar to benefits in that they fall in the same
time periods, and they are computed the same way. To compute the present
value, as of year 4, of the \$2M annual maintenance costs in years 5
through 30, we apply equation (2) from above.
$P = A\left[ {\frac{{\left( {1 + i} \right)^n - 1}}{{i\left( {1 + i} \right)^n }}} \right] = 2\left[ {\frac{{\left( {1 + 0.02} \right)^{26} - 1}}{{0.02\left( {1 + 0.02} \right)^{26} }}} \right] = 40.24
\,\!$
To convert that year 4 value to a year 0 value, we apply equation (1):
$P = \frac{F}{{\left( {1 + i} \right)^n }} = \frac{{40.24}}{{\left( {1 + 0.02} \right)^4 }} = 37.18
\,\!$
As Table 7 shows, the benefit/cost ratio of 2.5 and the positive net
present value of \$1563.31 million indicate that the project is
worthwhile under these assumptions (value of time, value of life,
discount rate, life of the road). Under a different set of assumptions
(e.g., a higher discount rate), the outcome may differ.
**Table 7: Present Value of Benefits and Costs (\$ millions)**

| Item                | Present Value |
|---------------------|---------------|
| Benefits            | 2597.21       |
| Costs: Right-of-Way | 44.79         |
| Costs: Construction | 951.93        |
| Costs: Maintenance  | 37.18         |
| Costs SubTotal      | 1033.90       |
| Net Benefit (B-C)   | 1563.31       |
| Benefit/Cost Ratio  | 2.51          |
```{=html}
</div>
```
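As a check on the arithmetic, the following sketch rebuilds the schedule from the stated assumptions (annual benefits of \$139.72M and maintenance of \$2M in years 5 through 30, \$250M construction outlays in years 1 through 4, \$100M of right-of-way paid in year 0 and recovered in year 30, a 2% discount rate) and recomputes the present values summarized in Table 7.

```python
def present_value(amount, rate, years):
    """Equation (1): discount a single future amount."""
    return amount / (1 + rate) ** years

def pv_of_annuity(payment, rate, n):
    """Equation (2): PV, one period before the first payment, of n payments."""
    return payment * ((1 + rate) ** n - 1) / (rate * (1 + rate) ** n)

r = 0.02
# Benefits: $139.72M per year for 26 years (years 5-30), valued at year 4,
# then discounted 4 more years back to year 0.
benefits = present_value(pv_of_annuity(139.72, r, 26), r, 4)
# Right-of-way: $100M now, recovered at the end of year 30.
right_of_way = 100 - present_value(100, r, 30)
# Construction: $250M per year in years 1-4.
construction = pv_of_annuity(250, r, 4)
# Maintenance: $2M per year for 26 years (years 5-30).
maintenance = present_value(pv_of_annuity(2, r, 26), r, 4)

costs = right_of_way + construction + maintenance
print(f"Benefits   {benefits:8.2f}")          # ~2597.21
print(f"Costs      {costs:8.2f}")             # ~1033.90
print(f"Net (B-C)  {benefits - costs:8.2f}")  # ~1563.31
print(f"B/C ratio  {benefits / costs:8.2f}")  # ~2.51
```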
## Thought Questions
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
### Decision Criteria
Which is the more appropriate decision criterion: Benefit/Cost or
Benefit - Cost? Why?
```{=html}
</div>
```
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
### Is it only money that matters?
**Problem**
Is money the only thing that matters in Benefit-Cost Analysis? Is
\"converted\" money the only thing that matters? For example, the value
of human life in dollars?
**Solution**
Absolutely not. Many benefits and costs can be converted to a monetary
value, but not all. For example, you can put a price on human safety,
but how can you put a price on, say, aesthetics, something that everyone
agrees is beneficial? What else can you think of?
```{=html}
</div>
```
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
### Can small units of time be given the same value of time as larger units of time?
In other words, do 60 improvements each saving a traveler 1 minute equal
1 improvement saving a traveler 60 minutes? Similarly, does 1
improvement saving 1000 travelers 1 minute each equal the value of a
single traveler saving 1000 minutes? These are different problems, one
intra-traveler and one inter-traveler, but they are related.
Several issues arise.
A. Is the value of time linear or non-linear? To this we must conclude
that the value of time is surely non-linear. I am much more agitated
waiting 3 minutes at a red light than 2, and I begin to suspect the
light is broken. Studies of ramp meters show a similar phenomenon.[^2]
B. How do we apply this in a benefit-cost analysis? If we broke one
project into 60 smaller projects, each with a smaller value of travel
time saved, and then added the gains, we would get a different result
than what obtains with a single large project. For analytical
convenience, we would like our analyses to be additive rather than
sub-additive; otherwise, arbitrarily dividing a project changes the
result. In particular, many smaller projects will produce an undercount
that is quite significant, resulting in a much lower measured benefit
than if the projects were bundled.
As a practical matter, every benefit/cost analysis assumes a single
value of time rather than a non-linear value of time. This also helps
avoid biasing public investments towards areas with people who have a
high value of time (the rich).
Mode choice analyses do, however, weight different components of travel
time differently, especially transit time (in-vehicle time is less
onerous than waiting time). The implicit value of time for travelers
thus depends on the type of time, though generally not on the amount of
time. Using the log-sum of the mode choice model as a measure of benefit
would implicitly account for this, as sketched below.
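A minimal sketch of the log-sum idea, with illustrative (not estimated) coefficients and travel times: the change in expected consumer surplus per traveler is the change in the log-sum divided by the magnitude of the cost coefficient.

```python
import math

def logsum(utilities):
    """Expected maximum utility across modes in a logit model."""
    return math.log(sum(math.exp(u) for u in utilities))

def utility(cost, ivt, wait, b_cost=-0.4, b_ivt=-0.03, b_wait=-0.06):
    """Illustrative coefficients: waiting time weighted twice in-vehicle time."""
    return b_cost * cost + b_ivt * ivt + b_wait * wait

# (cost $, in-vehicle minutes, waiting minutes) before and after an improvement
before = [utility(3.00, 40, 0), utility(2.00, 50, 10)]   # car, transit
after  = [utility(3.00, 30, 0), utility(2.00, 45, 5)]    # faster car and transit

b_cost = -0.4
surplus_change = (logsum(after) - logsum(before)) / (-b_cost)
print(f"Benefit per traveler: ${surplus_change:.2f}")
```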
```{=html}
</div>
```
```{=html}
<div style="background: ##aaaaaa; border: 4px solid #ffCC99; padding: 6pt; margin: 12pt 0.5em; width: -0.5em">
```
### Are sunk costs sunk, is salvage value salvageable? A paradox in engineering economics analysis
Salvage value is defined as \"The estimated value of an asset at the end
of its useful life.\" [^3] Sunk cost is defined as \"Cost already
incurred which cannot be recovered regardless of future events.\"[^4]
It is often said in economics that \"sunk costs are sunk\", meaning they
should not be considered a cost in economic analysis, because the money
has already been spent.
Now consider two cases
In **case 1**, we have a road project that costs \$10.00 today and at
the end of 10 years has some economic value remaining, say a salvage
value of \$5.00, which when discounted back to the present is \$1.93 (at
10% interest). This value is the residual value of the road. Thus, the
total present cost of the project is \$10.00 - \$1.93 = \$8.07.
Clearly the road cannot be moved. However, its presence makes it easier
to build future roads: the land has been acquired and graded, and some
useful material for aggregate may remain on site. The salvage value can
thus be thought of as the amount by which the project reduces the cost
to future generations of rebuilding the road. Alternatively, the land
could be sold for development if the road is no longer needed, or turned
into a park.
Assume the present value of the benefit of the road is \$10.00. The
benefit/cost ratio is \$10.00 over \$8.07 or 1.23. If we treat the
salvage value as a benefit rather than cost, the benefit is \$10.00 +
\$1.93 = \$11.93 and the cost is \$10, and the B/C is 1.193.
In 10 years' time, the community decides to replace the old worn-out
road with a new road. This is a new project. The salvage value from the
previous project is now the sunk cost of the current project (after all,
the road is there and cannot be moved, so it costs the current project
nothing to exploit). So the cost of the project in 10 years' time would
be \$10.00 - \$5.00 = \$5.00. Discounted to the present, that is \$1.93.
The benefit in 10 years' time is also \$10.00, but the cost in 10 years'
time is \$5.00, so the benefit/cost ratio the community perceives is
\$10.00/\$5.00 = 2.00.
Aggregating the two projects
- the benefits are \$10 + \$3.86 = \$13.86
- the costs are \$8.07 + \$1.93 = \$10.00
- the collective benefit/cost ratio is 1.386
- the NPV is benefits - costs = \$3.86
One might argue the salvage value is a benefit, rather than a cost
reduction. In that case
- the benefits are \$10.00 + \$1.93 + \$3.86 = \$15.79
- the costs are \$10.00 + \$1.93 = \$11.93
- the collective benefit/cost ratio is 1.32
- the NPV remains \$3.86
**Case 2** is an identical road, but now the community has a 20 year
time horizon to start. The initial cost is \$10, and the cost in 10
years time is \$5.00 (discounted to \$1.93). The benefits are \$10 now
and \$10 in 10 years time (discounted to \$3.86). There is no salvage
value at the end of the first period, nor sunk costs at the beginning of
the second period. What is the benefit cost ratio?
- the costs are \$11.93
- the benefits are still \$13.86
- the benefit/cost ratio is 1.16
- the NPV is \$1.93.
If you are the community, which will you invest in? Case 1 has an
initial B/C of 1.23 (or 1.193), Case 2 has a B/C of 1.16. But the real
benefits and real costs of the roads are identical.
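The two cases can be checked with a few lines of arithmetic, discounting at the 10% rate used above:

```python
def pv(amount, rate, years):
    """Discount a future amount back to the present."""
    return amount / (1 + rate) ** years

r = 0.10
# Case 1: two separate 10-year decisions, salvage credited against the first.
cost_now   = 10 - pv(5, r, 10)        # 8.07: build cost net of salvage value
cost_later = pv(10 - 5, r, 10)        # 1.93: rebuild cost net of sunk value
benefits   = 10 + pv(10, r, 10)       # 13.86: $10 now, $10 in year 10
print("Case 1: B/C =", round(benefits / (cost_now + cost_later), 3),
      " NPV =", round(benefits - (cost_now + cost_later), 2))

# Case 2: one 20-year decision, no salvage or sunk-cost adjustments.
costs_2 = 10 + pv(5, r, 10)           # 11.93
print("Case 2: B/C =", round(benefits / costs_2, 3),
      " NPV =", round(benefits - costs_2, 2))
```

Running this reproduces the figures above: Case 1 gives a collective B/C of about 1.39 and an NPV of \$3.86, while Case 2 gives a B/C of about 1.16 and an NPV of \$1.93, even though the physical roads and their real benefits are the same.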
The salvage value in this example is, like so much in economics (think
Pareto optimality), an accounting fiction. In this case no transaction
takes place to realize that salvage value. On the other hand, excluding
the salvage value over-estimates the net cost of the project, as it
ignores potential future uses of the project.
Time horizons on projects must be comparable to correctly assess
relative B/C ratios, yet not all projects have the same time horizon.
```{=html}
</div>
```
## Software Tools for Impact Analysis
The majority of economic impact studies for highway capacity projects
are undertaken using conventional methods. These methods tend to focus
on the direct user impacts of individual projects in terms of travel
costs and outcomes, and compare sums of quantifiable, discounted
benefits and costs. Inputs to benefit-cost analyses can typically be
obtained from readily available data sources or model outputs (such as
construction and maintenance costs, and before and after estimates of
travel demand, by vehicle class, along with associated travel times).
Valuation of changes in external, somewhat intangible costs of travel
(e.g., air pollution and crash injury) can usually be accommodated by
using *shadow price* estimates, such as obtained from FHWA-suggested
values, based on recent empirical studies.
The primary benefits included in such studies are those related to
reductions in user cost, such as travel time savings and vehicle
operating costs (e.g. fuel costs, vehicle depreciation, etc.).
Additional benefits may stem from reductions in crash rates, vehicle
emissions, noise, and other costs associated with vehicle travel.
Project costs are typically confined to expenditures on capital
investment, along with ongoing operations and maintenance costs.
A number of economic analysis tools have been developed under the
auspices of the United States Federal Highway Administration (FHWA)
permitting different forms of benefit-cost analysis for different types
of projects, at different levels of evaluation. Several of these tools
are prevalent in past impact analyses, and are described here. However,
none identifies the effects of infrastructure on the economy and
development.
### MicroBENCOST
MicroBENCOST [^5] is a sketch planning tool for estimating basic
benefits and costs of a range of highway improvement projects, including
capacity addition projects. In each type of project, attention is
focused on corridor traffic conditions and their resulting impact on
motorist costs with and without a proposed improvement. This type of
approach may be appropriate for situations where projects have
relatively isolated impacts and do not require regional modeling.
### SPASM
The Sketch Planning Analysis Spreadsheet Model (SPASM) is a benefit-cost
tool designed for screening level analysis. It outputs estimates of
project costs, cost-effectiveness, benefits, and energy and air quality
impacts. SPASM is designed to allow for comparison among multiple modes
and non-modal alternatives, such as travel demand management scenarios.
The model comprises three modules (worksheets) relating to public
agency costs, characteristics of facilities and trips, and a travel
demand component. Induced traffic is dealt with through the use of
elasticity-based methods, where an elasticity of vehicle-miles of travel
(VMT) with respect to travel time is defined and applied. Vehicle
emissions are estimated based on calculations of VMT, trip length and
speeds, and assumed shares of travel occurring in cold start, hot start,
and hot stabilized conditions. Analysis is confined to a corridor level,
with all trips having the same origin, destination and length. This
feature is appropriate for analysis of linear transportation corridors,
but also greatly limits the ability to deal with traffic drawn to or
diverted from outside the corridor. DeCorla-Souza et al. (1996) [^6]
describe the model and its application to a freeway corridor in Salt
Lake City, Utah.
### STEAM
The Surface Transportation Efficiency Analysis Model (STEAM) is a
planning-level extension of the SPASM model, designed for a fuller
evaluation of cross-modal and demand management policies. STEAM was
designed to overcome the most important limitations of its predecessor,
namely the assumption of average trip lengths within a single corridor
and the inability to analyze systemwide effects. The enhanced modeling
capabilities of STEAM feature greater compatibility with existing
four-step travel demand models, including a trip table module that is
used to calculate user benefits and emissions estimates based on changes
in network conditions and travel behavior. Also, the package features a
risk analysis component to its evaluation summary module, which
calculates the likelihood of various outcomes such as benefit-cost
ratios. An overview of STEAM and a hypothetical application are given by
DeCorla-Souza et al. (1998).[^7]
### SMITE
The Spreadsheet Model for Induced Travel Estimation (SMITE) is a sketch
planning application that was designed for inclusion with STEAM in order
to account for the effects of induced travel in traffic forecasting.
SMITE\'s design as a simple spreadsheet application allows it to be used
in cases where a conventional, four-step travel demand model is
unavailable or cannot account for induced travel effects in its
structure.[^8] SMITE applies elasticity measures that describe the
response in demand (VMT) to changes in travel time and the response in
supply (travel time) to changes in demand levels.
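A rough sketch of this elasticity logic (the elasticities, volumes, and times below are illustrative, not SMITE defaults): demand responds to travel time relative to the pre-project state, travel time responds to volume under the new capacity, and the two relationships are iterated to an approximate equilibrium.

```python
def equilibrium(base_vmt, base_time, new_time_at_base_vmt,
                e_demand=-0.5, e_supply=0.6, iters=100):
    """Fixed-point iteration between elasticity-based demand and supply.
    e_demand: % change in VMT per % change in travel time (negative).
    e_supply: % change in travel time per % change in VMT (congestion)."""
    vmt = base_vmt
    for _ in range(iters):
        time = new_time_at_base_vmt * (vmt / base_vmt) ** e_supply
        vmt = base_vmt * (time / base_time) ** e_demand
    return vmt, time

# Capacity expansion cuts corridor travel time from 20 to 15 minutes
# at today's volume of 100,000 VMT; all values are illustrative.
vmt, time = equilibrium(100_000, 20.0, 15.0)
print(f"Induced VMT: {vmt - 100_000:,.0f}   equilibrium time: {time:.1f} min")
```

With these illustrative elasticities, induced demand claws back part of the time saving: volume settles roughly 12% higher and travel time around 16 minutes rather than the free-flow 15, which is exactly the effect a fixed-demand forecast would miss.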
### SCRITS
As a practical matter, highway corridor improvements involving
intelligent transportation systems (ITS) applications to smooth traffic
flow can be considered capacity enhancements, at least in the short
term. The FHWA\'s SCRITS (SCReening for ITS) is a sketch planning tool
that offers rough estimates of ITS benefits, for screening-level
analysis. SCRITS utilizes aggregate relationships between average
weekday traffic levels and capacity to estimate travel speed impacts and
vehicle-hours of travel (VHT). Like many other FHWA sketch planning
tools, it is organized in spreadsheet format and can be used in
situations where more sophisticated modeling systems are unavailable or
insufficient.
### HERS
In addition to helping states plan and manage their highway systems, the
FHWA\'s Highway Economic Requirements System for states (HERS-ST) offers
a model for economic impacts evaluation. In one case, Luskin and Mallard
(2005) [^9] used HERS-ST to conclude that Texas is under-invested in
highways -- particularly urban systems and lower-order functional
classes -- by 50 percent. Combining economic principles with engineering
criteria, HERS
evaluates competing projects via benefit-cost ratios. Recognizing user
benefits, emissions levels, and construction and maintenance costs, HERS
operates within a GIS environment and will be evaluated under this
project, for discussion in project deliverables. Well-established
software like HERS offers states and regions an opportunity to readily
pursue standardized economic impact evaluations of all projects, a key
advantage for many users as well as the greater community.
### Summary of Software Tools
Many analytical tools, like those described above, are favored due to
their relative ease of use and employment of readily available or easily
acquired data. However, several characteristics limit their
effectiveness in evaluating the effects of new highway capacity. First,
they are almost always insufficient to describe the full range of
impacts of new highway capacity. Such methods deliberately reduce
economic analysis to the most important components, resorting to several
simplifying assumptions. If a project adds capacity to a particularly
important link in the transportation network, its effects on travel
patterns may be felt outside the immediate area. Also, the effects of
induced travel, in terms of either route switching or longer trips, may
not be accounted for in travel models based on a static, equilibrium
assignment of traffic. In the longer term, added highway capacity may
lead to the spatial reorganization of activities as a result of changes
in regional accessibility. These types of changes cannot typically be
accounted for in analysis methods.
Second, there is the general criticism of methods based on benefit-cost
analysis that they cannot account for all possible impacts of a project.
Benefit-cost methods deliberately reduce economic analysis to the most
important components and often must make simplifying assumptions. The
project-based methods described here generally do not describe the
economic effects of a project on different user or non-user groups.
Winners and losers from a new capacity project cannot be effectively
identified and differentiated.
Third, a significant amount of uncertainty and risk is involved in the
employment of project-based methods. Methods that use benefit-cost
techniques to calculate B/C ratios, rates of return, and/or net present
values are often sensitive to certain assumptions and inputs. With
transportation infrastructure projects, the choice of discount rate is
often critical, due to the long life of projects and large, up-front
costs. Also, the presumed value of travel time savings is often pivotal,
since it typically reflects the majority of project benefits. Valuations
of travel time savings vary dramatically across the traveler population,
as a function of trip purpose, traveler wage, household income, and time
of day. It is useful to test several plausible values.
Assessment procedures in the UK and other parts of Europe have moved
towards a multi-criteria approach, where economic development is only
one of several appraisal criteria. Environmental, equity, safety, and
the overall integration with other policy sectors are examined in a
transparent framework for decision makers. In the UK, the Guidance on
the Methodologies for Multi-Modal Studies (2000) [^10] provides such a
framework. These procedures require a clear definition of project goals
and objectives, so that actual effects can be tied to project
objectives, as part of the assessment procedure. This is critical for
understanding induced travel effects. Noland (2007) [^11] has argued
that this implies that comprehensive economic assessment, including
estimation of land valuation effects, is the only way to fully assess
the potential beneficial impacts of projects.
## Sample Problems
- Problem 1 (Solution 1)
- Problem 2 (Solution 2)
## Key Terms
- Benefit-Cost Analysis
- Profits
- Costs
- Discount Rate
- Present Value
- Future Value
## External Exercises
Use the SAND software at the STREET website to
learn how to evaluate network performance given a changing network
scenario.
## Videos
- Benefit / Cost
Analysis
- Benefit / Cost Analysis - Value of
Time
- Benefit / Cost Analysis - Value of
Life
- Benefit / Cost Analysis - Consumers and Producers
Surplus
- Benefit / Cost Analysis - An
Example
- Perspectives on
Efficiency
- Designing for Dynamic
Systems
- Diamond of Evaluation
- Choosing Measures of
Effectiveness
## References
```{=html}
<references/>
```
- Aruna, D. Social Cost-Benefit Analysis Madras Institute for
Financial Management and Research, pp. 124, 1980.
- Boardman, A. et al., Cost-Benefit Analysis: Concepts and Practice,
Prentice Hall, 2nd Ed,
- Dorfman, R, "Forty years of Cost-Benefit Analysis: Economic Theory
Public Decisions Selected Essays of Robert Dorfman", pp. 323, 1997.
- Dupuit, Jules. "On the Measurement of the Utility of Public Works
R.H. Babcock (trans.)." International Economic Papers 2. London:
Macmillan, 1952.
- Ekelund, R., Hebert, R. Secret Origins of Modern Microeconomics:
Dupuit and the Engineers, University of Chicago Press, pp. 468,
1999.
- Flyvbjerg, B. et al. Megaprojects and Risk: An Anatomy of Ambition,
Cambridge University Press, pp. 207, 2003.
- Gramlich, E., A Guide to Benefit-cost Analysis, Prentice Hall,
pp. 273, 1981.
- Hicks, John (1941) "The Rehabilitation of Consumers' Surplus,"
Review of Economic Studies, pp. 108-116.
- Kaldor, Nicholas (1939) "Welfare Propositions of Economics and
Interpersonal Comparisons of Utility," Economic Journal, 49:195,
pp. 549--552.
- Layard, R., Glaister, S., Cost-Benefit Analysis, Cambridge
University Press; 2nd Ed, pp. 507, 1994.
- Pareto, Vilfredo., (1906) Manual of Political Economy. 1971
translation of 1927 edition, New York: Augustus M. Kelley.
- Persky, J., Retrospectives: Cost-Benefit Analysis and the Classical
  Creed, Journal of Economic Perspectives, 2001.
- Sunstein, C. Cost-Benefit Analysis and the Knowledge Problem
(2014)
- Treasury Board of Canada "Benefit-cost Analysis
Guide",
1998
[^1]: benefit-cost analysis is sometimes referred to as cost-benefit
analysis (CBA)
[^2]: Weighting Waiting: Evaluating Perception of In-Vehicle Travel Time
Under Moving and Stopped Conditions
[^3]: <http://www.investorwords.com/4372/salvage_value.html>
[^4]: <http://www.investorwords.com/4813/sunk_cost.html>
[^5]: McTrans. Microbencost. Web page, 2007
[^6]: P. DeCorla-Souza, H. Cohen, and K. Bhatt. Using benefit-cost
analysis to evaluate across modes and demand management strategies.
In Compendium of Technical Papers, 66th Annual Meeting of the
Institute of Transportation Engineers, pages 439--445. Institute of
    Transportation Engineers, ITE, 1996.
[^7]: P. DeCorla-Souza, H. Cohen, D. Haling, and J. Hunt. Using steam
for benefit-cost analysis of transportation alternatives.
Transportation Research Record, 1649:63--71, 1998.
[^8]: P. DeCorla-Souza and H. Cohen. Accounting for induced travel in
    evaluation of urban highway expansion. Online resource, U.S.
Department of Transportation, Federal Highway Administration, 1998.
[^9]: D. Luskin and Erin Mallard. Potential gains from more efficient
spending on Texas highways. In Proceedings of the 84th Annual
Meeting of the Transportation Research Board, January, Washington
D.C., 2005.
[^10]: Department of Transport and the Environment. Guidance on the
    Methodologies for Multi-Modal Studies. Technical report, Department
of Transport and the Environment, 2000.
[^11]: R.B. Noland. Transport planning and environmental assessment:
implications of induced travel effects. International Journal of
Sustainable Transportation, 1(1):1--28, 2007.
|
# UK Constitution and Government/Print version
*Note: current version of this book can be found at
<http://en.wikibooks.org/wiki/UK_Constitution_and_Government>*
Remember to click \"refresh\" to view this version.
```{=html}
<div style="font-family:verdana; margin-left='8%'; margin-right='8%'; text-align: justify; font-weight:normal; font-size:11pt; color:#00000C" >
```
# Table of contents
Part I: Political History
- The Normans
- The Plantagenets
- The Houses of Lancaster and York
- The House of Tudor
- The House of Stuart and the Commonwealth
- The House of Hanover
- The Houses of Saxe-Coburg-Gotha and Windsor
Part II: Present System
- The Constitution
- The Sovereign
- The Parliament
- Her Majesty\'s Government
- The Judiciary
- Devolved Administrations
- Elections
Part III: Appendices
- List of British monarchs
# Part I: Political History
# Part II: Present System
# Part III: Appendices
# License
## GNU Free Documentation License
```{=html}
</div>
```
|
# UK Constitution and Government/Normans
### `<font size=1 color=dimgray>`{=html}Presentation`</font>`{=html}
```{=html}
<table height=1 border=1 style="border-collapse:collapse; border-color:LightSkyBlue; background-color:AliceBlue;" width="100%">
```
```{=html}
<tr>
```
```{=html}
<td align=left>
```
```{=html}
<table width="100%" align=left>
```
```{=html}
<tr>
```
```{=html}
<td>
```
!Introduction{width="" height="40"}
```{=html}
</td>
```
```{=html}
<td>
```
![](Gohome.png "Gohome.png"){width="" height="40"}
```{=html}
</td>
```
```{=html}
<td nowrap>
```
`<font size=6 color=teal>`{=html}`</font>`{=html}
```{=html}
</td>
```
```{=html}
<td>
```
!Plantagenets{width="" height="40"}
```{=html}
</td>
```
```{=html}
<td>
```
!List of Topics{width=""
height="40"}
```{=html}
</td>
```
```{=html}
<td width="99%">
```
```{=html}
</td>
```
```{=html}
<td>
```
```{=html}
<td>
```
!Plantagenets{width="" height="40"}
```{=html}
</td>
```
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
`<big>`{=html}`<big>`{=html}The Normans
(1066-1154)`</big>`{=html}`</big>`{=html}
Next Chapter
------------------------------------------------------------------------
## William I
The course of English political, legal and cultural history was changed
in 1066, when William, Duke of Normandy (also called William the
Conqueror) successfully invaded the nation and displaced the Saxon king,
Harold II.
In 1066 King Edward, also called St Edward the Confessor, died. His
cousin, the Duke of Normandy, claimed that the childless King had named
him heir during a visit to France, and that the other claimant to the
throne, Harold Godwinson, had pledged to support William when he was
shipwrecked in Normandy. The veracity of this tale, however, is
doubtful, and Harold took the crown upon King Edward\'s death. William,
however, invaded England in September, and defeated (and killed) Harold
at the famous Battle of Hastings in October.
## William II
In 1087, King William I died, and divided his lands and riches between
his three sons. The eldest, Robert, became Duke of Normandy; the second,
William, became King of England; the youngest, Henry, received silver.
Henry, however, eventually came to possess all of his father\'s
dominions. William II died without children, so Henry became King. Henry
later invaded Normandy, imprisoned his brother, and took over the Duchy
of Normandy.
## Henry I, Stephen and Matilda
Henry, whose sons had predeceased him, took an unprecedented step:
naming a woman as his heir. He declared that his daughter Matilda would
be the next Queen. However, Matilda\'s claim was disputed by Stephen, a
grandson of William I in the female line. After Henry I died in 1135,
Stephen usurped the throne, but he was defeated and imprisoned by
Matilda in 1141. Later, however, Matilda was defeated, and Stephen took
the throne.
Matilda, however, was not completely defeated. She escaped from
Stephen\'s army, and her own son, Henry Plantagenet, led a military
expedition against Stephen. Stephen was forced to agree to name Henry as
his heir, and when Stephen died in 1154, Henry took the throne,
commencing the Plantagenet dynasty.
|
# UK Constitution and Government/Plantagenets
### `<font size=1 color=dimgray>`{=html}Presentation`</font>`{=html}
```{=html}
<table height=1 border=1 style="border-collapse:collapse; border-color:LightSkyBlue; background-color:AliceBlue;" width="100%">
```
```{=html}
<tr>
```
```{=html}
<td align=left>
```
```{=html}
<table width="100%" align=left>
```
```{=html}
<tr>
```
```{=html}
<td>
```
!Normans{width="" height="40"}
```{=html}
</td>
```
```{=html}
<td>
```
![](Gohome.png "Gohome.png"){width="" height="40"}
```{=html}
</td>
```
```{=html}
<td nowrap>
```
`<font size=6 color=teal>`{=html}`</font>`{=html}
```{=html}
</td>
```
```{=html}
<td>
```
!Houses of Lancaster and
York{width=""
height="40"}
```{=html}
</td>
```
```{=html}
<td>
```
!List of Topics{width=""
height="40"}
```{=html}
</td>
```
```{=html}
<td width="99%">
```
```{=html}
</td>
```
```{=html}
<td>
```
```{=html}
<td>
```
!Houses of Lancaster and
York{width=""
height="40"}
```{=html}
</td>
```
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
`<big>`{=html}`<big>`{=html}The Plantagenets
(1154-1399)`</big>`{=html}`</big>`{=html}
Previous Chapter \|
Next
Chapter
------------------------------------------------------------------------
## Henry II
With the death of King Stephen, Henry Plantagenet took the throne as
King Henry II. He already had control over the duchy of Normandy; he had
also inherited Anjou from his father Geoffrey. Furthermore, he acquired
many territories from his wife, Eleanor of Aquitaine. Henry thus had a
vast territory when he came to the throne; as King of England, he took
over Ireland.
Henry II made other remarkable achievements in England. He established
courts throughout England and introduced trial by jury. Furthermore, he
reduced the power of ecclesiastical courts. The Archbishop of Canterbury
and Lord High Chancellor, Thomas à Becket, opposed the King\'s attempt
to take power from the Church. At a confrontation between the two in
1170, Henry II famously said, \"Who will rid me of this turbulent
priest?\" Four of his knights took him literally, and in December
murdered Becket.
Henry, however, did not have good relations with his sons. In 1170, his
eldest son Henry was crowned, and is known as Henry the Young King. In
1173, the Young King and his brothers revolted against Henry II,
planning to dethrone him and leave the Young King as the sole ruler in
England. In 1174, the revolt failed, and all of the brothers
surrendered. Later, in 1189, Henry II\'s third son, Richard, attacked
and defeated him. Henry II died days after his defeat, and Richard,
nicknamed \"the Lionheart,\" became King.
## Richard I
Richard the Lionheart is often portrayed as a hero, but he did not do
much for England. In fact, he spent almost all of his time outside the
nation, and did not even find it necessary to learn English. He is most
famous for his fighting in the Crusades, a holy war seeking to assert
Christian dominance over Jerusalem.
## John
Richard\'s successor was his brother, John. Henry II had granted
John the lands of Ireland, so when John came to the throne, the titles
Lord of Ireland and King of England were united. However, though Ireland
became a dominion of the Crown, several lands on the Continent,
including most of Normandy, were lost during John\'s reign.
King John was very unpopular with the nation\'s magnates, the barons,
whom he taxed. A particularly resented tax was the scutage, a penalty
paid by barons who failed to supply the King with military resources. In
1215, after John had been defeated in France, several barons rebelled.
Later in that year, John compromised and signed the *Magna Carta*, or
Great Charter. It guaranteed political liberties and provided for a
church free from domination by the monarchy. These liberties and
privileges, however, were not extended to the common man; rather, they
were granted to the barons. Nonetheless, the document is immensely
significant in English constitutional history as it is a major
indication of a limitation on the power of the Crown.
King John, however, broke the provisions of the Charter later, claiming
that he agreed to it under duress. In the next year, when he was
retreating from a French invasion, John lost England\'s most valuable
treasures - the Crown Jewels - in a marsh known as The Wash. His mental
and physical health deteriorated, and he later died from dysentery.
## Henry III
John was succeeded by his son, Henry, who was only nine years old. Henry
III, despite a reign that lasted over half a century, is not a
particularly memorable or noteworthy monarch. Nonetheless, a very
significant political development occurred during Henry III\'s reign. In
1258, one of Henry\'s opponents, Simon de Montfort, called a Parliament,
the forerunner of the modern institution. It, however, bears little
resemblance to the modern body, as it had little power.
Simon de Montfort, who was married to Henry III\'s sister, defeated and
imprisoned his brother-in-law in 1264. He was originally supported by
Henry\'s son Edward, but the latter later returned to his father\'s
side. Edward defeated de Montfort in 1265 at the Battle of Evesham and
restored Henry III. In 1270, the ageing Henry gave up most of power to
his son; two years later, he died, and Edward succeeded to the throne.
## Edward I
Edward I was the monarch who brought the entire British Isles under
English domination. In order to raise money in the war against the
rebellious Wales, Edward instituted a tax on Jewish moneylenders. The
tax, however, was too high for the moneylenders, who eventually became
too poor to pay. Edward accused them of disloyalty and abolished the
right of Jews to lend money. He also ordered that all Jews wear a yellow
star on their clothing; that idea was later adopted by Adolf Hitler in
Germany. Edward also executed hundreds of Jews, and in 1290 banished all
of them from England.
In 1291, the Scottish nobility agreed to submit to Edward. When the
young Queen Margaret, the Maid of Norway, died, the nobles allowed
Edward to choose between the rival claimants to the throne. Edward
installed the weak John Balliol as monarch, and easily dominated
Scotland. The Scots, however, rebelled.
Edward I executed the chief dissenter, William Wallace, further
antagonising Scotland.
## Edward II
When Edward I died in 1307, his son Edward became King. Edward II
abandoned his father\'s ambitions to conquer Scotland. Furthermore, he
recalled several men his father had banished. The barons, however,
rebelled against Edward. In 1312, Edward agreed to hand over power to a
committee of barons known as \"ordainers.\" These ordainers removed the
power of representatives of commoners to advise the monarch on new laws,
and concentrated all power in the nobility. Meanwhile, Robert the Bruce
was slowly reconquering Scotland. In 1314, Robert\'s forces defeated
England\'s in battle, and Robert gained control over most of Scotland.
In 1321, the ordainers banished a baron allied with the King, Hugh le
Despencer, along with his son. In 1322, Edward reacted by recalling them
and attacking the barons. He executed the leader of the ordainers, the
Earl of Lancaster, and permitted the Despencers to rule England. The
Despencers declared that all statutes created by the ordainers were
invalid, and that thereafter, no law would be valid unless it had
received the assent of the Commons, representatives of the commoners of
England. However, the Despencers became corrupt, causing them to be very
unpopular, even with Edward\'s own wife, Isabella. In 1325, Isabella
went to France, and in 1326, she returned, allied with Roger Mortimer,
one of the barons Edward had defeated. The two killed the Despencers and
forced Edward to resign his crown to his son, also named Edward. Edward
II was imprisoned and later killed.
## Edward III
Since Edward III was a child, Isabella and Roger Mortimer ruled England
in his stead. When Edward III became eighteen, however, he had Mortimer
executed and banished his mother from court. In 1328, when Charles IV,
Isabella\'s brother and King of France, died, Edward claimed France,
suggesting that the kingdom should pass to him through his mother. His
claim was opposed by Philip VI, who claimed that the throne could only
pass in the male line. Edward declared war on Philip, setting off the
Hundred Years\' War. The British claim to the French throne was not
abandoned until the nineteenth century.
## Richard II
Richard II succeeded his grandfather, Edward III, in 1377. Richard II
was only about ten years old when coming to the throne. Even as an
adult, Richard II was a rather weak king. In 1399, he was deposed by his
cousin, Henry of Bolingbroke, and probably murdered the next year.
|
# UK Constitution and Government/Houses of Lancaster and York
### `<font size=1 color=dimgray>`{=html}Presentation`</font>`{=html}
```{=html}
<table height=1 border=1 style="border-collapse:collapse; border-color:LightSkyBlue; background-color:AliceBlue;" width="100%">
```
```{=html}
<tr>
```
```{=html}
<td align=left>
```
```{=html}
<table width="100%" align=left>
```
```{=html}
<tr>
```
```{=html}
<td>
```
!Plantagenets{width="" height="40"}
```{=html}
</td>
```
```{=html}
<td>
```
![](Gohome.png "Gohome.png"){width="" height="40"}
```{=html}
</td>
```
```{=html}
<td nowrap>
```
`<font size=6 color=teal>`{=html}`</font>`{=html}
```{=html}
</td>
```
```{=html}
<td>
```
!House of Tudor{width=""
height="40"}
```{=html}
</td>
```
```{=html}
<td>
```
!List of Topics{width=""
height="40"}
```{=html}
</td>
```
```{=html}
<td width="99%">
```
```{=html}
</td>
```
```{=html}
<td>
```
```{=html}
<td>
```
!House of Tudor{width=""
height="40"}
```{=html}
</td>
```
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
`<big>`{=html}`<big>`{=html}The Houses of Lancaster and York
(1399-1485)`</big>`{=html}`</big>`{=html}
Previous
Chapter \|
Next Chapter
------------------------------------------------------------------------
## Henry IV
Henry of Bolingbroke deposed his weak cousin, Richard II, in 1399. Henry
IV\'s reign was marked by widespread rebellion. These were put down
thanks to the great military skill of the Henry IV\'s son, the future
King Henry V. Henry IV died in 1413 while plagued by a severe skin
disease (possibly leprosy).
## Henry V
Henry V\'s reign was markedly different from his father\'s in that it
involved little domestic turmoil. Overseas, Henry V\'s armies won
several important victories in France. In 1415, the English decisively
defeated the forces of the French King Charles VI at the Battle of
Agincourt. About 100 English soldiers were killed, along with about
5000 Frenchmen.
For the next two years, Henry V conducted delicate diplomacy to improve
England\'s chances of conquering France. He negotiated with the Holy
Roman Emperor Sigismund, who agreed to end the German alliance with
France. In 1417, the war was renewed; by 1419, English troops were about
to take Paris. The parties agreed to a treaty whereby Henry V was named
heir of France. Henry V, however, died before he could succeed to the
French throne, which therefore remained in French hands.
## Henry VI and Edward IV
Figure 3-1: Edward IV (left) and the future Edward V (centre)
Henry VI succeeded to the throne while still an infant. His uncles, the
Dukes of Bedford and Gloucester, both functioned as Regents. During his
reign, many French territories won during the Hundred Years War were
lost.
Henry VI\'s reign was interrupted by Edward IV\'s due to the Wars of the
Roses. Henry VI was a member of the House of Lancaster, while Edward IV
was from the House of York. The former House descended from John of
Gaunt, the fourth son of King Edward III and father of Henry of
Bolingbroke; the latter House descended from Edmund of Langley, Edward
III\'s fifth son.
In 1461, the Lancastrians lost to the Yorkists at the Battle of Towton.
The Yorkist claimant, Edward IV, ascended to the throne, with the
support of the powerful nobleman Richard Neville, 16th Earl of Warwick,
known by the nickname *Warwick the Kingmaker*. In 1464, Lancastrian
revolts were put down. In 1469, however, Warwick the Kingmaker switched
his allegiance, and in 1470, Henry VI was restored to the throne. The
exiled Edward, however, soon returned and defeated Henry\'s forces. At
the Battle of Tewkesbury, the remaining Lancastrians were defeated;
Henry VI was also murdered.
## Edward V and Richard III
Edward IV was succeeded by his twelve-year-old son in 1483. Edward IV\'s
brother, Richard, was made guardian of Edward V and his brother, also
named Richard. The young King\'s uncle usurped the throne and had
Parliament declare the two brothers illegitimate. The two princes were
then imprisoned in the Tower of London, where they might have been
killed (their fate, however, is not certain).
In 1485, Richard III faced Henry Tudor, the Lancastrian claimant, at the
Battle of Bosworth Field, during which Richard became the last English
monarch to be killed during battle. Henry came to power as Henry VII,
establishing the Tudor Dynasty.
|
# UK Constitution and Government/House of Tudor
**The House of Tudor (1485-1603)**
## Henry VII
Henry VII was one of the most successful monarchs in British history. He
was the Lancastrian claimant to the throne and lived in France so as to
remain safe from the designs of the Yorkist Kings. At the Battle of
Bosworth Field in 1485, he defeated and killed the Yorkist Richard III.
His claim was weak due to questions relating to the legitimacy of
certain births, but he nonetheless secured the throne by right of conquest.
Henry reformed the nation\'s taxation system and refilled the nation\'s
treasury, which had been bankrupted by the fiscal irresponsibility of
his predecessors. He also made peace with France so that the nation\'s
resources would not be spent trying to regain the French territories lost
at the end of the Hundred Years\' War. Henry also created marital alliances with Spain
and Scotland. Henry\'s son, Arthur, married Catherine of Aragon,
daughter of Ferdinand II of Aragon and Isabella I of Castile.
Furthermore, Henry\'s daughter Margaret married James IV, King of Scots.
When Henry\'s son Arthur died, he wished to protect the Anglo-Spanish
alliance. Therefore, he obtained a dispensation from Pope Julius II
allowing Henry\'s son, also named Henry, to marry Catherine. (Papal
permission was necessary since Henry was marrying his brother\'s widow.)
Upon Henry VII\'s death, Henry took the throne as Henry VIII.
## Henry VIII
King Henry VIII is often remembered for his multiple marriages. In his
quest to obtain a male heir to the throne, Henry married six different
times. His first marriage, as noted above, was to his brother\'s widow,
Catherine of Aragon. That marriage occurred in 1509 and was scarred by
several tragedies involving their children. The couple\'s first child
was stillborn, their second lived for just 52 days, the third pregnancy
ended as a miscarriage and the product of the fourth pregnancy died soon
after birth. In 1516, the couple had a daughter, named Mary, followed by
another miscarriage. Henry was growing impatient with his wife and
eagerly sought a male heir.
Henry sought to annul his marriage to Catherine. Ecclesiastic law
permitted a man to marry his brother\'s widow only if the previous
marriage had not been consummated. Catherine had informed the Pope that
her marriage was non-consummate, so the Pope agreed to grant a
dispensation allowing her to marry Henry. Now, however, Henry alleged
that Catherine had lied, thereby rendering her marriage to him invalid.
In 1533, an Act of Parliament annulled his marriage to Catherine,
enabling him to marry Anne Boleyn. It was felt by many, however, that
the Church, and not Parliament, could govern marriages. Henry had asked
Pope Clement VII to issue a divorce several times. Under pressure from
Catherine\'s nephew, Holy Roman Emperor Charles V, the Pope refused.
Parliament therefore passed an Act denying appeals to Rome from certain
decisions of English Archbishops. The Archbishop of Canterbury, Thomas
Cranmer, annulled Henry\'s marriage to Catherine. In response, the Pope
excommunicated Henry. Soon, the Church of England separated from the
Roman Catholic Church. In 1534, all appeals to Rome from the decisions
of the English clergy were stopped. An Act of Parliament passed in 1536
confirmed the King\'s position as *Supreme Head of the Church of
England*, thereby ending any ceremonial influence that the Pope still
had.
Anne Boleyn, meanwhile, was Henry\'s Queen, and the only surviving child
from the marriage to Catherine, Mary, was declared illegitimate. Anne\'s
first child, Elizabeth, was born in 1533. The next three pregnancies,
however, all resulted in stillbirth or miscarriage. A dissatisfied Henry
accused Anne of using witchcraft to entice him to marry her and to have
five men enter into adulterous affairs with her. Furthermore, Anne was
accused of treason because she had supposedly committed adultery while
she was Queen. Anne\'s marriage to Henry VIII was annulled and she was
executed at the Tower of London in 1536.
Within two weeks of Anne\'s death, Henry married Jane Seymour. In 1537,
Jane produced the male heir that Henry had long desired. The boy was
named Edward and would later succeed Henry to the throne. Meanwhile, his
half-sister Elizabeth was declared illegitimate. Shortly after the birth
of the child, Jane died. Jane was followed as Queen by Anne of Cleves,
whom Henry married in 1540. Anne was the daughter of John III, Duke of
Cleves. Henry did not actually see Anne until shortly before their
marriage; the relationship was contracted to establish an alliance
between Henry and the Duke of Cleeves, a major Protestant leader. After
Anne married him, Henry found her physically displeasing and
unattractive. Shortly thereafter, the marriage was annulled on the
grounds that Anne had previously been engaged to the Duke of Lorraine.
After her divorce, Anne was treated well. She was given the title of
Princess and allowed to live in Hever Castle, the former home of Anne
Boleyn\'s family.
Henry VIII\'s next marriage was to Catherine Howard, an Englishwoman of
noble birth. In 1542, she was charged and convicted of high treason
after having admitted to being engaged in an adulterous affair. In 1543,
Henry contracted his final marriage, wedding Catherine Parr. The
marriage lasted for the remainder of Henry\'s life, which ended in 1547.
## Edward VI and Lady Jane Grey
When Edward VI, son of Henry VIII and Jane Seymour, came to the throne,
he was just nine years old. His uncle, Edward Seymour, Duke of Somerset,
served as Lord Protector while the King was a minor. Several nobles
attempted to take over Somerset\'s role. John Dudley, 1st Earl of
Warwick was successful; he was later created Duke of Northumberland.
Edward VI was the first Protestant King of England. His father had
broken away from the Roman Catholic Church but had not yet embraced
Protestantism. Edward, however, was brought up Protestant. He sought to
exclude his Catholic half-sister Mary from the line of succession. As he
lay dying at the age of fifteen, he drew up a document barring his
half-sisters Mary and Elizabeth from the throne. He named the Lady Jane
Grey, daughter-in-law of the Duke of Northumberland, his successor. Her
claim to the throne came through her mother, a granddaughter of
King Henry VII. Jane was proclaimed Queen upon Edward\'s death in 1553,
but she reigned for only nine days before being deposed by Mary. Mary
enjoyed far more popular support; the public also sympathised with the
way her mother, Catherine of Aragon, had been treated. Jane was soon
executed. She was seventeen years old at the time.
## Mary I
Mary was deeply opposed to her father\'s break from the Church in Rome.
She sought to reverse reforms instituted by her Protestant half-brother.
Mary even resorted to violence in her attempt to restore Catholicism,
earning her the nickname *Bloody Mary*. She executed several
Protestants, including the former Archbishop of Canterbury Thomas
Cranmer, on charges of heresy.
In 1554, Mary married the Catholic King of Spain, Philip II. The
marriage was unpopular in England, even with Catholic subjects. The
couple were unable to produce a child before Mary\'s death from cancer
in 1558.
## Elizabeth I
Mary\'s successor, her half-sister Elizabeth, was one of the most
successful and popular British monarchs. The Elizabethan era was
associated with cultural development and the expansion of English
territory through colonialism.
After coming to power, Elizabeth quickly reversed many of Mary\'s
policies. Elizabeth reinstated the Church of England and had Parliament
pass the Act of Supremacy, which confirmed the Sovereign\'s position as
Supreme Governor of the Church of England. The Act also forced public
and clerical officers to take the Oath of Supremacy recognising the
Sovereign\'s position. Elizabeth, however, did practise limited
toleration towards Catholics.
After Pope Pius V excommunicated Elizabeth in 1570, Elizabeth ended her
policy of religious toleration. One of Elizabeth\'s chief Catholic
enemies was the Queen of Scotland, Mary. Since Elizabeth neither married
nor bore any children, her cousin Mary was a possible heir to the
English throne. Another possible heir was Lady Jane Grey\'s sister,
Catherine. However, when Lady Catherine Grey died in 1568, Elizabeth was
forced to consider that Catholic Mary was the most likely heir. Mary,
however, had earlier been deposed by Scottish nobles, putting her infant
son James on the throne. Mary had fled to England, hoping Elizabeth
would aid her efforts to regain the Scottish throne, but Elizabeth
reconsidered after learning of the \"Ridolfi Plot\", a scheme to
assassinate Elizabeth and put the Roman Catholic Mary on the English
throne. In 1572, Parliament passed a bill to exclude Mary from the line
of succession, but Elizabeth refused to grant Royal Assent to it.
Eventually, however, Mary proved to be too much of a liability due to
her constant involvement in plots to murder Elizabeth. In 1587, she was
executed after having been convicted of being involved in one such plot.
Following Mary\'s execution, Philip II (widower of Mary I of England)
sent a fleet of Spanish ships known as the *Armada* to invade England.
England had supported a Protestant rebellion in the Netherlands and was
seen as a threat to Catholicism. Furthermore, England had interfered
with Spanish shipping and trade. Using Mary\'s execution as an excuse,
Philip II obtained the Pope\'s authority to depose Elizabeth. In 1588,
the Spanish Armada set sail for England. Harmed by bad weather, the
Armada was defeated by Elizabeth\'s naval leaders, including Sir Francis
Drake and the Lord Howard of Effingham.
Towards the end of her life, Elizabeth still failed to name an heir.
When she died, she was ironically succeeded by the son of Mary, Queen of
Scots, James. James was already James VI, King of Scots; he became James
I of England in 1603 and established the rule of the Stuart dynasty.
# UK Constitution and Government/House of Stuart and the Commonwealth
**The House of Stuart and the Commonwealth (1603-1714)**
## James I
!James I of England
With the death of Elizabeth in 1603, the Crowns of England and Scotland united
under James I. In 1567, when he was just a year old, James\' mother Mary
was forced to abdicate, and James became King James VI. Despite his
mother\'s Catholicism, James was brought up as a Protestant.
One of James\' first acts as King was to conclude English involvement in
the Eighty Years\' War, also called the Dutch Revolt. Elizabeth had
supported the Protestant Dutch rebels, providing one cause for Philip
II\'s attack. In 1604, James signed the Treaty of London, thereby making
peace with Spain.
James had significant difficulty with the English Parliamentary
structure. As King of Scots, he had not been accustomed to criticism
from the Parliament. James firmly believed in the *Divine Right of
Kings*---the right of Kings to rule that supposedly came from God---so
he did not easily react to critics in Parliament. Under English law,
however, it was impossible for the King to levy taxes without
Parliament\'s consent, so he had to tolerate Parliament for some time.
King James died in 1625 and was succeeded by his son Charles.
## Charles I
King Charles ruled at a time when Europe was moving toward domination by
absolute monarchs; the French monarchy, which would later reach its
height under Louis XIV, epitomised this absolutism. Charles, sharing his
father\'s belief in the Divine Right of Kings, also moved toward
absolutist policies.
Charles conflicted with Parliament over the issue of the Huguenots,
French Protestants. The French Crown under Louis XIII had begun a
persecution of the Huguenots; Charles sent an expedition to La Rochelle
to provide aid to the Protestant residents. The effort, however, was
disastrous, prompting
Parliament to further criticise him. In 1628, the House of Commons
issued the Petition of Right, which demanded that Charles cease his use
of arbitrary power. Charles had persecuted individuals using the Court
of the Star Chamber, a secret court that could impose any penalty, even
torture, except for death. Charles had also imprisoned individuals
without a trial and denied them the right to the writ of *habeas
corpus*. The Petition of Right, however, was not successful; in 1629,
Charles dissolved Parliament. He ruled alone for the next eleven years,
a period sometimes referred to as the *eleven years of tyranny* or the
*personal rule*. Since Parliamentary approval was required to impose
taxes, Charles had grave difficulty in keeping the government
functional. Charles imposed several taxes himself; these were widely
seen as unlawful.
During these eleven years, Charles began instituting religious reforms
in Scotland, moving it towards the English model. He attempted to impose
the Anglican Prayer Book on Scottish churches, leading to riots and
violence. In 1638, the General Assembly of the Church of Scotland
abolished the office of bishop and established Presbyterianism (an
ecclesiastic system without clerical officers such as bishops and
archbishops). Charles sent his armies to Scotland, but was quickly
forced to end the conflict, known as the First Bishops\' War, because of
a lack of funding. Charles granted Scotland certain parliamentary and
ecclesiastic freedoms in 1639.
In 1640, Charles finally called a Parliament to authorise additional
taxation. Since the Parliament was dissolved within weeks of its
summoning, it was known as the *Short Parliament*. Charles then sent a
new military expedition to Scotland to fight the Second Bishops\' War.
Again, the Royal forces were defeated. Charles then summoned Parliament
again, this Parliament becoming known as the *Long Parliament*, in order
to raise funds for making reparations to the Scots.
Tension between Charles and Parliament increased dramatically. Charles
agreed to abolish the hated Star Chamber, but he refused to give up
control of the army. In January 1642, Charles entered the House of Commons
with armed guards in order to arrest his Parliamentary enemies. They had
already fled, however, and Parliament took the breach of its premises
very seriously. (Since Charles, no reigning monarch has set foot in the
House of Commons.)
Feeling unsafe in London, Charles moved the Royal court to Oxford. Royal forces
controlled north and west England, while Parliament controlled south and
east England. A Civil War broke out, but was indecisive until 1644, when
Parliamentary forces clearly gained the upper hand. In 1646, Charles was
forced to escape to Scotland, but the Scottish army delivered him to
Parliament in 1647. Charles was then imprisoned. Charles negotiated with
the Scottish army, declaring that if it restored him to power, he would
implement the Scottish Presbyterian ecclesiastic model in England. In
1648, the Scots invaded England, but were defeated.
The House of Commons began to pass laws without the consent of either
the Sovereign or the House of Lords, but many MPs still wished to come
to terms with the king. Members of the army, however, felt that Charles
had gone too far by siding with the Scots against England and were
determined to have him brought to trial. In December 1648 an army
regiment, Colonel Pride\'s, used force to bar entry into the House of
Commons, only allowing MPs who would support the army to remain. These
MPs, the *Rump Parliament*, established a commission of 135 to try
Charles for treason. Charles, an ardent believer in the Divine Right of
Kings, refused to accept the jurisdiction of any court over him.
Therefore, he was by default considered guilty of high treason and was
executed on January 30, 1649.
## Oliver and Richard Cromwell
At first, Oliver Cromwell ruled along with the republican Parliament,
the state being known as the *Commonwealth of England*. After Charles\'
execution, however, Parliament became disunited. In 1653, Cromwell
dismissed Parliament and, as Charles had done earlier, began several
years of rule as a dictator. Later, Parliament was recalled, and in 1657
it offered to make Cromwell King. Since he faced opposition from his own
senior military officers, Cromwell declined. Instead, he was made *Lord
Protector*, even being installed on the former King\'s throne. He was a
King in all but name.
Cromwell died in 1658 and was succeeded by his son Richard, an extremely
poor politician. Richard Cromwell was not interested in his position and
abdicated quickly. The Protectorate was ended and the Commonwealth
restored. Anarchy was the result. Quickly, Parliament chose to
reestablish the monarchy by inviting Charles I\'s son to take the throne
as Charles II.
## Charles II
During the rule of Oliver Cromwell, Charles II remained King in
Scotland. After an unsuccessful challenge to Cromwell\'s rule, Charles
escaped to Europe. In 1660, when England was in anarchy, Charles issued
the Declaration of Breda, outlining his conditions for returning to the
Throne. The Long Parliament, which had been convened in 1640, finally
dissolved itself. A new Parliament, called the *Convention Parliament*,
was elected; it was far more favourable to the Royalty than the Long
Parliament. In May 1660, the Convention Parliament declared that Charles
had been the lawful King of England since the death of his father in 1649.
Charles soon arrived in London and was restored to actual power. Charles
granted a general pardon to most of Cromwell\'s supporters. Those who
had directly participated in his father\'s execution, however, were
either executed or imprisoned for life. Cromwell himself suffered a
posthumous execution: his body was exhumed, hung, drawn and quartered,
his head cut off and displayed from a pole and the remainder of his body
thrown into a common pit. The posthumous execution took place on the
anniversary of Charles I\'s death.
Charles also dissolved the Convention Parliament. The next Parliament,
called the *Cavalier Parliament* was soon elected. The Cavalier
Parliament lasted for seventeen years without an election before being
dissolved. During its long tenure, the Cavalier Parliament enacted
several important laws, including many that suppressed religious
dissent. The Act of Uniformity required the use of the Church of
England\'s *Book of Common Prayer* in all Church services. The
Conventicle Act prohibited religious assemblies of more than five
members except under the Church of England. The Five Mile Act banned
non-members of the Church of England from living in towns with a Royal
Charter, instead forcing them into the country. In 1672, Charles
mitigated these laws with the Royal Declaration of Indulgence, which
provided for religious toleration. Parliament, however, suspected him of
Catholicism and forced him to withdraw the Declaration. In 1673,
Parliament passed the Test Act, which required civil servants to swear
an oath against Catholicism.
Parliament\'s suspicions did turn out to be accurate. As Charles II lay
dying in 1685, he converted to Catholicism. Charles did not have a
single legitimate child, though he acknowledged at least a dozen
illegitimate ones. He was succeeded, therefore, by his younger brother
James, an open Catholic.
## James II
James II (James VII in Scotland) was an extremely controversial monarch
due to his Catholicism. Soon after he took power, a Protestant
illegitimate son of Charles II, James Scott, Duke of Monmouth,
proclaimed himself King. James II defeated him within a few days and had
him executed.
James made himself highly unpopular by appointing Catholic officials,
especially in Ireland. Later, he established a standing army in
peacetime, alarming many Protestants. Rebellion, however, did not occur
because people trusted James\' daughter Mary, a Protestant. In 1688,
however, James produced a son, who was brought up Catholic. Since
Mary\'s place in the line of succession was lowered, and a Catholic
dynasty in England seemed inevitable, the \"Immortal Seven\"---the Duke
of Devonshire, the Earl of Danby, the Earl of Shrewsbury, the Viscount
Lumley, the Bishop of London, Edward Russell and Henry
Sidney---conspired to replace James and his son with Mary and her Dutch
husband William of Orange. In 1688, William and Mary invaded England and
James fled the country. The revolution was hailed as the *Glorious
Revolution* or the *Bloodless Revolution*. Though the latter term is
inaccurate, the revolution was far less violent than the Wars of the
Roses or the English Civil War.
## William and Mary
Parliament wished then to make Mary the sole Queen. She, however,
refused and demanded that she be made co-Sovereign with her husband. In
1689, the Parliament of England declared in the English Bill of Rights,
one of the most significant constitutional documents in British history,
that James\' flight constituted an abdication of the throne and that the
throne should go jointly to William (William III) and Mary (Mary II).
The Bill of Rights also provided that the Sovereign could not deny certain
rights, such as freedom of speech in Parliament, freedom from taxation
without Parliament\'s consent and freedom from cruel and unusual
punishment. In Scotland, the Convention of Estates passed a similar Act,
called the Claim of Right, which also made William and Mary joint
rulers. In Ireland, power had to be won in battle. In 1690, the English
won the Battle of the Boyne, thereby establishing William and Mary\'s
rule over the entire British Isles.
For the early part of the reign, Mary administered the Government while
William controlled the military. William\'s appointment of officers from
his native Netherlands to the English army and Royal Navy proved
unpopular. Furthermore, he used English military resources to protect the
Netherlands. In 1694, after the death of Queen Mary from smallpox,
William continued to rule as the sole Sovereign.
Since William and Mary did not have children, William\'s heir was Anne,
who had seventeen pregnancies, most of which ended in stillbirth. In
1700, Anne\'s last surviving child, William, died at the age of eleven.
Parliament was faced with a succession crisis, because after Anne, many
in the line of succession were Catholic. Therefore, in 1701, the Act of
Settlement was passed, allowing Sophia, Electress and Duchess Dowager of
Hanover (a German state), and her Protestant heirs, to succeed if Anne
had no further children. Sophia\'s claim stemmed from her
great-grandfather, James I. Several lines that were more senior to
Sophia\'s were bypassed under the act. Some of these had questionable
legitimacy, while others were Catholic. The Act of Settlement also
banned non-Protestants and those who married Catholics from the throne.
In 1702, William died, and his sister-in-law Anne became Queen.
## Anne
Even following the passage of the Act of Settlement, Protestant
succession to the throne was insecure in Scotland. In 1703, the Scottish
Parliament, the Estates, passed a bill that required that, if Anne died
without children, the Estates could appoint any Protestant descendant of
Scottish monarchs as the King. The individual appointed could not be the
same person who would, under the Act of Settlement, succeed to the
English crown unless several economic conditions were met. The Queen\'s
Commissioner refused Royal Assent on her behalf. The Scottish Estates
then threatened to withdraw Scottish troops from the Queen\'s armies,
which were then engaged in the War of the Spanish Succession in Europe
and Queen Anne\'s War in North America. The Estates also threatened to
refuse to levy taxes, so Anne relented and agreed to grant Royal Assent
to the bill, which became the Act of Security.
The English Parliament feared the separation of the Crowns which had
been united since the death of Elizabeth I. They therefore attempted to
coerce Scotland, passing the Alien Act in 1705. The Alien Act provided
for cutting off trade between England and Scotland. Scotland was already
suffering from the failure of the Darién Scheme, a disastrous and
expensive attempt to establish Scottish colonies in America. Scotland
quickly began to negotiate union with England. In 1707, the Act of Union
was passed, despite mass protest in Scotland, by Parliament and the
Scottish Estates. The Act combined England and Scotland into one Kingdom
of Great Britain, terminated the Parliament and Estates, and replaced
them with one Parliament of Great Britain. Scotland was entitled to
elect a certain number of members of the House of Commons. Furthermore,
it was permitted to send sixteen of its peers to sit along with all
English peers in the House of Lords. The Act guaranteed Scotland the
right to retain its distinct legal system. The Church of Scotland was
also guaranteed independence from political interference. Ireland
remained a separate country, though still governed by the British
Sovereign.
Anne is often remembered as the last British monarch to deny Royal
Assent to a bill, which she did in 1708 to the Scottish Militia Bill. Due to her
poor health, made worse by her failed pregnancies, her government was
run through her ministers. She died in 1714, to be succeeded by George,
Elector of Hanover, whose mother Sophia had died a few weeks earlier.
# UK Constitution and Government/House of Hanover
**The House of Hanover (1714-1901)**
## George I
!King George I
George, Duke and Elector of Hanover, became King George I in 1714. His claim was
opposed by the Jacobites, supporters of the deposed King James II. Since
James II had died, his claim was taken over by his son, James Francis
Edward Stewart, the \"Old Pretender.\" In 1715, there was a Jacobite
rebellion, but an ill James could not lead it. By the time he recovered,
it was too late, and the rebellion was suppressed.
King George was not deeply involved in British politics; instead, he
concentrated on matters in his home, Germany. The King could not even
speak English, earning the ridicule of many of his subjects. George,
furthermore, spent much time in his native land of Hanover. Meanwhile, a
ministerial system developed in Great Britain. George appointed Sir
Robert Walpole as *First Lord of the Treasury*. Walpole was George\'s
most powerful minister, but he was not termed \"Prime Minister\"; that
term came into use in later years. Walpole\'s tenure began in 1721;
other ministers held office at his, rather than the King\'s, pleasure.
George\'s lack of involvement in politics contributed greatly to the
development of the modern British political system.
George died in 1727 from a stroke while in Germany. He was succeeded by
his son, who ruled as George II.
## George II
George II was naturalised as an English subject in 1705; his reign began
in 1727. Like his father, George transferred political power to Sir
Robert Walpole, who served until 1742. Walpole was succeeded by Spencer
Compton, 1st Earl of Wilmington, who served until 1743, and then by
Henry Pelham, who served until his death in 1754. During Pelham\'s
service, the nation experienced a second Jacobite Rebellion, which was
almost successful in putting Bonnie Prince Charlie---son of the Old
Pretender, himself called the Young Pretender---on the throne. The
rebellion began in 1745 and was ended in 1746 when the King\'s forces
defeated the Jacobites at the Battle of Culloden, the last battle ever
to be fought on British soil.
Before George II\'s death in 1760, he was served by two other Prime
Ministers: Henry Pelham\'s elder brother the Duke of Newcastle, and the
Duke of Devonshire. George II\'s eldest son, Frederick, had predeceased
him, so George was succeeded by his grandson, also named George.
## George III
George III attempted to reverse the trend that his Hanoverian
predecessors had set by reducing the influence of the Prime Minister. He
appointed a variety of different people as his Prime Minister, on the
basis of favouritism rather than ability. The Whig Party of Robert
Walpole declared George an autocrat and compared him to Charles I.
George III\'s reign is notable for many important international events.
In 1763, Great Britain defeated France in the Seven Years\' War, a
global war that also involved Spain, Portugal and the Netherlands and
was fought in Europe, America and India. As a result of the Treaty of
Paris, New France (the French territory in North America, including
Quebec and land east of the Mississippi) was ceded to Britain, as was
Spanish Florida. Spain, however, took New Orleans and Louisiana, the
vast French territory to the west of the Mississippi. Great Britain came
to be recognised as the world\'s pre-eminent colonial power, displacing
France. The nation, however, was left deeply in debt. To overcome it,
British colonies in America were taxed, much to their distaste.
Eventually, Britain lost its American colonies during the American War
of Independence, which lasted from 1775 to 1783. Elsewhere, however, the
British Empire continued to expand. In India, the British East India
Company took control of many small states nominally headed by their own
princes. Australia was also colonised, and Canada\'s population grew as
British Loyalists left the newly formed United States of America.
In 1801, Parliament passed the Act of Union, uniting Great Britain and
Ireland into the United Kingdom. Ireland was allowed to elect 100
Members of Parliament to the House of Commons and to send 28
representative peers to the House of Lords. The Act originally provided
for the removal
of restrictions from Roman Catholics, but George III refused to agree to
the proposal, arguing that doing so would violate his oath to maintain
Protestantism.
George was the last British monarch to claim the Kingdom of France. In
1801, at the urging of the French ruler Napoleon, he abandoned the
by-then meaningless claim, which dated from Plantagenet days.
In 1811, George III, who had previously suffered bouts of madness, went
permanently insane. His son George ruled the country as *Prince Regent*,
and became George IV when the King died in 1820.
## George IV
George IV is often remembered as an unwise and extravagant monarch.
During his Regency, London was redesigned, and funding for the arts was
increased. As King, George was unable to govern effectively; he was
overweight, possibly addicted to a form of opium and showing signs of
his father\'s mental disease. While he ruled, George\'s ministers were
once again able to regain the power that they had lost during his
father\'s reign.
George opposed several popular social reforms. Like his father, he refused
to lift several restrictions on Roman Catholics. Upon his death in 1830,
his younger brother began to reign as William IV.
## William IV
Early in William\'s reign, British politics was reformed by the Reform
Act of 1832. At the time, the House of Commons was a disorganised and
undemocratic body, unlike the modern House. The nation included several
*rotten boroughs*, which historically had the right to elect members of
Parliament, but actually had very few residents. The rotten borough of
Old Sarum, for instance, had seven voters, but could elect two MPs. An
even more extreme example was Dunwich, which could also elect two MPs
even though most of the borough had been eroded away into the North Sea.
Other boroughs were called *pocket boroughs* because
they were \"in the pocket\" of a wealthy landowner, whose son was
normally elected to the seat. At the same time, entire cities such as
Westminster (with about 20,000 voters) still had just two MPs.
The House of Commons agreed to the Reform Bill, but it was rejected by
the House of Lords, whose members controlled several pocket boroughs.
The Tory Party, furthermore, opposed the bill actively. William IV
agreed with his Prime Minister, the Earl Grey, to flood the House of
Lords with pro-reform members by creating fifty new peerages; when the
time came, he backed down. The Earl Grey and his Whig Party government
then resigned, but returned to power when William finally agreed to
co-operate. The Reform Act of 1832 gave urban areas increased political
power, but allowed aristocrats to retain effective control of the rural
areas. Over fifty rotten boroughs were abolished, while the
representation of some other boroughs was reduced from two MPs to one.
Though members of the middle class were granted the right to vote, the
Reform Act did not do much to expand the electorate, which amounted
after passage to just three percent of the population.
In 1834, William became the last British monarch to appoint a Prime
Minister who did not have the confidence of Parliament. He replaced the
Whig Prime Minister, the Viscount Melbourne, with a Tory, Sir Robert
Peel. Peel, however, had a minority in the House of Commons, so he
resigned in 1835, and Melbourne returned to power.
In 1837, William died and was succeeded on the British throne by his
niece Victoria, who was just eighteen years old at the time. The union
of the Crowns of Britain and Hanover was then dissolved, since Salic
Law, which applied in Hanover, only allowed males to rule. Therefore,
Hanover passed to William\'s brother Ernest.
## Victoria
!Queen Victoria
A few years after taking power, Victoria married a German Prince, Albert of
Saxe-Coburg-Gotha, who was given the title of *Prince Consort*. Albert
originally wished to actively govern the United Kingdom, but he
acquiesced to his wife\'s requests to the contrary. The extremely happy
marriage ended with Albert\'s death in 1861, following which Victoria
entered a period of semi-mourning that would last for the rest of her
reign. She was often called *the Widow of Windsor*, after Windsor
Castle, a Royal home.
In 1867, Parliament passed another Reform Act. Like its predecessor, the
Reform Act of 1832, it did not achieve true electoral reform; property
qualifications limited the electorate to about eight percent of the
population. Thereafter, from 1868 to 1885, power alternated between two
Prime Ministers---Benjamin Disraeli (a Tory and a favourite of Victoria)
and William Ewart Gladstone (a Liberal whom Victoria disliked). In
1876, Disraeli convinced Victoria to take the title of Empress of India.
Many of Victoria\'s daughters married into European Royal Houses, giving
her the nickname *Grandmother of Europe*. Several of the current European
monarchs descend from Victoria.
Victoria died in 1901 as the longest-reigning British Sovereign up to
that time. She was succeeded by her son Edward, who became King Edward
VII. Edward was deemed to belong not to his mother\'s House of Hanover,
but instead to his father\'s dynasty, Saxe-Coburg-Gotha.
# UK Constitution and Government/Houses of Saxe-Coburg-Gotha and Windsor
**The Houses of Saxe-Coburg-Gotha and Windsor (1901 to the present)**
## Edward VII
Edward VII came to the throne at the age of fifty-nine, having waited
longer as heir apparent than any of his predecessors. He participated actively
in foreign affairs, visiting France in 1903. The visit led to the
*Entente Cordiale* (Friendly Understanding), an informal agreement
between France and the United Kingdom marking the end of centuries of
Anglo-French rivalry. In the case of Germany, however, Edward VII
exacerbated rivalry through his bad relations with his nephew, Kaiser
Wilhelm II.
Towards the end of his life, Edward was faced with a constitutional
crisis when the Liberal Government, led by Herbert Henry Asquith,
proposed the *People\'s Budget*. The Budget reformed the tax system by
creating a land tax, which would adversely affect the aristocratic
class. The Conservative landowning majority in the House of Lords broke
convention by rejecting the budget. They argued that the Commons
themselves had broken a convention by attacking the wealth of the Lords.
Before the problem could be resolved, Edward VII died in 1910, allowing
his son, George, to ascend to the throne.
## George V
After George became King, the constitutional crisis was resolved after
the Liberal Government resigned and Parliament was dissolved. The
Liberals were reelected, in part due to the unpopularity of the House of
Lords, and used the election as a mandate to force their Budget through,
almost too late to save the nation\'s financial system from ruin.
The Lords paid a price for their opposition to the Liberals, who in the
Commons passed the Parliament Bill, which provided that a bill could be
submitted for the King\'s Assent if the Commons passed it in three
consecutive sessions, even if the Lords rejected it. (The requirement was
reduced to two sessions in 1949.) When the House of Lords refused to pass
the Parliament Bill, Prime Minister Asquith asked George V to create 250
new Liberal peers to erase the Conservative majority. George agreed, and
faced with this threat the Lords acquiesced and quickly passed the bill.
World War I occurred during George\'s reign. Due to the family\'s German
connections, the Royalty began to become unpopular; George\'s cousin,
Wilhelm II, was especially despised. In 1917, to appease the public,
George changed the Royal House\'s name from the German-sounding
*Saxe-Coburg-Gotha* to the more English *Windsor*.
In 1922, most of Ireland left the United Kingdom to form the Irish Free
State following the Irish War of Independence. The Irish Free State retained the
British monarch as a Sovereign, but functioned as a Dominion of the
Crown, with its own Government and Legislature. Six counties in the
Irish province of Ulster remained in the United Kingdom as Northern
Ireland. In 1927, the name of the country was changed from *the United
Kingdom of Great Britain and Ireland* to *the United Kingdom of Great
Britain and Northern Ireland*.
George V died in 1936 and was succeeded by his son, who ruled as Edward
VIII.
## Edward VIII
Edward VIII became King in January of 1936 and abdicated in December.
His reign was controversial because of his desire to marry the American
Wallis Simpson. Simpson was already divorced once; she divorced her
second husband so she could marry King Edward. A problem, however,
existed because Edward was the Supreme Governor of the Church of
England, which prohibited remarriage after divorce. The Government
advised him that he could not marry while he was King, so he indicated a
desire to abdicate and marry Simpson. The abdication was not unilateral,
as the Act of Settlement provided that the Crown go to the heir of
Sophia, Electress of Hanover, regardless of that person\'s willingness
to rule. Therefore, Parliament had to pass a special Act in order to
permit Edward to abdicate, which he did.
Edward\'s brother, Albert Frederick Arthur George, became King. He chose
to rule as George VI to create a link in the public\'s mind between him
and the previous Kings of the same name during a time of crisis. Edward,
meanwhile, was made Duke of Windsor, and any issue of his marriage to
Simpson was excluded from the line of succession.
## George VI
When George took power in 1936, the popularity of the Royal Family had
been damaged by the abdication crisis. It was, however, restored when
George and his wife, Queen Elizabeth, led the nation and boosted morale
during World War II. During the war, Britain was led by one of its most
famous Prime Ministers, Sir Winston Churchill.
Following the War, the United Kingdom began to lose several of its
overseas possessions. In 1947, India became independent and George lost
the title of Emperor of India. Until 1950, however, he remained King of
India while a constitution was being written. George was also the last
King of Ireland; the Irish established a republic in 1949.
George died in 1952 from lung cancer. His daughter Elizabeth succeeded
him.
## Elizabeth II
During Elizabeth\'s reign, there have been several important
constitutional developments. A notable one occurred in 1963, when
Conservative Prime Minister Harold Macmillan resigned. There was no
clear leader of the Conservative Party, but many favoured Richard Austen
Butler, the Deputy Prime Minister. Harold Macmillan advised the Queen,
however, that senior politicians in the party preferred Alec
Douglas-Home, 14th Earl of Home. Elizabeth accepted the advice and
appointed the Earl of Home to the office of Prime Minister, marking the
last time a member of the House of Lords would be so appointed. Home,
taking advantage of the Peerage Act passed in 1963, \"disclaimed\" his
peerage. A Conservative member of the House of Commons vacated his seat,
allowing Home to contest the by-election for that constituency and
become a member of the House of Commons.
There have also been many recent constitutional developments in the
nation. The office of Prime Minister increased greatly in power under
the Conservative Prime Minister Margaret Thatcher (the \"Iron Lady\")
and the Labour Prime Minister Tony Blair. Under Blair, many of
Parliament\'s lawmaking functions were devolved to local administrations
in Scotland, Wales and Northern Ireland. In 1999, the House of Lords Act
was passed, removing the automatic right of hereditary peers to sit in
the House.
Elizabeth II continues to reign; her heir is Charles, Prince of Wales.
# UK Constitution and Government/Constitution
The United Kingdom does not possess a document expressing itself to be the
nation\'s fundamental or highest law. Instead, the British constitution
is found in a number of sources. Because of this, the British
constitution is often said to be an *unwritten constitution*; however,
many parts of the constitution are indeed in written form, so it would
be more accurate to refer to the body of the British constitution as an
*uncodified constitution*.
The British constitution is spread across a number of sources:

1. Statute law
2. Royal prerogative (executive powers usually exercised by Ministers of the Crown)
3. Constitutional conventions (accepted norms of political behaviour)
4. Common law (decisions by senior courts that are binding on lower courts)
5. EU Treaties
6. Statements made in books considered to have particular authority
Note that not all of these sources form part of the law of the land, so
the British constitution encompasses a wider variety of rules than that
of, say, the United States.
## 1. Statute law
Many important elements of the British constitution are to be found in
Acts of Parliament. In contrast with many other countries, legislation
affecting the constitution is not subject to any special procedure, and
is passed using the same procedures as for ordinary legislation.
The most important statute law still in force and affecting the
constitution includes the following:
- The *Habeas Corpus Act 1679*
- The *Bill of Rights* (1689)
- The *Claim of Right* (1689)
- The *Act of Settlement* (1701)
- The *Acts of Union* (1707)
- The *Septennial Act 1715*
- The *Acts of Union* (1800)
- The *Parliament Acts* (1911 and 1949)
- The *Regency Act 1953*
- The *Life Peerages Act 1958*
- The *Peerage Act 1963*
- The *European Communities Act 1972*
- The *British Nationality Act 1981*
- The *Representation of the People Act 1983*
- The *Parliamentary Constituencies Act 1986*
- The *Human Rights Act 1998*
- The *Scotland Act 1998*
- The *Northern Ireland Act 1998*
- The *House of Lords Act 1999*
- The *Civil Contingencies Act 2004*
- The *Constitutional Reform Act 2005*
- The *Government of Wales Act 2006*
## 2. Royal prerogative
Certain powers pre-dating the establishment of the present parliamentary
system are still formally retained by the Queen. In practice almost all
of these powers are exercised only on the decision of Ministers of the
Crown (the Cabinet). These powers, known as the *royal prerogative*,
include the following:
- The appointment and dismissal of government ministers
- The summoning, opening, prorogation, and dissolution of Parliament
- The assenting to legislation
- The power to declare war, and to deploy the armed forces
- The power to conduct relations with foreign states, including the
recognition of states or governments, and the making of treaties
- The issuing of passports
## 3. Constitutional conventions
Conventions are customs that operate as rules considered to bind the
actions of the Queen or the Government. Conventions are not part of the
law, but nevertheless are often considered to be just as fundamental to
the structure and working of the constitution as the contents of any
statute. Indeed, statute law affecting the constitution is often written
in such a way that the existence of certain conventions is taken for
granted, and some conventions are so fundamental that many people are
unaware that they are in fact \"unwritten\" rules.
Examples of the more important constitutional conventions include:
- The Queen does not direct government policy, and leaves all
decision-making to her Cabinet
- Cabinet members are bound by the principle of *collective
responsibility*; ministers who feel themselves unable to publicly
support or defend the policy of the Government are expected to
resign
- The Government is headed by a Prime Minister, appointed by the Queen
from the House of Commons
- The Prime Minister is usually expected to be the leader of the
political party with the most MPs (members of the House of Commons)
- When a Prime Minister\'s political party loses a general election
(i.e. obtains fewer seats in the House of Commons than a rival
party), he or she is expected to resign
- Government ministers are usually expected to be drawn entirely from
the two Houses of Parliament, and most important office-holders are
expected to be MPs
- A government that is unable to obtain the passage through Parliament
of important legislation, including the annual Appropriation and
Finance Acts, is expected to resign
- The (unelected) House of Lords does not obstruct the passage of
legislation stated in the government party\'s election manifesto to
be fundamental policy
- The Speaker of the House of Commons is expected to be impartial,
even though originally elected as the representative of a political
party
## 4. Common law
The common law is that part of the law which does not rest on statute.
Instead, it is the accumulation of specific judicial decisions set by
senior courts as precedents binding on lesser courts.
Certain parts of the common law are also what is known as *trite law*:
examples of this include the fact that the United Kingdom is a monarchy,
and the fact that brothers formerly took precedence over sisters in the
succession to the throne, a rule changed by the Succession to the Crown
Act 2013.
## 5. EU Treaties
While a member state of the European Union, the United Kingdom was bound
by EU law. In the referendum of 2016, however, a majority of votes cast
favoured leaving the EU. The UK invoked Article 50 of the Treaty on
European Union on 29 March 2017, beginning the process commonly referred
to as \"Brexit\". The United Kingdom left the European Union on 31
January 2020.
## 6. Authoritative statements
Certain published works are usually considered to have particular
authority. In the first half of the twentieth century this was the case
with A. V. Dicey\'s *Law of the Constitution*, which was cited with
approval in judicial decisions. A particularly important work is *Erskine May*,
which sets out the procedures and customs of the House of Commons. Other
important sources include certain ministerial statements.
However, none of these works have **legal** authority; at best, they are
merely *persuasive*.
# UK Constitution and Government/Sovereign
## The Sovereign
The role of head of state in the United Kingdom is held by the
Sovereign; the present Sovereign is Queen Elizabeth II.
### Succession to the throne
As a hereditary monarchy, the rules for succession to the throne are
established by common law, as modified by statute.
In accordance with the *Act of Settlement* (1701), on the death of the
Sovereign he or she is succeeded by his or her \"heir of the body\".
Historically this operated in accordance with the principle of
*male-preference primogeniture*: if the Sovereign had only one child,
that child succeeded; if there was more than one child, the order of
succession was determined first by sex and then by age. Male preference
was removed by the Succession to the Crown Act 2013, which introduced
gender-neutral (absolute) primogeniture for those born after 28 October 2011.
If the Sovereign dies childless (\"without issue\") then the order of
succession is applied to their siblings: the oldest surviving sibling
then succeeds. If the Sovereign\'s siblings have died before he or she
died, then the order of succession works through the children of the
next oldest deceased brother, and so on.
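As a purely illustrative aid (not part of the original text), the male-preference rule described above can be thought of as a depth-first walk of the family tree in which, at each generation, sons are considered before daughters, each followed by his or her own line. The short Python sketch below models that ordering; the `Person` structure and the names used are hypothetical.

```python
# Illustrative sketch only: a simplified model of male-preference primogeniture
# as described above. The Person structure and names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    male: bool
    children: list = field(default_factory=list)  # in order of birth

def succession(sovereign):
    """Return the line of succession after `sovereign` (male preference)."""
    # Sons (in birth order) come before daughters (in birth order);
    # each child is followed immediately by his or her own descendants.
    ordered = [c for c in sovereign.children if c.male] + \
              [c for c in sovereign.children if not c.male]
    line = []
    for child in ordered:
        line.append(child.name)
        line.extend(succession(child))
    return line

# Hypothetical family: an elder daughter is placed after her younger brother.
sovereign = Person("Sovereign", male=False, children=[
    Person("Elder Daughter", male=False),
    Person("Younger Son", male=True),
])
print(succession(sovereign))  # ['Younger Son', 'Elder Daughter']
```

Under the gender-neutral rule introduced by the Succession to the Crown Act 2013 for those born after October 2011, the sort by sex would simply be dropped, leaving birth order alone to determine precedence.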
Only legitimate children are able to succeed. The *Royal Marriages Act
1772* restricted the capacity of a potential heir to marry without the
Sovereign\'s approval: all descendants of King George II, other than
women who had married into foreign families, were required to obtain the
Sovereign\'s consent before marrying, unless they could otherwise obtain
approval from both Houses of Parliament. The Act was repealed by the
Succession to the Crown Act 2013, which instead requires only the first
six persons in the line of succession to obtain the Sovereign\'s consent
to marry.
The *Bill of Rights* (1689) and the *Act of Settlement* require all heirs
to be descendants of Sophia, Electress of Hanover (d. 1714), and impose
further requirements that an heir be a Protestant and in full communion
with the Church of England, and (until the Succession to the Crown Act
2013 removed this bar) that they never have married a Roman Catholic.
Heirs not meeting these conditions are skipped over as if \"naturally dead\".
### The role of the Sovereign beyond the United Kingdom
As Sovereign in right of the United Kingdom, the Sovereign is also head
of state in the \"Crown dependencies\" of Jersey, Guernsey (and its
dependencies), and the Isle of Man. While the external relations of
these islands are dealt with by the United Kingdom, the islands do not
form part of the United Kingdom itself, and have their own
constitutional arrangements.
Similarly, the United Kingdom has sovereignty over various territories
around the world, known as the *British overseas territories*. As such,
the Sovereign is also head of state in these territories, although again
these do not form part of the United Kingdom itself, and have their own
constitutional arrangements.
The British Sovereign is also the Sovereign of certain other
*Commonwealth Realms*: Antigua and Barbuda, Australia, the Bahamas,
Barbados, Belize, Canada, Grenada, Jamaica, New Zealand, Papua New
Guinea, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the
Grenadines, the Solomon Islands and Tuvalu. Each of these nations is a
separate monarchy; the Sovereign therefore holds sixteen different
crowns. In each nation, the Sovereign is represented by a
Governor-General, who generally stands in the same relation to the local
government as the Sovereign does to the British government.
Finally, the Sovereign has the title *Head of the Commonwealth*. The
Commonwealth is a body of nations mostly made up of former colonial
dependencies of the United Kingdom. The role of Head of the Commonwealth
is a personal role of the present Queen, Elizabeth II, and is not
formally attached to the monarchy itself (although the present Queen\'s
father, King George VI, also held the title). The role is purely a
ceremonial one.
### Royal Family
While the members of the Sovereign\'s family do not have any role in
government, they do exercise ceremonial functions on his or her behalf.
A male Sovereign has the title \"King\", while a female Sovereign is the
\"Queen\". The wife of a King is also known as a Queen; however, the
husband of a female Sovereign has no specific title.
By convention, the Sovereign\'s eldest son is created \"Prince of
Wales\" and \"Earl of Chester\" while still a boy; he also automatically
gains the title of \"Duke of Cornwall\". Also by convention, the
Sovereign\'s sons receive a peerage either upon reaching the age of
twenty-one, or upon marrying.
The style of *Prince* or *Princess* extends to the children of the
Sovereign, the children of the sons of the Sovereign, and the eldest son
of the eldest son of the Prince of Wales. Furthermore, wives of Princes
are styled Princesses, though husbands of Princesses do not
automatically become Princes.
# UK Constitution and Government/Parliament
## Parliament
Parliament is the supreme law-making body in the United Kingdom. It is
made up of two *Houses of Parliament*, namely the *House of Commons* and
the *House of Lords*, as well as the Sovereign. The Sovereign\'s
involvement in the life and working of Parliament is purely formal.
In constitutional theory, Parliament in its strictest sense is sometimes
referred to as the *Queen-in-Parliament*; this contrasts with the more
ordinary use of the term \"Parliament\", meaning just the two Houses of
Parliament. Within the British constitutional framework, the
Queen-in-Parliament is supreme (\"sovereign\"), able to make, alter, or
repeal any law at will.
Both Houses of Parliament meet at the *Palace of Westminster*.
## Parliaments and Sessions
As with most legislatures, Parliament does not continue in perpetual
existence. Typically, the \"life\" of a Parliament is around four years.
Parliament is initially *summoned* by the Sovereign. This now always
occurs after there has been a general election. Once assembled, and a
Speaker has been chosen by the House of Commons, Parliament is formally
*opened* by the Sovereign. The business of the two Houses is arranged
into *sessions*, which usually last a year (running from around October
or November each calendar year). However, there is usually a long
*recess* during the summer months, when business is temporarily
suspended.
The opening of each parliamentary session is conducted in accordance
with a great deal of traditional ceremony. The Sovereign takes his or
her seat on the throne situated in the chamber of the House of Lords,
and the Gentleman Usher of the Black Rod (one of that House\'s officers)
is commanded to summon the House of Commons. When Black Rod reaches the
door of the Commons, it is slammed shut in his face, to symbolise the
right of the Commons to debate without royal interference. Black Rod
then solemnly knocks on the door with his staff of office; on the third
knock, the door is opened, and he is permitted to enter and deliver his
message. MPs then proceed from the Commons to the House of Lords, to
hear the *Speech from the Throne*, more commonly known as the *Queen\'s
Speech*. The Speech outlines the Government\'s legislative proposals for
the session; while worded as if it were the Sovereign\'s own policy, the
Speech is in fact entirely drafted by Government ministers.
Each session is ended by a *prorogation*. The Commons are formally
summoned to the House of Lords, where another formal Speech is read out,
summing up the work of the two Houses of Parliament over the course of
the session. In practice the Sovereign no longer attends for the
prorogation; *Lords Commissioners* are appointed to perform the task,
and one of their number also reads out the Speech.
By law, each Parliament must come to an end no later than five years
from its commencement; this is known as *dissolution*. Because a
dissolution is necessary in order to trigger a general election, the
Prime Minister was effectively able to choose to hold elections at a
time that seemed most advantageous to his or her political party.
After the 2010 General Election the Liberal Democrat party made the
introduction of a law on fixed term parliaments a requirement for
forming a coalition government with the Conservative party. A coalition
was necessary as the result of the election meant no party had an
overall majority of seats. The main opposition party from the previous
parliament, the Conservative party, gained the most seats and under
parliamentary protocol had the first opportunity to try to form a
government. They concluded a formal deal with the Liberal Democrats to
govern together. The Liberal Democrats had insisted on fixing the term
of parliaments to reduce the inherent advantage the governing party had
in being able to choose the moment to hold an election. The Fixed-term
Parliaments Act 2011 contains provisions allowing an early election
with the consent of Parliament; these provisions were used in 2017.
Although the duration of Parliament has been restricted to five years
since 1911, legislation was passed during both World Wars to extend the
life of the existing Parliament; this meant that the Parliament summoned
in 1935 eventually continued in existence for around ten years, until
1945.
## House of Commons
### Composition
While sometimes described as the \"lower house\", the House of Commons
is by far the most important of the two Houses of Parliament. Members of
the House of Commons are known as *Members of Parliament*, or *MPs*.
The entire United Kingdom is subdivided into *constituencies*, each of
which returns one MP to sit in the House of Commons. There are presently
650 constituencies, although the exact number fluctuates over time as the
boundaries of constituencies are periodically reviewed by Boundary
Commissions set up for each part of the UK. Constituencies are intended
to have roughly equal numbers of voters, but in practice the smallest
and largest constituencies can have a significant difference in size.
At each *general election* all seats in the House of Commons become
vacant. If a seat becomes vacant during the life of a Parliament (i.e.
between general elections), then a *by-election* is held for that
constituency. The election for each constituency is by secret ballot
conducted according to the *First-Past-the-Post* system: the candidate
with the most votes is *returned* as MP.
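A minimal sketch of the first-past-the-post count in a single constituency; the candidate names and vote totals below are invented for illustration, and ties and spoiled ballots are ignored.

```python
from collections import Counter

def return_mp(ballots):
    """Return the candidate with the most votes in one constituency."""
    counts = Counter(ballots)
    winner, _ = counts.most_common(1)[0]
    return winner

# A plurality, not an absolute majority, is enough to be returned:
ballots = ["Smith"] * 12000 + ["Jones"] * 11000 + ["Brown"] * 9000
print(return_mp(ballots))   # -> Smith, with 37.5% of the vote
```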
#### Qualifications of voters
A person must be aged at least eighteen in order to vote.
The following nationalities are entitled to vote at parliamentary
elections:
- British citizens
- citizens of the Republic of Ireland
- citizens of Commonwealth countries
Irish and Commonwealth citizens must have been resident in the United
Kingdom. British citizens who are resident abroad are only able to vote
if they have been resident in the United Kingdom within the previous 15
years.
Certain categories of people are unable to vote:
- the Sovereign
- members of the House of Lords
- people serving prison sentences
- persons convicted of \"corrupt practices\" (electoral malpractice)
within the previous five years
- the insane
By convention, close relatives of the Sovereign also do not vote.
#### Qualifications of MPs
Anyone who is not disqualified to vote is also qualified to be an MP,
except the following:
- undischarged bankrupts
- persons convicted of treason
- members of legislatures outside of the United Kingdom that are not
in Commonwealth countries
- civil servants
- members of certain specific public bodies, and holders of certain
specific statutory offices
- members of the armed forces
- judges
#### Resignation as an MP
Since the 17th century, the House of Commons has asserted that MPs may
not resign. However, in practice members are able to resign by the legal
fiction of appointment as *Crown Steward and Bailiff of the three
Chiltern Hundreds of Stoke, Desborough, and Burnham*, or as *Crown
Steward and Bailiff of the Manor of Northstead*. Neither of these
offices carries any duties, but both have been preserved in force so that
those appointed to them automatically lose their seats in the House of
Commons as having accepted an *office of profit under the Crown*.
### Speakership and procedure
The House of Commons is presided over by the *Speaker*. There are also
three Deputy Speakers, with the titles of *Chairman of Ways and Means*,
*First Deputy Chairman of Ways and Means*, and *Second Deputy Chairman
of Ways and Means*.
The Speaker and his or her deputies are elected at the commencement of a
Parliament, and serve until its dissolution. Following a general
election, the *Father of the House* (the member with the longest
unbroken service in the House, who is not also a Minister of the Crown)
takes the chair. If the Speaker from the previous Parliament has been
returned as a member of the new Parliament, and intends to continue in
office, then the House votes on a motion that the member take the chair
as Speaker. Otherwise, or if the motion for his or her re-election
fails, then members vote by secret ballot in several rounds; after each
round, the candidate with the fewest votes is eliminated. The election
ends when one member secures a majority of votes in a particular round.
Thereafter, the Speaker-elect leads the House of Commons to the House of
Lords, where the Lords Commissioners (five Lords representing the
Sovereign) officially declare the Royal Approbation (approval) of the
Speaker, who immediately takes office. The Speaker traditionally lays
claim to all of the House\'s privileges, including freedom of speech in
debate, which the Lords Commissioners then confirm on behalf of the
Sovereign.
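The exhaustive ballot described above can be sketched as follows. The `cast_ballots` helper is a hypothetical stand-in for one round of secret voting by members; the real procedure includes further details that are not modelled here.

```python
from collections import Counter

def elect_speaker(candidates, cast_ballots):
    """Repeated secret ballots until one candidate has a majority.

    cast_ballots(remaining) must return one vote per member for a
    candidate still in `remaining`.
    """
    remaining = list(candidates)
    while True:
        counts = Counter(cast_ballots(remaining))
        total = sum(counts.values())
        leader, leader_votes = counts.most_common(1)[0]
        if leader_votes * 2 > total:            # majority in this round
            return leader
        # Otherwise the candidate with the fewest votes is eliminated.
        loser = min(remaining, key=lambda c: counts.get(c, 0))
        remaining.remove(loser)
```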
If the Speaker chooses to resign during the course of a Parliament, he
or she must preside over the election of a successor. The
new election is otherwise conducted in the same manner as at the
beginning of a Parliament. The new Speaker-elect receives the Royal
Approbation from Lords Commissioners; however, the ceremonial assertion
of the rights of the Commons is not repeated.
The Speaker is expected to act impartially. He or she is an important
figure within the House of Commons, controlling the flow of debate by
selecting which members get to speak in debates, and by ensuring that
the customs and procedures of the House are complied with. The Speaker
and his or her deputies do not generally speak during debates, nor vote at
divisions.
The Speaker also exercises disciplinary powers. He or she may order any
member to resume his or her seat if they consistently contribute
irrelevant or repetitive remarks during a debate. An individual who has
disregarded the Speaker\'s call to sit down may be requested to leave
the House; if the request is declined, then the Speaker may \"name\" the
member. The House then votes on whether to suspend the member in
question for a certain number of days, or even, in the case of repeated
breaches, for the remainder of the session. In the most serious cases,
the House may vote to expel a member. In the case of grave disorder, the
Speaker may adjourn the House without a vote.
The House votes on all questions by voice first. The Speaker asks all
those in favour of the proposition to say \"Aye,\" and those opposed to
say \"No\". The Speaker then assesses the result, saying \"I think the
Ayes have it\" or \"I think the Noes have it\", as appropriate. Only if
a member challenges the Speaker\'s opinion is a *division*, or formal
count, called. During a division, members file into two separate lobbies
on either side of the Commons chamber. As they exit each lobby, clerks
and tellers count the votes and record the names. The result is then
announced by the Speaker. In the event of a tied vote, the Speaker (or
other occupant of the Chair) has a *casting vote*; however, by
convention the casting vote is used to maintain the *status quo*: in
effect, allowing a bill to move on to further scrutiny, but not voting
to pass it into law.
## House of Lords
### Composition
Generally speaking, membership of the House of Lords is by appointment
for life. However, up until 1999, hereditary peers were also members of
the Lords; when this right was abolished, a compromise measure allowed
them to elect ninety of their number to continue as members.
Certain office-holders are also ex officio members of the House of
Lords:
- the Earl Marshal
- the Lord Great Chamberlain
- the Archbishop of Canterbury
- the Archbishop of York
- the Bishops of London, Durham, and Winchester
The Earl Marshal and Lord Great Chamberlain are mostly ceremonial
offices. In addition to the three ex officio bishops, the 21
longest-serving diocesan bishops also sit in the Lords.
The general qualifications for sitting and voting in the Lords are:
- to have reached the age of 21
- to be a British citizen, or a citizen of the Republic of Ireland, or
a citizen of a Commonwealth country
- to not have been convicted of treason
- to not have been declared insane
### Speakership and procedure
The *Lord Speaker* is elected by the House. Until recently his or her
duties were carried out by the Lord Chancellor, a Minister of the Crown.
In contrast with the Speaker of the House of Commons, the Lord Speaker
has a relatively minor role, since the House of Lords is generally
self-governing: the House itself decides upon points of order and other
such matters. The seat used by the Lord Speaker is known as *the
Woolsack*.
Similar to the House of Commons, the Lords also vote by voice first. The
Lord Speaker (or whoever else is presiding) puts the question, with
those in favour saying \"Content,\" and those opposed saying
\"Not-Content.\" If the Lord Speaker\'s assessment of the result is
challenged, a division follows, with members voting in the appropriate
lobby just as is done in the Commons. The officer presiding may vote
from his or her place in the chamber rather than from a lobby. In the
case of a tie, the result depends on what type of motion is before the
House. A motion that a bill be advanced to the next stage or passed is
always decided in the positive, while amendments to bills or other
motions are decided in the negative, if there is an equality of votes.
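The tie-breaking conventions described in this paragraph can be summarised in a short sketch; the function name and arguments are illustrative only.

```python
def lords_division_result(contents: int, not_contents: int,
                          advances_bill: bool) -> bool:
    """True if the question is carried, applying the Lords' tie conventions."""
    if contents != not_contents:
        return contents > not_contents
    # Equality of votes: a motion to advance or pass a bill is carried,
    # while an amendment or other motion is lost.
    return advances_bill
```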
## Acts of Parliament
Legislation passed by Parliament is in the form of an *Act of
Parliament*.
A draft law is known as a *Bill*. A bill passes into law provided that
it has either been passed by both Houses of Parliament, or the
provisions of the Parliament Acts have been complied with; and provided
it has received the Royal Assent.
A bill must pass through several stages in both of the two Houses. A
bill is \"read\" three times in each House. The First Reading for Public
Bills is almost always a formality. The Second Reading is a debate on
the merits of the general principles behind the bill. Next follow the
Committee and Report stages. The Third Reading is a vote upon the bill
as a whole, as amended during the Committee and Report stages. Once the
House into which the bill was first introduced has finished with it, the
bill is then introduced into the other House. Any amendments by the
second House then have to be agreed to by the first before the bill can
proceed.
Bills are classified as either *Government Bills* or as *Private
Members\' Bills*. Ministers of the Crown introduce Government Bills;
private members introduce Private Members\' Bills.
Bills are also classified as *Public*, *Private*, *Personal* or
*Hybrid*. Public bills create laws applied generally (for instance,
reforming the nation\'s electoral system). Private bills affect a
specific named company, person or other entity (for instance,
authorising major constructions on specific named public lands).
Personal bills are private bills that confer specific rights to specific
named individuals (for example by granting the right to marry a person
one would not normally be allowed to wed). Hybrid bills are public bills
that directly and specially affect private interests.
### Public Bills
A Public Bill\'s First Reading is usually a mere formality, allowing its
title to be entered in the Journals and for its text to be printed by
the House\'s authority.
After two weeks, one of the bill\'s supporters moves \"that the bill be
now read a second time\". At the second reading debate, the bill\'s
general characteristics and underlying principles, rather than the
particulars, are discussed. If the vote on the Second Reading fails, the
bill dies. It is, however, very rare for a Government bill to be
defeated at the Second Reading; such a defeat signifies a major loss.
In the House of Commons, following the Second Reading, various
procedural resolutions may need to be passed. If the bill seeks to levy
or increase a tax or charge, then a *Ways and Means Resolution* has to
be passed. If it involves significant expenditure of public funds, then
a *Money Resolution* is necessary. Finally, the government may proceed
with a *Programme Motion* or an *Allocation of Time Motion*. A Programme
Motion outlines a timetable for further debate on the bill and is
normally passed without debate. An Allocation of Time Motion, commonly
called the *Guillotine*, limits time available for debate. Normally, a
Programme motion is agreed to by both parties while an Allocation of
Time Motion becomes necessary if the Opposition does not wish to
cooperate with the Government. In the House of Lords, there are no
Guillotines or other motions that limit the time available for debate.
Next, the bill can be committed to a committee. In the House of Commons,
the bill may be sent to the *Committee of the Whole House*, a *Standing
Committee*, a *Special Standing Committee* or a *Select Committee.* The
Committee of the Whole House is a committee that includes all members of
the House and meets in the regular chamber. The Speaker is normally not
present during the meetings; a Deputy Speaker normally takes the chair.
The procedure is used for parts of the annual Finance Bill and for bills
of major constitutional importance. More often, the bill is committed to
a Standing Committee. Though the name may suggest otherwise, the
membership of Standing Committees is temporary. There can be from
sixteen to fifty members; the strength of parties in the committee is
proportional to their strengths in the whole House. It is possible for a
bill to go to a Special Standing Committee, which is like a Standing
Committee except that it may take evidence and conduct hearings; the
procedure has not been used in several years. Finally, the bill may be
sent to a Select Committee. Select Committees are permanent bodies
charged with the oversight of a particular Government department. This
last procedure is rarely used; the quinquennial Armed Forces Bill,
however, is always referred to the Defence Select Committee.
In the House of Lords, the Bill is committed to the *Committee of the
Whole House*, a *Public Bill Committee*, a *Special Public Bill
Committee*, a *Select Committee* or a *Grand Committee*. The most common
committee used is the Committee of the Whole House. Sometimes, the bill
is sent to a Public Bill Committee of twelve to sixteen members (plus
the Chairman of Committees) or to a Special Public Bill Committee of
nine or ten members. These committees correspond in function to the
Commons Standing and Special Standing Committees, but are less often
utilised. Select Committees may also be used, like in the Commons,
though it is rare for this to be done. The Grand Committee procedure is
the only one unique to the House of Lords. The procedure is reserved for
non-controversial bills that must be passed quickly; a proposal to amend
the bill is defeated if a single member votes against it.
In both Houses, the committee used considers the bill clause-by-clause
and may make amendments. Thereafter, the bill proceeds to the
*Consideration* or *Report Stage*. This stage occurs on the Floor of the
House and offers it an opportunity to further amend the bill. While the
committee is bound to consider every single clause of the bill, the
House need only debate those clauses which members seek to amend.
Following the Report Stage, the motion *that the bill be now read a
third time* is considered. In the House of Commons, there is a short
debate followed by a vote; no further amendments are permitted. If the
motion passes, then the Bill is considered passed. In the Lords,
however, amendments may be moved. Following the vote on the third
reading, there must be a separate vote on passage.
After one House has passed a bill, it is sent to the other for its
consideration. Assuming both Houses have passed a bill, differences
between their separate versions must be reconciled. Each House may
accept or reject amendments made by the other House, or offer other
amendments in lieu. If one House has rejected an amendment, the other
House may nevertheless insist upon it. If a House insists upon an
amendment that the other rejects, then the bill is lost unless the
procedure set out in the Parliament Acts is complied with.
Once a bill has passed by both Houses, or has been certified by the
Speaker of the Commons as having passed the House of Commons in
conformity with the Parliament Acts, the bill is finally submitted to
the Sovereign for *Royal Assent*. Since 1708, no Sovereign has failed to
grant Royal Assent to a bill. Assent may be given by the Sovereign in
person, but is usually given in the form of letters patent read out in
each of the Houses; in the House of Lords the Clerk announces the Norman
French formula \"La Reyne le Veult\", and the Bill thereupon becomes an
Act of Parliament. In 1708 the formula used for the Scottish Militia
Bill was \"La Reyne s\'avisera\" (however, this was on ministerial
advice).
In theory the Sovereign has the right to either *withhold* or *reserve*
the assent, however this right is not exercised. If assent were
withheld, then the bill would fail. If assent were reserved, then
formally a final decision on the bill has been put off until a later
time; if Assent were not given before prorogation of the session, then
the bill would fail.
### Private, Personal and Hybrid Bills
In the nineteenth century several hundred private Acts were passed each
year, dealing with such matters as the alteration of local authority
powers, the setting up or alteration of turnpike trusts, etc. A series
of reforms has eliminated the necessity for much of this legislation,
meaning that only a handful of private Acts are now passed each year.
A private bill is initiated when an individual petitions Parliament for
its passage. After the petition is received, it is officially gazetted
so that other interested parties may support or contest it.
Counter-petitions objecting to the passage of the bill may also be
received. To be able to file such a petition, the bill must \"directly
and specially\" affect the individual. If those supporting the bill
disagree that such an effect exists, then the matter is resolved by the
*Court of Referees*, a group of senior Members of Parliament.
The bill then proceeds through the same stages as public bills.
Generally, no debate is held on the Floor during the Second Reading
unless a Member of Parliament files a \"blocking motion\". It is
possible for a party whose petition was denied by the Court of Referees
to instead lobby a Member to object to the bill on the Floor. After the
bill is read a second time, it is sent to one of two committees: the
*Opposed Bill Committee* if there are petitions against the bill, or the
*Unopposed Bill Committee* if there are none. After taking evidence, the
committee may return a finding of *Case Proved* or *Case Not Proved*. In
the latter case, the bill is considered rejected, but in the former
case, amendments to the bill may be considered. After consideration,
third reading and passage, the bill is sent to the other House, which
follows the same procedure. If necessary, the bill may have to face two
different Opposed Bill Committees. After differences between the Houses
are resolved, the bill is submitted for Royal Assent.
Personal bills relate to the \"estate, property, status, or style\" or
other personal affairs of an individual. By convention, these bills are
brought first in the House of Lords, where they are referred to a Personal
Bill Committee before being read a \"first\" time. The Committee may
make amendments or even reject the bill outright. If the bill is
reported to the House, then it follows the same procedure as any other
private bill, including going through an Unopposed or Opposed Bill
Committee in both Houses. A special case involves bills that seek to
enable marriages between those who are within a \"prohibited degree of
affinity or consanguinity\". In those cases, the bill is not discussed on
the Floor and is sent at the committee stage to a Select Committee that
includes the Chairman of Committees, a bishop and two lay members.
Hybrid bills are public bills that have a special effect on a private
interest. Prior to the second reading of any public bill, it must be
submitted to the Clerk, who determines if any of the House\'s rules have
been violated. If the Clerk finds that the bill does have such an effect
on a private interest, then it is sent to the *Examiners*, a body which
then may report to the House that the bill does or does not affect
private interests. If the latter, then it proceeds just like a public
bill, but if the former, then it is treated as hybrid. The first and
second readings are just as for public bills, but at the committee
stage, if petitions have been filed against the bill, it is sent to a
Select Committee, but the Committee does not have the same powers of
rejection as Private Bill Committees. After the Committee reports, the
bill is recommitted to another committee as if it were a public bill.
Thereafter, the stages are the same as for a public bill, though, in the
other chamber, the bill may have to be considered once more by a Select
Committee.
### Supremacy of the House of Commons
Under the Parliament Acts of 1911 and 1949, the House of Commons is
essentially the pre-eminent chamber in Parliament. If the Lords fail to
pass a bill (by rejecting it outright, insisting on amendments disagreed
to by the Commons, or by failing to vote on it), and the bill has been
passed by the Commons in two consecutive sessions, then the bill may be
presented for Royal Assent unless the House of Commons otherwise
directs, and provided that the bill was sent up to the Lords at least
one month before the end of each session. However, twelve months must
have passed between the Second Reading in the first session, and the
final vote on passage in the second one. Also, the bill passed by the
Commons in each session must be identical, except to take into account
the passage of time since the bill was first proposed.
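A rough sketch of the timing conditions just described, under the assumption that a "month" is treated as 30 days and a "year" as 365 days; the function and its record layout are hypothetical simplifications of the statutory rules.

```python
from datetime import date, timedelta

def parliament_acts_apply(second_reading_first_session: date,
                          final_passage_second_session: date,
                          sent_to_lords: list,
                          session_ends: list,
                          texts_identical: bool) -> bool:
    """Rough check of the Parliament Acts conditions described above."""
    # Passed by the Commons in two consecutive sessions, in identical form.
    if len(sent_to_lords) != 2 or len(session_ends) != 2 or not texts_identical:
        return False
    # Sent up to the Lords at least one month before the end of each session.
    month = timedelta(days=30)
    if any(sent + month > end for sent, end in zip(sent_to_lords, session_ends)):
        return False
    # Twelve months between Second Reading in the first session and the
    # final vote on passage in the second.
    gap = final_passage_second_session - second_reading_first_session
    return gap >= timedelta(days=365)
```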
The effect of the procedure set out in the Parliament Acts is that the
House of Lords may delay a bill by roughly thirteen months, but would
ultimately be unable to overturn the concerted will of the House of
Commons. However, this procedure does not apply in the case of private
or personal bills, nor to bills seeking to extend the life of Parliament
beyond five years.
Under the Parliament Acts, a special procedure applies to \"money
bills\". A bill is considered a money bill if the Speaker certifies that
it relates solely to national taxation or to the expenditure of public
funds. The Speaker\'s decision is final and cannot be overturned.
Following passage by the House of Commons, the bill can be considered by
the House of Lords for not longer than one month. If the Lords have not
passed the bill within that time, it is submitted for Royal Assent
regardless. Any amendments made by the House of Lords are ignored unless
accepted by the House of Commons.
In addition to the Parliament Acts, tradition and conventions limit the
House of Lords. It is the privilege of the House of Commons to levy
taxes and authorise expenditure of public funds. The House of Lords
cannot introduce bills to do either; furthermore, they are barred from
amending supply bills (bills appropriating money to expenditure). In
some cases, however, the House of Lords can circumvent the rule by
inserting a *Privilege Amendment* into a bill they have originated. The
Amendment reads:
: *Nothing in this Act shall impose any charge on the people or on
public funds, or vary the amount or incidence of or otherwise alter
any such charge in any manner, or affect the assessment, levying,
administration or application of any money raised by any such
charge.*
The House of Commons then amends the bill by removing the above clause.
Therefore, the privilege of the Commons is not violated as they, not the
Lords, have approved the tax or public expenditure.
## Delegated legislation
Many Acts of Parliament authorise the use of Statutory Instruments (SIs)
as a more flexible method of setting out and amending the precise
details for new arrangements, such as rules and regulations. This
delegated power is given either to the Queen in Council, a Minister of
the Crown, or to other named office holders. An Act may empower the
Government to make a Statutory Instrument and lay it before both Houses,
the SI to take legal effect if approved by a simple vote in each House;
or in other cases, if neither House objects within a set time. In
theory, Parliament does not lose control over such statutory instruments
when delegating the power to make them, while being saved the necessity
to debate and vote upon even quite trivial changes, unless members wish
to raise objections.
## English Votes For English Laws
During the creation of the Devolved Administrations of Scotland and
Wales, the idea of an English Parliament or Regional Assemblies was
discussed but ultimately not implemented. This created an issue whereby
the UK Parliament acts as a *de facto* English Parliament on matters
devolved to the national assemblies: MPs from all regions were free to
debate and vote on issues which did not affect their constituencies or
constituents.
The Conservative Government of 2015 decided to address the issue in a
controversial manner. Instead of bringing a bill before Parliament, it
proposed changes to the Standing Orders of the House of Commons. Any
bill brought before the Commons which is adjudged by the Speaker to
affect only English constituencies (or in some limited cases England
and Wales) can have a \"double majority\" rule imposed: all MPs are
allowed to debate and vote, but for a vote to be carried, both the
count of votes of all MPs and the count of votes of English-only MPs
must be won.
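A minimal sketch of the "double majority" test described above, assuming a simple record of each MP's vote and constituency; the actual Standing Orders contain considerably more procedural detail.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    aye: bool                    # how the MP voted
    english_constituency: bool   # does the MP sit for an English seat?

def double_majority_passes(votes) -> bool:
    """True only if the question is won among all MPs and among English MPs."""
    all_ayes = sum(v.aye for v in votes)
    all_noes = len(votes) - all_ayes
    english = [v for v in votes if v.english_constituency]
    eng_ayes = sum(v.aye for v in english)
    eng_noes = len(english) - eng_ayes
    return all_ayes > all_noes and eng_ayes > eng_noes
```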
## Privilege
Each House has a body of rights that it asserts, or which are conferred
by statute, with the aim of being allowed to carry out its duties
without interference. For example, members of both Houses have freedom
of speech during parliamentary debates; what they have said cannot be
questioned in any place outside Parliament, and so a speech made in
Parliament cannot constitute slander. These rights are collectively
referred to as *Parliamentary Privilege*.
Both Houses claim to determine their own privileges, and are
acknowledged by the courts as having the authority to control their own
proceedings, as well as to discipline members abusing the rules.
Furthermore, each House is the sole judge of the qualifications of its
members. Collectively, each House has the right of access to the
Sovereign. Individually, members must be left free to attend Parliament.
Therefore, the police are regularly ordered to maintain free access in
the neighbouring streets, and members cannot be called on to serve on a
jury or be subpoenaed as a witness while Parliament is in session.
(Arrest for crime is still possible, but the relevant House must be
notified of the same.) Parliament has the power to punish *contempt of
Parliament*, that is, violation of the privileges and rules of a House.
Any decisions made in this regard are final and cannot be appealed
to any court. The usual modern penalty for contempt is a reprimand, or
brief imprisonment in the precincts of the House, but historically large
fines have been imposed.
# UK Constitution and Government/Government
## Structure
*His Majesty\'s Government* is the executive political authority for the
United Kingdom as a whole. At its heart is the *Cabinet*, a grouping of
senior *Ministers of the Crown*, headed by the *Prime Minister*. Members
of the Government are political appointees, and are usually drawn from
one of the two Houses of Parliament. In addition to the heads of the
*Departments of State* (most of whom carry the title of *Secretary of
State*), the Government also includes *junior ministers* (who bear the
title of *Under Secretary of State*, *Minister of State*, or
*Parliamentary Secretary*), *whips* (responsible for enforcing party
discipline within the two Houses), and *Parliamentary Private
Secretaries* (political assistants to ministers).
When the Sovereign is the Queen, the Government is referred to as *Her*
Majesty\'s Government; likewise, when there are joint Sovereigns, the
Government is known as *Their Majesties*\' Government.
## Prime Minister
The Prime Minister (or \"PM\") is the head of the Government. Since the
early twentieth century the Prime Minister has held the office of *First
Lord of the Treasury*, and in recent decades has also held the office of
*Minister for the Civil Service*.
The Prime Minister is *asked to form a Government* by the Sovereign.
Usually this occurs after a general election has altered the balance of
party political power within the House of Commons. The Prime Minister is
expected to *have the confidence* of the House of Commons; this usually
means that he or she is the leader of the political party holding the
majority of the seats in the Commons. Since at least the 1920s the Prime
Minister has also been expected to be a Member of Parliament (i.e. a
member of the House of Commons). The Prime Minister retains office until
he or she dies or resigns, or until someone else is appointed; this
means that even when expecting to be defeated at a general election, the
Prime Minister remains formally in power until his or her rival is
returned as an MP and asked in turn to form a Government.
By law, if defeated by an actual Commons vote of \"no confidence\",
there is a period of 14 days during which an alternative Government may
be formed, if a \"vote of confidence\" can be won by a prospective Prime
Minister. If no Government can be formed within the 14 days an election
is automatically triggered, in effect allowing the electorate itself to
approve or disapprove of the Prime Minister\'s policy. The polling day
for the election is to be the day appointed by Her Majesty by
proclamation on the recommendation of the Prime Minister. The Parliament
then in existence dissolves at the beginning of the 17th working day
before the polling day.
The existence and basis of appointment of the office is a matter of
constitutional convention rather than of law. Because of this, there are
no formal qualifications for the office. However, a small number of Acts
of Parliament do make reference to the Prime Minister, and since the
1930s the office has carried a salary in its own right.
The Prime Minister is often an extremely powerful figure within the
political system; the office has been said by some to be an \"elected
dictatorship\", and some Prime Ministers have been accused of being
\"presidential\". A weak Prime Minister may be forced out of office
(i.e. forced to resign) by his or her own party, particularly if there
is an alternative figure within the party seen as a better choice.
## Cabinet and other ministers
Membership of the Cabinet is not defined by law, and is only loosely
bound by convention. The Prime Minister and (if there is one) the Deputy
Prime Minister are always members, as are the three most senior
ministerial heads of Departments of State: the *Secretary of State for
Foreign and Commonwealth Affairs* (commonly known as the *Foreign
Secretary*), the *Chancellor of the Exchequer* (i.e. the minister
responsible for finance), and the *Secretary of State for the Home
Department* (commonly known as the *Home Secretary*). Most of the other
heads of departments are usually members of the Cabinet, as well as a
small number of junior ministers.
Ministers of the Crown are formally appointed by the Sovereign upon the
\"advice\" of the Prime Minister. Ministers are bound by the convention
of *collective responsibility*, by which they are expected to publicly
support or defend the policy of the Government, or else resign. They are
also bound by the less clearly defined convention of *individual
responsibility*, by which they are responsible to Parliament for the
acts of their department. Ministers who, either by their own actions or
by those of their department, are perceived in some manner to have
failed in their duty are often called upon to resign; however, it
usually takes sustained criticism over a period of time both for a
minister to feel compelled to resign and for the Prime Minister to
accept that resignation. Occasionally a minister offers his or her
resignation, but the Prime Minister retains them in office.
Parliamentary Private Secretaries are also bound by the principle of
*collective responsibility*, even though they hold no ministerial
responsibility and take no part in the formation of policy; the position
is seen as an initial stepping-stone towards being offered ministerial
office.
## Privy Council
*His Majesty\'s Most Honourable Privy Council* is a ceremonial body of
advisors to the Sovereign. The Privy Council is used as a mechanism for
maintaining ministerial responsibility for the actions of the Crown; for
example, royal proclamations are approved by the Privy Council before
they are issued. All senior members of the Government are appointed to
be Privy Counsellors, as well as certain senior members of the Royal
Family, leaders of the main political parties, the archbishops and
senior bishops of the Church of England, and certain senior judges.
The Privy Council is headed by the *Lord President of the Council*, a
ministerial office usually held by a member of the Cabinet. By
convention the Lord President is also either the *Leader of the House of
Commons*, or the *Leader of the House of Lords*, with responsibility for
directing and negotiating the course of business in the respective
House.
Meetings of the Privy Council are usually extremely short, and are
rarely attended by more than a bare minimum of Privy Counsellors.
# UK Constitution and Government/Judiciary
## Structure
The United Kingdom is made up of three separate legal jurisdictions,
each with its own laws and hierarchy of courts: *England and Wales*,
*Scotland*, and *Northern Ireland*.
## England and Wales
England and Wales is a *common law* jurisdiction.
### Lower courts
The lowest court in England and Wales is the *Magistrates\' Court*.
Magistrates, also known as Justices of the Peace, are laypersons
appointed by the Sovereign. The court hears \"summary\" offences
(punishable by six months or less in prison). When hearing such cases,
three magistrates sit together as a panel without a jury. In some
metropolitan areas, such as London, there are no magistrates; instead,
summary cases are tried by a single District Judge who is trained in
law.
Serious criminal cases are tried before a *Crown Court* with a judge and
a jury of twelve. The accused may also choose to have certain offences
triable \"either way\" referred from the magistrates\' court to the
Crown Court, in order for their case to be tried before a jury; the
Crown Court also
hears appeals from magistrates\' courts. Though the Crown Court is
constituted as a single body for the whole of England and Wales, it sits
permanently at multiple places throughout its area of jurisdiction.
The counterpart to the Crown and Magistrates\' Courts in the civil
justice system is the *County Court*. There are over 200 County Courts
throughout England and Wales.
### High Court
The *High Court of England and Wales* takes appeals from the County
Court, and also has an original jurisdiction in certain matters.
The High Court is constituted into three *divisions*: the *Family
Division*, the *Chancery Division*, and the *Queen\'s Bench Division*.
The Family Division is presided over by the *President of the Family
Division*, and hears cases involving family matters such as matrimonial
breakdown, child custody and welfare, and adoption.
The Chancery Division is presided over by the *Chancellor of the High
Court* (formerly known as the *Vice-Chancellor*), and hears cases
involving land, companies, bankruptcy, and probate.
The Queen\'s Bench Division is presided over by the *President of the
Queen\'s Bench Division*, and hears cases involving torts (civil
wrongs). The Queen\'s Bench Division also includes four subordinate
courts: the *Admiralty Court* (dealing with shipping), the *Commercial
Court* (dealing with insurance, banking, and commerce), the *Technology
and Construction Court* (dealing with complex technological matters),
and the *Administrative Court* (exercising judicial review over the
actions of local government). The Queen\'s Bench Division also has
oversight of the lower courts.
## Court of Appeal
Above the High Court in civil cases, and the Crown Court in criminal
cases, is the Court of Appeal, headed by the *Master of the Rolls*, and
including 35 *Lords Justices of Appeal* as well as other judges.
The Court of Appeal is divided into a *Civil Division* (presided over by
the Master of the Rolls) and a *Criminal Division* (presided over by the
Lord Chief Justice). Generally speaking, appeals may only be heard \"by
leave\"; that is, with the permission of the either the Court of Appeal
or the judge whose decision is being contested. In some cases, it is
possible to \"leapfrog\" the High Court and bring a case directly from a
County Court.
Together, the Crown Court, the High Court, and Court of Appeal
constitute the *Senior Courts* (formerly known as the *Supreme Court of
Judicature*). Thus, since they are theoretically one body, it is
possible for judges of one court to sit in other courts. Appeals from
the Senior Courts go to the Supreme Court; it is also possible to
leapfrog from the High Court, but not from the Crown Court. Normally,
leave to appeal to the Supreme Court is not granted unless the case is
of great legal or constitutional importance.
## Northern Ireland
Northern Ireland\'s system is based on that used in England and Wales,
with a similar hierarchy of magistrates\' court, the Crown Court (for
criminal trials), county courts (for civil trials), the High Court, and
the Court of Appeal.
Appeals from Northern Ireland lie to the Supreme Court.
## Scotland
In contrast with the rest of the United Kingdom, Scotland uses a mixture
of common law and civil law. Its court system was developed
independently of that in England. The *Act of Union* (1707) guarantees
the continuance of Scotland\'s different legal system.
### Lower courts
Summary jurisdiction is exercised by *Justice of the Peace Courts*, held
either by three *Justices of the Peace* (lay magistrates) sitting
together, or by a Justice of the Peace sitting with a legally qualified
clerk. As in England and Wales, professional judges may sit in certain
metropolitan areas.
Above the Justice of the Peace Courts are the *Sheriff Courts*, of which
there are around 50. Sheriff Courts hear both criminal and civil cases,
and are held before a judge known as a Sheriff; in serious (solemn)
criminal cases the Sheriff sits with a jury of fifteen people. Sheriff
Courts are grouped into six different Sheriffdoms, each headed by a
Sheriff Principal who hears appeals from cases not decided by a jury.
### High Court of Justiciary
The highest criminal court in Scotland is the *High Court of
Justiciary*. The judges of the court are also the judges of the Court of
Session (see below); as High Court judges they are known as *Lords
Commissioners of Justiciary*. The head of the court is the *Lord
Justice-General* (also the Lord President of the Court of Session), with
a deputy known as the *Lord Justice Clerk* (who holds the same office in
the Court of Session). Altogether the High Court has up to 32 individual
judges.
The High Court has exclusive jurisdiction in serious crimes, such as
murder or drug trafficking, in which case a single judge sits with a
jury of fifteen. The High Court also hears appeals from Justice of the
Peace Courts, and hears appeals in criminal cases from Sheriff Courts.
Appeals against decisions by a High Court judge in criminal cases are
heard by either two (in appeals against sentences) or three (in appeals
against conviction) High Court judges. No appeal lies beyond the High
Court.
### Court of Session
The highest civil court in Scotland is the *Court of Session*. Its
judges also sit as judges of the High Court of Justiciary (see above);
as Court of Session judges they are known as *Lords and Ladies of
Council and Session*, or *Senators of the College of Justice*. The Court
is headed by the *Lord President*, with a *Lord Justice Clerk* as
deputy. Altogether the Court of Session has up to 32 individual judges.
The Court of Session is divided into the *Outer House* (made up of
nineteen judges), and the *Inner House* (made up of the remaining
judges). The Outer House has original jurisdiction, while the Inner
House has appellate jurisdiction. The Inner House is further divided
into the First and Second Divisions, headed by the Lord President and
Lord Justice Clerk respectively. Sometimes, when many cases are before
the court, an Extra Division may be appointed. Each Division may sit as
a panel hearing an appeal from the Sheriff Court or from the Outer
House.
Appeals from the Court of Session lie to the Supreme Court.
## Supreme Court
The *Supreme Court of the United Kingdom* is the ultimate court of
appeal in all civil matters, as well as in criminal cases (other than
from Scotland), and also has original jurisdiction in devolution cases.
The Supreme Court has replaced the jurisdiction previously exercised by
the House of Lords in the latter\'s now-abolished judicial capacity. The
Supreme Court of the United Kingdom is not to be confused with the
Supreme Court of Judicature, the name formerly held by (a) the Senior
Courts, in England and Wales, and (b) the Court of Judicature, in
Northern Ireland.
The Supreme Court is headed by a President, who has a Deputy President.
There are a further ten Justices.
## Judicial Committee of the Privy Council
The *Judicial Committee of the Privy Council* formerly held original
jurisdiction in the United Kingdom in devolution cases, and continues to
hold appellate jurisdiction over the ecclesiastical courts of the Church
of England. Appeals to the Privy Council as a court of last resort also
lie from the Crown dependencies, the British overseas territories, and
from certain Commonwealth countries.
Membership of the Judicial Committee is made up of Justices of the
Supreme Court, Privy Counsellors who are or were Lord Justices of Appeal
in either England and Wales or Northern Ireland, members of the Inner
House of Scotland\'s Court of Session, and selected senior judges from
certain other Commonwealth countries. Members retire at the age of 75.
Appeals to *Her Majesty in Council* are referred to the Judicial
Committee, which formally reports to the Queen in Council, who in turn
formally confirms the report. By agreement, appeals from certain
Commonwealth countries lie directly to the Judicial Committee itself.
The Queen-in-Council also considers appeals from the disciplinary
committees of certain medical bodies such as the Royal College of
Surgeons. Also, cases against the Church Commissioners (who administer
the Church of England\'s property estates) may be considered. Appeals
may be heard from certain ecclesiastic courts (the Court of Arches in
Canterbury, and the Chancery Court in York) in cases that do not involve
Church doctrine. Appeals may also be heard from certain dormant courts,
including Prize Courts (which hear cases relating to the capture of
enemy ships at sea, and the ownership of property seized from captured
ships) and the Court of Admiralty of the Cinque Ports. Finally, the
Queen-in-Council determines if an individual is qualified to be elected
to the House of Commons under the House of Commons Disqualification Act.
## ECHR and ECJ
In addition to the above domestic courts, there are two further courts
which can be said to exercise a jurisdiction over the United Kingdom.
The *European Court of Human Rights* deals with cases concerning alleged
infringements of the *European Convention on Human Rights*.
The *European Court of Justice* deals with cases concerning alleged
infringements of European Union law.
# UK Constitution and Government/Devolved Administrations
## Devolution
*Devolution* refers to the transfer of administrative, executive, or
legislative authority to new institutions operating only within a
defined part of the United Kingdom. Devolved institutions have been
created for Scotland, Northern Ireland, and Wales.
Devolution differs from federalism in formally being a unilateral
process that can be reversed at will; formal sovereignty is still
retained at the centre. Thus, while the US Congress cannot reduce the
powers of a state legislature, Parliament has the legal capacity to even
go so far as to abolish the devolved legislatures.
Devolution in Wales was originally restricted to the
executive/administrative sphere, whereas in Scotland and Northern
Ireland devolution extended to wide powers to pass laws.
## Scotland
The Scottish legislative authority is the Scottish Parliament. The
Scottish Parliament is a unicameral body composed of 129 members (called
Members of the Scottish Parliament, or MSPs) elected for fixed four-year
terms. Of its members, 73 are elected from single-member constituencies;
the remaining 56 are elected from eight regions, with each region
electing seven members. Each voter has one constituency vote---cast for
a single individual---and one regional vote---cast either for a party or
for an independent candidate. Regional members are allocated in such a
way that a party\'s share of seats in the Scottish Parliament is roughly
proportional to its share of the regional vote.
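The text above only says that regional seats are allocated so that seat shares track regional vote shares; in practice the Scottish system uses the d\'Hondt divisor method, counting constituency seats a party has already won in the region. The sketch below illustrates that method with invented figures.

```python
def allocate_regional_seats(regional_votes, constituency_seats, seats=7):
    """Allocate a region's list seats by the d'Hondt divisor method,
    taking into account constituency seats already won in the region."""
    allocated = {party: 0 for party in regional_votes}
    for _ in range(seats):
        # Each party's vote total is divided by one more than the seats
        # it already holds (constituency + list) in this region.
        winner = max(regional_votes,
                     key=lambda p: regional_votes[p]
                     / (constituency_seats.get(p, 0) + allocated[p] + 1))
        allocated[winner] += 1
    return allocated

# Invented example: a party that swept the constituencies picks up few
# list seats, pulling the overall result towards proportionality.
print(allocate_regional_seats({"A": 100000, "B": 80000, "C": 30000},
                              {"A": 6, "B": 1, "C": 0}))
```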
The Scottish Government is the executive authority of Scotland; it is
led by the First Minister. Other members of the Scottish Cabinet are
generally given the title of Minister. The First Minister must retain
the confidence of the Scottish Parliament to remain in power.
Scotland has responsibility over several major areas, including
taxation, criminal justice, health, education, transport, the
environment, sport, culture and local government. The Parliament at
Westminster, however, retains authority over a certain number of
*reserved matters*. Reserved matters include foreign affairs, defence,
immigration, social security and welfare, employment, and general
economic and fiscal policy.
## Wales
The National Assembly for Wales is the Welsh legislative authority. It
is, like the Scottish Parliament, a unicameral body; it also uses a
similar electoral system. Forty of its sixty members are chosen from
single-member constituencies, while the remaining twenty are regional
members. (There are five regions.) The Welsh Government is led by the
First Minister and includes other Ministers, who must retain the
confidence of the Assembly.
The third Welsh Assembly can legislate using a system called \"Assembly
Measures\". Measures are a lower form of primary legislation, similar to
Acts of Parliament; they can be used to make, amend, or repeal laws. The
difference between \"Assembly Measures\" and \"Acts of the Assembly\" is
that Measures do not carry a general body of powers with them: each
Measure comes with an LCO, or Legislative Competence Order, which
transfers powers from the UK Parliament to the Welsh Assembly
Government. Devolution in Wales has changed considerably since 1999.
For the National Assembly to gain full legislative powers, a referendum
had to be triggered through both the Assembly and both Houses of the
United Kingdom Parliament. Once passed, Wales would for the first time be
able to legislate and make its own Acts (to be known as Acts of the
Assembly, or Acts of the National Assembly for Wales). In early 2011, a
referendum held in Wales approved the transfer of full legislative
competence to the National Assembly in all devolved matters.
## Northern Ireland
Northern Ireland was the first part of the United Kingdom to gain
devolution, in 1921. However, it has had a troubled history since then,
caused by conflict between the main *Unionist* and *Nationalist*
communities. Because of this historical background, the present system
of devolution requires power to be shared between political parties
representing the different communities, and there are complex procedural
checks in place to ensure cross-community support for legislation and
executive action.
The Northern Ireland Assembly comprises 108 members elected to represent
18 six-member constituencies.
The Executive (government) is made up of members from the largest
parties in the Assembly, with ministerial portfolios allocated in
proportion to party strengths. The Executive is headed jointly by a
First Minister and Deputy First Minister, who are jointly elected by the
Assembly.
The Assembly\'s legislative powers are broad, and are similar to those
of the Scottish Parliament (with the notable exception of taxation). The
transfer from the United Kingdom\'s central government of responsibility
for the criminal justice system has been highly contentious, and has
only recently been carried out.
|
# UK Constitution and Government/Elections
## General Elections
Members of the House of Commons are elected at General Elections, which
are called by the Prime Minister and must be held at least once every
five years. The maximum term that a parliament can sit before a new
election must be called is defined by statute; currently, the Parliament
Act fixes the maximum length at five years.
## Local Elections
From 2007, Scotland has used the Single Transferable Vote (STV) to elect
all of its local councillors. England and Wales use first past the post or
multiple-member first past the post for local elections. Northern
Ireland uses STV for its local elections.
## European Elections
Members of the European Parliament for Northern Ireland are elected
using Single Transferable Vote (STV). MEPs for England, Scotland and
Wales are elected using the D\'Hondt method.
|
# Control Systems/Introduction
## This Wikibook
This book was written at **Wikibooks**, a free online community where
people write open-content textbooks. Any person with internet access is
welcome to participate in the creation and improvement of this book.
Because this book is continuously evolving, there are no finite
\"versions\" or \"editions\" of this book. Permanent links to known good
versions of the pages may be provided.
## What are Control Systems?
The study and design of automatic **Control Systems**, a field known as
**control engineering,** has become important in modern technical
society. From devices as simple as a toaster or a toilet, to complex
machines like space shuttles and power steering, control engineering is
a part of our everyday life. This book introduces the field of control
engineering and explores some of the more advanced topics in the field.
Note, however, that control engineering is a very large field and this
book serves only as a foundation of control engineering and an
introduction to selected advanced topics in the field. Topics in this
book are added at the discretion of the authors and represent the
available expertise of our contributors.
Control systems are components that are added to other components to
increase functionality or meet a set of design criteria. For example,
consider an electric motor that runs at its ideal speed from a 10 volt
supply, but takes some time to reach that speed after being switched on.
This simple example can be complex to both users and designers of the
motor system. It may seem obvious that the motor should start at a
higher voltage so that it accelerates faster, and that the supply can
then be reduced back down to 10 volts once the motor reaches its ideal speed.
This is clearly a simplistic example but it illustrates an important
point: we can add special \"Controller units\" to preexisting systems to
improve performance and meet new system specifications.
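As a rough illustration of what such a controller unit might do, here is a
minimal sketch in Python. It assumes a hypothetical first-order motor model;
the gain, time constant, 15 volt boost level, and 95% threshold are all
illustrative assumptions rather than values from the example above.

```python
import numpy as np

# Hypothetical first-order motor: speed approaches K*V with time constant tau.
# A simple "controller unit" boosts the supply, then drops back to 10 V.
K, tau, dt = 10.0, 0.5, 0.001           # gain (speed per volt), time constant, time step
t = np.arange(0.0, 3.0, dt)
ideal_speed = K * 10.0                  # steady-state speed with a 10 V supply

def simulate(voltage_of):
    """Euler-integrate  tau * dw/dt = K*v(t) - w(t)."""
    w = np.zeros_like(t)
    for i in range(1, len(t)):
        v = voltage_of(t[i - 1], w[i - 1])
        w[i] = w[i - 1] + dt * (K * v - w[i - 1]) / tau
    return w

plain = simulate(lambda time, speed: 10.0)                                   # no controller
boosted = simulate(lambda time, speed: 15.0 if speed < 0.95 * ideal_speed else 10.0)

# The boosted run reaches 95% of the ideal speed noticeably sooner.
print(np.argmax(plain >= 0.95 * ideal_speed) * dt,
      np.argmax(boosted >= 0.95 * ideal_speed) * dt)
```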
Here are some formal definitions of terms used throughout this book:
There are essentially two methods to approach the problem of designing a
new control system: the **Classical Approach** and the **Modern
Approach**.
## Classical and Modern
**Classical** and **Modern** control methodologies are named in a
somewhat misleading way: the techniques labeled \"Modern\" date from the
1950s and are no longer particularly new, while the \"Classical\"
transform-domain techniques of the 1930s and 1940s remain in everyday
use. In terms of developing control systems, Modern methods have been
used to great effect more recently, while the Classical methods have
been gradually falling out of favor. Most recently, it has been shown
that Classical and Modern methods can be combined to highlight their
respective strengths and weaknesses.
Classical Methods, which this book will consider first, are methods
involving the **Laplace Transform domain**. Physical systems are modeled
in the so-called \"time domain\", where the response of a given system
is a function of the various inputs, the previous system values, and
time. As time progresses, the state of the system and its response
change. However, time-domain models for systems are frequently modeled
using high-order differential equations which can become impossibly
difficult for humans to solve and some of which can even become
impossible for modern computer systems to solve efficiently. To
counteract this problem, integral transforms, such as the **Laplace
Transform** and the **Fourier Transform**, can be employed to change an
Ordinary Differential Equation (ODE) in the time domain into a regular
algebraic polynomial in the transform domain. Once a given system has
been converted into the transform domain it can be manipulated with
greater ease and analyzed quickly by humans and computers alike.
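As a small worked illustration of this conversion (using an assumed
second-order equation, not an example drawn from a later chapter), consider
the ODE

$$y''(t) + 3y'(t) + 2y(t) = x(t).$$

With zero initial conditions, taking the Laplace Transform of both sides
yields

$$(s^2 + 3s + 2)Y(s) = X(s) \quad\Rightarrow\quad Y(s) = \frac{X(s)}{(s+1)(s+2)},$$

an ordinary algebraic relationship between $Y(s)$ and $X(s)$ that can be
manipulated without solving the differential equation directly.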
Modern control methods, instead of changing domains to avoid the
complexities of time-domain ODE mathematics, convert the differential
equations into a system of lower-order time-domain equations called
**State Equations**, which can then be manipulated using techniques from
linear algebra. This book will consider Modern Methods second.
A third distinction that is frequently made in the realm of control
systems is to divide analog methods (classical and modern, described
above) from digital methods. Digital Control Methods were designed to
try and incorporate the emerging power of computer systems into previous
control methodologies. A special transform, known as the
**Z-Transform**, was developed that can adequately describe digital
systems, but at the same time can be converted (with some effort) into
the Laplace domain. Once in the Laplace domain, the digital system can
be manipulated and analyzed in a very similar manner to Classical analog
systems. For this reason, this book will not make a hard and fast
distinction between Analog and Digital systems, and instead will attempt
to study both paradigms in parallel.
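As a brief illustration of the conversion between the two domains, the
Z-domain and Laplace-domain variables of a sampled system with sampling
period $T$ are related by the standard substitution

$$z = e^{sT},$$

so that, for example, the stable left half of the $s$-plane corresponds to
the interior of the unit circle in the $z$-plane.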
## Who is This Book For?
This book is intended to accompany a course of study in under-graduate
and graduate engineering. As has been mentioned previously, this book is
not focused on any particular discipline within engineering, however any
person who wants to make use of this material should have some basic
background in the Laplace transform (if not other transforms), calculus,
etc. The material in this book may be used to accompany several
semesters of study, depending on the program of your particular college
or university. The study of control systems is generally a topic that is
reserved for students in their 3rd or 4th year of a 4 year undergraduate
program, because it requires so much previous information. Some of the
more advanced topics may not be covered until later in a graduate
program.
Many colleges and universities only offer one or two classes
specifically about control systems at the undergraduate level. Some
universities, however, do offer more than that, depending on how the
material is broken up, and how much depth that is to be covered. Also,
many institutions will offer a handful of graduate-level courses on the
subject. This book will attempt to cover the topic of control systems
from both a graduate and undergraduate level, with the advanced topics
built on the basic topics in a way that is intuitive. As such, students
should be able to begin reading this book in any place that seems an
appropriate starting point, and should be able to finish reading where
further information is no longer needed.
## What are the Prerequisites?
Understanding of the material in this book will require a solid
mathematical foundation. This book does not currently explain, nor will
it ever try to fully explain most of the necessary mathematical tools
used in this text. For that reason, the reader is expected to have read
the following wikibooks, or have background knowledge comparable to
them:
Algebra\
Calculus
: The reader should have a good understanding of differentiation and
integration. Partial differentiation, multiple integration, and
functions of multiple variables will be used occasionally, but the
students are not necessarily required to know those subjects well.
These advanced calculus topics could better be treated as a
co-requisite instead of a pre-requisite.
Linear Algebra
: State-space system representation draws heavily on linear algebra
techniques. Students should know how to operate on matrices.
Students should understand basic matrix operations (addition,
multiplication, determinant, inverse, transpose). Students would
also benefit from a prior understanding of Eigenvalues and
Eigenvectors, but those subjects are covered in this text.
Ordinary Differential Equations
: All linear systems can be described by a linear ordinary
differential equation. It is beneficial, therefore, for students to
understand these equations. Much of this book describes methods to
analyze these equations. Students should know what a differential
equation is, and they should also know how to find the general
solutions of first and second order ODEs.
Engineering Analysis
: This book reinforces many of the advanced mathematical concepts used
in the Engineering Analysis book,
and we will refer to the relevant sections in the aforementioned
text for further information on some subjects. This is essentially a
math book, but with a focus on various engineering applications. It
relies on a previous knowledge of the other math books in this list.
Signals and Systems
: The Signals and Systems book will
provide a basis in the field of **systems theory**, of which control
systems is a subset. Readers who have not read the Signals and
Systems book will be at a severe
disadvantage when reading this book.
## How is this Book Organized?
This book will be organized following a particular progression. First
this book will discuss the basics of system theory, and it will offer a
brief refresher on integral transforms. Section 2 will contain a brief
primer on digital information, for students who are not necessarily
familiar with them. This is done so that digital and analog signals can
be considered in parallel throughout the rest of the book. Next, this
book will introduce the state-space method of system description and
control. After section 3, topics in the book will use state-space and
transform methods interchangeably (and occasionally simultaneously). It
is important, therefore, that these three chapters be well read and
understood before venturing into the later parts of the book.
After the \"basic\" sections of the book, we will delve into specific
methods of analyzing and designing control systems. First we will
discuss Laplace-domain stability analysis techniques (Routh-Hurwitz,
root-locus), and then frequency methods (Nyquist Criteria, Bode Plots).
After the classical methods are discussed, this book will then discuss
Modern methods of stability analysis. Finally, a number of advanced
topics will be touched upon, depending on the knowledge level of the
various contributors.
As the subject matter of this book expands, so too will the
prerequisites. For instance, when this book is expanded to cover
**nonlinear systems**, a basic background knowledge of nonlinear
mathematics will be required.
### Versions
This wikibook has been expanded to include multiple
versions of its text,
differentiated by the material covered, and the order in which the
material is presented. Each different version is composed of the
chapters of this book, included in a different order. This book covers a
wide range of information, so if you don\'t need all the information
that this book has to offer, perhaps one of the other versions would be
right for you and your educational needs.
Each separate version has a table of contents outlining the different
chapters that are included in that version. Also, each separate version
comes complete with a printable version, and some even come with PDF
versions as well.
Take a look at the **All Versions Listing
Page** to find the version of
the book that is right for you and your needs.
## Differential Equations Review
Implicit in the study of control systems is the underlying use of
differential equations. Even if they aren\'t visible on the surface, all
of the continuous-time systems that we will be looking at are described
in the time domain by ordinary differential equations (ODE), some of
which are relatively high-order.
Differential equations are particularly difficult to manipulate,
especially once we get to higher-orders of equations. Luckily, several
methods of abstraction have been created that allow us to work with
ODEs, but at the same time, not have to worry about the complexities of
them. The classical method, as described above, uses the Laplace,
Fourier, and Z Transforms to convert ODEs in the time domain into
polynomials in a complex domain. These complex polynomials are
significantly easier to solve than the ODE counterparts. The Modern
method instead breaks differential equations into systems of low-order
equations, and expresses this system in terms of matrices. It is a
common precept in ODE theory that an ODE of order N can be broken down
into N equations of order 1.
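As a small illustration of that precept (with an assumed second-order
equation, chosen only for this sketch), consider

$$y''(t) + a_1 y'(t) + a_0 y(t) = u(t).$$

Defining the state variables $x_1 = y$ and $x_2 = y'$ converts it into the
pair of first-order equations

$$x_1' = x_2, \qquad x_2' = -a_0 x_1 - a_1 x_2 + u(t),$$

which is exactly the kind of low-order system that the Modern approach
arranges into matrices.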
Readers who are unfamiliar with differential equations might be able to
read and understand the material in this book reasonably well. However,
all readers are encouraged to read the related sections in
**Calculus**.
## History
The field of control systems started essentially in the ancient world.
Early civilizations, notably the Greeks and the Arabs, were heavily
preoccupied with the accurate measurement of time, the result of which
were several \"water clocks\" that were designed and implemented.
However, there was very little in the way of actual progress made in the
field of engineering until the beginning of the renaissance in Europe.
Leonhard Euler (for whom **Euler\'s Formula** is named) discovered a
powerful integral transform, and Pierre-Simon Laplace later used the
transform (now called the **Laplace Transform**) to solve complex problems
in probability theory.
Joseph Fourier was a court mathematician in France under Napoleon I. He
created a special function decomposition called the **Fourier Series**,
that was later generalized into an integral transform, and named in his
honor (the **Fourier Transform**). {{-}}
+----------------------------------+----------------------------------+
| ![](Pierre-Simon-Laplace_(174 | ![](Joseph_Fourier.jpg "J |
| 9-1827).jpg "Pierre-Simon-Laplac | oseph_Fourier.jpg"){width="150"} |
| e_(1749-1827).jpg"){width="150"} | |
+----------------------------------+----------------------------------+
| Pierre-Simon Laplace\ | Joseph Fourier\ |
| 1749-1827                        | 1768-1830                        |
+----------------------------------+----------------------------------+
The \"golden age\" of control engineering occurred between 1910-1945,
where mass communication methods were being created and two world wars
were being fought. During this period, some of the most famous names in
controls engineering were doing their work: Nyquist and Bode.
**Hendrik Wade Bode** and **Harry Nyquist**, especially in the 1930\'s
while working with Bell Laboratories, created the bulk of what we now
call \"Classical Control Methods\". These methods were based off the
results of the Laplace and Fourier Transforms, which had been previously
known, but were made popular by **Oliver Heaviside** around the turn of
the century. Previous to Heaviside, the transforms were not widely used,
nor respected mathematical tools.
Bode is credited with the \"discovery\" of the closed-loop feedback
system, and the logarithmic plotting technique that still bears his name
(**bode plots**). Harry Nyquist did extensive research in the field of
system stability and information theory. He created a powerful stability
criterion that has been named for him (the **Nyquist Criterion**).
Modern control methods were introduced in the early 1950\'s, as a way to
bypass some of the shortcomings of the classical methods. **Rudolf
Kalman** is famous for his work in modern control theory, and an optimal
state estimator called the **Kalman Filter** was named in his honor.
Modern control methods became increasingly popular after 1957 with the
spread of digital computers and the start of the space program. Computers
created the need for digital control methodologies, and the space
program required the creation of some \"advanced\" control techniques,
such as \"optimal control\", \"robust control\", and \"nonlinear
control\". These last subjects, and several more, are still active areas
of study among research engineers. {{-}}
## Branches of Control Engineering
Here we are going to give a brief listing of the various different
methodologies within the sphere of control engineering. Oftentimes, the
lines between these methodologies are blurred, or even erased
completely.
Classical Controls:Control methodologies where the ODEs that describe a system are transformed using the Laplace, Fourier, or Z Transforms, and manipulated in the transform domain.\
Modern Controls:Methods where high-order differential equations are broken into a system of first-order equations. The input, output, and internal states of the system are described by vectors called \"state variables\".\
Robust Control:Control methodologies where arbitrary outside noise/disturbances are accounted for, as well as internal inaccuracies caused by the heat of the system itself, and the environment.\
Optimal Control:In a system, performance metrics are identified, and arranged into a \"cost function\". The cost function is minimized to create an operational system with the lowest cost.\
Adaptive Control:In adaptive control, the control changes its response characteristics over time to better control the system.\
Nonlinear Control:The youngest branch of control engineering, nonlinear control encompasses systems that cannot be described by linear equations or ODEs, and for which there is often very little supporting theory available.\
Game Theory:Game Theory is a close relative of control theory, and especially robust control and optimal control theories. In game theory, the external disturbances are not considered to be random noise processes, but instead are considered to be \"opponents\". Each player has a cost function that they attempt to minimize, and that their opponents attempt to maximize.
This book will definitely cover the first two branches, and will
hopefully be expanded to cover some of the later branches, if time
allows.
## MATLAB
**MATLAB** ® is a programming tool that is commonly used in the field of
control engineering. We will discuss MATLAB in specific sections of this
book devoted to that purpose. MATLAB will not appear in discussions
outside these specific sections, although MATLAB may be used in some
example problems. An overview of the use of MATLAB in control
engineering can be found in the **appendix** at: Control
Systems/MATLAB.
For more information on MATLAB in general, see: MATLAB
Programming.
Nearly all textbooks on the subject of control systems, linear systems,
and system analysis will use MATLAB as an integral part of the text.
Students who are learning this subject at an accredited university will
certainly have seen this material in their textbooks, and are likely to
have had MATLAB work as part of their classes. It is from this
perspective that the MATLAB appendix is written.
In the future, this book may be expanded to include information on
**Simulink** ®, as well as MATLAB.
There are a number of other software tools that are useful in the
analysis and design of control systems. Additional information can be
added in the appendix of this book, depending on the experience and
prior knowledge of contributors.
## About Formatting
This book will use some simple conventions throughout.
### Mathematical Conventions
Mathematical equations will be labeled with a template to give them
names. Equations that are labeled in such a
manner are important, and should be taken special note of. For instance,
notice the label to the right of this equation:
$$f(t)
= \mathcal{L}^{-1} \left\{F(s)\right\}
= {1 \over {2\pi i}}\int_{c-i\infty}^{c+i\infty} e^{st} F(s)\,ds$$
Equations that are named in this manner will also be copied into the
List of Equations
Glossary in the end of
the book, for an easy reference.
Italics will be used for English variables, functions, and equations
that appear in the main text. For example *e*, *j*, *f(t)* and *X(s)*
are all italicized. Wikibooks contains a LaTeX mathematics formatting
engine, although an attempt will be made not to employ formatted
mathematical equations inline with other text because of the difference
in size and font. Greek letters, and other non-English characters will
not be italicized in the text unless they appear in the midst of
multiple variables which are italicized (as a convenience to the
editor).
Scalar time-domain functions and variables will be denoted with
lower-case letters, along with a *t* in parenthesis, such as: *x(t)*,
*y(t)*, and *h(t)*. Discrete-time functions will be written in a similar
manner, except with an *\[n\]* instead of a *(t)*.
Fourier, Laplace, Z, and Star transformed functions will be denoted with
capital letters followed by the appropriate variable in parenthesis. For
example: *F(s)*, *X(jω)*, *Y(z)*, and *F\*(s)*.
Matrices will be denoted with capital letters. Matrices which are
functions of time will be denoted with a capital letter followed by a
*t* in parenthesis. For example: *A(t)* is a matrix, *a(t)* is a scalar
function of time.
Transforms of time-variant matrices will be displayed in uppercase bold
letters, such as **H***(s)*.
Math equations rendered using LaTeX will appear on separate lines, and
will be indented from the rest of the text.
### Text Conventions
|
# Control Systems/System Identification
## Systems
Systems, in one sense, are devices that take input and produce an
output. A system can be thought to **operate** on the input to produce
the output. The output is related to the input by a certain relationship
known as the **system response**. The system response usually can be
modeled with a mathematical relationship between the system input and
the system output.
## System Properties
Physical systems can be divided up into a number of different
categories, depending on particular properties that the system exhibits.
Some of these system classifications are very easy to work with and have
a large theory base for analysis. Some system classifications are very
complex and have still not been investigated with any degree of success.
By properly identifying the properties of a system, certain analysis and
design tools can be selected for use with the system.
The early sections of this book will focus primarily on **linear
time-invariant** (LTI) systems. LTI systems are the easiest class of
system to work with, and have a number of properties that make them
ideal to study. This chapter discusses some properties of systems.
Later chapters in this book will look at time variant systems and
nonlinear systems. Both time variant and nonlinear systems are very
complex areas of current research, and both can be difficult to analyze
properly. Unfortunately, most physical real-world systems are
time-variant, nonlinear, or both.
An introduction to system identification and least squares techniques
can be found
here.
An introduction to parameter identification techniques can be found
here.
## Initial Time
The **initial time** of a system is the time before which there is no
input. Typically, the initial time of a system is defined to be zero,
which will simplify the analysis significantly. Some techniques, such as
the **Laplace Transform** require that the initial time of the system be
zero. The initial time of a system is typically denoted by t~0~.
The value of any variable at the initial time t~0~ will be denoted with
a 0 subscript. For instance, the value of variable x at time t~0~ is
given by:
$$x(t_0) = x_0$$
Likewise, times t with positive subscripts are points in time *after
t~0~*, in ascending order:
$$t_0 \le t_1 \le t_2 \le \cdots \le t_n$$
So t~1~ occurs after t~0~, and t~2~ occurs after both points. In a
similar fashion, a variable with a positive subscript (unless specifying
an index into a vector) also occurs at that point in time:
$$x(t_1) = x_1$$
$$x(t_2) = x_2$$
This is valid for all points in time t.
## Additivity
A system satisfies the property of **additivity** if a sum of inputs
results in a sum of outputs. By definition: an input of
$x_3(t) = x_1(t) + x_2(t)$ results in an output of
$y_3(t) = y_1(t) + y_2(t)$. To determine whether a system is additive,
use the following test:
Given a system f that takes an input x and outputs a value y, assume two
inputs (x~1~ and x~2~) produce two outputs:
$$y_1 = f(x_1)$$
$$y_2 = f(x_2)$$
Now, create a composite input that is the sum of the previous inputs:
$$x_3 = x_1 + x_2$$ Then the system is additive if the following
equation is true:
$$y_3 = f(x_3) = f(x_1 + x_2) = f(x_1) + f(x_2) = y_1 + y_2$$
Systems that satisfy this property are called **additive**. Additive
systems are useful because a sum of simple inputs can be used to analyze
the system response to a more complex input.
### Example: Sinusoids
## Homogeneity
A system satisfies the condition of **homogeneity** if an input scaled
by a certain factor produces an output scaled by that same factor. By
definition: an input of $ax_1$ results in an output of $ay_1$. In other
words, to see if function *f()* is **homogeneous**, perform the
following test:
Stimulate the system *f* with an arbitrary input *x* to produce an
output *y*:
$$y = f(x)$$
Now, create a second input *x~1~*, scale it by a multiplicative factor
*C* (*C* is an arbitrary constant value), and produce a corresponding
output *y~1~*:
$$y_1 = f(Cx_1)$$
Now, assign x to be equal to *x~1~*:
$$x_1 = x$$
Then, for the system to be homogeneous, the following equation must be
true:
$$y_1 = f(Cx) = Cf(x) = Cy$$
Systems that are homogeneous are useful in many applications, especially
applications with gain or amplification.
### Example: Straight-Line
## Linearity
A system is considered **linear** if it satisfies the conditions of
Additivity and Homogeneity. In short, a system is linear if the
following is true:
Take two arbitrary inputs, and produce two arbitrary outputs:
$$y_1 = f(x_1)$$
$$y_2 = f(x_2)$$
Now, a linear combination of the inputs should produce a linear
combination of the outputs:
$$f(Ax_1 + Bx_2) = f(Ax_1) + f(Bx_2) = Af(x_1) + Bf(x_2) = Ay_1 + By_2$$
This condition of additivity and homogeneity is called
**superposition**. A system is linear if it satisfies the condition of
superposition.
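The superposition test above can also be spot-checked numerically. The
following is a minimal sketch, assuming two hypothetical memoryless systems
(one a pure gain, one a squaring operation); the functions, signal lengths,
and random test signals are illustrative only.

```python
import numpy as np

def is_linear(f, trials=100, rng=np.random.default_rng(0)):
    """Numerically spot-check superposition: f(A*x1 + B*x2) == A*f(x1) + B*f(x2)."""
    for _ in range(trials):
        x1, x2 = rng.normal(size=50), rng.normal(size=50)
        A, B = rng.normal(), rng.normal()
        if not np.allclose(f(A * x1 + B * x2), A * f(x1) + B * f(x2)):
            return False
    return True

print(is_linear(lambda x: 3 * x))   # True: a pure gain satisfies superposition
print(is_linear(lambda x: x ** 2))  # False: squaring violates additivity and homogeneity
```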
### Example: Linear Differential Equations
## Memory
A system is said to have **memory** if the output from the system is
dependent on past inputs (or future inputs!) to the system. A system is
called **memoryless** if the output is only dependent on the current
input. Memoryless systems are easier to work with, but systems with
memory are more common in digital signal processing applications.
Systems that have memory are called **dynamic** systems, and systems
that do not have memory are **static** systems.
## Causality
Causality is a property that is very similar to memory. A system is
called **causal** if it is only dependent on past and/or current inputs.
A system is called **anti-causal** if the output of the system is
dependent only on future inputs. A system is called **non-causal** if
the output depends on past and/or current and future inputs.
## Time-Invariance
A system is called **time-invariant** if the system relationship between
the input and output signals is not dependent on the passage of time. If
the input signal $x(t)$ produces an output $y(t)$ then any time shifted
input, $x(t + \delta)$, results in a time-shifted output $y(t + \delta)$.
This property can be satisfied if the transfer function of the system is
not a function of time except expressed by the input and output. If a
system is time-invariant then the system block is commutative with an
arbitrary delay. This facet of time-invariant systems will be discussed
later.
To determine if a system f is time-invariant, perform the following
test:
Apply an arbitrary input x to a system and produce an arbitrary output
y:
$$y(t) = f(x(t))$$
Apply a second input x~1~ to the system, and produce a second output:
$$y_1(t) = f(x_1(t))$$
Now, assign x~1~ to be equal to the first input x, time-shifted by a
given constant value δ:
$$x_1(t) = x(t - \delta)$$
Finally, a system is time-invariant if y~1~ is equal to y shifted by the
same value δ:
$$y_1(t) = y(t - \delta)$$
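For instance (an illustrative system, not one used elsewhere in this
chapter), the system $y(t) = t\,x(t)$ fails this test:

$$f(x_1(t)) = t\,x(t - \delta), \qquad y(t - \delta) = (t - \delta)\,x(t - \delta),$$

and the two differ whenever $\delta \ne 0$, so the system is time-variant.
A pure gain such as $y(t) = 3x(t)$, by contrast, passes the test and is
time-invariant.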
## LTI Systems
A system is considered to be a **Linear Time-Invariant** (LTI) system if
it satisfies the requirements of time-invariance and linearity. LTI
systems are one of the most important types of systems, and they will be
considered almost exclusively in the beginning chapters of this book.
Systems which are not LTI are more common in practice, but are much more
difficult to analyze.
## Lumpedness
A system is said to be **lumped** if one of the two following conditions
is satisfied:
1. There are a finite number of states that the system can be in.
2. There are a finite number of state variables.
The concept of \"states\" and \"state variables\" are relatively
advanced, and they will be discussed in more detail in the discussion
about **modern controls**.
Systems which are not lumped are called **distributed**. A simple
example of a distributed system is a system with delay, that is,
$A(s)y(t)=B(s)u(t-\tau)$, which has an infinite number of state
variables (Here we use $s$ to denote the Laplace variable). However,
although distributed systems are quite common, they are very difficult
to analyze in practice, and there are few tools available to work with
such systems. Fortunately, in most cases, a delay can be sufficiently
modeled with the Pade approximation. This book will not discuss
distributed systems much.
## Relaxed
A system is said to be **relaxed** if the system is causal and at the
initial time t~0~ the output of the system is zero, i.e., there is no
stored energy in the system. The output is excited solely and uniquely
by input applied thereafter.
$$y(t_0) = f(x(t_0)) = 0$$
In terms of differential equations, a relaxed system is said to have
\"zero initial states\". Systems without an initial state are easier to
work with, but systems that are not relaxed can frequently be modified
to approximate relaxed systems.
## Stability
**Stability** is a very important concept in systems, but it is also one
of the hardest function properties to prove. There are several different
criteria for system stability, but the most common requirement is that
the system must produce a finite output when subjected to a finite
input. For instance, if 5 volts is applied to the input terminals of a
given circuit, it would be best if the circuit output didn\'t approach
infinity, and the circuit itself didn\'t melt or explode. This type of
stability is often known as \"**Bounded Input, Bounded Output**\"
stability, or **BIBO**.
There are a number of other types of stability, most of which are based
on the concept of BIBO stability. Because stability is such an important
and complicated topic, an entire section of this text is devoted to its
study.
## Inputs and Outputs
Systems can also be categorized by the number of inputs and the number
of outputs the system has. Consider a television as a system, for
instance. The system has two inputs: the power wire and the signal
cable. It has one output: the video display. A system with one input and
one output is called **single-input, single-output**, or SISO. A system
with multiple inputs and multiple outputs is called **multi-input,
multi-output**, or MIMO.
These systems will be discussed in more detail later.
|
# Control Systems/Digital and Analog
## Digital and Analog
There is a significant distinction between an **analog system** and a
**digital system**, in the same way that there is a significant
difference between analog and digital data. This book is going to
consider both analog and digital topics, so it is worth taking some time
to discuss the differences, and to display the different notations that
will be used with each.
### Continuous Time
A signal is called **continuous-time** if it is defined at every time t.
A system is a continuous-time system if it takes a continuous-time input
signal, and outputs a continuous-time output signal. Here is an example
of an analog waveform:
{{-}}
![](Analog_Waveform.svg "Analog_Waveform.svg"){width="400"}
### Discrete Time
A signal is called **discrete-time** if it is only defined for
particular points in time. A discrete-time system takes discrete-time
input signals, and produces discrete-time output signals. The following
image shows the difference between an analog waveform and the sampled
discrete time equivalent: {{-}}
![](Sampled_Waveform.svg "Sampled_Waveform.svg"){width="400"}
### Quantized
A signal is called **Quantized** if it can take on only certain values,
and cannot take on others. This concept is best illustrated with examples:
1. Students with a strong background in physics will recognize this
concept as being the root word in \"Quantum Mechanics\". In quantum
mechanics, it is known that energy comes only in discrete packets.
An electron bound to an atom, for example, may occupy one of several
discrete energy levels, but not intermediate levels.
2. Another common example is population statistics. For instance, a
common statistic is that a household in a particular country may
have an average of \"3.5 children\", or some other fractional
number. Actual households may have 3 children, or they may have 4
children, but no household has 3.5 children.
3. People with a computer science background will recognize that
integer variables are quantized because they can only hold certain
integer values, not fractions or decimal points.
The last example concerning computers is the most relevant, because
quantized systems are frequently computer-based. Systems that are
implemented with computer software and hardware will typically be
quantized.
Here is an example waveform of a quantized signal. Notice how the
magnitude of the wave can only take certain values, and that creates a
step-like appearance. This image is discrete in magnitude, but is
continuous in time:
![](Quantized_Waveform.svg "Quantized_Waveform.svg"){width="400"}
## Analog
By definition:
An analog system is a system that represents data using a direct
conversion from one form to another. In other words, an analog system is
a system that is continuous in both time and magnitude.
### Example: Motor
### Example: Analog Clock
## Digital
Digital data is represented by discrete number values. By definition, a
digital system is a system that is discrete in both time and magnitude.
Digital data always have a certain granularity, and therefore there will
almost always be an error associated with using such data, especially if
we want to account for all real numbers. The tradeoff to using a digital
system is that modern computers can be instructed to operate only on
digital data. This benefit more than makes up for the shortcomings of a
digital representation system.
Discrete systems will be denoted inside square brackets, as is a common
notation in texts that deal with discrete values. For instance, we can
denote a discrete data set of ascending numbers, starting at 1, with the
following notation:
: x\[n\] = \[1 2 3 4 5 6 \...\]
**n**, or other letters from the central area of the alphabet (m, i, j,
k, l, for instance) are commonly used to denote discrete time values.
Analog, or \"non-discrete\" values are denoted in regular expression
syntax, using parenthesis. Here is an example of an analog waveform and
the digital equivalent. Notice that the digital waveform is discrete in
both time and magnitude:
+----------------------------------+----------------------------------+
| ![](Analog_Waveform.svg "An | ![](Digital_Waveform.svg "Dig |
| alog_Waveform.svg"){width="400"} | ital_Waveform.svg"){width="400"} |
+----------------------------------+----------------------------------+
| ```{=html} | ```{=html} |
| <center> | <center> |
| ``` | ``` |
| **Analog Waveform** | **Digital Waveform** |
| | |
| ```{=html} | ```{=html} |
| </center> | </center> |
| ``` | ``` |
+----------------------------------+----------------------------------+
### Example: Digital Clock

  Minute   Binary Representation
  -------- -----------------------
  1        1
  10       1010
  30       11110
  59       111011

## Hybrid Systems
**Hybrid Systems** are systems that have both analog and digital
components. Devices called **samplers** are used to convert analog
signals into digital signals, and devices called **reconstructors** are
used to convert digital signals into analog signals. Because of the use
of samplers, hybrid systems are frequently called **sampled-data
systems**.
### Example: Automobile Computer
## Continuous and Discrete
A system is considered **continuous-time** if the signal exists for all
time. Frequently, the terms \"analog\" and \"continuous\" will be used
interchangeably, although they are not strictly the same.
Discrete systems can come in three flavors:
1. Discrete time (sampled)
2. Discrete magnitude (quantized)
3. Discrete time and magnitude (digital)
**Discrete magnitude** systems are systems where the signal value can
only take certain values. **Discrete time** systems are systems where
signals are only available (or valid) at particular times. Computer
systems are discrete in the sense of (3), in that data is only read at
specific discrete time intervals, and the data can have only a limited
number of discrete values.
A discrete-time system has a **sampling time** value associated with it,
such that each discrete value occurs at multiples of the given sampling
time. We will denote the sampling time of a system as T. We can equate
the square-brackets notation of a system with the continuous definition
of the system as follows:
$$x[n] = x(nT)$$
Notice that the two notations show the same thing, but the first one is
typically easier to write, *and* it shows that the system in question is
a discrete system. This book will use the square brackets to denote
discrete systems by the sample number n, and parenthesis to denote
continuous time functions.
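A minimal sketch of these notations in code, assuming an arbitrary
sine-wave signal, a sampling time of T = 0.1 s, and a coarse quantization
step of 0.25; all three values are illustrative assumptions.

```python
import numpy as np

T = 0.1                                  # sampling time (illustrative)
n = np.arange(0, 50)                     # sample indices

def x_continuous(t):
    """An arbitrary continuous-time signal, used only for illustration."""
    return np.sin(2 * np.pi * 0.5 * t)

x_n = x_continuous(n * T)                # discrete-time samples: x[n] = x(nT)

q = 0.25                                 # quantization step (illustrative)
x_quantized = q * np.round(x_n / q)      # discrete in magnitude as well: a digital signal

print(x_n[:5])
print(x_quantized[:5])
```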
## Sampling and Reconstruction
The process of converting analog information into digital data is called
\"Sampling\". The process of converting digital data into an analog
signal is called \"Reconstruction\". We will talk about both processes
in a later chapter. For more information on the topic than is available
in this book, see the Analog and Digital
Conversion wikibook. Here is
an example of a reconstructed waveform. Notice that the reconstructed
waveform here is quantized because it is constructed from a digital
signal:
![](Reconstructed_Waveform.svg "Reconstructed_Waveform.svg"){width="400"}
|
# Control Systems/System Metrics
## System Metrics
When a system is being designed and analyzed, it doesn\'t make any sense
to test the system with all manner of strange input functions, or to
measure all sorts of arbitrary performance metrics. Instead, it is in
everybody\'s best interest to test the system with a set of standard,
simple reference functions. Once the system is tested with the reference
functions, there are a number of different metrics that we can use to
determine the system performance.
It is worth noting that the metrics presented in this chapter represent
only a small number of possible metrics that can be used to evaluate a
given system. This wikibook will present other useful metrics along the
way, as their need becomes apparent.
## Standard Inputs
There are a number of standard inputs that are considered simple enough
and universal enough that they are considered when designing a system.
These inputs are known as a **unit step**, a **ramp**, and a
**parabolic** input.
Also, sinusoidal and exponential functions are considered basic, but
they are too difficult to use in initial analysis of a system.
## Steady State
When a unit-step function is input to a system, the **steady-state**
value of that system is the output value at time $t = \infty$. Since it
is impractical (if not completely impossible) to wait till infinity to
observe the system, approximations and mathematical calculations are
used to determine the steady-state value of the system. Most system
responses are **asymptotic**, that is that the response approaches a
particular value. Systems that are asymptotic are typically obvious from
viewing the graph of that response.
### Step Response
The step response of a system is most frequently used to analyze
systems, and there is a large amount of terminology involved with step
responses. When exposed to the step input, the system will initially
have an undesirable output period known as the **transient response**.
The transient response occurs because a system is approaching its final
output value. The steady-state response of the system is the response
after the transient response has ended.
The amount of time it takes for the system output to reach the desired
value (before the transient response has ended, typically) is known as
the **rise time**. The amount of time it takes for the transient
response to end and the steady-state response to begin is known as the
**settling time**.
It is common for a systems engineer to try and improve the step response
of a system. In general, it is desired for the transient response to be
reduced, the rise and settling times to be shorter, and the steady-state
to approach a particular desired \"reference\" output.
+----------------------------------+----------------------------------+
| ![](Step_Function.svg " | ![](Step_Response.svg " |
| Step_Function.svg"){width="400"} | Step_Response.svg"){width="400"} |
+----------------------------------+----------------------------------+
| ```{=html} | ```{=html} |
| <center> | <center> |
| ``` | ``` |
| An arbitrary step function with | A step response graph of input |
| $x(t) = Mu(t)$ | *x(t)* to a made-up system |
| | |
| ```{=html} | ```{=html} |
| </center> | </center> |
| ``` | ``` |
+----------------------------------+----------------------------------+
{{-}}
## Target Value
The target output value is the value that our system attempts to obtain
for a given input. This is not the same as the steady-state value, which
is the actual value that the system does obtain. The target value is
frequently referred to as the **reference value**, or the \"reference
function\" of the system. In essence, this is the value that we want the
system to produce. When we input a \"5\" into an elevator, we want the
output (the final position of the elevator) to be the fifth floor.
Pressing the \"5\" button is the reference input, and is the expected
value that we want to obtain. If we press the \"5\" button, and the
elevator goes to the third floor, then our elevator is poorly designed.
## Rise Time
**Rise time** is the amount of time that it takes for the system
response to reach the target value from an initial state of zero. Many
texts on the subject define the rise time as the time it takes for the
response to rise from 10% to 90% of the target value. This is because
some systems never rise all the way to 100% of the expected target value,
and such systems would otherwise have an infinite rise time. This book will
specify which convention to use for each individual problem. Rise time
is typically denoted *t~r~*, or *t~rise~*.
## Percent Overshoot
Underdamped systems frequently overshoot their target value initially.
This initial surge is known as the \"overshoot value\". The ratio of the
amount of overshoot to the target steady-state value of the system is
known as the **percent overshoot**. Percent overshoot represents an
overcompensation of the system, and can output dangerously large output
signals that can damage a system. Percent overshoot is typically denoted
with the term *PO*.
## Steady-State Error
Sometimes a system might never achieve the desired steady-state value,
but instead will settle on an output value that is not desired. The
difference between the steady-state output value to the reference input
value at steady state is called the **steady-state error** of the
system. We will use the variable *e~ss~* to denote the steady-state
error of the system.
## Settling Time
After the initial rise time of the system, some systems will oscillate
and vibrate for an amount of time before the system output settles on
the final value. The amount of time it takes to reach steady state after
the initial rise time is known as the **settling time**. Notice that
damped oscillating systems may never settle completely, so we will
define settling time as being the amount of time for the system to
reach, and stay in, a certain acceptable range. The acceptable range for
settling time is typically determined on a per-problem basis, although
common values are 20%, 10%, or 5% of the target value. The settling time
will be denoted as *t~s~*.
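A short sketch of how these metrics might be estimated from a simulated
step response, assuming a made-up second-order system, a 10% to 90% rise
time, and a 5% settling band; the transfer function and thresholds are
illustrative assumptions.

```python
import numpy as np
from scipy import signal

# Hypothetical underdamped second-order system: G(s) = 25 / (s^2 + 4s + 25)
sys = signal.TransferFunction([25], [1, 4, 25])
t, y = signal.step(sys)

target = y[-1]                                   # approximate steady-state value
overshoot_pct = 100.0 * (y.max() - target) / target

t10 = t[np.argmax(y >= 0.10 * target)]           # first crossing of 10% of target
t90 = t[np.argmax(y >= 0.90 * target)]           # first crossing of 90% of target
rise_time = t90 - t10

outside_band = np.abs(y - target) > 0.05 * target
settling_time = t[outside_band][-1] if outside_band.any() else 0.0

print(rise_time, overshoot_pct, settling_time)
```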
## System Order
The **order** of the system is defined by the number of independent
energy storage elements in the system, and intuitively by the highest
order of the linear differential equation that describes the system. In
a transfer function representation, the order is the highest exponent in
the transfer function. In a **proper system**, the system order is
defined as the degree of the denominator polynomial. In a state-space
equation, the system order is the number of state-variables used in the
system. The order of a system will frequently be denoted with an *n* or
*N*, although these variables are also used for other purposes. This
book will make clear distinction on the use of these variables.
### Proper Systems
A **proper system** is a system where the degree of the denominator is
larger than or equal to the degree of the numerator polynomial. A
**strictly proper system** is a system where the degree of the
denominator polynomial is larger than (but never equal to) the degree of
the numerator polynomial. A **biproper system** is a system where the
degree of the denominator polynomial equals the degree of the numerator
polynomial.
It is important to note that only proper systems can be physically
realized. In other words, a system that is not proper cannot be built.
It makes no sense to spend a lot of time designing and analyzing
imaginary systems.
### Example: System Order
In the above example, G(s) is a second-order transfer function because
in the denominator one of the s variables has an exponent of 2.
Second-order functions are the easiest to work with.
## System Type
Let\'s say that we have a process transfer function (or combination of
functions, such as a controller feeding in to a process), all in the
forward branch of a unity feedback loop. Say that the overall forward
branch transfer function is in the following generalized form (known as
**pole-zero form**):
$$G(s) = \frac {K \prod_i (s - s_i)}{s^M \prod_j (s - s_j)}$$
we call the parameter *M* the **system type**. Note that an increased
system type number corresponds to a larger number of poles at s = 0. More
poles at the origin generally have a beneficial effect on the system,
but they increase the order of the system, and make it increasingly
difficult to implement physically. System type will generally be denoted
with a letter like *N*, *M*, or *m*. Because these variables are
typically reused for other purposes, this book will make clear
distinction when they are employed.
Now, we will define a few terms that are commonly used when discussing
system type. These new terms are **Position Error**, **Velocity Error**,
and **Acceleration Error**. These names are throwbacks to physics terms
where acceleration is the derivative of velocity, and velocity is the
derivative of position. Note that none of these terms are meant to deal
with movement, however.
Position Error:The position error is characterized by the **position error constant** *$K_p$*, which determines the amount of steady-state error of the system when stimulated by a unit step input. We define the position error constant as follows:
$$K_p = \lim_{s \to 0} G(s)$$
: Where G(s) is the transfer function of our system.
Velocity Error:The velocity error is the amount of steady-state error when the system is stimulated with a ramp input. We define the **velocity error constant** as such:
$$K_v = \lim_{s \to 0} s G(s)$$
Acceleration Error:The acceleration error is the amount of steady-state error when the system is stimulated with a parabolic input. We define the **acceleration error constant** to be:
$$K_a = \lim_{s \to 0} s^2 G(s)$$
Now, this table will show briefly the relationship between the system
type, the kind of input (step, ramp, parabolic), and the steady-state
error of the system:

  Type, *M*   Step input *Au(t)*             Ramp input *Ar(t)*         Parabolic input *Ap(t)*
  ----------- ------------------------------ -------------------------- --------------------------
  0           $e_{ss} = \frac{A}{1 + K_p}$   $e_{ss} = \infty$          $e_{ss} = \infty$
  1           $e_{ss} = 0$                   $e_{ss} = \frac{A}{K_v}$   $e_{ss} = \infty$
  2           $e_{ss} = 0$                   $e_{ss} = 0$               $e_{ss} = \frac{A}{K_a}$
  \> 2        $e_{ss} = 0$                   $e_{ss} = 0$               $e_{ss} = 0$

  : Steady-state error $e_{ss}$ by system type for each unit system input

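As a quick check of the table above (using an assumed type-1 forward-path
transfer function, chosen only for illustration), let
$G(s) = \frac{K}{s(s + a)}$. Then

$$K_p = \lim_{s \to 0} G(s) = \infty, \qquad K_v = \lim_{s \to 0} s\,G(s) = \frac{K}{a},$$

so the steady-state error to a step input $Au(t)$ is
$\frac{A}{1 + K_p} = 0$, while the error to a ramp input $Ar(t)$ is
$\frac{A}{K_v} = \frac{Aa}{K}$, matching the type-1 row.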
### Z-Domain Type
Likewise, we can show that the system type can be found from the
following generalized transfer function in the *Z* domain:
$$G(z) = \frac {K \prod_i (z - z_i)}{(z - 1)^M \prod_j (z - z_j)}$$
Where the constant *M* is the **type** of the digital system. Now, we
will show how to find the various error constants in the Z-Domain:

  Error Constant   Equation
  ---------------- ---------------------------------------
  Kp               $K_p = \lim_{z \to 1} G(z)$
  Kv               $K_v = \lim_{z \to 1} (z - 1) G(z)$
  Ka               $K_a = \lim_{z \to 1} (z - 1)^2 G(z)$

## Visually
Here is an image of the various system metrics, acting on a system in
response to a step input:
![](System_Metrics_Diagram.svg "System_Metrics_Diagram.svg"){width="500"}
The target value is the value of the input step response. The rise time
is the time at which the waveform first reaches the target value. The
overshoot is the amount by which the waveform exceeds the target value.
The settling time is the time it takes for the system to settle into a
particular bounded region. This bounded region is denoted with two short
dotted lines above and below the target value.
|
# Control Systems/System Modeling
## The Control Process
It is the job of a control engineer to analyze existing systems, and to
design new systems to meet specific needs. Sometimes new systems need to
be designed, but more frequently a controller unit needs to be designed
to improve the performance of existing systems. When designing a system,
or implementing a controller to augment an existing system, we need to
follow some basic steps:
1. Model the system mathematically
2. Analyze the mathematical model
3. Design system/controller
4. Implement system/controller and test
The vast majority of this book is going to be focused on (2), the
analysis of the mathematical systems. This chapter alone will be devoted
to a discussion of the mathematical modeling of the systems.
## External Description
An **external description** of a system relates the system input to the
system output without explicitly taking into account the internal
workings of the system. The external description of a system is
sometimes also referred to as the **Input-Output Description** of the
system, because it only deals with the inputs and the outputs to the
system.
![](Time-Domain_Transfer_Block.svg "Time-Domain_Transfer_Block.svg")
Suppose the system can be represented by a mathematical function *h(t, r)*,
where *t* is the time that the output is observed, and *r* is the time
that the input is applied. We can relate the system function *h(t, r)*
to the input *x* and the output *y* through the use of an integral:
$$y(t) = \int_{-\infty}^\infty h(t, r)x(r)dr$$
This integral form holds for all linear systems, and every linear system
can be described by such an equation.
If a system is causal (i.e. an input at *t=r* affects system behaviour
only for $t \ge r$) and there is no input of the system before *t=0*, we
can change the limits of the integration:
$$y(t) = \int_0^t h(t, r)x(r)dr$$
### Time-Invariant Systems
If furthermore a system is time-invariant, we can rewrite the system
description equation as follows:
$$y(t) = \int_0^t h(t - r)x(r)dr$$
This equation is known as the **convolution integral**, and we will
discuss it more in the next chapter.
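A minimal numerical sketch of the convolution integral, assuming an
illustrative impulse response $h(t) = e^{-t}$ and a unit-step input; for
this pair the exact answer is $y(t) = 1 - e^{-t}$, so the approximation can
be checked directly.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                       # assumed impulse response h(t) = e^{-t}
x = np.ones_like(t)                  # unit-step input applied at t = 0

# Riemann-sum approximation of y(t) = integral_0^t h(t - r) x(r) dr
y = np.convolve(h, x)[:len(t)] * dt

print(np.max(np.abs(y - (1.0 - np.exp(-t)))))   # small numerical error vs. the exact result
```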
Every Linear Time-Invariant (LTI) system can be used with the **Laplace
Transform**, a powerful tool that allows us to convert an equation from
the time domain into the **S-Domain**, where many calculations are
easier. Time-variant systems cannot be used with the Laplace Transform.
## Internal Description
If a system is linear and lumped, it can also be described using a
system of equations known as **state-space equations**. In state-space
equations, we use the variable *x* to represent the internal state of
the system. We then use *u* as the system input, and we continue to use
*y* as the system output. We can write the state-space equations as
such:
$$x'(t) = A(t)x(t) + B(t)u(t)$$
$$y(t) = C(t)x(t) + D(t)u(t)$$
We will discuss the state-space equations more when we get to the
section on **modern controls**.
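A brief sketch of these equations in code, assuming a time-invariant
mass-spring-damper $m\ddot{x} + b\dot{x} + kx = u$ with illustrative
parameter values; the state vector holds position and velocity.

```python
import numpy as np
from scipy import signal

m, b, k = 1.0, 0.5, 2.0                  # illustrative mass, damping, stiffness

A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])         # state matrix (states: position, velocity)
B = np.array([[0.0], [1.0 / m]])         # input matrix (force input u)
C = np.array([[1.0, 0.0]])               # output matrix (observe position)
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, y = signal.step(sys)                  # step response of the state-space model
print(y[-1], 1.0 / k)                    # steady-state position approaches 1/k
```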
## Complex Descriptions
Systems which are LTI and Lumped can also be described using a
combination of the state-space equations, and the Laplace Transform. If
we take the Laplace Transform of the state equations that we listed
above, we can get a set of functions known as the **Transfer Matrix
Functions**. We will discuss these functions in a later chapter.
## Representations
To recap, we will prepare a table with the various system properties,
and the available methods for describing the system:
  Properties                             State-Space Equations   Laplace Transform   Transfer Matrix
  -------------------------------------- ----------------------- ------------------- -----------------
  Linear, Time-Variant, Distributed      no                      no                  no
  Linear, Time-Variant, Lumped           yes                     no                  no
  Linear, Time-Invariant, Distributed    no                      yes                 no
  Linear, Time-Invariant, Lumped         yes                     yes                 yes
We will discuss all these different types of system representation later
in the book.
## Analysis
Once a system is modeled using one of the representations listed above,
the system needs to be analyzed. We can determine the system metrics and
then we can compare those metrics to our specification. If our system
meets the specifications we are finished with the design process.
However if the system does not meet the specifications (as is typically
the case), then suitable controllers and compensators need to be
designed and added to the system.
Once the controllers and compensators have been designed, the job isn\'t
finished: we need to analyze the new composite system to ensure that the
controllers work properly. Also, we need to ensure that the systems are
stable: unstable systems can be dangerous.
### Frequency Domain
For proposals, early-stage designs, and quick turn-around analyses, a
frequency domain model is often superior to a time domain model.
Frequency domain models take disturbance PSDs (Power Spectral Densities)
directly, use transfer functions directly, and produce output or
residual PSDs directly. The result is a steady-state response. Often the
controller is driving the output toward 0, so the steady-state response
is also the residual error, which becomes the analysis output or
reported metric.
```{=html}
<div align="center">
```
Input Model Output
------- ------------------- --------
PSD Transfer Function PSD
: **Table 1: Frequency Domain Model Inputs and Outputs**
```{=html}
</div>
```
#### Brief Overview of the Math
Frequency domain modeling is a matter of determining the response of a
system to a random process.
!Figure 1: Frequency Domain
System{width="500"}
$$S_{YY}\left(\omega\right)=G^*\left(\omega\right)G\left(\omega\right)S_{XX}= \left | G\left(\omega\right)\right \vert^2 S_{XX}$$[^1]
where
$$S_{XX}\left(\omega\right)$$ is the one-sided input PSD in
$\frac{magnitude^2}{Hz}$
$$G\left(\omega\right)$$ is the frequency response function of the
system and
$$S_{YY}\left(\omega\right)$$ is the one-sided output PSD or auto power
spectral density function.
The frequency response function, $G\left(\omega\right)$, is related to
the impulse response function $g\left(\tau\right)$ by
$$g\left(\tau\right)=\frac{1}{2 \pi} \int_{-\infty}^{\infty}e^{i\omega \tau}G\left(\omega\right)\,d\omega$$
Note that some texts state that this is only valid for random processes
which are stationary; others require stationary and ergodic processes,
while still others require only weakly stationary processes. Some texts
do not distinguish between strictly stationary and weakly stationary at
all. In practice, the rule of thumb is: if the PSD of the input process
is the same from hour to hour and day to day, then the input PSD can be
used and the above equation is valid.
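As a minimal numerical sketch of this relationship (the frequency response and the input PSD level below are assumed example values, not data from any particular system), the output PSD can be computed point-by-point in MATLAB:

`f = logspace(-1, 2, 200);      % frequency vector in Hz (assumed range)`\
`w = 2*pi*f;                    % radial frequency`\
`G = 1 ./ (1i*w + 10);          % assumed frequency response G(w) = 1/(jw + 10)`\
`Sxx = 0.01 * ones(size(f));    % assumed flat (white) input PSD, magnitude^2/Hz`\
`Syy = abs(G).^2 .* Sxx;        % output PSD: |G(w)|^2 * Sxx`\
`loglog(f, Syy)                 % plot the output/residual PSD`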
#### Notes
```{=html}
<references />
```
See a full explanation with example at
ControlTheoryPro.com
## Modeling Examples
Modeling in Control Systems is oftentimes a matter of judgement. This
judgement is developed by creating models and learning from other
people\'s models. ControlTheoryPro.com is a site with a lot of examples.
Here are links to a few of them
- Hovering Helicopter
Example
- Reaction Torque Cancellation
Example
- List of all examples at
ControlTheoryPro.com
## Manufacture
Once the system has been properly designed we can prototype our system
and test it. Assuming our analysis was correct and our design is good,
the prototype should work as expected. Now we can move on to manufacture
and distribute our completed systems.
[^1]: Sun, Jian-Qiao (2006). *Stochastic Dynamics and Control, Volume
    4*. Amsterdam: Elsevier Science.
|
# Control Systems/Transforms
## Transforms
There are a number of transforms that we will be discussing throughout
this book, and the reader is assumed to have at least a small prior
knowledge of them. It is not the intention of this book to teach the
topic of transforms to an audience that has had no previous exposure to
them. However, we will include a brief refresher here to refamiliarize
people who may not remember the topic perfectly. If you do not know
what the **Laplace Transform** or the **Fourier Transform** are yet, it
is highly recommended that you use this page as a simple guide, and look
the information up on other sources. Specifically,
Wikipedia has lots of information on these subjects.
### Transform Basics
A **transform** is a mathematical tool that converts an equation from
one variable (or one set of variables) into a new variable (or a new set
of variables). To do this, the transform must remove all instances of
the first variable, the \"Domain Variable\", and add a new \"Range
Variable\". Integrals are excellent choices for transforms, because the
limits of the definite integral will be substituted into the domain
variable, and all instances of that variable will be removed from the
equation. An integral transform that converts from a domain variable *a*
to a range variable *b* will typically be formatted as such:
$$\mathcal{T}[f(a)] = F(b) = \int_C f(a)g(a,b)da$$
Where the function *f(a)* is the function being transformed, and
*g(a,b)* is known as the **kernel** of the transform. Typically, the
only difference between the various integral transforms is the kernel.
## Laplace Transform
The **Laplace Transform** converts an equation from the time-domain into
the so-called \"S-domain\", or the **Laplace domain**, or even the
\"Complex domain\". These are all different names for the same
mathematical space and they all may be used interchangeably in this book
and in other texts on the subject. The Transform can only be applied
under the following conditions:
1. The system or signal in question is analog.
2. The system or signal in question is Linear.
3. The system or signal in question is Time-Invariant.
4. The system or signal in question is causal.
The transform is defined as such:
$$\begin{matrix}F(s) = \mathcal{L}[f(t)] = \int_0^\infty f(t) e^{-st}dt\end{matrix}$$
Laplace transform results have been tabulated extensively. More
information on the Laplace transform, including a transform table can be
found in **the
Appendix**.
If we have a linear differential equation in the time domain:
$$\begin{matrix}y(t) = ax(t) + bx'(t) + cx''(t)\end{matrix}$$
With zero initial conditions, we can take the Laplace transform of the
equation as such:
$$\begin{matrix}Y(s) = aX(s) + bsX(s) + cs^2X(s)\end{matrix}$$
And separating, we get:
$$\begin{matrix}Y(s) = X(s)[a + bs + cs^2]\end{matrix}$$
### Inverse Laplace Transform
The **inverse Laplace Transform** is defined as such:
$$\begin{matrix}f(t)
= \mathcal{L}^{-1} \left\{F(s)\right\}
= {1 \over {2\pi i}}\int_{c-i\infty}^{c+i\infty} e^{st} F(s)\,ds\end{matrix}$$
The inverse transform converts a function from the Laplace domain back
into the time domain.
### Matrices and Vectors
The Laplace Transform can be used on systems of linear equations in an
intuitive way. Let\'s say that we have a system of linear equations:
$$\begin{matrix}y_1(t) = a_1x_1(t)\end{matrix}$$
$$\begin{matrix}y_2(t) = a_2x_2(t)\end{matrix}$$
We can arrange these equations into matrix form, as shown:
$$\begin{bmatrix}y_1(t) \\ y_2(t)\end{bmatrix} = \begin{bmatrix}a_1 & 0 \\ 0 & a_2\end{bmatrix}\begin{bmatrix}x_1(t) \\x_2(t)\end{bmatrix}$$
And write this symbolically as:
$$\mathbf{y}(t) = A\mathbf{x}(t)$$
We can take the Laplace transform of both sides:
$$\mathcal{L}[\mathbf{y}(t)] = \mathbf{Y}(s) = \mathcal{L}[A\mathbf{x}(t)] = A\mathcal{L}[\mathbf{x}(t)] = A\mathbf{X}(s)$$
Which is the same as taking the transform of each individual equation in
the system of equations.
### Example: RL Circuit
Here, we are going to show a common example of a first-order system, an
**RL Circuit**. In an inductor, the relationship between the current,
*I*, and the voltage, *V*, in the time domain is expressed as a
derivative:
$$V(t) = L\frac{dI(t)}{dt}$$
Where L is a special quantity called the \"Inductance\" that is a
property of inductors.
Treating the circuit as a voltage divider, with the output taken across
the inductor, we can write the output in terms of the input as:
$$V_{out}(t) = \frac{L\frac{dI(t)}{dt}}{RI(t) + L\frac{dI(t)}{dt}}V_{in}(t)$$
This is a very complicated equation, and will be difficult to solve
unless we employ the Laplace transform:
$$V_{out}(s) = \frac{Ls}{R + Ls}V_{in}(s)$$
We can divide top and bottom by L, and move V~in~ to the other side:
$$\frac{V_{out}}{V_{in}} = \frac{s}{\frac{R}{L} + s}$$
And using a simple table look-up, we can solve this for the time-domain
relationship between the circuit input and the circuit output:
$$\frac{V_{out}}{V_{in}} = \frac{d}{dt}e^{\left(\frac{-Rt}{L}\right)}u(t)$$
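As a quick check of this result (the component values below are assumed purely for illustration), the transfer function $V_{out}/V_{in} = \frac{s}{s + R/L}$ can be built and stepped in MATLAB:

`R = 100; L = 0.1;              % assumed resistance (ohms) and inductance (henries)`\
`sys = tf([1 0], [1 R/L]);      % V_out/V_in = s / (s + R/L)`\
`step(sys)                      % response decays as e^(-Rt/L), matching the result above`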
### Partial Fraction Expansion
Laplace transform pairs are extensively tabulated, but frequently we
have transfer functions and other equations that do not have a tabulated
inverse transform. If our equation is a fraction, we can often utilize
**Partial Fraction Expansion** (PFE) to create a set of simpler terms
that will have readily available inverse transforms. This section is
going to give a brief reminder about PFE, for those who have already
learned the topic. This refresher will be in the form of several
examples of the process, as it relates to the Laplace Transform. People
who are unfamiliar with PFE are encouraged to read more about it in
**Calculus**.
### Example: Second-Order System
### Example: Fourth-Order System
The relevant transform pair for repeated poles is
$$\frac{1}{(s + \alpha)^{n+1}} \to \frac{t^{n}}{n!}e^{-\alpha t} \cdot u(t)$$
We can plug in our values for *A*, *B*, *C*, and *D* into our expansion,
and try to convert it into the form above.
$$F(s)=\frac{A}{s}+\frac{B}{(s+10)^3}+\frac{C}{(s+10)^2}+\frac{D}{s+10}$$
$$F(s)=A\frac{1}{s}+B\frac{1}{(s+10)^3}+C\frac{1}{(s+10)^2}+D\frac{1}{s+10}$$
$$F(s)=1\frac{1}{s}+26\frac{1}{(s+10)^3}+69\frac{1}{(s+10)^2}-1\frac{1}{s+10}$$
$$f(t)=u(t)+13t^2e^{-10t}+69te^{-10t}-e^{-10t}$$
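The expansion coefficients can be checked numerically with MATLAB's `residue` function. Recombining the four terms above over a common denominator gives $F(s) = \frac{79s^2 + 916s + 1000}{s(s+10)^3}$; this recombination is our own arithmetic from the listed coefficients, so treat it as an assumption rather than part of the original example.

`num = [79 916 1000];                                    % 79s^2 + 916s + 1000`\
`den = conv([1 0], conv([1 10], conv([1 10], [1 10]))); % s(s+10)^3 = s^4 + 30s^3 + 300s^2 + 1000s`\
`[r, p, k] = residue(num, den)                           % residues r at the poles p`

Up to ordering, the returned residues match the coefficients 1, 26, 69 and -1 used in the expansion above.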
### Example: Complex Roots
### Example: Sixth-Order System
### Final Value Theorem
The **Final Value Theorem** allows us to determine the value of the time
domain equation, as the time approaches infinity, from the S domain
equation. In Control Engineering, the Final Value Theorem is used most
frequently to determine the steady-state value of a system. The theorem
is only valid if the real parts of the poles of $sX(s)$ are all less
than zero.
$$\lim_{t \to \infty}x(t) = \lim_{s \to 0} s X(s)$$
From our chapter on system metrics, you may recognize the value of the
system at time infinity as the steady-state value of the system. The
difference between the steady state value and the expected output value
we remember as being the steady-state error of the system. Using the
Final Value Theorem, we can find the steady-state value and the
steady-state error of the system in the Complex S domain.
### Example: Final Value Theorem
### Initial Value Theorem
Akin to the final value theorem, the **Initial Value Theorem** allows us
to determine the initial value of the system (the value at time zero)
from the S-Domain Equation. The initial value theorem is used most
frequently to determine the starting conditions, or the \"initial
conditions\" of a system.
$$x(0) = \lim_{s \to \infty} s X(s)$$
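Both theorems are easy to check with the Symbolic Math Toolbox; the transform $X(s)$ below is an assumed example, not one drawn from the text:

`syms s`\
`X = (s + 2) / (s*(s + 5));     % assumed example transform X(s)`\
`x_final = limit(s*X, s, 0)     % Final Value Theorem: x(t -> inf) = 2/5`\
`x_init  = limit(s*X, s, inf)   % Initial Value Theorem: x(0) = 1`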
### Common Transforms
We will now show you the transforms of the three functions we have
already learned about: The unit step, the unit ramp, and the unit
parabola. The transform of the unit step function is given by:
$$\mathcal{L}[u(t)] = \frac{1}{s}$$
And since the unit ramp is the integral of the unit step, we can
multiply the above result times *1/s* to get the transform of the unit
ramp:
$$\mathcal{L}[r(t)] = \frac{1}{s^2}$$
Again, we can multiply by *1/s* to get the transform of the unit
parabola:
$$\mathcal{L}[p(t)] = \frac{1}{s^3}$$
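These three pairs can be reproduced with the symbolic `laplace` function. Here we take $r(t) = t\,u(t)$ and $p(t) = \frac{1}{2}t^2\,u(t)$, the definitions implied by the transforms above:

`syms t`\
`laplace(heaviside(t))          % unit step     -> 1/s`\
`laplace(t)                     % unit ramp     -> 1/s^2`\
`laplace(t^2/2)                 % unit parabola -> 1/s^3`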
## Fourier Transform
The **Fourier Transform** is very similar to the Laplace transform. The
Fourier transform uses the assumption that any finite time-domain signal
can be broken into an infinite sum of sinusoidal (sine and cosine)
signals. Under this assumption, the Fourier Transform converts a
time-domain signal into its frequency-domain representation, as a
function of the radial frequency, ω. The Fourier Transform is defined as
such:
$$F(j\omega) = \mathcal{F}[f(t)] = \int_0^\infty f(t) e^{-j\omega t} dt$$
We can now show that the Fourier Transform is equivalent to the Laplace
transform, when the following condition is true:
$$\begin{matrix}s = j\omega\end{matrix}$$
Because the Laplace and Fourier Transforms are so closely related, it
does not make much sense to use both transforms for all problems. This
book, therefore, will concentrate on the Laplace transform for nearly
all subjects, except those problems that deal directly with frequency
values. For frequency problems, it makes life much easier to use the
Fourier Transform representation.
Like the Laplace Transform, the Fourier Transform has been extensively
tabulated. Properties of the Fourier transform, in addition to a table
of common transforms is available in **the
Appendix**.
### Inverse Fourier Transform
The **inverse Fourier Transform** is defined as follows:
$$f(t)
= \mathcal{F}^{-1}\left\{F(j\omega)\right\}
= \frac{1}{2\pi}\int_{-\infty}^\infty F(j\omega) e^{j\omega t} d\omega$$
This transform is nearly identical to the Fourier Transform.
## Complex Plane
![](S_Plane.svg "S_Plane.svg"){width="200"}
Using the above equivalence, we can show that the Laplace transform is
always equal to the Fourier Transform, if the variable *s* is an
imaginary number. However, the Laplace transform is different if *s* is
a real or a complex variable. As such, we generally define *s* to have
both a real part and an imaginary part, as such:
$$\begin{matrix}s = \sigma + j\omega\end{matrix}$$
And we can show that *s* = *j*ω if σ = 0.
Since the variable *s* can be broken down into 2 independent values, it
is frequently of some value to graph the variable *s* on its own special
\"S-plane\". The S-plane graphs the variable σ on the horizontal axis,
and the value of *j*ω on the vertical axis. This axis arrangement is
shown at right.
## Euler\'s Formula
There is an important result from calculus that is known as **Euler\'s
Formula**, or \"Euler\'s Relation\". This formula relates the
values of *e*, *j*, π, 1 and 0:
$$\begin{matrix}e^{j\pi} + 1 = 0\end{matrix}$$
However, this result is derived from the following equation, setting ω
to π:
$$\begin{matrix}e^{j\omega} = \cos(\omega) + j\sin(\omega)\end{matrix}$$
This formula will be used extensively in some of the chapters of this
book, so it is important to become familiar with it now.
## MATLAB
The MATLAB symbolic toolbox contains functions to compute the Laplace
and Fourier transforms automatically. The function **laplace**, and the
function **fourier** can be used to calculate the Laplace and Fourier
transforms of the input functions, respectively. For instance, the code:
`t = sym('t');`\
`fx = 30*t^2 + 20*t;`\
`laplace(fx)`
produces the output:
`ans =`\
\
`60/s^3+20/s^2`
We will discuss these functions more in The
Appendix.
## Further reading
- Digital Signal Processing/Continuous-Time Fourier
Transform
- Signals and Systems/Aperiodic
Signals
- Circuit Theory/Laplace
Transform
|
# Control Systems/Transfer Functions
## Transfer Functions
A **Transfer Function** is the ratio of the output of a system to the
input of a system, in the Laplace domain considering its initial
conditions and equilibrium point to be zero. This assumption is relaxed
for systems observing transience. If we have an input function of
*X(s)*, and an output function *Y(s)*, we define the transfer function
*H(s)* to be:
$$H(s) = {Y(s) \over X(s)}$$
Readers who have read the Circuit Theory
book will recognize the transfer function as being the impedance, the
admittance, the impedance ratio of a voltage divider, or the admittance
ratio of a current divider.
![](Laplace_Block.svg "Laplace_Block.svg"){width="400"}
## Impulse Response
For comparison, we will consider the time-domain equivalent to the above
input/output relationship. In the time domain, we generally denote the
input to a system as *x(t)*, and the output of the system as *y(t)*. The
relationship between the input and the output is denoted as the
**impulse response**, *h(t)*.
We define the impulse response as the output of the system when the
input is a unit impulse (defined below): if $x(t) = \delta(t)$, then
$y(t) = h(t)$. Note that in the time domain the input and output are
*not* related by simple multiplication or division by *h(t)*; as we will
see below, they are related through the convolution operation.
### Impulse Function
It would be handy at this point to define precisely what an \"impulse\"
is. The **Impulse Function**, denoted with δ*(t)* is a special function
defined piece-wise as follows:
$$\delta(t) = \left\{
\begin{matrix}
0, & t < 0
\\
\mbox{undefined}, & t = 0
\\
0, & t > 0
\end{matrix}\right.$$
The impulse function is also known as the **delta function** because
it\'s denoted with the Greek lower-case letter δ. The delta function is
typically graphed as an arrow towards infinity, as shown below:
![](Delta_Function.svg "Delta_Function.svg")
It is drawn as an arrow because it is difficult to show a single point
at infinity in any other graphing method. Notice how the arrow only
exists at location 0, and does not exist for any other time *t*. The
delta function works with regular time shifts just like any other
function. For instance, we can graph the function δ*(t - N)* by shifting
the function δ*(t)* to the right, as such:
![](DeltaN_Function.svg "DeltaN_Function.svg")
An examination of the impulse function will show that it is related to
the unit-step function as follows:
$$\delta(t) = \frac{du(t)}{dt}$$
and
$$u(t) = \int \delta(t) dt$$
The impulse function is not defined at point *t = 0*, but the impulse
must always satisfy the following condition, or else it is not a true
impulse function:
$$\int_{-\infty}^\infty \delta(t)dt = 1$$
The response of a system to an impulse input is called the **impulse
response**. Now, to get the Laplace Transform of the impulse function,
we take the derivative of the unit step function, which means we
multiply the transform of the unit step function by s:
$$\mathcal{L}[u(t)] = U(s) = \frac{1}{s}$$
$$\mathcal{L}[\delta(t)] = sU(s) = \frac{s}{s} = 1$$
This result can be verified in the transform tables in **The
Appendix**.
### Step Response
Similar to the impulse response, the **step response** of a system is
the output of the system when a unit step function is used as the input.
The step response is a common analysis tool used to determine certain
metrics about a system. Typically, when a new system is designed, the
step response of the system is the first characteristic of the system to
be analyzed.
## Convolution
However, the impulse response cannot be used to find the system output
from the system input in the same manner as the transfer function. If we
have the system input and the impulse response of the system, we can
calculate the system output using the **convolution operation** as such:
$$y(t) = h(t) * x(t)$$
Where \" \* \" (asterisk) denotes the convolution operation. Convolution
is a complicated combination of multiplication, integration and
time-shifting. We can define the convolution between two functions,
*a(t)* and *b(t)* as the following:
$$(a*b)(t) = (b*a)(t) = \int_{-\infty}^\infty a(\tau)b(t - \tau)d\tau$$
(The variable τ (Greek tau) is a dummy variable for integration). This
operation can be difficult to perform. Therefore, many people prefer to
use the Laplace Transform (or another transform) to convert the
convolution operation into a multiplication operation, through the
**Convolution Theorem**.
### Time-Invariant System Response
If the system in question is time-invariant, then the general
description of the system can be replaced by a convolution integral of
the system\'s impulse response and the system input. We can call this
the **convolution description** of a system, and define it below:
$$y(t) = x(t) * h(t) = \int_{-\infty}^\infty x(\tau)h(t - \tau)d\tau$$
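A direct numerical evaluation of this integral can be sketched in MATLAB with `conv`. The impulse response $h(t) = e^{-2t}u(t)$ below is an assumed example, and a unit step is used as the input:

`T = 0.001;  t = 0:T:5;         % time vector and step size`\
`h = exp(-2*t);                 % assumed impulse response h(t) = e^(-2t)`\
`x = ones(size(t));             % unit step input x(t) = u(t)`\
`y = T * conv(x, h);            % discrete approximation of the convolution integral`\
`y = y(1:length(t));            % keep only the first 5 seconds`\
`plot(t, y)                     % y(t) rises toward 1/2, the DC gain of H(s) = 1/(s+2)`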
## Convolution Theorem
This method of solving for the output of a system is quite tedious, and
in fact it can waste a large amount of time if you want to solve a
system for a variety of input signals. Luckily, the Laplace transform
has a special property, called the **Convolution Theorem**, that makes
the operation of convolution easier:
The Convolution Theorem can be expressed using the following equations:
$$\mathcal{L}[f(t) * g(t)] = F(s)G(s)$$
$$\mathcal{L}[f(t)g(t)] = F(s) * G(s)$$
This also serves as a good example of the property of **Duality**.
## Using the Transfer Function
The Transfer Function fully describes a control system. The Order, Type
and Frequency response can all be taken from this specific function.
Nyquist and Bode plots can be drawn from the open loop Transfer
Function. These plots show the stability of the system when the loop is
closed. Using the denominator of the transfer function, called the
characteristic equation, roots of the system can be derived.
For all these reasons and more, the Transfer function is an important
aspect of classical control systems. Let\'s start out with the
definition:
If the complex Laplace variable is *s*, then we generally denote the
transfer function of a system as either *G(s)* or *H(s)*. If the system
input is *X(s)*, and the system output is *Y(s)*, then the transfer
function can be defined as such:
$$H(s) = \frac{Y(s)}{X(s)}$$
If we know the input to a given system, and we have the transfer
function of the system, we can solve for the system output by
multiplying:
$$Y(s) = H(s)X(s)$$
### Example: Impulse Response
### Example: Step Response
### Example: MATLAB Step Response
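A minimal MATLAB sketch of a step response, using an assumed first-order plant rather than any specific example system:

`num = 1;  den = [1 2];         % assumed plant H(s) = 1/(s + 2)`\
`sys = tf(num, den);            % build the transfer function object`\
`step(sys)                      % plot y(t) for a unit step input x(t) = u(t)`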
## Frequency Response
The **Frequency Response** is similar to the Transfer function, except
that it is the relationship between the system output and input in the
complex Fourier Domain, not the Laplace domain. We can obtain the
frequency response from the transfer function, by using the following
change of variables:
$$s = j\omega$$
![](Fourier_Block.svg "Fourier_Block.svg"){width="400"}
Because the frequency response and the transfer function are so closely
related, typically only one is ever calculated, and the other is gained
by simple variable substitution. However, despite the close relationship
between the two representations, they are both useful individually, and
are each used for different purposes.
|
# Control Systems/Poles and Zeros
## Poles and Zeros
**Poles** and **Zeros** of a transfer function are the frequencies for
which the denominator and the numerator of the transfer function become
zero, respectively; at a pole the transfer function itself becomes
infinite, and at a zero it becomes zero. The values of the poles and the
zeros of a system determine whether the system is stable, and how well
the system performs. Control systems, in the most simple sense, can be
designed simply by assigning specific values to the poles and zeros of
the system.
Physically realizable control systems must have a number of poles
greater than or equal to the number of zeros. Systems that satisfy this
relationship
are called **Proper**. We will elaborate on this below.
## Time-Domain Relationships
Let\'s say that we have a transfer function with 3 poles:
$$H(s) = \frac{a}{(s - l)(s - m)(s - n)}$$
The poles are located at s = **l**, **m**, **n**. Now, we can use
partial fraction expansion to separate out the transfer function:
$$H(s) = \frac{a}{(s - l)(s - m)(s - n)} = \frac{A}{s-l} + \frac{B}{s-m} + \frac{C}{s-n}$$
Using the inverse transform on each of these component fractions
(looking up the transforms in our table), we get the following:
$$h(t) = Ae^{lt}u(t) + Be^{mt}u(t) + Ce^{nt}u(t)$$
But, since s is a complex variable, **l** **m** and **n** can all
potentially be complex numbers, with a real part (σ) and an imaginary
part (jω). If we just look at the first term:
$$Ae^{lt}u(t) = Ae^{(\sigma_l + j\omega_l)t}u(t) = Ae^{\sigma_l t}e^{j\omega_l t}u(t)$$
Using **Euler\'s Equation** on the
imaginary exponent, we get:
$$Ae^{\sigma_l t}[\cos(\omega_l t) + j\sin(\omega_l t)]u(t)$$
If a complex pole is present it is always accompanied by another pole
that is its complex conjugate. The imaginary parts of their time domain
representations thus cancel and we are left with 2 of the same real
parts. Assuming that the complex conjugate pole of the first term is
present, we can take 2 times the real part of this equation and we are
left with our final result:
$$2Ae^{\sigma_l t}\cos(\omega_l t)u(t)$$
We can see from this equation that every pole will have an exponential
part, and a sinusoidal part to its response. We can also go about
constructing some rules:
1. if σ~l~ = 0, the response of the pole is a pure sinusoid (an
    oscillator)
2. if ω~l~ = 0, the response of the pole is a pure exponential.
3. if σ~l~ \< 0, the exponential part of the response will decay
towards zero.
4. if σ~l~ \> 0, the exponential part of the response will rise towards
infinity.
From the last two rules, we can see that all poles of the system must
have negative real parts, and therefore they must all have the form (s +
l) for the system to be stable. We will discuss stability in later
chapters.
## What are Poles and Zeros
Let\'s say we have a transfer function defined as a ratio of two
polynomials:
$$H(s) = {N(s) \over D(s)}$$
Where *N(s)* and *D(s)* are simple polynomials. **Zeros** are the roots
of *N(s)* (the numerator of the transfer function) obtained by setting
*N(s) = 0* and solving for *s*.
**Poles** are the roots of *D(s)* (the denominator of the transfer
function), obtained by setting *D(s) = 0* and solving for *s*. Because
of our restriction above, that a transfer function must not have more
zeros than poles, we can state that the polynomial order of *D(s)* must
be greater than or equal to the polynomial order of *N(s)*.
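In MATLAB the zeros and poles can be found by taking the roots of *N(s)*
and *D(s)* directly; the polynomials below are assumed examples:

`N = [1 3];                     % assumed numerator   N(s) = s + 3`\
`D = [1 4 8];                   % assumed denominator D(s) = s^2 + 4s + 8`\
`z = roots(N)                   % zero at s = -3`\
`p = roots(D)                   % poles at s = -2 +/- 2i (negative real parts)`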
### Example
## Effects of Poles and Zeros
As *s* approaches a zero, the numerator of the transfer function (and
therefore the transfer function itself) approaches the value 0. When *s*
approaches a pole, the denominator of the transfer function approaches
zero, and the value of the transfer function approaches infinity. An
output value of infinity should raise an alarm bell for people who are
familiar with BIBO stability. We will discuss this later.
As we have seen above, the locations of the poles, and the values of the
real and imaginary parts of the pole determine the response of the
system. Real parts correspond to exponentials, and imaginary parts
correspond to sinusoidal values. Addition of poles to the transfer
function has the effect of pulling the root locus to the right, making
the system less stable. Addition of zeros to the transfer function has
the effect of pulling the root locus to the left, making the system more
stable.
## Second-Order Systems
The canonical form for a second order system is as follows:
$$H(s) = \frac{K\omega^2}{s^2 + 2\zeta\omega s + \omega^2}$$
Where K is the **system gain**, ζ is called the **damping ratio** of the
function, and ω is called the **natural frequency** of the system. If ζ
and ω are known exactly for a second-order system, the time response can
be easily plotted and stability can easily be checked. More information
on second order systems can be found
here.
### Damping Ratio
The **damping ratio** of a second-order system, denoted with the Greek
letter zeta (ζ), is a real number that defines the damping properties of
the system. More damping has the effect of less percent overshoot, and
slower settling time. Damping is the inherent ability of the system to
oppose the oscillatory nature of the system\'s transient response.
Larger values of the damping coefficient or damping factor produce
transient responses that are less oscillatory.
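The effect of the damping ratio is easy to visualize by sweeping ζ for an assumed gain and natural frequency and overlaying the step responses:

`K = 1;  wn = 2;                          % assumed gain and natural frequency`\
`hold on`\
`for zeta = [0.2 0.5 1.0]                 % under-, moderately-, and critically-damped`\
`    sys = tf(K*wn^2, [1 2*zeta*wn wn^2]); % canonical second-order form`\
`    step(sys)                            % smaller zeta -> more overshoot and oscillation`\
`end`\
`hold off`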
### Natural Frequency
The natural frequency is occasionally written with a subscript:
$$\omega \to \omega_n$$
We will omit the subscript when it is clear that we are talking about
the natural frequency, but we will include the subscript when we are
using other values for the variable ω. Also, $\omega = \omega_n$ when
$\zeta = 0$.
## Higher-Order Systems
|
# Control Systems/State-Space Equations
## Time-Domain Approach
The \"Classical\" method of controls (what we have been studying so far)
has been based mostly in the transform domain. When we want to control
the system in general, we represent it using the Laplace transform
(Z-Transform for digital systems) and when we want to examine the
frequency characteristics of a system we use the Fourier Transform. The
question arises, why do we do this?
Let\'s look at a basic second-order Laplace Transform transfer function:
$$\frac{Y(s)}{X(s)} = G(s) = \frac{1 + s}{1 + 2s + 5s^2}$$
We can decompose this equation in terms of the system inputs and
outputs:
$$(1 + 2s + 5 s^2)Y(s) = (1 + s)X(s)$$
Now, when we take the inverse Laplace transform of our equation, we can
see that:
$$y(t) + 2\frac{d y(t)}{dt} + 5\frac{d^2y(t)}{dt^2} = x(t) + \frac{dx(t)}{dt}$$
The Laplace transform is transforming the fact that we are dealing with
second-order differential equations. The Laplace transform moves a
system out of the time-domain into the complex frequency domain so we
can study and manipulate our systems as algebraic polynomials instead of
linear ODEs. Given the complexity of differential equations, why would
we ever want to work in the time domain?
It turns out that by decomposing our higher-order differential equations
into multiple first-order equations, we can find a new method for
easily manipulating the system *without having to use integral
transforms*. The solution to this problem is **state variables**. By
taking our multiple first-order differential equations and analyzing
them in vector form, we can not only do the same things we were doing in
the time domain using simple matrix algebra, but now we can easily
account for systems with multiple inputs and outputs without adding much
unnecessary complexity. This demonstrates why the \"modern\" state-space
approach to controls has become popular.
## State-Space
In a state-space system, the internal state of the system is explicitly
accounted for by an equation known as the **state equation**. The system
output is given in terms of a combination of the current system state,
and the current system input, through the **output equation**. These two
equations form a system of equations known collectively as **state-space
equations**. The state-space is the vector space that consists of all
the possible internal states of the system.
For a system to be modeled using the state-space method, the system must
meet this requirement:
1. **The system must be \"lumped\"**
\"Lumped\" in this context, means that we can find a
*finite*-dimensional state-space vector which fully characterises all
such internal states of the system.
This text mostly considers linear state-space systems where the state
and output equations satisfy the superposition principle. However, the
state-space approach is equally valid for nonlinear systems although
some specific methods are not applicable to nonlinear systems.
#### State
Central to the state-space notation is the idea of a **state**. A state
of a system is the current value of internal elements of the system
which change separately (but are not completely unrelated) to the output
of the system. In essence, the state of a system is an explicit account
of the values of the internal system components. Here are some examples:
## State Variables
When modeling a system using a state-space equation, we first need to
define three vectors:
Input variables: A SISO (Single-Input Single-Output) system will only have one input value, but a MIMO (Multiple-Input Multiple-Output) system may have multiple inputs. We need to define all the inputs to the system and arrange them into a vector.\
Output variables: This is the system output value, and in the case of MIMO systems we may have several. Output variables should be independent of one another, and only dependent on a linear combination of the input vector and the state vector.\
State Variables: The state variables represent values from inside the system that can change over time. In an electric circuit for instance, the node voltages or the mesh currents can be state variables. In a mechanical system, the forces applied by springs, gravity, and dashpots can be state variables.
We denote the input variables with *u*, the output variables with *y*,
and the state variables with *x*. In essence, we have the following
relationship:
$$y = f(x, u)$$
Where *f(x, u)* is our system. Also, the state variables can change with
respect to the current state and the system input:
$$x' = g(x, u)$$
Where *x\'* is the rate of change of the state variables. We will define
*f(u, x)* and *g(u, x)* in the next chapter.
## Multi-Input, Multi-Output
In the Laplace domain, if we want to account for systems with multiple
inputs and multiple outputs, we are going to need to rely on the
principle of superposition to create a system of simultaneous Laplace
equations for each input and output. For such systems, the classical
approach not only doesn\'t simplify the situation, but because the
systems of equations need to be transformed into the frequency domain
first, manipulated, and then transformed back into the time domain, they
can actually be more difficult to work with. However, the Laplace domain
technique can be combined with the State-Space techniques discussed in
the next few chapters to bring out the best features of both techniques.
We will discuss MIMO systems in the MIMO Systems
Chapter.
## State-Space Equations
In a state-space system representation, we have a system of two
equations: an equation for determining the state of the system, and
another equation for determining the output of the system. We will use
the variable *y(t)* as the output of the system, *x(t)* as the state of
the system, and *u(t)* as the input of the system. We use the notation
*x\'(t)* (note the prime) for the first derivative of the state vector
of the system, as dependent on the current state of the system and the
current input. Symbolically, we say that there are transforms **g** and
**h**, that display this relationship:
$$x'(t) = g[t_0, t, x(t), x(0), u(t)]$$
$$y(t) = h[t, x(t), u(t)]$$
The first equation shows that the system state change is dependent on
the previous system state, the initial state of the system, the time,
and the system inputs. The second equation shows that the system output
is dependent on the current system state, the system input, and the
current time.
If the system state change *x\'(t)* and the system output *y(t)* are
linear combinations of the system state and input vectors, then we can
say the systems are linear systems, and we can rewrite them in matrix
form:
$$x'(t) = A(t)x(t) + B(t)u(t)$$
$$y(t) = C(t)x(t) + D(t)u(t)$$
If the systems themselves are time-invariant, we can re-write this as
follows:
$$x'(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t)$$
The **State Equation** shows the relationship between the system\'s
current state and its input, and the future state of the system. The
**Output Equation** shows the relationship between the system state and
its input, and the output. These equations show that in a given system,
the current output is dependent on the current input and the current
state. The future state is also dependent on the current state and the
current input.
It is important to note at this point that the state space equations of
a particular system are not unique, and there are an infinite number of
ways to represent these equations by manipulating the *A*, *B*, *C* and
*D* matrices using row operations. There are a number of \"standard
forms\" for these matrices, however, that make certain computations
easier. Converting between these forms will require knowledge of linear
algebra.
### Matrices: A B C D
Our system has the form:
$$\mathbf{x}'(t) = \mathbf{g}[t_0, t, \mathbf{x}(t), x(0), \mathbf{u}(t)]$$
$$\mathbf{y}(t) = \mathbf{h}[t, \mathbf{x}(t), \mathbf{u}(t)]$$
We\'ve bolded several quantities to try and reinforce the fact that they
can be vectors, not just scalar quantities. If these systems are
time-invariant, we can simplify them by removing the time variables:
$$\mathbf{x}'(t) = \mathbf{g}[\mathbf{x}(t), x(0), \mathbf{u}(t)]$$
$$\mathbf{y}(t) = \mathbf{h}[\mathbf{x}(t), \mathbf{u}(t)]$$
Now, if we take the partial derivatives of these functions with respect
to the input and the state vector at time *t~0~*, we get our system
matrices:
$$A = \mathbf{g}_x[x(0), x(0), u(0)]$$
$$B = \mathbf{g}_u[x(0), x(0), u(0)]$$
$$C = \mathbf{h}_x[x(0), u(0)]$$
$$D = \mathbf{h}_u[x(0), u(0)]$$
In our time-invariant state space equations, we write these matrices and
their relationships as:
$$x'(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t)$$
We have four constant matrices: *A*, *B*, *C*, and *D*. We will explain
these matrices below:
Matrix A:Matrix *A* is the **system matrix**, and relates how the current state affects the state change *x\'*. If the state change is not dependent on the current state, *A* will be the zero matrix. The exponential of the state matrix, *e^At^* is called the **state transition matrix**, and is an important function that we will describe below.\
Matrix B:Matrix *B* is the **control matrix**, and determines how the system input affects the state change. If the state change is not dependent on the system input, then *B* will be the zero matrix.\
Matrix C:Matrix *C* is the **output matrix**, and determines the relationship between the system state and the system output.\
Matrix D:Matrix *D* is the **feed-forward matrix**, and allows for the system input to affect the system output directly. A basic feedback system like those we have previously considered do not have a feed-forward element, and therefore for most of the systems we have already considered, the *D* matrix is the zero matrix.
### Matrix Dimensions
Because we are adding and multiplying multiple matrices and vectors
together, we need to be absolutely certain that the matrices have
compatible dimensions, or else the equations will be undefined. For
integer values *p*, *q*, and *r*, the dimensions of the system matrices
and vectors are defined as follows:
  Vectors              Matrices
  -------------------- --------------------
  $x: p \times 1$      $A: p \times p$
  $x': p \times 1$     $B: p \times q$
  $u: q \times 1$      $C: r \times p$
  $y: r \times 1$      $D: r \times q$
If the matrix and vector dimensions do not agree with one another, the
equations are invalid and the results will be meaningless. Matrices and
vectors must have compatible dimensions or they cannot be combined using
matrix operations.
For the rest of the book, we will be using the small template on the
right as a reminder about the matrix dimensions, so that we can keep a
constant notation throughout the book.
### Notational Shorthand
The state equations and the output equations of systems can be expressed
in terms of matrices *A*, *B*, *C*, and *D*. Because the form of these
equations is always the same, we can use an ordered quadruplet to denote
a system. We can use the shorthand *(A, B, C, D)* to denote a complete
state-space representation. Also, because the state equation is very
important for our later analysis, we can write an ordered pair *(A, B)*
to refer to the state equation:
$$(A, B) \to x' = Ax + Bu$$
$$(A, B, C, D) \to \left\{\begin{matrix}x' = Ax + Bu \\ y = Cx + Du \end{matrix}\right.$$
## Obtaining the State-Space Equations
The beauty of state equations, is that they can be used to transparently
describe systems that are both continuous and discrete in nature. Some
texts will differentiate notation between discrete and continuous cases,
but this text will not make such a distinction. Instead we will opt to
use the generic coefficient matrices *A*, *B*, *C* and *D* for both
continuous and discrete systems. Occasionally this book may employ the
subscript *C* to denote a continuous-time version of the matrix, and the
subscript *D* to denote the discrete-time version of the same matrix.
Other texts may use the letters *F*, *H*, and *G* for continuous systems
and *Γ*, and *Θ* for use in discrete systems. However, if we keep track
of our time-domain system, we don\'t need to worry about such notations.
### From Differential Equations
### From Transfer Functions
The method of obtaining the state-space equations from the Laplace
domain transfer functions are very similar to the method of obtaining
them from the time-domain differential equations. We call the process of
converting a system description from the Laplace domain to the
state-space domain **realization**. We will discuss realization in more
detail in a later chapter. In general, let\'s say that we have a
transfer function of the form:
$$T(s) = \frac{s^m+a_{m-1}s^{m-1} +\cdots+a_0}{s^n+b_{n-1}s^{n-1}+\cdots+b_0}$$
We can write our *A*, *B*, *C*, and *D* matrices as follows:
$$A = \begin{bmatrix}0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots &\vdots &\vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-b_0 & -b_1 & -b_2 & \cdots & -b_{n-1}
\end{bmatrix}$$
$$B = \begin{bmatrix}0 \\ 0 \\ \vdots \\1\end{bmatrix}$$
$$C = \begin{bmatrix}a_0 & a_1 & \cdots & a_{m-1}\end{bmatrix}$$
$$D = 0$$
This form of the equations is known as the **controllable canonical
form** of the system matrices, and we will discuss this later.
Notice that to perform this method, the denominator and numerator
polynomials must be *monic*; that is, the coefficient of the
highest-order term must be 1. If the coefficient of the highest-order
term is not 1, you must divide your equation by that coefficient to make
it 1.
## State-Space Representation
As an important note, remember that the state variables *x* are
user-defined and therefore are arbitrary. There are any number of ways
to define *x* for a particular problem, each of which are going to lead
to different state space equations.
Consider the previous continuous-time example. We can rewrite the
equation in the form
$$\frac{d}{dt}\left[\frac{d^2y(t)}{dt^2} + a_2\frac{dy(t)}{dt} + a_1y(t)\right] + a_0y(t)=u(t)$$.
We now define the state variables
$$x_1 = y(t)$$
$$x_2 = \frac{dy(t)}{dt}$$
$$x_3 = \frac{d^2y(t)}{dt^2} + a_2\frac{dy(t)}{dt} + a_1y(t)$$
with first-order derivatives
$$x_1' = \frac{dy(t)}{dt} = x_2$$
$$x_2' = \frac{d^2y(t)}{dt^2} = - a_1x_1 - a_2x_2 + x_3$$ (which follows
directly from the definition of $x_3$ above)
$$x_3' = -a_0y(t) + u(t) = -a_0x_1 + u(t)$$
The state-space equations for the system will then be given by
$$x' = \begin{bmatrix}
0 & 1 & 0 \\
-a_1 & -a_2 & 1 \\
-a_0 & 0 & 0
\end{bmatrix} x(t) +
\begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix} u(t)$$
$$y(t) = \begin{bmatrix}
1 & 0 & 0
\end{bmatrix} x(t)$$
*x* may also be used in any number of variable transformations, as a
matter of mathematical convenience. However, the variables *y* and *u*
correspond to physical signals, and may not be arbitrarily selected,
redefined, or transformed as *x* can be.
### Example: Dummy Variables
## Discretization
If we have a system *(A, B, C, D)* that is defined in continuous time,
we can **discretize** the system so that an equivalent process can be
performed using a digital computer. We can use the definition of the
derivative, as such:
$$x'(t) = \lim_{T\to 0} \frac{x(t + T) - x(t)}{T}$$
And substituting this into the state equation with some approximation
(and ignoring the limit for now) gives us:
$$\lim_{T\to 0} \frac{x(t + T) - x(t)}{T} = Ax(t) + Bu(t)$$
$$x(t + T) = x(t) + Ax(t)T + Bu(t)T$$
$$x(t + T) = (I + AT)x(t) + (BT)u(t)$$
where *I* is the identity matrix.
We are able to remove that limit because in a discrete system, the time
interval between samples is positive and non-negligible. By definition,
a discrete system is only defined at certain time points, and not at all
time points as the limit would have indicated. In a discrete system, we
are interested only in the value of the system at discrete points. If
those points are evenly spaced by every *T* seconds (the sampling time),
then the samples of the system occur at *t = kT*, where *k* is an
integer. Substituting *kT* for *t* into our equation above gives us:
$$x(kT + T) = (I + AT)x(kT) + TBu(kT)$$
Or, using the square-bracket shorthand that we\'ve developed earlier, we
can write:
$$x[k+1] = (I + AT)x[k] + TBu[k]$$
In this form, the state-space system can be implemented quite easily
into a digital computer system using software, not complicated analog
hardware. We will discuss this relationship and digital systems more
specifically in a later chapter.
We will write out the discrete-time state-space equations as:
$$x[n+1] = A_dx[n] + B_du[n]$$
$$y[n] = C_dx[n] + D_du[n]$$
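For comparison, MATLAB's `c2d` function performs an exact zero-order-hold discretization, whereas the *(I + AT)* expression above is the simpler forward-Euler approximation. The matrices and sampling time below are an assumed example:

`A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;   % assumed continuous-time system`\
`T = 0.1;                                             % assumed sampling time in seconds`\
`sysd = c2d(ss(A, B, C, D), T, 'zoh');                % exact discretization: Ad = e^(AT)`\
`Ad_euler = eye(2) + A*T                              % forward-Euler approximation (I + AT)`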
## Note on Notations
The variable *T* is a common variable in control systems, especially
when talking about the beginning and end points of a continuous-time
system, or when discussing the sampling time of a digital system.
However, another common use of the letter *T* is to signify the
transpose operation on a matrix. To alleviate this ambiguity, we will
denote the transpose of a matrix with a *prime*:
$$A^T \to A'$$
Where *A\'* is the transpose of matrix *A*.
The prime notation is also frequently used to denote the
time-derivative. Most of the matrices that we will be talking about are
time-invariant; there is no ambiguity because we will never take the
time derivative of a time-invariant matrix. However, for a time-variant
matrix we will use the following notations to distinguish between the
time-derivative and the transpose:
$$A(t)'$$ the transpose.
$$A'(t)$$ the time-derivative.
Note that certain variables which are time-variant are not written with
the *(t)* postscript, such as the variables *x*, *y*, and *u*. For these
variables, the default behavior of the prime is the time-derivative,
such as in the state equation. If the transpose needs to be taken of one
of these vectors, the *(t)\'* postfix will be added explicitly to
correspond to our notation above.
For instances where we need to use the Hermitian transpose, we will use
the notation:
$$A^H$$
This notation is common in other literature, and raises no obvious
ambiguities here.
## MATLAB Representation
State-space systems can be represented in MATLAB using the 4 system
matrices, A, B, C, and D. We can create a system data structure using
the **ss** function:
`sys = ss(A, B, C, D);`
Systems created in this way can be manipulated in the same way that the
transfer function descriptions (described earlier) can be manipulated.
To convert a transfer function to a state-space representation, we can
use the **tf2ss** function:
`[A, B, C, D] = tf2ss(num, den);`
And to perform the opposite operation, we can use the **ss2tf**
function:
`[num, den] = ss2tf(A, B, C, D);`
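A short usage sketch with assumed example matrices:

`A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;   % assumed example system`\
`sys = ss(A, B, C, D);                                % state-space object`\
`[num, den] = ss2tf(A, B, C, D)                       % gives H(s) = 1/(s^2 + 3s + 2)`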
|
# Control Systems/Linear System Solutions
## State Equation Solutions
The state equation is a first-order linear differential equation, or
(more precisely) a system of linear differential equations. Because this
is a first-order equation, we can use results from Ordinary
Differential Equations to
find a general solution to the equation in terms of the state-variable
*x*. Once the state equation has been solved for *x*, that solution can
be plugged into the output equation. The resulting equation will show
the direct relationship between the system input and the system output,
without the need to account explicitly for the internal state of the
system. The sections in this chapter will discuss the solutions to the
state-space equations, starting with the easiest case (Time-invariant,
no input), and ending with the most difficult case (Time-variant
systems).
## Solving for x(t) With Zero Input
Looking again at the state equation:
$$x' = Ax(t) + Bu(t)$$
We can see that this equation is a first-order differential equation,
except that the variables are vectors, and the coefficients are
matrices. However, because of the rules of matrix calculus, these
distinctions don\'t matter. We can ignore the input term (for now), and
rewrite this equation in the following form:
$$\frac{dx(t)}{dt} = Ax(t)$$
And we can separate out the variables as such:
$$\frac{dx(t)}{x(t)} = A dt$$
Integrating both sides, and raising both sides to a power of *e*, we
obtain the result:
$$x(t) = e^{At+C}$$
Where *C* is a constant. We can assign *D = e^C^* to make the equation
easier, but we also know that *D* will then be the initial conditions of
the system. This becomes obvious if we plug the value zero into the
variable *t*. The final solution to this equation then is given as:
$$x(t) = e^{A(t-t_0)}x(t_0)$$
We call the matrix exponential *e^At^* the **state-transition matrix**,
and calculating it, while difficult at times, is crucial to analyzing
and manipulating systems. We will talk more about calculating the matrix
exponential below.
## Solving for x(t) With Non-Zero Input
If, however, our input is non-zero (as is generally the case with any
interesting system), our solution is a little bit more complicated.
Notice that now that we have our input term in the equation, we will no
longer be able to separate the variables and integrate both sides
easily.
$$x'(t) = Ax(t) + Bu(t)$$
We subtract to get the $Ax(t)$ on the left side, and then we do
something curious; we premultiply both sides by the inverse state
transition matrix:
$$e^{-At}x'(t) - e^{-At}Ax(t) = e^{-At}Bu(t)$$
The rationale for this last step may seem fuzzy at best, so we will
illustrate the point with an example:
### Example
Using the result from our example, we can condense the left side of our
equation into a derivative:
$$\frac{d(e^{-At}x(t))}{dt} = e^{-At}Bu(t)$$
Now we can integrate both sides, from the initial time (*t~0~*) to the
current time (*t*), using a dummy variable τ, we will get closer to our
result. Finally, if we premultiply by e^At^, we get our final result:
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t}e^{A(t - \tau)}Bu(\tau)d\tau$$
If we plug this solution into the output equation, we get:
$$y(t) = Ce^{A(t-t_0)}x(t_0) + C\int_{t_0}^{t}e^{A(t - \tau)}Bu(\tau)d\tau + Du(t)$$
This is the general Time-Invariant solution to the state space
equations, with non-zero input. These equations are important results,
and students who are interested in a further study of control systems
would do well to memorize these equations.
## State-Transition Matrix
The state transition matrix, *e^At^*, is an important part of the
general state-space solutions for the time-invariant cases listed above.
Calculating this matrix exponential function is one of the very first
things that should be done when analyzing a new system, and the results
of that calculation will tell important information about the system in
question.
The matrix exponential can be calculated directly by using a
Taylor-Series expansion:
$$e^{At} = \sum_{n=0}^\infty \frac{(At)^n}{n!}$$
Also, we can attempt to diagonalize the matrix A into a **diagonal
matrix** or a **Jordan Canonical matrix**. The exponential of a diagonal
matrix is simply the diagonal elements individually raised to that
exponential. The exponential of a Jordan canonical matrix is slightly
more complicated, but there is a useful pattern that can be exploited to
find the solution quickly. Interested readers should read the relevant
passages in Engineering Analysis.
The state transition matrix, and matrix exponentials in general are very
important tools in control engineering.
### Diagonal Matrices
If a matrix is diagonal, the state transition matrix can be calculated
by exponentiating each diagonal entry individually: a diagonal entry λ
becomes *e^λt^*.
### Jordan Canonical Form
If the A matrix is in the Jordan Canonical form, then the matrix
exponential can be generated quickly using the following formula:
$$e^{Jt} = e^{\lambda t} \begin{bmatrix} 1 & t & \frac{1}{2!}t^2 & \cdots & \frac{1}{n!}t^n \\0 & 1 & t & \cdots & \frac{1}{(n-1)!}t^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1\end{bmatrix}$$
Where *λ* is the eigenvalue (the value on the diagonal) of the
jordan-canonical matrix.
### Inverse Laplace Method
We can calculate the state-transition matrix (or any matrix exponential
function) by taking the following inverse Laplace transform:
$$e^{At} = \mathcal{L}^{-1}[(sI - A)^{-1}]$$
If A is a high-order matrix, this inverse can be difficult to solve.
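Both the series/diagonalization route and the inverse Laplace route can be carried out with the Symbolic Math Toolbox; the matrix below is an assumed example:

`syms t s`\
`A = [0 1; -2 -3];                    % assumed system matrix, eigenvalues -1 and -2`\
`Phi = expm(A*t)                      % state-transition matrix e^(At), computed symbolically`\
`Phi2 = ilaplace(inv(s*eye(2) - A))   % same matrix via the inverse Laplace method`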
### Spectral Decomposition
If we know all the eigenvalues of *A*, we can create our transition
matrix *T* and our inverse transition matrix *T^-1^*. These matrices
will be the matrices of the right and left eigenvectors, respectively.
If we have both the left and the right eigenvectors, we can calculate
the state-transition matrix as:
$$e^{At} = \sum_{i = 1}^ne^{\lambda_i t} v_i w_i'$$
Note that *w~i~\'* is the transpose of the *i*th left-eigenvector, not
the derivative of it. We will discuss the concepts of \"eigenvalues\",
\"eigenvectors\", and the technique of spectral decomposition in more
detail in a later chapter.
### Cayley-Hamilton Theorem
The **Cayley-Hamilton Theorem** can also be used to find a solution for
a matrix exponential. For any eigenvalue of the system matrix *A*, *λ*,
we can show that the two equations are equivalent:
$$e^{\lambda t} = a_0 + a_1 \lambda t + a_2 \lambda^2t^2 + \cdots + a_{n-1}\lambda^{n-1}t^{n-1}$$
Once we solve for the coefficients of the equation, *a*, we can then
plug those coefficients into the following equation:
$$e^{At} = a_0I + a_1 A t + a_2 A^2 t^2 + \cdots + a_{n-1} A^{n-1} t^{n-1}$$
### Example: Off-Diagonal Matrix
### Example: Sympy Calculation
### Example: MATLAB Calculation
### Example: Multiple Methods in MATLAB
|
# Control Systems/Time Variant System Solutions
## General Time Variant Solution
The state-space equations can be solved for time-variant systems, but
the solution is significantly more complicated than the time-invariant
case. Our time-variant state equation is given as follows:
$$x'(t) = A(t)x(t) + B(t)u(t)$$
We can say that the general solution to time-variant state-equation is
defined as:
$$x(t) = \phi(t, t_0)x(t_0) + \int_{t_0}^{t} \phi(t,\tau)B(\tau)u(\tau)d\tau$$
The function $\phi$ is called the **state-transition matrix**, because
it (like the matrix exponential from the time-invariant case) controls
the change for states in the state equation. However, unlike the
time-invariant case, we cannot define this as a simple exponential. In
fact, $\phi$ can\'t be defined in general, because it will actually be a
different function for every system. However, the state-transition
matrix does follow some basic properties that we can use to determine
the state-transition matrix.
In a time-variant system, the general solution is obtained when the
state-transition matrix is determined. For that reason, the first thing
(and the most important thing) that we need to do here is find that
matrix. We will discuss the solution to that matrix below.
### State Transition Matrix
The state transition matrix $\phi$ is not completely unknown, it must
always satisfy the following relationships:
$$\frac{\partial \phi(t, t_0)}{\partial t} = A(t)\phi(t, t_0)$$
$$\phi(\tau, \tau) = I$$
And $\phi$ also must have the following properties:
1. $\phi(t_2, t_1)\phi(t_1, t_0) = \phi(t_2, t_0)$
2. $\phi^{-1}(t, \tau) = \phi(\tau, t)$
3. $\phi^{-1}(t, \tau)\phi(t, \tau) = I$
4. $\left.\frac{\partial\phi(t, t_0)}{\partial t}\right|_{t = t_0} = A(t_0)$
If the system is time-invariant, we can define $\phi$ as:
$$\phi(t, t_0) = e^{A(t - t_0)}$$
The reader can verify that this solution for a time-invariant system
satisfies all the properties listed above. However, in the time-variant
case, there are many different functions that may satisfy these
requirements, and the solution is dependent on the structure of the
system. The state-transition matrix must be determined before analysis
on the time-varying solution can continue. We will discuss some of the
methods for determining this matrix below.
## Time-Variant, Zero Input
As the most basic case, we will consider the case of a system with zero
input. If the system has no input, then the state equation is given as:
$$x'(t) = A(t)x(t)$$
And we are interested in the response of this system in the time
interval T = (a, b). The first thing we want to do in this case is find
a **fundamental matrix** of the above equation. The fundamental matrix
is related to the state-transition matrix, as we will show below.
### Fundamental Matrix
Given the equation:
$$x'(t) = A(t)x(t)$$
The solutions to this equation form an *n*-dimensional vector space in
the interval T = (a, b). Any set of *n* linearly-independent solutions
{x~1~, x~2~, \..., x~n~} to the equation above is called a **fundamental
set** of solutions.
A **fundamental matrix** (FM) is formed by creating a matrix out of the
*n* fundamental vectors. We will denote the fundamental matrix with a
script capital X:
$$\mathcal{X} = \begin{bmatrix}x_1 & x_2 & \cdots & x_n\end{bmatrix}$$
The fundamental matrix will satisfy the state equation:
$$\mathcal{X}'(t) = A(t)\mathcal{X}(t)$$
Also, *any matrix that solves this equation can be a fundamental matrix*
if and only if the determinant of the matrix is non-zero for all time
*t* in the interval T. The determinant must be non-zero, because we are
going to use the inverse of the fundamental matrix to solve for the
state-transition matrix.
### State Transition Matrix
Once we have the fundamental matrix of a system, we can use it to find
the state transition matrix of the system:
$$\phi(t, t_0) = \mathcal{X}(t)\mathcal{X}^{-1}(t_0)$$
The inverse of the fundamental matrix exists, because we specify in the
definition above that it must have a non-zero determinant, and therefore
must be non-singular. The reader should note that this is only one
possible method for determining the state transition matrix, and we will
discuss other methods below.
### Example: 2-Dimensional System
The fundamental matrix and its inverse are:
$$\mathcal{X}(t) = \begin{bmatrix} e^{-t} & \frac{1}{2}e^{t} \\ 0 & e^{-t}\end{bmatrix}, \qquad
\mathcal{X}^{-1}(t) = \begin{bmatrix} e^{t} & -\frac{1}{2}e^{3t} \\ 0 & e^{t}\end{bmatrix}$$
The state-transition matrix is given by:
$$\phi(t, t_0) = \mathcal{X}(t)\mathcal{X}^{-1}(t_0) = \begin{bmatrix}e^{-t} & \frac{1}{2} e^{t} \\ 0 & e^{-t}\end{bmatrix} \begin{bmatrix} e^{t_0} & -\frac{1}{2}\,e^{3t_0}\\0 & e^{t_0}\end{bmatrix}$$
$$\phi(t, t_0) = \begin{bmatrix} e^{-t + t_0} & \frac{1}{2}(e^{t + t_0} - e^{-t + 3t_0}) \\ 0 & e^{-t+t_0}\end{bmatrix}$$
### Other Methods
There are other methods for finding the state transition matrix besides
having to find the fundamental matrix.
Method 1:If A(t) is triangular (upper or lower triangular), the state transition matrix can be determined by sequentially integrating the individual rows of the state equation.
Method 2:If for every τ and t, the state matrix commutes as follows:
:
: $A(t)\left[\int_{\tau}^{t}A(\zeta)d\zeta\right]=\left[\int_{\tau}^{t}A(\zeta)d\zeta\right]A(t)$
: Then the state-transition matrix can be given as:
$$\phi(t, \tau) = e^{\int_\tau^tA(\zeta)d\zeta}$$
: The state matrix *A(t)* will commute with its integral, as described
above, if any of the following conditions are true:
1. A is a constant matrix (time-invariant)
2. A is a diagonal matrix
3. If $A = \bar{A}f(t)$, where $\bar{A}$ is a constant matrix, and
f(t) is a scalar-valued function (not a matrix).
: If none of the above conditions are true, then you must use **method
3**.
Method 3:If A(t) can be decomposed as the following sum:
:
: $A(t) = \sum_{i = 1}^n M_i f_i(t)$
: Where *M*~i~ is a constant matrix such that M~i~M~j~ = M~j~M~i~, and
*f*~i~ is a scalar-valued function. If A(t) can be decomposed in
this way, then the state-transition matrix can be given as:
$$\phi(t, \tau) = \prod_{i=1}^n e^{M_i \int_\tau^t f_i(\theta)d\theta}$$
It will be left as an exercise for the reader to prove that, if A(t) is
time-invariant, the equation in **method 2** above reduces to the
state-transition matrix $e^{A(t-\tau)}$.
### Example: Using Method 3
$$\phi(t, \tau) = e^{\begin{bmatrix}\frac{1}{2}(t^2-\tau^2) & 0 \\ 0 & \frac{1}{2}(t^2-\tau^2)\end{bmatrix}} e^{\begin{bmatrix}0 & t-\tau \\ -t+\tau & 0\end{bmatrix}}$$
The first term is a diagonal matrix, and the solution to that matrix
function is all the individual elements of the matrix raised as an
exponent of *e*. The second term can be decomposed as:
$$e^{\begin{bmatrix}0 & t-\tau \\ -t+\tau & 0\end{bmatrix}} = e^{\begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}(t-\tau)} = \begin{bmatrix}\cos(t-\tau) & \sin(t-\tau)\\ -\sin(t-\tau) & \cos(t-\tau)\end{bmatrix}$$
The final solution is given as:
$$\phi(t, \tau) = \begin{bmatrix}e^{\frac{1}{2}(t^2-\tau^2)} & 0 \\ 0 & e^{\frac{1}{2}(t^2-\tau^2)}\end{bmatrix}\begin{bmatrix}\cos(t-\tau) & \sin(t-\tau)\\ -\sin(t-\tau) & \cos(t-\tau)\end{bmatrix} = \begin{bmatrix}e^{\frac{1}{2}(t^2-\tau^2)}\cos(t-\tau) & e^{\frac{1}{2}(t^2-\tau^2)}\sin(t-\tau)\\ -e^{\frac{1}{2}(t^2-\tau^2)}\sin(t-\tau) & e^{\frac{1}{2}(t^2-\tau^2)}\cos(t-\tau)\end{bmatrix}$$
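As a numerical cross-check (a sketch, assuming the underlying state matrix is $A(t) = \begin{bmatrix}t & 1 \\ -1 & t\end{bmatrix}$, which is consistent with the decomposition used above), the Method 3 result can be compared in MATLAB against direct integration of the state equation:

`tau = 0;  t_f = 1;                         % illustrative start and end times`\
`A = @(t) [t 1; -1 t];                      % state matrix consistent with the decomposition above`\
`phi = expm(eye(2)*(t_f^2 - tau^2)/2) * expm([0 1; -1 0]*(t_f - tau));   % Method 3 product`\
`x0 = [1; 2];                               % arbitrary state at time tau`\
`[~, x] = ode45(@(t, x) A(t)*x, [tau t_f], x0);   % integrate x'(t) = A(t)x(t) directly`\
`norm(phi*x0 - x(end,:)')                   % should be small (limited by the solver tolerance)`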
## Time-Variant, Non-zero Input
If the input to the system is not zero, it turns out that all the
analysis that we performed above still holds. We can still construct the
fundamental matrix, and we can still represent the system solution in
terms of the state transition matrix $\phi$.
We can show that the general solution to the state-space equations is
actually the solution:
$$x(t) = \phi(t, t_0)x(t_0) + \int_{t_0}^{t} \phi(t,\tau)B(\tau)u(\tau)d\tau$$
|
# Control Systems/Digital State Space
## Digital Systems
Digital systems, expressed previously as difference equations or
Z-Transform transfer functions, can also be used with the state-space
representation. All the same techniques for dealing with analog systems
can be applied to digital systems with only minor changes.
For digital systems, we can write similar equations using discrete data
sets:
$$x[k + 1] = Ax[k] + Bu[k]$$
$$y[k] = Cx[k] + Du[k]$$
### Zero-Order Hold Derivation
If we have a continuous-time state equation:
$$x'(t) = Ax(t) + Bu(t)$$
We can derive the digital version of this equation that we discussed
above. We take the Laplace transform of our equation:
$$X(s) = (sI - A)^{-1}Bu(s) + (sI - A)^{-1}x(0)$$
Now, taking the inverse Laplace transform gives us our time-domain
system, keeping in mind that the inverse Laplace transform of the *(sI -
A)^-1^* term is our state-transition matrix, Φ:
$$x(t) = \mathcal{L}^{-1}(X(s)) = \Phi(t - t_0)x(0) + \int_{t_0}^t\Phi(t - \tau)Bu(\tau)d\tau$$
Now, we apply a zero-order hold on our input, to make the system
digital. Notice that we set our start time *t~0~ = kT*, because we are
only interested in the behavior of our system during a single sample
period:
$$u(t) = u(kT), kT \le t \le (k+1)T$$
$$x(t) = \Phi(t, kT)x(kT) + \int_{kT}^t \Phi(t, \tau)Bd\tau u(kT)$$
We were able to remove *u(kT)* from the integral because it did not rely
on τ. We now define a new function, Γ, as follows:
$$\Gamma(t, t_0) = \int_{t_0}^t \Phi(t, \tau)Bd\tau$$
Inserting this new expression into our equation, and setting *t = (k +
1)T* gives us:
$$x((k + 1)T) = \Phi((k+1)T, kT)x(kT) + \Gamma((k+1)T, kT)u(kT)$$
Now Φ(T) and Γ(T) are constant matrices, and we can give them new names.
The *d* subscript denotes that they are digital versions of the
coefficient matrices:
$$A_d = \Phi((k+1)T, kT)$$
$$B_d = \Gamma((k+1)T, kT)$$
We can use these values in our state equation, converting to our bracket
notation instead:
$$x[k + 1] = A_dx[k] + B_du[k]$$
## Relating Continuous and Discrete Systems
Continuous and discrete systems that perform similarly can be related
together through a set of relationships. It should come as no surprise
that a discrete system and a continuous system will have different
characteristics and different coefficient matrices. If we consider that
a discrete system is the same as a continuous system, except that it is
sampled with a sampling time T, then the relationships below will hold.
The process of converting an analog system for use with digital hardware
is called **discretization**. We\'ve given a basic introduction to
discretization already, but we will discuss it in more detail here.
### Discrete Coefficient Matrices
Of primary importance in discretization is the computation of the
associated coefficient matrices from the continuous-time counterparts.
If we have the continuous system *(A, B, C, D)*, we can use the
relationship *t = kT* to transform the state-space solution into a
sampled system:
$$x(kT) = e^{AkT}x(0) + \int_0^{kT} e^{A(kT - \tau)}Bu(\tau)d\tau$$
$$x[k] = e^{AkT}x[0] + \int_0^{kT} e^{A(kT - \tau)}Bu(\tau)d\tau$$
Now, if we want to analyze the *k+1* term, we can solve the equation
again:
$$x[k+1] = e^{A(k+1)T}x[0] + \int_0^{(k+1)T} e^{A((k+1)T - \tau)}Bu(\tau)d\tau$$
Separating out the variables, and breaking the integral into two parts
gives us:
$$x[k+1] = e^{AT}e^{AkT}x[0] + \int_0^{kT}e^{AT}e^{A(kT - \tau)}Bu(\tau)d\tau + \int_{kT}^{(k+1)T} e^{A(kT + T - \tau)}Bu(\tau)d\tau$$
If we substitute the new variable *α = (k + 1)T - τ* into the last
integral, note that *u(τ) = u\[k\]* over that interval because of the
zero-order hold, and group the first two terms using the expression for
*x\[k\]* above:
$$e^{AT}\left(e^{AkT}x[0] + \int_0^{kT}e^{A(kT - \tau)}Bu(\tau)d\tau\right) = e^{AT}x[k]$$
We get our final result:
$$x[k+1] = e^{AT}x[k] + \left(\int_0^T e^{A\alpha}d\alpha\right)Bu[k]$$
Comparing this equation to our regular solution gives us a set of
relationships for converting the continuous-time system into a
discrete-time system. Here, we will use \"d\" subscripts to denote the
system matrices of a discrete system, and we will use a \"c\" subscript
to denote the system matrices of a continuous system.
- $A_d = e^{A_cT}$
- $B_d = \int_0^Te^{A_c\tau}d\tau B_c$
- $C_d = C_c$
- $D_d = D_c$
If the A~c~ matrix is nonsingular, then we can find its inverse and
instead define B~d~ as:
$$B_d = A_c^{-1}(A_d - I)B_c$$
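These relationships are easy to check against MATLAB's built-in zero-order-hold discretization (a sketch assuming the Control System Toolbox; the continuous system and sample time below are arbitrary illustrative values):

`Ac = [0 1; -2 -3];  Bc = [0; 1];  T = 0.1; % hypothetical continuous system and sample time`\
`Ad = expm(Ac*T);                           % A_d = e^(Ac*T)`\
`Bd = Ac \ (Ad - eye(2)) * Bc;              % valid here because Ac is nonsingular`\
`sysd = c2d(ss(Ac, Bc, [1 0], 0), T, 'zoh');% MATLAB's zero-order-hold discretization`\
`norm(Ad - sysd.A), norm(Bd - sysd.B)       % both should be numerically zero`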
The differences in the discrete and continuous matrices are due to the
fact that the underlying equations that describe our systems are
different. Continuous-time systems are represented by linear
differential equations, while the digital systems are described by
difference equations. High order terms in a difference equation are
delayed copies of the signals, while high order terms in the
differential equations are derivatives of the analog signal.
If we have a complicated analog system, and we would like to implement
that system in a digital computer, we can use the above transformations
to make our matrices conform to the new paradigm.
### Notation
Because the coefficient matrices for the discrete systems are computed
differently from the continuous-time coefficient matrices, and because
the matrices technically represent different things, it is not uncommon
in the literature to denote these matrices with different variables. For
instance, the following variables are used in place of *A* and *B*
frequently:
$$\Omega = A_d$$
$$R = B_d$$
These substitutions would give us a system defined by the ordered
quadruple *(Ω, R, C, D)* for representing our equations.
As a matter of notational convenience, we will use the letters *A* and
*B* to represent these matrices throughout the rest of this book.
## Converting Difference Equations
## Solving for x\[n\]
We can find a general time-invariant solution for the discrete time
difference equations. Let us start working up a pattern. We know the
discrete state equation:
$$x[n+1] = Ax[n] + Bu[n]$$
Starting from time *n = 0*, we can start to create a pattern:
$$x[1] = Ax[0] + Bu[0]$$
$$x[2] = Ax[1] + Bu[1] = A^2x[0] + ABu[0] + Bu[1]$$
$$x[3] = Ax[2] + Bu[2] = A^3x[0] + A^2Bu[0] + ABu[1] + Bu[2]$$
With a little algebraic trickery, we can reduce this pattern to a single
equation:
$$x[n] = A^nx[n_0] + \sum_{m=0}^{n-1}A^{n-1-m}Bu[m]$$
Substituting this result into the output equation gives us:
$$y[n] = CA^nx[n_0] + \sum_{m=0}^{n-1}CA^{n-1-m}Bu[m] + Du[n]$$
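The closed-form sum can be checked against a direct recursion of the state equation; a minimal sketch with arbitrary matrices and a step input:

`A = [0.9 0.1; 0 0.8];  B = [0; 1];         % hypothetical discrete system`\
`x0 = [1; 0];  N = 10;  u = ones(1, N);     % initial state and a step input sequence`\
`x = x0;`\
`for n = 1:N                                % iterate x[n+1] = A x[n] + B u[n]`\
`    x = A*x + B*u(n);`\
`end`\
`xN = A^N * x0;                             % zero-input term A^n x[0]`\
`for m = 0:N-1`\
`    xN = xN + A^(N-1-m) * B * u(m+1);      % convolution-sum term`\
`end`\
`norm(x - xN)                               % should be numerically zero`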
## Time Variant Solutions
If the system is time-variant, we have a general solution that is
similar to the continuous-time case:
$$x[n] = \phi[n, n_0]x[n_0] + \sum_{m = n_0}^{n-1} \phi[n, m+1]B[m]u[m]$$
$$y[n] = C[n]\phi[n, n_0]x[n_0] + C[n]\sum_{m = n_0}^{n-1} \phi[n, m+1]B[m]u[m] + D[n]u[n]$$
Where φ, the **state transition matrix**, is defined in a similar manner
to the state-transition matrix in the continuous case. However, some of
the properties in the discrete time are different. For instance, the
inverse of the state-transition matrix does not need to exist, and in
many systems it does not exist.
### State Transition Matrix
The discrete time state transition matrix is the unique solution of the
equation:
$$\phi[k+1, k_0] = A[k] \phi[k, k_0]$$
Where the following restriction must hold:
$$\phi[k_0, k_0] = I$$
From this definition, an obvious way to calculate this state transition
matrix presents itself:
$$\phi[k, k_0] = A[k - 1]A[k-2]A[k-3]\cdots A[k_0]$$
Or,
$$\phi[k, k_0] = \prod_{m = 1}^{k-k_0}A[k-m]$$
## MATLAB Calculations
MATLAB is a computer program, and therefore calculates all systems using
digital methods. The MATLAB function **lsim** is used to simulate a
continuous system with a specified input. This function works by calling
the **c2d** function, which converts a system *(A, B, C, D)* into an equivalent
discrete system. Once the system model is discretized, the function
passes control to the **dlsim** function, which is used to simulate
discrete-time systems with the specified input.
Because of this, simulation programs like MATLAB are subjected to
round-off errors associated with the discretization process.
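For reference, a typical **lsim** call looks like the following (a sketch assuming the Control System Toolbox; the system and input are arbitrary illustrative values):

`sysc = ss([0 1; -2 -3], [0; 1], [1 0], 0); % hypothetical continuous state-space system`\
`t = 0:0.01:5;  u = ones(size(t));          % step input over five seconds`\
`y = lsim(sysc, u, t);                      % internally the system is discretized before simulation`\
`plot(t, y)`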
|
# Control Systems/Eigenvalues and Eigenvectors
## Eigenvalues and Eigenvectors
The eigenvalues and eigenvectors of the system matrix play a key role in
determining the response of the system. It is important to note that
only square matrices have eigenvalues and eigenvectors associated with
them. Non-square matrices cannot be analyzed using the methods below.
The word \"eigen\" comes from German and means \"own\" as in
\"characteristic\", so this chapter could also be called
\"Characteristic values and characteristic vectors\". The terms
\"Eigenvalues\" and \"Eigenvectors\" are most commonly used. Eigenvalues
and Eigenvectors have a number of properties that make them valuable
tools in analysis, and they also have a number of valuable relationships
with the matrix from which they are derived. Computing the eigenvalues
and the eigenvectors of the system matrix is one of the most important
things that should be done when beginning to analyze a system matrix,
second only to calculating the matrix exponential of the system matrix.
The eigenvalues and eigenvectors of the system determine the
relationship between the individual system state variables (the members
of the *x* vector), the response of the system to inputs, and the
stability of the system. Also, the eigenvalues and eigenvectors can be
used to calculate the matrix exponential of the system matrix through
spectral decomposition. The remainder of this chapter will discuss
eigenvalues, eigenvectors, and the ways that they affect their
respective systems.
## Characteristic Equation
The characteristic equation of the system matrix A is given as:
$$Av = \lambda v$$
Where λ are scalar values called the **eigenvalues**, and *v* are the
corresponding **eigenvectors**. To solve for the eigenvalues of a
matrix, we can take the following determinant:
$$|A - \lambda I| = 0$$
To solve for the eigenvectors, we can then add an additional term, and
solve for *v*:
$$(A - \lambda I)v = 0$$
Another value worth finding are the **left eigenvectors** of a system,
defined as *w* in the modified characteristic equation:
$$wA = \lambda w$$
For more information about eigenvalues, eigenvectors, and left
eigenvectors, read the appropriate sections in the following books:
- Linear Algebra
- Engineering Analysis
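In MATLAB, the right eigenvectors come directly from **eig**, and, for a diagonalizable matrix, the rows of the inverse eigenvector matrix serve as left eigenvectors; the matrix below is an arbitrary illustration:

`A = [0 1; -2 -3];                          % hypothetical system matrix`\
`[V, D] = eig(A);                           % columns of V are right eigenvectors, D holds the eigenvalues`\
`W = inv(V);                                % rows of inv(V) are left eigenvectors, in the same order as D`\
`norm(W(1,:)*A - D(1,1)*W(1,:))             % should be numerically zero: w A = lambda w`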
### Diagonalization
If the matrix *A* has a complete set of distinct eigenvalues, the matrix
can be **diagonalized**. A diagonal matrix is a matrix that only has
entries on the diagonal, and all the rest of the entries in the matrix
are zero. We can define a **transformation matrix**, *T*, that satisfies
the diagonalization transformation:
$$A = TDT^{-1}$$
Which in turn will satisfy the relationship:
$$e^{At} = Te^{Dt}T^{-1}$$
The right-hand side of the equation may look more complicated, but
because *D* is a diagonal matrix here (not to be confused with the
feed-forward matrix from the output equation), the calculations are much
easier.
We can define the transition matrix, and the inverse transition matrix
in terms of the eigenvectors and the left eigenvectors:
$$T = \begin{bmatrix} v_1 & v_2 & v_3 & \cdots & v_n\end{bmatrix}$$
$$T^{-1} = \begin{bmatrix} w_1' \\w_2' \\ w_3' \\\vdots \\ w_n'\end{bmatrix}$$
We will further discuss the concept of diagonalization later in this
chapter.
## Exponential Matrix Decomposition
A matrix exponential can be decomposed into a sum of the eigenvectors,
eigenvalues, and left eigenvectors, as follows:
$$e^{At} = \sum_{i = 1}^n e^{\lambda_i t}v_i w_i'$$
Notice that this equation only holds in this form if the matrix A has a
complete set of n distinct eigenvalues. Since w\'~i~ is a row vector,
and x(t~0~) is a column vector of the initial system states, we can combine
those two into a scalar coefficient α~i~ = *w~i~\'*x(t~0~):
$$e^{At} x(t_0) = \sum_{i = 1}^n \alpha_i e^{\lambda_i t} v_i$$
Since the state transition matrix determines how the system responds to
an input, we can see that the system eigenvalues and eigenvectors are a
key part of the system response. Let us plug this decomposition into the
general solution to the state equation:
$$x(t) = \sum_{i = 1}^n \alpha_i e^{\lambda_i t} v_i + \sum_{i = 1}^n \int_0^t e^{\lambda_i (t-\tau)}v_i w_i' Bu(\tau) d\tau$$
We will talk about this equation in the following sections.
### State Relationship
As we can see from the above equation, the individual elements of the
state vector *x(t)* cannot take arbitrary values, but they are instead
related by weighted sums of multiples of the system's right-eigenvectors.
### Decoupling
If a system can be designed such that the following relationship holds
true:
$$w_i'B = 0$$
then the system response from that particular eigenvalue will not be
affected by the system input *u*, and we say that the system has been
**decoupled**. Such a thing is difficult to do in practice.
### Condition Number
With every matrix there is associated a particular number called the
**condition number** of that matrix. The condition number tells a number
of things about a matrix, and it is worth calculating. The condition
number, *k*, is defined as:
$$k = \frac{\|w_i\|\|v_i\|}{|w_i'v_i|}$$
Systems with smaller condition numbers are better, for a number of
reasons:
1. Large condition numbers lead to a large transient response of the
system
2. Large condition numbers make the system eigenvalues more sensitive
to changes in the system.
We will discuss the issue of **eigenvalue sensitivity** more in a later
section.
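MATLAB provides the **condeig** function for computing the condition number associated with each eigenvalue; a minimal sketch with an arbitrary matrix:

`A = [0 1; -2 -3];                          % hypothetical system matrix`\
`[V, D, s] = condeig(A)                     % s(i) is the condition number of the i-th eigenvalue`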
### Stability
We will talk about stability at length in later chapters, but this is a good
time to point out a simple fact concerning the eigenvalues of the
system. Notice that if the eigenvalues of the system matrix A are
*positive*, or (if they are complex) have positive real parts, then
the system state (and therefore the system output, scaled by the C
matrix) will approach infinity as time *t* approaches infinity. In
essence, if the eigenvalues are positive, the system will not satisfy
the condition of BIBO stability, and will therefore become *unstable*.
Another factor that is worth mentioning is that a manufactured system
*never exactly matches the system model*, and there will always be
inaccuracies in the specifications of the component parts used, *within
a certain tolerance*. As such, the system matrix will be slightly
different from the mathematical model of the system (although good
systems will not be severely different), and therefore the eigenvalues
and eigenvectors of the system will not be the same values as those
derived from the model. These facts give rise to several results:
1. Systems with high *condition numbers* may have eigenvalues that
differ by a large amount from those derived from the mathematical
model. This means that the system response of the physical system
may be very different from the intended response of the model.
2. Systems with high condition numbers may become *unstable* simply as
a result of inaccuracies in the component parts used in the
manufacturing process.
For those reasons, the system eigenvalues and the condition number of
the system matrix are highly important variables to consider when
analyzing and designing a system. We will discuss the topic of stability
in more detail in later chapters.
## Non-Unique Eigenvalues
The decomposition above only works if the matrix *A* has a full set of n
distinct eigenvalues (and corresponding eigenvectors). If *A* does not
have *n* distinct eigenvectors, then a set of **generalized
eigenvectors** need to be determined. The generalized eigenvectors will
produce a similar matrix that is in **Jordan canonical form**, not the
diagonal form we were using earlier.
### Generalized Eigenvectors
Generalized eigenvectors can be generated using the following equation:
$$(A - \lambda I) v_{n+1} = v_n$$
If *d* is the number of times that a given eigenvalue is repeated, and
*p* is the number of unique eigenvectors derived from those eigenvalues,
then there will be *q = d - p* generalized eigenvectors. Generalized
eigenvectors are developed by plugging in the regular eigenvectors into
the equation above (*v~n~*). Some regular eigenvectors might not produce
any non-trivial generalized eigenvectors. Generalized eigenvectors may
also be plugged into the equation above to produce additional
generalized eigenvectors. It is important to note that the generalized
eigenvectors form an ordered series, and they must be kept in order
during analysis or the results will not be correct.
### Example: One Repeated Set
### Example: Two Repeated Sets
### Jordan Canonical Form
If a matrix has a complete set of distinct eigenvectors, the transition
matrix *T* can be defined as the matrix of those eigenvectors, and the
resultant transformed matrix will be a diagonal matrix. However, if the
eigenvectors are not unique, and there are a number of generalized
eigenvectors associated with the matrix, the transition matrix *T* will
consist of the ordered set of the regular eigenvectors and generalized
eigenvectors. The regular eigenvectors that did not produce any
generalized eigenvectors (if any) should be first in the order, followed
by the eigenvectors that did produce generalized eigenvectors, and the
generalized eigenvectors that they produced (in appropriate sequence).
Once the *T* matrix has been produced, the matrix *A* can be transformed by
it and its inverse:
$$A = TJT^{-1}$$
The *J* matrix will be a **Jordan block matrix**. The format of the
Jordan block matrix will be as follows:
$$J = \begin{bmatrix}
D & 0 & \cdots & 0 \\
0 & J_1 & \cdots & 0 \\
\vdots & \vdots &\ddots & \vdots \\
0 & 0 & \cdots & J_n
\end{bmatrix}$$
Where *D* is the diagonal block produced by the regular eigenvectors
that are not associated with generalized eigenvectors (if any). The
*J~n~* blocks are standard Jordan blocks with a size corresponding to
the number of eigenvectors/generalized eigenvectors in each sequence. In
each *J~n~* block, the eigenvalue associated with the regular
eigenvector of the sequence is on the main diagonal, and there are 1\'s
on the super-diagonal.
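If the Symbolic Math Toolbox is available, MATLAB's **jordan** function produces both the transition matrix and the Jordan block matrix; the matrix below is an arbitrary example with a repeated eigenvalue:

`A = [5 4; -1 1];                           % hypothetical matrix with eigenvalue 3 repeated twice`\
`[T, J] = jordan(A)                         % requires the Symbolic Math Toolbox; J is the Jordan form, A = T*J*inv(T)`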
### System Response
## Equivalence Transformations
If we have a non-singular *n × n* matrix *P*, we can define a
transformed vector \"x bar\" as:
$$\bar{x} = Px$$
We can transform the entire state-space equation set as follows:
$$\bar{x}'(t) = \bar{A}\bar{x}(t) + \bar{B}u(t)$$
$$\bar{y}(t) = \bar{C}\bar{x}(t) + \bar{D}u(t)$$
Where:
- $\bar{A} = PAP^{-1}$
- $\bar{B} = PB$
- $\bar{C} = CP^{-1}$
- $\bar{D} = D$
We call the matrix *P* the **equivalence transformation** between the
two sets of equations.
It is important to note that the **eigenvalues** of the matrix *A*
(which are of primary importance to the system) do not change under the
equivalence transformation. The eigenvectors of *A*, and the
eigenvectors of $\bar{A}$ are related by the matrix *P*.
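A quick MATLAB check of this invariance, using an arbitrary system matrix and an arbitrary nonsingular *P*:

`A = [0 1; -2 -3];  P = [2 1; 1 1];         % hypothetical system matrix and nonsingular transformation`\
`Abar = P*A/P;                              % A-bar = P A P^-1`\
`sort(eig(A)), sort(eig(Abar))              % the two sets of eigenvalues are identical`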
### Lyapunov Transformations
The transformation matrix *P* is called a **Lyapunov Transformation** if
the following conditions hold:
- *P(t)* is nonsingular.
- *P(t)* and *P\'(t)* are continuous
- *P(t)* and the inverse transformation matrix *P^-1^(t)* are finite
for all *t*.
If a system is time-variant, it can frequently be useful to use a
Lyapunov transformation to convert the system to an equivalent system
with a constant *A* matrix. This is not always possible in general;
however, it is possible if the *A(t)* matrix is periodic.
### System Diagonalization
If the *A* matrix is time-invariant, we can construct the matrix *V*
from the eigenvectors of *A*. The *V* matrix can be used to transform
the *A* matrix to a diagonal matrix. Our new system becomes:
$$Vx'(t) = VAV^{-1}Vx(t) + VBu(t)$$
$$y(t) = CV^{-1}Vx(t) + Du(t)$$
Since our new system matrix $VAV^{-1}$ is diagonal (or in Jordan canonical form), the
calculation of the state-transition matrix is simplified:
$$VAV^{-1} = \Lambda$$
Where Λ is a diagonal matrix of the system eigenvalues, so the matrix exponential $e^{\Lambda t}$ is simply a diagonal matrix of scalar exponentials.
### MATLAB Transformations
The MATLAB function **ss2ss** can be used to apply an equivalence
transformation to a system. If we have a set of matrices *A*, *B*, *C*
and *D*, we can create equivalent matrices as such:
`[Ap, Bp, Cp, Dp] = ss2ss(A, B, C, D, p);`
Where *p* is the equivalence transformation matrix.
|
# Control Systems/Standard Forms
## Companion Form
A **companion form** contains the coefficients of a corresponding
characteristic polynomial along one of its far rows or columns. For
example, one companion form matrix is:
$$\begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & -a_0 \\
1 & 0 & 0 & \cdots & 0 & -a_1 \\
0 & 1 & 0 & \cdots & 0 & -a_2 \\
0 & 0 & 1 & \cdots & 0 & -a_3 \\
\vdots & \vdots & \vdots &\ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & -a_{n-1}
\end{bmatrix}$$
and another is:
$$\begin{bmatrix} -a_{n-1} & -a_{n-2} & -a_{n-3} & \cdots & -a_1 & -a_0 \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots &\ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix}$$
Two companion forms are convenient to use in control theory, namely the
observable canonical form and the controllable canonical form. These two
forms are roughly transposes of each other (just as observability and
controllability are dual ideas). When placed in one of these forms, the
design of controllers or observers is simplified because the structure
of the system is made apparent (and is easily modified with the desired
control).
### Observable Canonical Form
**Observable-Canonical Form** is helpful in several cases, especially
for designing observers.
The observable-canonical form is as follows:
$$A = \begin{bmatrix} -a_1 & 1 & 0 & \cdots & 0 \\
-a_2 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-a_{n-1} & 0 & 0 & \cdots & 1 \\
-a_n & 0 & 0 & \cdots & 0
\end{bmatrix}$$
$$B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$
$$C = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}$$
### Controllable Canonical Form
**Controllable-Canonical Form** is helpful in many cases, especially for
designing controllers when the full state of the system is known.
The controllable-canonical form is as follows:
$$A = \begin{bmatrix} -a_1 & -a_2 & -a_3 & \cdots & -a_{n-1} & -a_n \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots &\ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix}$$
$$B = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
$$C = \begin{bmatrix} b_1 & b_2 & b_3 & \cdots & b_n \end{bmatrix}$$
$$D = \begin{bmatrix} b_0 \end{bmatrix}$$
If we have two spaces, space *v* which is the original space of the
system (*A*, *B*, *C*, and *D*), then we can transform our system into
the *w* space which is in controllable-canonical form (*A~w~*, *B~w~*,
*C~w~*, *D~w~*) using a transformation matrix *T~w~*. We define this
transformation matrix as:
$$T = \zeta_v \zeta_w^{-1}$$
Where ζ is the controllability matrix.
Notice that we know beforehand *A~w~* and *B~w~*, since we know both the
form of the matrices and the coefficients of the equation (e.g. a linear
ODE with constant coefficients or a transfer function).
We can form ζ~w~ if we know these two matrices. We can then use this
matrix to create our transformation matrix.
We will discuss the controllable canonical form later when discussing
state feedback and closed-loop systems.
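A minimal MATLAB sketch of this construction (assuming the Control System Toolbox and an arbitrary controllable second-order system) is shown below; **ctrb** builds the controllability matrices ζ:

`A = [0 1; -2 -3];  B = [0; 1];             % hypothetical original (v-space) system`\
`a = poly(A);                               % characteristic polynomial coefficients [1 a1 a2]`\
`Aw = [-a(2) -a(3); 1 0];  Bw = [1; 0];     % controllable canonical form built from those coefficients`\
`T = ctrb(A, B) / ctrb(Aw, Bw);             % T = zeta_v * zeta_w^(-1)`\
`norm(T*Aw/T - A), norm(T*Bw - B)           % both should be numerically zero`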
### Phase Variable Form
The **Phase Variable Form** is obtained simply by renumbering the phase
variables in the opposite order of the controllable canonical form.
Thus:
$$A_c = \begin{bmatrix}
0 & 1 & \cdots & 0 & 0 & 0\\
\vdots & \vdots &\ddots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 & 0\\
0 & 0 & \cdots & 0 & 1 & 0\\
0 & 0 & \cdots & 0 & 0 & 1\\
-a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_2 & -a_1
\end{bmatrix}$$
$$B_c = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$$
$$C_c = \begin{bmatrix} b_n & b_{n-1} & \cdots & b_2 & b_1 \end{bmatrix}$$
$$D_c = \begin{bmatrix} b_0 \end{bmatrix}$$
## Modal Form
In this form, the state matrix is a diagonal matrix of its
(non-repeated) eigenvalues. The control has a unitary influence on each
eigenspace, and the output is a linear combination of the contributions
from the eigenspaces (where the weights are the complex residuals at
each pole).
$$A_m = \begin{bmatrix} -p_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & -p_2 & 0 & \cdots & 0 & 0 \\
0 & 0 & -p_3 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots &\ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & -p_n
\end{bmatrix}$$
$$B_m = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$
$$C_m = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}$$
$$D_m = \begin{bmatrix} D_c \end{bmatrix}$$
### Jordan Form
This \"almost diagonal\" form handles the case where eigenvalues are
repeated. The repeated eigenvalues represent a multi-dimensional
eigenspace, and so the control only enters the eigenspace once and it\'s
integrated through the other states of that small subsystem.
$$A = \begin{bmatrix}
-p_1 & 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & -p_1 & 1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & -p_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & 0 & -p_4 & 0 &\cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots &\ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 & -p_n
\end{bmatrix}$$
$$B = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$
$$C = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}$$
## Computing Standard Forms in MATLAB
MATLAB can convert a transfer function into a control canonical form by
using the command tf2ss.
`[A, B, C, D] = tf2ss(num, den);`\
`% num and den are coefficient vectors in descending powers of s: [x_0, x_1, ..., x_n] for x_0*s^n + x_1*s^(n-1) + ... + x_n.`
MATLAB contains a function for automatically transforming a state-space
equation into a companion form (e.g., controllable or observable canonical form).
`[Ap, Bp, Cp, Dp, P] = canon(A, B, C, D, 'companion');`
Moving from one companion form to the other usually involves elementary
operations on matrices and vectors (e.g., transposes or interchanging
rows). Given a vector with the coefficients of a characteristic
polynomial, MATLAB can compute a companion form with the coefficients in
the top row (there are other 3 possible companion forms not generated by
that function)
`compan(P)`
Given another vector with the coefficients of a transfer function\'s
numerator polynomial, the `canon` command can do the same.
`[csys, T] = canon(tf(Pnum, Pden), 'companion');`
The same command can be used to transform a state-space equation into a
modal (e.g., diagonal) form.
`[Ap, Bp, Cp, Dp, P] = canon(A, B, C, D, 'modal');`
However, MATLAB also includes a command to compute the Jordan form of a
matrix, a modified modal form suited for matrices with repeated
eigenvalues.
`jordan(A);`
## Computing Standard Forms by Hand
### Control Canonical Form
We are given a system that is described by the transfer function:
$$\frac{Y(s)}{U(s)} = \frac{s^2+6s+8}{s^3+9s^2+23s+15}$$
and are now tasked to obtain the state-space matrices in control
canonical form. We first note that the order of the numerator is 2,
while the order of the denominator is 3. This means that there is no
feed-forward term, and thus the scalar D is 0.
We now split the transfer function into two factors:
$$\frac{Y(s)}{x_1(s)} \times \frac{x_1(s)}{U(s)} = \frac{s^2+6s+8}{1} \times \frac{1}{s^3+9s^2+23s+15}.$$
We then expand the output expression and convert it to the time domain:
$$Y(s) = (s^2+6s+8)x_1.$$
$$Y(s) = s^{2} x_1+6s x_1 +8 x_1.$$
$$y(t) = \ddot{x_1} + 6\dot{x_1} +8x_1.$$
We now create two new states that describe all these derivatives. Since
the highest order of the system is 3 (in the denominator of TF), we must
create 2 new states, so that we have 3 states in total.
$$\dot{x_1} = x_2, \dot{x_2} = x_3 = \ddot{x_1}.$$
We substitute the new states into the equation above:
$$y(t) = x_3 + 6x_2 + 8x_1.$$
Now we evaluate the input of the system in a similar manner:
$$U(s) = (s^3+9s^2+23s+15)x_1.$$
$$U(s) = s^3x_1+9s^2x_1+23sx_1+15x_1.$$
$$u(t) = x_1^{(3)}+9\ddot{x_1}+23\dot{x_1}+15x_1.$$
Note that the first term is a third derivative. We can now insert our
new states into the above equation:
$$u(t) = \dot{x_3}+9x_3+23x_2+15x_1.$$
We move the derivative of the third state to the left side and get:
$$\dot{x_3} = -15x_1 - 23x_2 -9x_3 + u.$$
We are now ready to rewrite these equations in the state-space form. We
start by moving the input to the right side of the equation so that we
have an expression for each state.
$\begin{bmatrix}
\dot{x_1} \\
\dot{x_2} \\
\dot{x_3} \\
\end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
-15 & -23 & -9 \\
\end{bmatrix}
\times
\begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
\end{bmatrix}
+
\begin{bmatrix}
0 \\
0 \\
1 \\
\end{bmatrix}
u$
$y =
\begin{bmatrix}
8 & 6 & 1 \\
\end{bmatrix}
\times
\begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
\end{bmatrix}
+
\begin{bmatrix}
0 \\
\end{bmatrix}
u$
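As a cross-check (assuming the Control System Toolbox), **tf2ss** produces the same realization with the phase variables numbered in the opposite order:

`num = [1 6 8];  den = [1 9 23 15];         % the transfer function above`\
`[A, B, C, D] = tf2ss(num, den)             % control canonical form; reversing the state order recovers the matrices derived by hand`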
|
# Control Systems/MIMO Systems
## Multi-Input, Multi-Output
Systems with more than one input and/or more than one output are known
as **Multi-Input Multi-Output** systems, or they are frequently known by
the abbreviation **MIMO**. This is in contrast to systems that have only
a single input and a single output (SISO), like we have been discussing
previously.
## State-Space Representation
MIMO systems that are lumped and linear can be described easily with
state-space equations. To represent multiple inputs we expand the input
*u(t)* into a vector *U*(t) with the desired number of inputs. Likewise,
to represent a system with multiple outputs, we expand *y(t)* into
*Y*(t), which is a vector of all the outputs. For this method to work,
the outputs must be linearly dependent on the input vector and the state
vector.
$$X'(t) = AX(t) + BU(t)$$
$$Y(t) = CX(t) + DU(t)$$
### Example: Two Inputs and Two Outputs
## Transfer Function Matrix
If the system is LTI and Lumped, we can take the Laplace Transform of
the state-space equations, as follows:
$$\mathcal{L}[X'(t)] = \mathcal{L}[AX(t)] + \mathcal{L}[BU(t)]$$
$$\mathcal{L}[Y(t)] = \mathcal{L}[CX(t)] + \mathcal{L}[DU(t)]$$
Which gives us the result:
$$s\mathbf{X}(s) - X(0) = A\mathbf{X}(s) + B\mathbf{U}(s)$$
$$\mathbf{Y}(s) = C\mathbf{X}(s) + D\mathbf{U}(s)$$
Where X(0) is the initial conditions of the system state vector in the
time domain. If the system is relaxed, we can ignore this term, but for
completeness we will continue the derivation with it.
We can separate out the variables in the state equation as follows:
$$s\mathbf{X}(s) - A\mathbf{X}(s) = X(0) + B\mathbf{U}(s)$$
Then factor out an **X**(s):
$$[sI - A]\mathbf{X}(s) = X(0) + B\mathbf{U}(s)$$
And then we can multiply both sides by the inverse of *\[sI - A\]* to
give us our state equation:
$$\mathbf{X}(s) = [sI - A]^{-1}X(0) + [sI - A]^{-1}B\mathbf{U}(s)$$
Now, if we plug in this value for **X**(s) into our output equation,
above, we get a more complicated equation:
$$\mathbf{Y}(s) = C([sI - A]^{-1}X(0) + [sI - A]^{-1}B\mathbf{U}(s)) + D\mathbf{U}(s)$$
And we can distribute the matrix **C** to give us our answer:
$$\mathbf{Y}(s) = C[sI - A]^{-1}X(0) + C[sI - A]^{-1}B\mathbf{U}(s) + D\mathbf{U}(s)$$
Now, if the system is relaxed, and therefore *X(0)* is 0, the first term
of this equation becomes 0. In this case, we can factor out a **U**(s)
from the remaining two terms:
$$\mathbf{Y}(s) = (C[sI - A]^{-1}B + D)\mathbf{U}(s)$$
We can make the following substitution to obtain the **Transfer Function
Matrix**, or more simply, the **Transfer Matrix**, **H**(s):
$$C[sI - A]^{-1}B + D = \mathbf{H}(s)$$
And rewrite our output equation in terms of the transfer matrix as
follows:
$$\mathbf{Y}(s) = \mathbf{H}(s)\mathbf{U}(s)$$
If **Y**(s) and **U**(s) are *1 × 1* vectors (a SISO system), then
we have our external description:
$$Y(s) = H(s)U(s)$$
Now, since *U(s)* = **U**(s), and *Y(s)* = **Y**(s), **H**(s) must
be equal to *H(s)*. These are simply two different ways to describe the
same equation and the same system.
### Dimensions
If our system has *q* inputs, and *r* outputs, our transfer function
matrix will be an *r × q* matrix.
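In MATLAB (assuming the Control System Toolbox), the transfer function matrix of a state-space model can be obtained directly; the 2-input, 2-output system below is an arbitrary illustration:

`A = [0 1; -2 -3];  B = eye(2);             % hypothetical system with two inputs`\
`C = eye(2);  D = zeros(2);                 % and two outputs`\
`H = tf(ss(A, B, C, D))                     % 2-by-2 transfer matrix C(sI - A)^-1 B + D`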
### Relation to Transfer Function
For SISO systems, the Transfer Function matrix will reduce to the
transfer function as would be obtained by taking the Laplace transform
of the system response equation.
For MIMO systems, with *n* inputs and *m* outputs, the transfer function
matrix will contain *n × m* transfer functions, where each entry is the
transfer function relationship between each individual input, and each
individual output.
Through this derivation of the transfer function matrix, we have shown
the equivalency between the Laplace methods and the State-Space method
for representing systems. Also, we have shown how the Laplace method can
be generalized to account for MIMO systems. Through the rest of this
explanation, we will use the Laplace and State Space methods
interchangeably, opting to use one or the other where appropriate.
### Zero-State and Zero-Input
If we have our complete system response equation from above:
$$\mathbf{Y}(s) = C[sI - A]^{-1}\mathbf{x}(0) + (C[sI - A]^{-1}B + D)\mathbf{U}(s)$$
We can separate this into two separate parts:
- $C[sI - A]^{-1}X(0)$ The **Zero-Input Response**.
- $(C[sI - A]^{-1}B + D)\mathbf{U}(s)$ The **Zero-State Response**.
These are named because if there is no input to the system (zero-input),
then the output is the response of the system to the initial system
state. If the initial state of the system is zero (zero-state), then the output is the
response of the system to the system input alone. The complete response is the
sum of these two parts: the response to the initial state with no input, and the
response to the input with zero initial state.
## Discrete MIMO Systems
In the discrete case, we end up with similar equations, except that the
*X*(0) initial conditions term is preceded by an additional *z*
variable:
$$\mathbf{X}(z) = [zI - A]^{-1}zX(0) + [zI - A]^{-1}B\mathbf{U}(z)$$
$$\mathbf{Y}(z) = C[zI - A]^{-1}zX(0) + C[zI - A]^{-1}B\mathbf{U}(z) + D\mathbf{U}(z)$$
If *X*(0) is zero, that term drops out, and we can derive a Transfer
Function Matrix in the Z domain as well:
$$\mathbf{Y}(z) = (C[zI - A]^{-1}B + D)\mathbf{U}(z)$$
$$C[zI - A]^{-1}B + D = \mathbf{H}(z)$$
$$\mathbf{Y}(z) = \mathbf{H}(z)\mathbf{U}(z)$$
### Example: Pulse Response
## Controller Design
The controller design for MIMO systems is more extensive and thus more
complicated than for SISO systems. Ackermann\'s formula, the typical full
state feedback design for
SISO systems, cannot be used for MIMO systems because the additional
inputs provide extra degrees of freedom. This means that, in the case of MIMO
systems, the feedback matrix *K* is **not unique**.
Approaches for the controller design are stated in Section Eigenvalue
Assignment for MIMO
Systems.
|
# Control Systems/Realizations
## Realization
**Realization** is the process of taking a mathematical model of a
system (either in the Laplace domain or the State-Space domain), and
creating a physical system. Some systems are not realizable.
An important point to keep in mind is that the Laplace domain
representation, and the state-space representations are equivalent, and
both representations describe the same physical systems. We want,
therefore, a way to convert between the two representations, because
each one is well suited for particular methods of analysis.
The state-space representation, for instance, is preferable when it
comes time to move the system design from the drawing board to a
constructed physical device. For that reason, we call the process of
converting a system from the Laplace representation to the state-space
representation \"realization\".
## Realization Conditions
- A transfer function *G(s)* is realizable if and only if the system
can be described by a finite-dimensional state-space equation.
- *(A B C D)*, an ordered set of the four system matrices, is called a
**realization** of the system *G(s)*. If the system can be expressed
as such an ordered quadruple, the system is realizable.
- A system *G* is realizable if and only if the transfer matrix
**G**(s) is a proper rational matrix. In other words, every entry in
the matrix **G**(s) (only one entry for SISO systems) is a rational
function, and the degree of each denominator is greater than or equal
to the degree of its numerator.
We\'ve already covered the method for realizing a SISO system; the
remainder of this chapter will discuss the general method of
realizing a MIMO system.
## Realizing the Transfer Matrix
We can decompose a transfer matrix **G**(s) into a constant matrix plus a
*strictly proper* transfer matrix:
$$\mathbf{G}(s) = \mathbf{G}(\infty) + \mathbf{G}_{sp}(s)$$
Where G~sp~(s) is a strictly proper transfer matrix. Also, we can use
this to find the value of our *D* matrix:
$$D = \mathbf{G}(\infty)$$
We can define *d(s*) to be the lowest common denominator polynomial of
all the entries in **G**(s):
$$d(s) = s^r + a_1s^{r-1} + \cdots + a_{r-1}s + a_r$$
Then we can define **G**~sp~ as:
$$\mathbf{G}_{sp}(s) = \frac{1}{d(s)}N(s)$$
Where
$$N(s) = N_1s^{r-1} + \cdots + N_{r-1}s + N_r$$
And the *N~i~* are *p × q* constant matrices.
If we remember our method for converting a transfer function to a
state-space equation, we can follow the same general method, except that
the new matrix *A* will be a block matrix, where each block is the size
of the transfer matrix:
$$A = \begin{bmatrix}
-a_1I_p & -a_2I_p & \cdots & -a_{r-1}I_p & -a_rI_p \\
I_p & 0 & \cdots & 0 & 0 \\
0 & I_p & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & I_p & 0
\end{bmatrix}$$
$$B = \begin{bmatrix}I_p \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
$$C = \begin{bmatrix}N_1 & N_2 & N_3 & \cdots & N_r\end{bmatrix}$$
### Realizing System by Column
We can divide **G**(s) into its individual columns, realize each column
separately, and then join the realizations back together. For **G**(s):
$G(s) =\begin{bmatrix} G_1 & G_2 & G_3 &\dots & G_n \end{bmatrix}$
each column is realized individually:
$G_i \Rightarrow (A_i,B_i,C_i,D_i)$
and the realization of the system will be:
$A = \begin{bmatrix}
A_1 & 0 & 0 & \dots &0\\
0 & A_2 & 0&&\vdots\\
0 & 0 & A_3\\
\vdots& & & \ddots &0\\
0&0&0&\dots &A_n
\end{bmatrix}$
$B = \begin{bmatrix}
B_1 & 0 & 0 & \dots &0\\
0 & B_2 & 0&&\vdots\\
0 & 0 & B_3\\
\vdots& & & \ddots &0\\
0&0&0&\dots &B_n
\end{bmatrix}$
$C = \begin{bmatrix} C_1 & C_2 & C_3 &\dots& C_n \end{bmatrix}$
$D = \begin{bmatrix} D_1 & D_2 & D_3 &\dots& D_n \end{bmatrix}$
|
# Control Systems/Gain
## What is Gain?
**Gain** is a proportional value that shows the relationship between the
magnitude of the input to the magnitude of the output signal at steady
state. Many systems contain a method by which the gain can be altered,
providing more or less \"power\" to the system. However, increasing gain
or decreasing gain beyond a particular safety zone can cause the system
to become unstable.
Consider the given second-order system:
$$T(s) = \frac{1}{s^2 + 2s + 1}$$
We can include an arbitrary gain term, K in this system that will
represent an amplification, or a power increase:
$$T(s) = K\frac{1}{s^2 + 2s + 1}$$
In a state-space system, the gain term *k* can be inserted as follows:
$$x'(t) = Ax(t) + kBu(t)$$
$$y(t) = Cx(t) + kDu(t)$$
The gain term can also be inserted into other places in the system, and
in those cases the equations will be slightly different.
![](Gain_Block.svg "Gain_Block.svg"){width="400"}
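A quick way to see the effect of an added gain term (a sketch assuming the Control System Toolbox; the system and gain value are arbitrary illustrative choices):

`K = 10;  T = tf(1, [1 2 1]);               % hypothetical gain and second-order system`\
`step(T, K*T)                               % compare the step responses with and without the gain`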
### Example: Gain
## Responses to Gain
As the gain to a system increases, generally the rise-time decreases,
the percent overshoot increases, and the settling time increases.
However, these relationships are not always the same. A **critically
damped system**, for example, may decrease in rise time while not
experiencing any effects of percent overshoot or settling time.
## Gain and Stability
If the gain increases to a high enough extent, some systems can become
unstable. We will examine this effect in the chapter on **Root Locus**.
Increasing the gain will, however, decrease the steady-state error.
### Conditional Stability
Systems that are stable for some gain values, and unstable for other
values are called **conditionally stable** systems. The stability is
conditional upon the value of the gain, and often the threshold where
the system becomes unstable is important to find.
|
# Control Systems/Block Diagrams
When designing or analyzing a system, often it is useful to model the
system graphically. **Block Diagrams** are a useful and simple method
for analyzing a system graphically. A \"block\" looks on paper exactly
like what it means: a self-contained unit that operates on an input signal to
produce an output signal.
## Systems in Series
When two or more systems are in series, they can be combined into a
single representative system, with a transfer function that is the
product of the individual systems.
![](Time_Series_Block.svg "Time_Series_Block.svg")
If we have two systems, *f(t)* and *g(t)*, we can put them in series
with one another so that the output of system *f(t)* is the input to
system *g(t)*. Now, we can analyze them depending on whether we are
using our classical or modern methods.
If we define the output of the first system as *h(t)*, we can define
*h(t)* as:
$$h(t) = x(t) * f(t)$$
Now, we can define the system output *y(t)* in terms of *h(t)* as:
$$y(t) = h(t) * g(t)$$
We can expand *h(t)*:
$$y(t) = [x(t) * f(t)] * g(t)$$
But, since convolution is associative, we can re-write this as:
$$y(t) = x(t) * [f(t) * g(t)]$$
Our system can be simplified therefore as such:
![](Time_Convolution_Block.svg "Time_Convolution_Block.svg")
### Series Transfer Functions
If two or more systems are in series with one another, the total
transfer function of the series is the product of all the individual
system transfer functions.
![](S-Domain_Series_Block.svg "S-Domain_Series_Block.svg")
In the time-domain we know that:
$$y(t) = x(t) * [f(t) * g(t)]$$
But, in the frequency domain we know that convolution becomes
multiplication, so we can re-write this as:
$$Y(s) = X(s)[F(s)G(s)]$$
We can represent our system in the frequency domain as:
![](S-Domain_Multiplication_Block.svg "S-Domain_Multiplication_Block.svg")
### Series State Space
If we have two systems in series (say system F and system G), where the
output of F is the input to system G, we can write out the state-space
equations for each individual system.
System 1:
$$x_F' = A_Fx_F + B_Fu$$
$$y_F = C_Fx_F + D_Fu$$
System 2:
$$x_G' = A_Gx_G + B_Gy_F$$
$$y_G = C_Gx_G + D_Gy_F$$
And we can substitute these equations into one another to form the complete
response of system H, which has input u and output y~G~:
$$\begin{bmatrix}x_G' \\ x_F'\end{bmatrix}
= \begin{bmatrix}A_G & B_GC_F \\ 0 & A_F\end{bmatrix}
\begin{bmatrix}x_G \\ x_F\end{bmatrix} +
\begin{bmatrix}B_GD_F \\ B_F\end{bmatrix}u$$
$$\begin{bmatrix}y_G \\ y_F\end{bmatrix}
= \begin{bmatrix}C_G & D_GC_F \\ 0 & C_F\end{bmatrix}
\begin{bmatrix}x_G \\ x_F\end{bmatrix} +
\begin{bmatrix}D_GD_F \\ D_F\end{bmatrix}u$$
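In MATLAB (assuming the Control System Toolbox), the **series** command performs this combination automatically; the two first-order systems below are arbitrary:

`F = ss(-1, 1, 1, 0);                       % hypothetical first system F`\
`G = ss(-2, 1, 1, 0);                       % hypothetical second system G`\
`H = series(F, G)                           % combined system: the output of F drives the input of G`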
## Systems in Parallel
![](S-Domain_Parallel_Block.svg "S-Domain_Parallel_Block.svg")
Blocks may not be placed in parallel without the use of an adder. Blocks
connected by an adder as shown above have a total transfer function of:
$$Y(s) = X(s) [F(s) + G(s)]$$
Since the Laplace transform is linear, we can easily transfer this to
the time domain by converting the multiplication to convolution:
$$y(t) = x(t) * [f(t) + g(t)]$$
![](S-Domain_Addition_Block.svg "S-Domain_Addition_Block.svg")
## State Space Model
The state-space equations, with non-zero A, B, C, and D matrices
conceptually model the following system:
![](Typical_State_Space_Model_(General).svg "Typical_State_Space_Model_(General).svg")
In this image, the strange-looking block in the center is either an
integrator or an ideal delay, and can be represented in the transfer
domain as:
$$\frac{1}{s}$$ or $\frac{1}{z}$
Depending on the time characteristics of the system. If we only consider
continuous-time systems, we can replace the funny block in the center
with an integrator:
![](Typical_State_Space_Model_(CT).svg "Typical_State_Space_Model_(CT).svg")
### In the Laplace Domain
The state space model of the above system, if *A*, *B*, *C*, and *D* are
transfer functions *A(s)*, *B(s)*, *C(s)* and *D(s)* of the individual
subsystems, and if *U(s)* and *Y(s)* represent a single input and
output, can be written as follows:
$$\frac{Y(s)}{U(s)} = B(s)\left(\frac{1}{s - A(s)}\right)C(s) + D(s)$$
We will explain how we got this result, and how we deal with feedforward
and feedback loop structures in the next chapter.
## Adders and Multipliers
Some systems may have dedicated summation or multiplication devices
that automatically add or multiply the transfer functions of multiple
systems together.
## Simplifying Block Diagrams
Block diagrams can be systematically simplified. Note that this table is
from Schaum\'s Outline: Feedback and Control Systems by DiStefano et al.

| Transformation | Equation | Block Diagram | Equivalent Block Diagram |
|---|---|---|---|
| 1. Cascaded Blocks | $Y=\left(P_1 P_2 \right) X$ | image:Cascaded Blocks.svg | |
| 2. Combining Blocks in Parallel | $Y=P_1 X \pm P_2 X$ | image:Parallel Blocks.svg | |
| 3. Removing a Block from a Forward Loop | $Y=P_1 X \pm P_2 X$ | | image:Parallel Blocks Equivalent 2.svg |
| 4. Eliminating a Feedback Loop | $Y=P_1 \left( X \mp P_2 Y \right)$ | image:Feedback Loop.svg | |
| 5. Removing a Block from a Feedback Loop | $Y=P_1 \left( X \mp P_2 Y \right)$ | | image:Feedback Loop Equivalent 2.svg |
| 6. Rearranging Summing Junctions | $Z=W \pm X \pm Y$ | image:Rearranging Summing Junctions 1.svg | image:Rearranging Summing Junctions 3.svg |
| 7. Moving a Summing Junction in front of a Block | $Z = P X \pm Y$ | image:Moving Summing Junction in front of Block 1.svg | |
| 8. Moving a Summing Junction beyond a Block | $Z = P \left( X \pm Y \right)$ | ![](Moving_Summing_Junction_beyond_Block_1.svg "Moving_Summing_Junction_beyond_Block_1.svg") | |
| 9. Moving a Takeoff Point in front of a Block | $Y= PX\,$ | image:Moving Takeoff Point in front of Block 1.svg | |
| 10. Moving a Takeoff Point beyond a Block | $Y=PX\,$ | image:Moving Takeoff Point beyond Block 1.svg | |
| 11. Moving a Takeoff Point in front of a Summing Junction | $Z=W \pm X$ | image:Moving Takeoff Point ahead of a Summing Junction 1.svg | |
| 12. Moving a Takeoff Point beyond a Summing Junction | $Z=X \pm Y$ | file:Moving Takeoff Point beyond a Summing Junction 1.svg | |
## External links
- SISO Block Diagram with transfer functions on
ControlTheoryPro.com
|
# Control Systems/Feedback Loops
## Feedback
A **feedback loop** is a common and powerful tool when designing a
control system. Feedback loops take the system output into
consideration, which enables the system to adjust its performance to
meet a desired output response.
When talking about control systems it is important to keep in mind that
engineers typically are given existing systems such as actuators,
sensors, motors,
and other devices with set parameters, and are asked to adjust the
performance of those systems. In many cases, it may not be possible to
open the system (the \"plant\") and adjust it from the inside:
modifications need to be made external to the system to force the system
response to act as desired. This is performed by adding controllers,
compensators, and feedback structures to the system.
## Basic Feedback Structure
This is a basic feedback structure.
Here, we are using the output value of the system to help us prepare the
next output value. In this way, we can create systems that correct
errors. Here we see a feedback loop with a value of one. We call this a
**unity feedback**.
**Here is a list of some relevant vocabulary, that will be used in the
following sections:**
Plant:The term \"Plant\" is a carry-over term from chemical engineering to refer to the main system process. The plant is the preexisting system that does not (without the aid of a controller or a compensator) meet the given specifications. Plants are usually given \"as is\", and are not changeable. In the picture above, the plant is denoted with a P.\
Controller:A controller, or a \"compensator\" is an additional system that is added to the plant to control the operation of the plant. The system can have multiple compensators, and they can appear anywhere in the system: Before the pick-off node, after the summer, before or after the plant, and in the feedback loop. In the picture above, our compensator is denoted with a C.
Summer:A summer is a symbol on a system diagram, (denoted above with parenthesis) that conceptually adds two or more input signals, and produces a single sum output signal.\
Pick-off node:A pickoff node is simply a fancy term for a split in a wire.\
Forward Path:The forward path in the feedback loop is the path after the summer, that travels through the plant and towards the system output.\
Reverse Path:The reverse path is the path after the pick-off node, that loops back to the beginning of the system. This is also known as the \"feedback path\".\
Unity feedback:When the multiplicative value of the feedback path is 1.
## Negative vs Positive Feedback
It turns out that negative feedback is almost always the most useful
type of feedback. When we subtract the value of the output from the
value of the input (our desired value), we get a value called the
**error signal**. The error signal shows us how far off our output is
from our desired input.
Positive feedback has the property that signals tend to reinforce
themselves, and grow larger. In a positive feedback system, noise from
the system is added back to the input, and that in turn produces more
noise. As an example of a positive feedback system, consider an audio
amplification system with a speaker and a microphone. Placing the
microphone near the speaker creates a positive feedback loop, and the
result is a sound that grows louder and louder. Because the majority of
noise in an electrical system is high-frequency, the sound output of the
system becomes high-pitched.
### Example: State-Space Equation
The inner loop of the state-space block diagram consists of the integrator 1/s in
the forward path with the matrix A in the feedback path, so its transfer function is:
$$\frac{\frac{1}{s}}{1 - \frac{1}{s}A} = \frac{1}{s - A}$$
Pre-multiplying by the factor B, and post-multiplying by C, we get the
transfer function of the entire lower-half of the loop:
$$T_{lower}(s) = B\left(\frac{1}{s - A}\right)C$$
We can see that the upper path (D) and the lower-path T~lower~ are added
together to produce the final result:
$$T_{total}(s) = B\left(\frac{1}{s - A}\right)C + D$$
Now, for an alternate method, we can assume that **x**\' is the value of
the inner-feedback loop, right before the integrator. This makes sense,
since the integral of **x**\' should be **x** (which we see from the
diagram that it is). Solving for **x**\', with an input of **u**, we get:
$$x' = Ax + Bu$$
This is because the value coming from the feedback branch is equal to
the value **x** times the feedback loop matrix A, and the value coming
from the left of the summer is the input **u** times the matrix B.
If we keep things in terms of **x** and **u**, we can see that the
system output is the sum of **u** times the feed-forward value D, and
the value of **x** times the value C:
$$y = Cx + Du$$
These last two equations are precisely the state-space equations of our
system.
## Feedback Loop Transfer Function
We can solve for the output of the system by using a series of
equations:
$$E(s) = X(s) - Y(s)$$
$$Y(s) = G(s)E(s)$$
and when we solve for Y(s) we get:
$$Y(s) = X(s) \frac{G(s)}{1 + G(s)}$$
The reader is encouraged to use the above equations to derive the result
by themselves.
The function E(s) is known as the **error signal**. The error signal is
the difference between the system output (Y(s)), and the system input
(X(s)). Notice that the error signal is now the direct input to the
system G(s). X(s) is now called the **reference input**. The purpose of
the negative feedback loop is to make the system output equal to the
system input, by identifying large differences between X(s) and Y(s) and
correcting for them.
### Example: Elevator
Here is a simple example of reference inputs and feedback systems:
### State-Space Feedback Loops
In the state-space representation, the plant is typically defined by the
state-space equations:
$$x'(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t)$$
The plant is considered to be pre-existing, and the matrices A, B, C,
and D are considered to be internal to the plant (and therefore
unchangeable). Also, in a typical system, the state variables are either
fictional (in the sense of dummy-variables), or are not measurable. For
these reasons, we need to add external components, such as a gain
element, or a feedback element to the plant to enhance performance.
Consider the addition of a gain matrix K installed at the input of the
plant, and a negative feedback element F that is multiplied by the
system output *y*, and is added to the input signal of the plant. There
are two cases:
1. The feedback element F is subtracted from the input before
multiplication of the K gain matrix.
2. The feedback element F is subtracted from the input after
multiplication of the K gain matrix.
In case 1, the feedback element F is subtracted from the input before the multiplicative gain K is applied. If *v* is the input to the entire system, then we can define *u* as:
$$u(t) = K\left(v(t) - Fy(t)\right) = Kv(t) - KFy(t)$$
In case 2, the feedback element F is subtracted from the input after the multiplicative gain K is applied. If *v* is the input to the entire system, then we can define *u* as:
$$u(t) = Kv(t) - Fy(t)$$
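The following sketch illustrates case 2 numerically (all matrix values are made up, and D = 0 is assumed so that $y = Cx$ and no algebraic loop appears). Substituting $u = Kv - Fy$ into the state equation gives the closed-loop matrices directly:

```python
import numpy as np

# Minimal sketch of case 2 (u = Kv - Fy), assuming D = 0 so that y = Cx.
# All numeric values below are illustrative only.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[5.0]])   # input gain
F = np.array([[2.0]])   # output feedback gain

# Substituting u = Kv - FCx into x' = Ax + Bu gives the closed-loop matrices:
A_cl = A - B @ F @ C    # closed-loop state matrix
B_cl = B @ K            # closed-loop input matrix

print("closed-loop A:\n", A_cl)
print("closed-loop eigenvalues:", np.linalg.eigvals(A_cl))
```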
## Open Loop vs Closed Loop
![](System_3_KGpGb.png "System_3_KGpGb.png"){width="600"}
Let\'s say that we have the generalized system shown above. The top
part, Gp(s) represents all the systems and all the controllers on the
forward path. The bottom part, Gb(s) represents all the feedback
processing elements of the system. The letter \"K\" in the beginning of
the system is called the **Gain**. We will talk about the gain more in
later chapters. We can define the **Closed-Loop Transfer Function** as
follows:
$$H_{cl}(s) = \frac{KGp(s)}{1 + Gp(s)Gb(s)}$$
If we \"open\" the loop, and break the feedback node, we can define the
**Open-Loop Transfer Function**, as:
$$H_{ol}(s) = KGp(s)$$
We can redefine the closed-loop transfer function in terms of this
open-loop transfer function:
$$H_{cl}(s) = \frac{H_{ol}(s)}{1 +Gp(s)Gb(s)}$$
These results are important, and they will be used without further
explanation or derivation throughout the rest of the book.
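As a small symbolic sketch of these definitions (the plant $Gp(s)$ and the feedback $Gb(s)$ below are made-up examples), the open-loop and closed-loop transfer functions can be formed directly from the formulas above:

```python
import sympy as sp

# Minimal sketch: form the closed-loop transfer function from the open-loop
# pieces using H_cl = K*Gp / (1 + Gp*Gb).  Gp and Gb are illustrative choices.
s, K = sp.symbols('s K')
Gp = 1 / (s * (s + 2))          # forward-path plant (illustrative)
Gb = 1                          # unity feedback (illustrative)

H_ol = K * Gp                   # open-loop transfer function
H_cl = sp.simplify(K * Gp / (1 + Gp * Gb))

print("H_ol(s) =", H_ol)
print("H_cl(s) =", H_cl)        # -> K/(s**2 + 2*s + 1) (or an equivalent form)
```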
## Placement of a Controller
There are a number of different places where we could place an
additional controller.
![](System_5_Positions.png "System_5_Positions.png"){width="400"}

1. In front of the system, before the feedback loop.
2. Inside the feedback loop, in the forward path, before the plant.
3. In the forward path, after the plant.
4. In the feedback loop, in the reverse path.
5. After the feedback loop.
Each location has certain benefits and problems, and hopefully we will
get a chance to talk about all of them.
## Second-Order Systems
The general expression of the transfer function of a second order system
is given as:
$$\frac{\omega_n^2}{s^2 + 2\zeta\omega_ns + \omega_n^2}$$
where $\zeta$ and $\omega_n$ are damping ratio and natural frequency of
the system respectively.
### Damping Ratio
The damping ratio is denoted by the symbol $\zeta$. The damping ratio gives us an idea about the nature of the transient response, detailing the amount of overshoot and oscillation that the system will undergo, independent of any time scaling.
If:
- $\zeta$ = 0, the system is undamped;
- $\zeta$ \< 1, the system is underdamped;
- $\zeta$ = 1, the system is critically damped;
- $\zeta$ \> 1, the system is overdamped.
$\zeta$ is used in conjunction with the natural frequency to determine system properties. To find the value of $\zeta$ you must first find the natural response.
### Natural Frequency
Natural Frequency, denoted by $\omega_n$ is defined as the frequency
with which the system would oscillate if it were not damped and we
define the damping ratio as $\zeta = \frac{\sigma}{\omega_n}$.
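A minimal sketch of how these two quantities can be read off a monic second-order denominator $s^2 + a_1 s + a_0 = s^2 + 2\zeta\omega_n s + \omega_n^2$ (the numerical coefficients below are illustrative):

```python
import math

# Minimal sketch: read omega_n and zeta from a monic second-order denominator
# s^2 + a1*s + a0 = s^2 + 2*zeta*omega_n*s + omega_n^2 (illustrative values).
a1, a0 = 2.0, 25.0

omega_n = math.sqrt(a0)          # natural frequency
zeta = a1 / (2.0 * omega_n)      # damping ratio

if zeta == 0:
    kind = "undamped"
elif zeta < 1:
    kind = "underdamped"
elif zeta == 1:
    kind = "critically damped"
else:
    kind = "overdamped"

print(f"omega_n = {omega_n}, zeta = {zeta}: {kind}")
```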
## System Sensitivity
|
# Control Systems/Signal Flow Diagrams
## Signal-flow graphs
**Signal-flow graphs** are another method for visually representing a
system. Signal Flow Diagrams are especially useful, because they allow
for particular methods of analysis, such as **Mason\'s Gain Formula**.
Signal flow diagrams typically use curved lines to represent wires *and
systems*, instead of using lines at right-angles, and boxes,
respectively. Every curved line is considered to have a multiplier
value, which can be a constant gain value, or an entire transfer
function. Signals travel from one end of a line to the other, and lines
that are placed in series with one another have their total multiplier
values multiplied together (just like in block diagrams).
Signal flow diagrams help us to identify structures called \"loops\" in
a system, which can be analyzed individually to determine the complete
response of the system.
!An example of a signal flow
diagram.
### Forward Paths
A **forward path** is a path in the signal flow diagram that connects
the input to the output without touching any single node or path more
than once. A single system can have multiple forward paths.
### Loops
A **loop** is a structure in a signal flow diagram that leads back to itself: the end of the loop is the same node as the beginning of the loop. A loop does not pass through the input or output nodes of the diagram.
Loops are said to touch if they share a node or a line in common.
The **Loop gain** is the total gain of the loop, as you travel from one
point, around the loop, back to the starting point.
### Delta Values
The Delta value of a system, denoted with a Greek Δ, is computed as follows:
$$\Delta = 1 - A + B - C + D - E + F - \cdots$$
Where:
- A is the sum of all individual loop gains
- B is the sum of the products of all the pairs of non-touching loops
- C is the sum of the products of all the sets of 3 non-touching loops
- D is the sum of the products of all the sets of 4 non-touching loops
- et cetera.
For instance, if the given system has no pairs of non-touching loops, then B and all of the terms after B will be zero.
### Mason\'s Rule
**Mason\'s rule** is a rule for determining the gain of a system.
Mason\'s rule can be used with block diagrams, but it is most commonly
(and most easily) used with signal flow diagrams.
If we have computed our delta values (above), we can then use **Mason\'s
Gain Rule** to find the complete gain of the system:
$$M = \frac{y_{out}}{y_{in}} = \sum_{k=1}^N \frac{M_k \Delta\ _k}{ \Delta\ }$$
Where M is the total gain of the system, represented as the ratio of the output signal (y~out~) to the input signal (y~in~) of the system. N is the total number of forward paths, M~k~ is the gain of the k^th^ forward path, and Δ~k~ is the Δ value computed for the portion of the signal flow graph that does not touch the k^th^ forward path.
## Examples
### Solving a signal-flow graph by systematic reduction : Two interlocking loops
This example shows how a system of five equations in five unknowns is
solved using systematic reduction rules. The independent variable is
$x_{in}$. The dependent variables are $x_1$, $x_2$, $x_3$, $x_4$,
$x_{out}$. The coefficients are labeled $a, b, c, d, e$. Here is the
starting flowgraph:
$$\begin{align}
x_1 &= x_\mathrm{in}+e x_3 \\
x_2 &= b x_1+a x_4 \\
x_3 &= c x_2 \\
x_4 &= d x_3 \\
x_\mathrm{out} &= x_4 \\
\end{align}$$
The steps for solving $x_{out}$ follow.
#### Removing edge c from x2 to x3
$$\begin{align}
x_1 &= x_\mathrm{in}+e x_3 \\
x_2 &= b x_1+a x_4 \\
x_3 &= c x_2 \\
x_3 &= c (b x_1+a x_4) \\
x_3 &= bc x_1+ca x_4 \\
x_4 &= d x_3 \\
x_\mathrm{out} &= x_4 \\
\end{align}$$
#### Removing node x2 and its inflows
$x_2$ has no outflows, and is not a node of interest.
#### Removing edge e from x3 to x1
$$\begin{align}
x_1 &= x_\mathrm{in}+e x_3 \\
x_1 &= x_\mathrm{in}+e (bc x_1+ca x_4) \\
x_1 &= x_\mathrm{in}+ bce x_1+ ace x_4 \\
x_2 &= b x_1+a x_4 \\
x_3 &= bc x_1+ca x_4 \\
x_4 &= d x_3 \\
x_\mathrm{out} &= x_4 \\
\end{align}$$
#### Remove edge d from x3 to x4
$$\begin{align}
x_1 &= x_\mathrm{in}+ace x_4 + bce x_1 \\
x_3 &= bc x_1+ac x_4 \\
x_4 &= d x_3 \\
x_4 &= d (bc x_1+ac x_4) \\
x_4 &= bcd x_1 + acd x_4 \\
x_\mathrm{out} &= x_4 \\
\end{align}$$
Node $x_3$ has no outflows and is not a node of interest. It is deleted
along with its inflows.
$$\begin{align}
x_1 &= x_\mathrm{in}+ace x_4 + bce x_1 \\
x_4 &= bcd x_1 + acd x_4 \\
x_\mathrm{out} &= x_4 \\
\end{align}$$
#### Removing self-loop at x1
$$\begin{align}
x_1 &= x_\mathrm{in}+ace x_4 + bce x_1 \\
x_1 (1-bce) &= x_\mathrm{in}+ace x_4 \\
x_1 &= \frac{1}{1-bce}x_\mathrm{in}+\frac{ace}{1-bce} x_4 \\
x_4 &= bcd x_1 + acd x_4 \\
x_\mathrm{out} &= x_4 \\
\end{align}$$
#### Removing self-loop at x4
$$\begin{align}
x_1 &= \frac{1}{1-bce}x_\mathrm{in}+\frac{ace}{1-bce} x_4 \\
x_4 &= bcd x_1 + acd x_4 \\
x_4 (1-acd) &= bcd x_1 \\
x_4 &= \frac{bcd}{1-acd} x_1 \\
x_\mathrm{out} &= x_4 \\
\end{align}$$
#### Remove edge from x4 to x1
$$\begin{align}
x_1 &= \frac{1}{1-bce}x_\mathrm{in}+\frac{ace}{1-bce} x_4 \\
x_1 &= \frac{1}{1-bce}x_\mathrm{in}+\frac{ace}{1-bce} \times \frac{bcd}{1-acd} x_1 \\
x_4 &= \frac{bcd}{1-acd} x_1 \\
x_\mathrm{out} &= x_4 \\
\end{align}$$
#### Remove outflow from x4 to x~out~
$$\begin{align}
x_1 &= \frac{1}{1-bce}x_\mathrm{in}+\frac{ace}{1-bce} \times \frac{bcd}{1-acd} x_1 \\
x_\mathrm{out} &= \frac{bcd}{1-acd} x_1 \\
\end{align}$$ $x_4$\'s outflow is then eliminated: $x_\mathrm{out}$ is
connected directly to $x_1$ using the product of the gains from the two
edges replaced.
$x_4$ is not a variable of interest; thus, its node and its inflows are
eliminated.
#### Eliminating self-loop at x1
$$\begin{align}
x_1 &= \frac{1}{1-bce}x_\mathrm{in}+\frac{ace}{1-bce} \times \frac{bcd}{1-acd} x_1 \\
x_1 (1-\frac{ace}{1-bce} \times \frac{bcd}{1-acd}) &= \frac{1}{1-bce}x_\mathrm{in} \\
x_1 &= \frac{1}{(1-bce) \times (1-\frac{ace}{1-bce} \times \frac{bcd}{1-acd}) }x_\mathrm{in} \\
x_\mathrm{out} &= \frac{bcd}{1-acd} x_1 \\
\end{align}$$
#### Eliminating outflow from x1, then eliminating x1 and its inflows
$$\begin{align}
x_1 &= \frac{1}{(1-bce) \times (1-\frac{ace}{1-bce} \times \frac{bcd}{1-acd}) }x_\mathrm{in} \\
x_\mathrm{out} &= \frac{bcd}{1-acd} x_1 \\
x_\mathrm{out} &= \frac{bcd}{1-acd} \times \frac{1}{(1-bce) \times (1-\frac{ace}{1-bce} \times \frac{bcd}{1-acd}) }x_\mathrm{in} \\
\end{align}$$ $x_1$ is not a variable of interest; $x_1$ and its inflows
are eliminated
$$\begin{align}
x_\mathrm{out} &= \frac{bcd}{1-acd} \times \frac{1}{(1-bce) \times (1-\frac{ace}{1-bce} \times \frac{bcd}{1-acd}) }x_\mathrm{in} \\
\end{align}$$
#### Simplifying the gain expression
$$x_\mathrm{out} = \frac{-bcd}{bce+acd-1}\, x_\mathrm{in}$$
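As a cross-check, the same result follows from Mason\'s Gain Formula introduced earlier: there is one forward path, $x_\mathrm{in} \to x_1 \to x_2 \to x_3 \to x_4 \to x_\mathrm{out}$, with gain $bcd$, and two loops with gains $bce$ and $acd$ that touch each other, so $\Delta = 1 - bce - acd$ and $\Delta_1 = 1$. A short sympy sketch (sympy is assumed to be available):

```python
import sympy as sp

# Cross-check of the reduction above using Mason's gain formula (a sketch).
# One forward path x_in -> x1 -> x2 -> x3 -> x4 -> x_out with gain b*c*d, and
# two loops that touch each other: x1->x2->x3->x1 (gain b*c*e) and
# x2->x3->x4->x2 (gain a*c*d).
a, b, c, d, e = sp.symbols('a b c d e')

loop_gains = [b*c*e, a*c*d]        # the loops share nodes, so no non-touching pairs
Delta = 1 - sum(loop_gains)
M1, Delta1 = b*c*d, 1              # the single forward path touches both loops

gain = sp.simplify(M1 * Delta1 / Delta)
print(gain)                        # prints b*c*d/(-a*c*d - b*c*e + 1) (or an equivalent form)
```

Since $\frac{bcd}{1 - bce - acd} = \frac{-bcd}{bce + acd - 1}$, this agrees with the reduction above.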
### Solving a signal-flow graph by systematic reduction: Three equations in three unknowns
This example shows how a system of three equations in three unknowns is
solved using systematic reduction rules. The independent variables are
$y_1$, $y_2$, $y_3$. The dependent variables are $x_1$, $x_2$, $x_3$.
The coefficients are labeled $c_{jk}$. The steps for solving $x_1$
follow:
## Electrical engineering: Construction of a flow graph for a RC circuit
![](AC_Source-R-C.svg "AC_Source-R-C.svg")
This illustration shows the physical connections of the circuit.
Independent voltage source S is connected in series with a resistor R
and capacitor C. The example is developed from the physical circuit
equations and solved using signal-flow graph techniques. Polarity is
important:
- S is a source with the positive terminal at **N~1~** and the
negative terminal at **N~3~**
- R is a resistor with the positive terminal at **N~1~** and the
negative terminal at **N~2~**
- C is a capacitor with the positive terminal at **N~2~** and the
negative terminal at **N~3~**.
The unknown variable of interest is the voltage across capacitor **C**.
Approach to the solution:
- Find the set of equations from the physical network. These equations
are acausal in nature.
- Branch equations for the capacitor and resistor. The equations
will be developed as transfer functions using Laplace
transforms.
- Kirchhoff\'s voltage and current laws
- Build a signal-flow graph from the equations.
- Solve the signal-flow graph.
### Branch equations
![](AC_Source-R-C-Branches.svg "AC_Source-R-C-Branches.svg") The branch
equations are shown for R and C.
#### Resistor R (Branch equation $B_R$)
The resistor\'s branch equation in the time domain is:
$$V_R(t) = R I_R(t)$$ In the Laplace-transformed signal space:
$$V_R(s) = R I_R(s)$$
#### Capacitor C (Branch equation $B_C$)
The capacitor\'s branch equation in the time domain is:
$$V_C(t) = \frac{Q_C(t)}{C} = \frac{1}{C}\int_{t_0}^t I_C(\tau) \mathrm{d}\tau + V_C(t_0)$$
Assuming the capacitor is initially discharged, the equation becomes:
$$V_C(t) = \frac{Q_C(t)}{C} = \frac{1}{C}\int_{t_0}^t I_C(\tau) \mathrm{d}\tau$$
Taking the derivative of this and multiplying by *C* yields the
derivative form:
$$I_C(t) = \frac{\mathrm{d}Q(t)}{\mathrm{d}t} = C\frac{\mathrm{d}V_C(t)}{\mathrm{d}t}$$
In the Laplace-transformed signal space:
$$I_C(s) = V_C(s) sC$$
### Kirchhoff\'s laws equations
![](AC_Source-R-C-KCL-KVL.svg "AC_Source-R-C-KCL-KVL.svg")
#### Kirchhoff\'s Voltage Law equation $\mathrm{KVL}_1$
This circuit has only one independent loop. Its equation in the time
domain is:
$$V_R(t) + V_C(t) - V_S (t) =0$$ In the Laplace-transformed signal
space:
$$V_R(s) + V_C(s) - V_S (s) =0$$
#### Kirchhoff\'s Current Law equations $\mathrm{KCL}_1, {KCL}_2, {KCL}_3$
The circuit has three nodes, and thus three Kirchhoff current equations (expressed here as the currents flowing from the nodes):

$$\begin{align}
I_S(t) + I_R(t) &= 0 & & \mathrm{(KCL_1)} \\
I_C(t) - I_R(t) &= 0 & & \mathrm{(KCL_2)} \\
I_S(t) - I_C(t) &= 0 & & \mathrm{(KCL_3)}
\end{align}$$

In the Laplace-transformed signal space:

$$\begin{align}
I_S(s) + I_R(s) &= 0 & & \mathrm{(KCL_1)} \\
I_C(s) - I_R(s) &= 0 & & \mathrm{(KCL_2)} \\
I_S(s) - I_C(s) &= 0 & & \mathrm{(KCL_3)}
\end{align}$$

A set of independent equations must be chosen. For the current laws, it is necessary to drop one of these equations. In this example, let us choose $\mathrm{KCL}_1$ and $\mathrm{KCL}_2$.
### Building the signal-flow graph
We then look at the inventory of equations, and the signals that each
equation relates:
Equation Signals
------------------ --------------------------
$\mathrm{B_C}$ $\mathrm{V_C, I_C}$
$\mathrm{B_R}$ $\mathrm{V_R, I_R}$
$\mathrm{KVL}_1$ $\mathrm{V_R, V_C, V_S}$
$\mathrm{KCL}_1$ $\mathrm{I_S, I_R}$
$\mathrm{KCL}_2$ $\mathrm{I_R, I_C}$
The next step consists in assigning to each equation a signal that will
be represented as a node. Each independent source signal is represented
in the signal-flow graph as a source node, therefore no equation is
assigned to the independent source $\mathrm{V_S}$. There are many
possible valid signal flow graphs from this set of equations. An
equation must only be used once, and the variables of interest must be
represented.
Equation Signals Assigned signal node
------------------ -------------------------- ----------------------
$\mathrm{B_C}$ $\mathrm{V_C, I_C}$ $\mathrm{I_C}$
$\mathrm{B_R}$ $\mathrm{V_R, I_R}$ $\mathrm{V_R}$
$\mathrm{KVL}_1$ $\mathrm{V_R, V_C, V_S}$ $\mathrm{V_C}$
$\mathrm{KCL}_1$ $\mathrm{I_S, I_R}$ $\mathrm{I_S}$
$\mathrm{KCL}_2$ $\mathrm{I_R, I_C}$ $\mathrm{I_R}$
### The resulting flow graph is then drawn
![](AC_Source-R-C-SFG.svg "AC_Source-R-C-SFG.svg")
The next step consists in solving the signal-flow graph.
Using either Mason or systematic reduction, the resulting signal flow
graph is:\
![](AC_Source-R-C-SFG-Solved-V_C.svg "AC_Source-R-C-SFG-Solved-V_C.svg")
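Before turning to the graphical solution, it can be helpful to confirm the answer algebraically. The following sympy sketch solves the two branch equations together with $\mathrm{KVL}_1$, $\mathrm{KCL}_1$, and $\mathrm{KCL}_2$ for the unknown signals:

```python
import sympy as sp

# A sketch: solve the branch and Kirchhoff equations above symbolically and
# recover the capacitor voltage as a function of the source voltage.
s, R, C = sp.symbols('s R C', positive=True)
V_S, V_R, V_C, I_S, I_R, I_C = sp.symbols('V_S V_R V_C I_S I_R I_C')

eqs = [
    sp.Eq(V_R, R * I_R),            # branch equation B_R
    sp.Eq(I_C, s * C * V_C),        # branch equation B_C
    sp.Eq(V_R + V_C - V_S, 0),      # KVL_1
    sp.Eq(I_S + I_R, 0),            # KCL_1
    sp.Eq(I_C - I_R, 0),            # KCL_2
]
sol = sp.solve(eqs, [V_R, V_C, I_S, I_R, I_C], dict=True)[0]
print(sp.simplify(sol[V_C]))        # prints V_S/(C*R*s + 1) (or an equivalent form)
```

The result, $V_C(s) = \frac{V_S(s)}{1 + sRC}$, is the standard transfer function of a series RC circuit driven by a voltage source.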
### Mechatronics example
Angular position servo and signal flow graph. θ~C~ = desired angle command, θ~L~ = actual load angle, K~P~ = position loop gain, V~ωC~ = velocity command, V~ωM~ = motor velocity sense voltage, K~V~ = velocity loop gain, V~IC~ = current command, V~IM~ = current sense voltage, K~C~ = current loop gain, V~A~ = power amplifier output voltage, L~M~ = motor inductance, V~M~ = voltage across motor inductance, I~M~ = motor current, R~M~ = motor resistance, R~S~ = current sense resistance, K~M~ = motor torque constant (Nm/amp), T = torque, M = moment of inertia of all rotating components, α = angular acceleration, ω = angular velocity, β = mechanical damping, G~M~ = motor back EMF constant, G~T~ = tachometer conversion gain constant. There is one forward path (shown in a different color) and six feedback loops. The drive shaft is assumed to be stiff enough that it need not be treated as a spring. Constants are shown in black and variables in purple.
|
# Control Systems/Bode Plots
## Bode Plots
A Bode Plot is a useful tool that shows the gain and phase response of a
given LTI system for different frequencies. Bode Plots are generally
used with the Fourier Transform of a given system.
An example of a Bode magnitude and phase plot set: the magnitude plot is typically on the top, and the phase plot is typically on the bottom of the set.
The frequency axis of a Bode plot is logarithmic. Each major tickmark on the frequency axis represents a power of 10 times the previous value; for instance, on a standard Bode plot, the values of the major markers go from (0.1, 1, 10, 100, 1000, \...). Because each major tickmark is a power of 10, the interval between two of them is referred to as a **decade**. Since the axis is logarithmic, the minor tickmarks within each decade are not evenly spaced: they bunch together as you move to the right within the decade.
The Bode magnitude plot measures the system input/output ratio in special units called **decibels**. The Bode phase plot measures the phase shift, typically in degrees (radians are also used).
### Decibels
A **Decibel** is a ratio between two numbers on a logarithmic scale. To
express a ratio between two numbers (A and B) as a decibel we apply the
following formula for numbers that represent amplitudes (numbers that
represent a power measurement use a factor of 10 rather than 20):
$$dB = 20 \log\left({A \over B}\right)$$
Where dB is the decibel result.
Or, if we just want to take the decibels of a single number C, we could
just as easily write:
$$dB = 20 \log(C)$$
### Frequency Response Notations
If we have a system transfer function T(s), we can separate it into a
numerator polynomial N(s) and a denominator polynomial D(s). We can
write this as follows:
$$T(s) = \frac{N(s)}{D(s)}$$
To get the magnitude gain plot, we must first convert the transfer function into the frequency response by using the change of variables:
$$s = j\omega$$
From here, we can say that our frequency response is a composite of two
parts, a real part R and an imaginary part X:
$$T(j\omega) = R(\omega) + jX(\omega)$$
We will use these forms below.
### Straight-Line Approximations
The Bode magnitude and phase plots can be quickly and easily
approximated by using a series of straight lines. These approximate
graphs can be generated by following a few short, simple rules (listed
below). Once the straight-line graph is determined, the actual Bode plot
is a smooth curve that follows the straight lines, and travels through
the **breakpoints**.
### Break Points
If the frequency response is in pole-zero form:
$$T(j\omega) = \frac{\prod_n|j\omega + z_n|}{\prod_m|j\omega + p_m|}$$
We say that the values for all z~n~ and p~m~ are called **break points**
of the Bode plot. These are the values where the Bode plots experience
the largest change in direction.
Break points are sometimes also called \"break frequencies\", \"cutoff
points\", or \"corner points\".
## Bode Gain Plots
**Bode Gain Plots**, or **Bode Magnitude Plots** display the ratio of
the system gain at each input frequency.
### Bode Gain Calculations
The magnitude of the transfer function T is defined as:
$$|T(j\omega)| = \sqrt{R^2 + X^2}$$
However, it is frequently difficult to convert a function from \"numerator/denominator\" form into \"real+imaginary\" form. Luckily, our decibel calculation comes in handy. Let\'s say we have a frequency response defined as a fraction with numerator and denominator polynomials, as follows:
$$T(j\omega) = \frac{\prod_n|j\omega + z_n|}{\prod_m|j\omega + p_m|}$$
If we convert both sides to decibels, the logarithms from the decibel
calculations convert multiplication of the arguments into additions, and
the divisions into subtractions:
$$Gain = \sum_n20\log(|j\omega + z_n|) - \sum_m20\log(|j\omega + p_m|)$$
And calculating out the gain of each term and adding them together will
give the gain of the system at that frequency.
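A minimal numerical sketch of this calculation (the break points below are illustrative values, not from the text):

```python
import numpy as np

# Minimal sketch: evaluate the Bode magnitude (in dB) at a frequency w by
# summing 20*log10 terms for the zeros and subtracting them for the poles.
zeros = [1.0]           # z_n values, i.e. factors (jw + z_n) in the numerator
poles = [10.0, 100.0]   # p_m values, i.e. factors (jw + p_m) in the denominator

def gain_db(w):
    jw = 1j * w
    g = sum(20 * np.log10(abs(jw + z)) for z in zeros)
    g -= sum(20 * np.log10(abs(jw + p)) for p in poles)
    return g

for w in (0.1, 1.0, 10.0, 100.0):
    print(f"w = {w:6.1f} rad/s -> gain = {gain_db(w):7.2f} dB")
```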
### Bode Gain Approximations
The slope of a straight line on a Bode magnitude plot is measured in
units of **dB/Decade**, because the units on the vertical axis are dB,
and the units on the horizontal axis are decades.
The value ω = 0 is infinitely far to the left of the Bode plot (because a logarithmic scale never reaches zero), so the gain evaluated at (or near) ω = 0 sets the level of the Bode plot from the far left of the graph up to the first break point. The slope of the line in this region is 0 dB/Decade.
From each pole break point, the slope of the line decreases by
20 dB/Decade. The line is straight until it reaches the next break
point. From each zero break point the slope of the line increases by
20 dB/Decade. Double, triple, or higher amounts of repeat poles and
zeros affect the gain by multiplicative amounts. Here are some examples:
- 2 poles: -40 dB/Decade
- 10 poles: -200 dB/Decade
- 5 zeros: +100 dB/Decade
## Bode Phase Plots
**Bode phase plots** are plots of the phase shift to an input waveform
dependent on the frequency characteristics of the system input. Again,
the Laplace transform does not account for the phase shift
characteristics of the system, but the Fourier Transform can. The phase
of a complex function, in \"real+imaginary\" form is given as:
$$\angle T(j\omega) = \tan^{-1}\left(\frac{X}{R}\right)$$
## Bode Procedure
Given a frequency response in pole-zero form:
$$T(j\omega) = A\frac{\prod_n|j\omega + z_n|}{\prod_m|j\omega + p_m|}$$
Where A is a non-zero constant (can be negative or positive).
Here are the steps involved in sketching the approximate Bode magnitude
plots:
Here are the steps to drawing the Bode phase plots:
## Examples
### Example: Constant Gain
### Example: Integrator
### Example: Differentiator
### Example: 1st Order, Low-pass Filter (1 Break Point)
## Further reading
- Circuit Theory/Bode Plots
- Bode Plots on
ControlTheoryPro.com
|
# Control Systems/Stability
## Stability
When a system is unstable, the output of the system may be infinite even
though the input to the system was finite. This causes a number of
practical problems. For instance, a robot arm controller that is
unstable may cause the robot to move dangerously. Also, systems that are
unstable often incur a certain amount of physical damage, which can
become costly. Nonetheless, many systems are inherently unstable - a
fighter jet, for instance, or a rocket at liftoff, are examples of
naturally unstable systems. Although we can design controllers that
stabilize the system, it is first important to understand what stability
is, how it is determined, and why it matters.
The chapters in this section are heavily mathematical and many require a
background in linear differential equations. Readers without a strong
mathematical background might want to review the necessary chapters in
the Calculus and Ordinary Differential
Equations books (or
equivalent) before reading this material.
For most of this chapter we will be assuming that the system is linear
and can be represented either by a set of transfer functions or in state
space. Linear systems have an associated characteristic polynomial which
tells us a great deal about the stability of the system. If any
coefficient of the characteristic polynomial is zero or negative then
the system is either unstable or at most marginally stable. It is
important to note that even if all of the coefficients of the
characteristic polynomial are positive the system may still be unstable.
We will look into this in more detail below.
## BIBO Stability
A system is defined to be **BIBO Stable** if every bounded input to the
system results in a bounded output over the time interval
$[t_0, \infty)$. This must hold for all initial times t~0~. So long as we don\'t input infinity to our system, we won\'t get infinity output.
A system is defined to be **uniformly BIBO Stable** if there exists a
positive constant *k* that is independent of t~0~ such that for all t~0~
the following conditions:
$$\|u(t)\| \le 1$$
$$t \ge t_0$$
imply that
$$\|y(t)\| \le k$$
There are a number of different types of stability, and keywords that
are used with the topic of stability. Some of the important words that
we are going to be discussing in this chapter, and the next few chapters
are: **BIBO Stable**, **Marginally Stable**, **Conditionally Stable**,
**Uniformly Stable**, **Asymptotically Stable**, and **Unstable**. All
of these words mean slightly different things.
## Determining BIBO Stability
We can prove mathematically that a system f is BIBO stable if an
arbitrary input x is bounded by two finite but large arbitrary constants
M and -M:
$$-M < x \le M$$
We apply the input x, and the arbitrary boundaries M and -M to the
system to produce three outputs:
$$y_x = f(x)$$
$$y_M = f(M)$$
$$y_{-M} = f(-M)$$
Now, all three outputs should be finite for all possible values of M and
x, and they should satisfy the following relationship:
$$y_{-M} \le y_x \le y_M$$
If this condition is satisfied, then the system is BIBO stable.
A SISO linear time-invariant (LTI) system is BIBO stable if and only if its impulse response $g(t)$ is absolutely integrable over $[0, \infty)$, that is:
$$\int_{0}^{\infty} |g(t)| \,dt \leq M < {\infty}$$
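As a small numerical illustration of this test (the impulse response $g(t) = e^{-2t}$ is an assumed example, not from the text), the integral of $|g(t)|$ can be approximated and checked to be finite:

```python
import numpy as np

# Numerical sketch of the BIBO test: approximate the integral of |g(t)| for an
# assumed impulse response g(t) = exp(-2t) with a simple Riemann sum.
dt = 1e-4
t = np.arange(0.0, 50.0, dt)
g = np.exp(-2.0 * t)

integral = np.sum(np.abs(g)) * dt
print(integral)   # ~0.5, which is finite, so this impulse response passes the test
```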
### Example
## Poles and Stability
When the poles of the closed-loop transfer function of a given system
are located in the right-half of the S-plane (RHP), the system becomes
unstable. When the poles of the system are located in the left-half
plane (LHP) and the system is not improper, the system is shown to be
stable. A number of tests deal with this particular facet of stability:
The **Routh-Hurwitz Criteria**, the **Root-Locus**, and the **Nyquist
Stability Criteria** all test whether there are poles of the transfer
function in the RHP. We will learn about all these tests in the upcoming
chapters.
If the system is a multivariable, or a MIMO system, then the system is
stable if and only if *every pole of every transfer function* in the
transfer function matrix has a negative real part and every transfer
function in the transfer function matrix is not improper. For these
systems, it is possible to use the Routh-Hurwitz, Root Locus, and
Nyquist methods described later, but these methods must be performed
once for each individual transfer function in the transfer function
matrix.
## Poles and Eigenvalues
The poles of the transfer function, and the eigenvalues of the system
matrix A are related. In fact, we can say that the eigenvalues of the
system matrix A *are the poles of the transfer function* of the system.
In this way, if we have the eigenvalues of a system in the state-space
domain, we can use the Routh-Hurwitz, and Root Locus methods as if we
had our system represented by a transfer function instead.
On a related note, eigenvalues and all methods and mathematical
techniques that use eigenvalues to determine system stability *only work
with time-invariant systems*. In systems which are time-variant, the
methods using eigenvalues to determine system stability fail.
## Transfer Functions Revisited
We are going to have a brief refresher here about transfer functions, because several of the later chapters will use transfer functions for analyzing system stability.
Let us remember our generalized feedback-loop transfer function, with a
gain element of K, a forward path Gp(s), and a feedback of Gb(s). We
write the transfer function for this system as:
$$H_{cl}(s) = \frac{KGp(s)}{1 + H_{ol}(s)}$$
Where $H_{cl}$ is the closed-loop transfer function, and $H_{ol}$ is the
open-loop transfer function. Again, we define the open-loop transfer
function as the product of the forward path and the feedback elements,
as such:
$$H_{ol}(s) = KGp(s)Gb(s)$$

(Note: this definition contradicts the updated definition given in the Feedback Loops section.)
Now, we can define F(s) to be the **characteristic equation**. F(s) is
simply the denominator of the closed-loop transfer function, and can be
defined as such:
$$F(s) = 1 + H_{ol} = D(s)$$
We can say conclusively that the roots of the characteristic equation
are the poles of the transfer function. Now, we know a few simple facts:
1. The locations of the poles of the closed-loop transfer function
determine if the system is stable or not
2. The zeros of the characteristic equation are the poles of the
closed-loop transfer function.
3. The characteristic equation is always a simpler equation than the
closed-loop transfer function.
These facts, combined, show us that we can focus our attention on the characteristic equation, and find the roots of that equation.
## State-Space and Stability
As we have discussed earlier, the system is stable if the eigenvalues of
the system matrix A have negative real parts. However, there are other
stability issues that we can analyze, such as whether a system is
*uniformly stable*, *asymptotically stable*, or otherwise. We will
discuss all these topics in a later chapter.
## Marginal Stability
When the poles of the system in the complex s-domain exist on the
imaginary axis (the vertical axis), or when the eigenvalues of the
system matrix are imaginary (no real part), the system exhibits
oscillatory characteristics, and is said to be marginally stable. A
marginally stable system may become unstable under certain
circumstances, and may be perfectly stable under other circumstances. It
is impossible to tell by inspection whether a marginally stable system
will become unstable or not.
We will discuss marginal stability more in the following chapters.
|
# Control Systems/State-Space Stability
## State-Space Stability
If a system is represented in the state-space domain, it doesn\'t make
sense to convert that system to a transfer function representation (or
even a transfer matrix representation) in an attempt to use any of the
previous stability methods. Luckily, there are other analysis methods
that can be used with the state-space representation to determine if a
system is stable or not. First, let us introduce the notion of instability:
Also, a key concept when we are talking about stability of systems is
the concept of an **equilibrium point**:
The definitions below typically require that the equilibrium point be zero. If we have an equilibrium point *x~e~ = a*, then we can use the change of variables $\bar{x} = x - a$ to shift the equilibrium point to zero:
$$\bar{x}_e = x_e - a = 0$$
We will also see below that a system\'s stability is defined in terms of
an equilibrium point. Related to the concept of an equilibrium point is
the notion of a **zero point**:
### Stability Definitions
The equilibrium *x = 0* of the system is stable if and only if the
solutions of the zero-input state equation are bounded. Equivalently, *x
= 0* is a stable equilibrium if and only if for every initial time t~0~,
there exists an associated finite constant *k*(t~0~) such that:
$$\operatorname{sup}_{t \ge t_0}\|\phi(t, t_0)\| = k(t_0) < \infty$$
Where *sup* is the **supremum**, or \"maximum\" value of the equation.
The maximum value of this equation must never exceed the arbitrary
finite constant *k* (and therefore it may not be infinite at any point).
Uniform stability is a more general, and more powerful form of stability
than was previously provided.
A time-invariant system is asymptotically stable if all the eigenvalues
of the system matrix A have negative real parts. If a system is
asymptotically stable, it is also BIBO stable. However the inverse is
not true: A system that is BIBO stable might not be asymptotically
stable.
For linear systems, uniform asymptotic stability is the same as
**exponential stability**. This is not the case with non-linear systems.
### Marginal Stability
Here we will discuss some rules concerning systems that are marginally
stable. Because we are discussing eigenvalues and eigenvectors, these
theorems only apply to time-invariant systems.
1. A time-invariant system is marginally stable if and only if all the
eigenvalues of the system matrix A are zero or have negative real
parts, and those with zero real parts are simple roots of the
minimal polynomial of A.
2. The equilibrium *x = 0* of the state equation is *uniformly stable*
if all eigenvalues of A have non-positive real parts, and there is a
complete set of distinct eigenvectors associated with the
eigenvalues with zero real parts.
3. The equilibrium *x = 0* of the state equation is *exponentially
stable* if and only if all eigenvalues of the system matrix A have
negative real parts.
## Eigenvalues and Poles
A Linearly Time Invariant (LTI) system is stable (asymptotically stable,
see above) if all the eigenvalues of A have negative real parts.
Consider the following state equation:
$$x' = Ax(t) + Bu(t)$$
We can take the Laplace Transform of both sides of this equation, using
initial conditions of x~0~ = 0:
$$sX(s) = AX(s) + BU(s)$$
Subtract AX(s) from both sides:
$$sX(s) - AX(s) = BU(s)$$
$$(sI - A)X(s) = BU(s)$$
Assuming (sI - A) is nonsingular, we can multiply both sides by the
inverse:
$$X(s) = (sI - A)^{-1}BU(s)$$
Now, if we remember our formula for finding the matrix inverse from the
adjoint matrix:
$$A^{-1} = \frac{\operatorname{adj}(A)}{|A|}$$
We can use that definition here:
$$X(s) = \frac{\operatorname{adj}(sI - A)BU(s)}{|(sI - A)|}$$
Let\'s look at the denominator (which we will now call D(s)) more
closely. To be stable, the following condition must be true:
$$D(s) = |(sI - A)| = 0$$
And if we substitute λ for s, we see that this is actually the
characteristic equation of matrix A! This means that the values for s
that satisfy the equation (the poles of our transfer function) are
precisely the eigenvalues of matrix A. In the S domain, it is required
that all the poles of the system be located in the left-half plane, and
therefore all the eigenvalues of A must have negative real parts.
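A minimal sketch of this check in Python, using an illustrative system matrix:

```python
import numpy as np

# Minimal sketch: check asymptotic stability by inspecting the real parts of
# the eigenvalues of the system matrix A (an illustrative matrix).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigenvalues = np.linalg.eigvals(A)
stable = np.all(eigenvalues.real < 0)
print("eigenvalues:", eigenvalues, "-> asymptotically stable:", stable)
```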
## Impulse Response Matrix
We can define the **Impulse response matrix**, *G*(t, τ) in order to
define further tests for stability:
$$G(t, \tau) = \left\{\begin{matrix}C(t)\phi(t, \tau)B(\tau) & \mbox{ if } t \ge \tau \\0 & \mbox{ if } t < \tau\end{matrix}\right.$$
The system is *uniformly stable* if and only if there exists a finite
positive constant *L* such that for all time *t* and all initial
conditions t~0~ with $t \ge t_0$ the following integral is satisfied:
$$\int_0^t \|G(t, \tau)\|d\tau \le L$$
In other words, the above integral must have a finite value, or the
system is not uniformly stable.
In the time-invariant case, the impulse response matrix reduces to:
$$G(t) = \left\{\begin{matrix}Ce^{At}B & \mbox{ if } t \ge 0 \\0 & \mbox{ if } t < 0\end{matrix}\right.$$
In a time-invariant system, we can use the impulse response matrix to
determine if the system is uniformly BIBO stable by taking a similar
integral:
$$\int_0^\infty \|G(t)\|dt \le L$$
Where *L* is a finite constant.
## Positive Definiteness
These terms are important, and will be used in further discussions on
this topic.
- f(x) is **positive definite** if f(0) = 0 and f(x) \> 0 for all x ≠ 0.
- f(x) is **positive semi-definite** if $f(x) \ge 0$ for all x; it may equal zero at some x ≠ 0.
- f(x) is **negative definite** if f(0) = 0 and f(x) \< 0 for all x ≠ 0.
- f(x) is **negative semi-definite** if $f(x) \le 0$ for all x; it may equal zero at some x ≠ 0.
A Hermitian matrix X is positive definite if all its leading principal minors are positive. Also, a Hermitian matrix X is positive definite if all its eigenvalues are positive. These two methods may be used interchangeably.
Positive definiteness is a very important concept. So much so that the
Lyapunov stability test depends on it. The other categorizations are not
as important, but are included here for completeness.
## Lyapunov Stability
### Lyapunov\'s Equation
For linear systems, we can use the **Lyapunov Equation**, below, to
determine if a system is stable. We will state the Lyapunov Equation
first, and then state the **Lyapunov Stability Theorem**.
$$MA + A^TM = -N$$
Where A is the system matrix, and M and N are *p* × *p* square matrices.
Notice that for the Lyapunov Equation to be satisfied, the matrices must
be compatible sizes. In fact, matrices A, M, and N must all be square
matrices of equal size. Alternatively, we can write M explicitly as the integral:

$$M = \int_0^\infty e^{A^Tt} N e^{At}\, dt$$

If the matrix M can be calculated in this manner (that is, if the integral converges to a positive definite M for a positive definite N), the system is asymptotically stable.
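A short sketch of this test, assuming SciPy\'s `solve_continuous_lyapunov` routine is available: choose a positive definite N (the identity), solve $A^TM + MA = -N$ for M, and check that M is positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch of the Lyapunov test: pick N = I (positive definite), solve
# A^T M + M A = -N for M, and check that M is positive definite.
# The matrix A is an illustrative stable system matrix.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
N = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a^T = q,
# so passing a = A.T and q = -N gives A^T M + M A = -N.
M = solve_continuous_lyapunov(A.T, -N)

eigs_M = np.linalg.eigvalsh(M)   # M is symmetric, so eigvalsh applies
print("M =\n", M)
print("eigenvalues of M:", eigs_M, "-> asymptotically stable:", np.all(eigs_M > 0))
```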
|
# Control Systems/Discrete Time Stability
## Discrete-Time Stability
The stability analysis of a discrete-time or digital system is similar
to the analysis for a continuous time system. However, there are enough
differences that it warrants a separate chapter.
## Input-Output Stability
### Uniform Stability
An LTI causal system is uniformly BIBO stable if there exists a positive
constant L such that the following conditions:
$$x[n_0] = 0$$
$$\|u[n]\| \le k$$
$$k \ge 0$$
imply that
$$\|y[n]\| \le L$$
### Impulse Response Matrix
We can define the **impulse response matrix** of a discrete-time system
as:
$$G[n] = \left\{\begin{matrix}CA^{n-1}B & \mbox{ if } n > 0 \\ 0 & \mbox{ if } n \le 0\end{matrix}\right.$$
Or, in the general time-varying case:
$$G[n] = \left\{\begin{matrix}C\phi[n, n_0]B & \mbox{ if } n > 0 \\ 0 & \mbox{ if } n \le 0\end{matrix}\right.$$
A digital system is BIBO stable if and only if there exists a positive
constant *L* such that for all non-negative *k*:
$$\sum_{n = 0}^{k}\|G[n]\| \le L$$
## Stability of Transfer Function
A MIMO discrete-time system is BIBO stable if and only if every pole of
every transfer function in the transfer function matrix has a magnitude
less than 1. All poles of all transfer functions must exist inside the
unit circle on the Z plane.
## Lyapunov Stability
There is a discrete version of the Lyapunov stability theorem that
applies to digital systems. Given the **discrete Lyapunov equation**:
$$A^TMA - M = -N$$
We can use this version of the Lyapunov equation to define a condition
for stability in discrete-time systems:
## Poles and Eigenvalues
Every pole of G(z) is an eigenvalue of the system matrix A. Not every
eigenvalue of A is a pole of G(z). Like the poles of the transfer
function, all the eigenvalues of the system matrix must have magnitudes
less than 1. Mathematically:
$$\sqrt{\operatorname{Re}(z)^2 + \operatorname{Im}(z)^2} < 1$$
If the magnitude of the eigenvalues of the system matrix A, or the poles
of the transfer functions are greater than 1, the system is unstable.
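A minimal sketch of this check, again with an illustrative system matrix:

```python
import numpy as np

# Minimal sketch: a discrete-time system is stable when every eigenvalue of A
# (and hence every pole of G(z)) lies strictly inside the unit circle.
A = np.array([[0.5, 0.1],
              [0.0, -0.8]])

eigenvalues = np.linalg.eigvals(A)
magnitudes = np.abs(eigenvalues)
print("eigenvalue magnitudes:", magnitudes,
      "-> stable:", np.all(magnitudes < 1.0))
```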
## Finite Wordlengths
Digital computer systems have an inherent problem because implementable
computer systems have finite wordlengths to deal with. Some of the
issues are:
1. Real numbers can only be represented with a finite precision.
Typically, a computer system can only accurately represent a number
to a finite number of decimal points.
2. Because of the fact above, computer systems with feedback can
compound errors with each program iteration. Small errors in one
step of an algorithm can lead to large errors later in the program.
3. Integer numbers in computer systems have finite lengths. Because of
this, integer numbers will either **roll-over**, or **saturate**,
depending on the design of the computer system. Both situations can
create inaccurate results.
|
# Control Systems/Routh-Hurwitz Criterion
## Stability Criteria
The Routh-Hurwitz stability criterion provides a simple algorithm to
decide whether or not the zeros of a polynomial are all in the left half
of the complex plane (such a polynomial is called at times \"Hurwitz\").
A Hurwitz polynomial is a key requirement for a linear, continuous-time, time-invariant system to be stable (all bounded inputs produce bounded outputs).
Necessary stability conditions: Conditions that must hold for a polynomial to be Hurwitz.
If any of them fails - the polynomial is not stable. However, they may
all hold without implying stability.
Sufficient stability conditions: Conditions that if met imply that the polynomial is stable. However, a polynomial may be stable without implying some or any of them.
The Routh criterion provides conditions that are both necessary and sufficient for a polynomial to be Hurwitz.
## Routh-Hurwitz Criteria
The Routh-Hurwitz criterion consists of three separate tests that
must be satisfied. If any single test fails, the system is not stable
and further tests need not be performed. For this reason, the tests are
arranged in order from the easiest to determine to the hardest.
The Routh Hurwitz test is performed on the denominator of the transfer
function, the **characteristic equation**. For instance, in a
closed-loop transfer function with G(s) in the forward path, and H(s) in
the feedback loop, we have:
$$T(s) = \frac{G(s)}{1 + G(s)H(s)}$$
If we simplify this equation, we will have an equation with a numerator
N(s), and a denominator D(s):
$$T(s) = \frac{N(s)}{D(s)}$$
The Routh-Hurwitz criteria will focus on the denominator polynomial
D(s).
### Routh-Hurwitz Tests
Here are the three tests of the Routh-Hurwitz Criteria. For convenience,
we will use N as the order of the polynomial (the value of the highest
exponent of s in D(s)). The equation D(s) can be represented generally
as follows:
$$D(s) = a_0 + a_1s + a_2s^2 + \cdots + a_Ns^N$$
We will explain the Routh array below.
### The Routh\'s Array
The Routh array is formed by taking all the coefficients a~i~ of D(s),
and staggering them in array form. The final columns for each row should
contain zeros:
$$\begin{matrix}s^N \\ s^{N-1} \end{matrix}
\begin{vmatrix}a_N & a_{N - 2} & \cdots & 0 \\ a_{N-1} & a_{N-3} & \cdots & 0
\end{vmatrix}$$
Therefore, if N is odd, the top row will be all the odd coefficients. If
N is even, the top row will be all the even coefficients. We can fill in
the remainder of the Routh Array as follows:
$$\begin{matrix}s^N \\ s^{N-1} \\ \\ \\ s^0 \end{matrix}
\begin{vmatrix}a_N & a_{N - 2} & \cdots & 0
\\ a_{N-1} & a_{N-3} & \cdots & 0
\\ b_{N-1} & b_{N-3} & \cdots
\\ c_{N-1} & c_{N-3} & \cdots
\\ \cdots
\end{vmatrix}$$
Now, we can define all our b, c, and other coefficients, until we reach
row s^0^. To fill them in, we use the following formulae:
$$b_{N-1} = \frac{-1}{a_{N-1}}\begin{vmatrix}a_N & a_{N-2} \\ a_{N-1} & a_{N-3}\end{vmatrix}$$
And
$$b_{N-3} = \frac{-1}{a_{N-1}}\begin{vmatrix}a_N & a_{N-4} \\ a_{N-1} & a_{N-5}\end{vmatrix}$$
For each row that we are computing, we call the left-most element in the
row directly above it the **pivot element**. For instance, in row b, the
pivot element is a~N-1~, and in row c, the pivot element is b~N-1~ and
so on and so forth until we reach the bottom of the array.
To obtain any element, we negate the determinant of the following
matrix, and divide by the pivot element:
$$\begin{vmatrix}k & m \\ l & n \end{vmatrix}$$
Where:
- **k** is the left-most element two rows above the current row.
- **l** is the pivot element.
- **m** is the element two rows up, and one column to the right of the
current element.
- **n** is the element one row up, and one column to the right of the
current element.
In terms of **k l m n**, our equation is:
$$v = \frac{(lm) - (kn)}{l}$$
### Example: Calculating C~N-3~
$$c_{N-3} = \frac{-1}{b_{N-1}}\begin{vmatrix}a_{N-1} & a_{N-5} \\ b_{N-1} & b_{N-5}\end{vmatrix} = \frac{a_{N-1}b_{N-5} - b_{N-1}a_{N-5}}{-b_{N-1}}$$
### Example: Stable Third Order System
Consider the characteristic polynomial $D(s) = s^3 + 2s^2 + 4s + 3$. The first two rows of the Routh array contain its coefficients, and the remaining entries are computed with the formulas above:

$$b_{N-1} = \frac{-1}{2}\begin{vmatrix}1 & 4 \\ 2 & 3\end{vmatrix} = \frac{8 - 3}{2} = \frac{5}{2}, \qquad b_{N-3} = \frac{-1}{2}\begin{vmatrix}1 & 0 \\ 2 & 0\end{vmatrix} = 0$$

$$c_{N-1} = \frac{-1}{5/2}\begin{vmatrix}2 & 3 \\ \frac{5}{2} & 0\end{vmatrix} = \frac{15/2}{5/2} = 3$$

$$c_{N-3} = \frac{(2)(0) - \left(\frac{5}{2}\right)(0)}{\frac{5}{2}} = 0$$
And filling these values into our Routh Array, we can determine whether
the system is stable:
$$\begin{matrix}s^3 \\ s^2 \\ s^1 \\ s^0 \end{matrix}
\begin{vmatrix}1 & 4 & 0
\\ 2 & 3 & 0
\\ \frac{5}{2} & 0 & 0
\\ 3 & 0 & 0
\end{vmatrix}$$
From this array, we can clearly see that all of the signs of the first column are positive, there are no sign changes, and therefore there are no poles of the characteristic equation in the RHP.
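The basic array construction (ignoring the special cases discussed below) is straightforward to automate. The sketch below rebuilds the array for the third-order example above and counts the sign changes in the first column:

```python
import numpy as np

# A sketch of the basic Routh array construction (no special cases handled:
# it assumes no zero pivots and no all-zero rows).  Coefficients are listed
# from the highest power of s down to s^0.
def routh_array(coeffs):
    n = len(coeffs) - 1                           # order of the polynomial
    cols = (n // 2) + 1
    table = np.zeros((n + 1, cols))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # s^N row
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # s^(N-1) row
    for i in range(2, n + 1):
        pivot = table[i - 1, 0]
        for j in range(cols - 1):
            # v = (l*m - k*n) / l, with k, l, m, n taken from the two rows above
            table[i, j] = (pivot * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / pivot
    return table

# The third-order example above: D(s) = s^3 + 2s^2 + 4s + 3
table = routh_array([1, 2, 4, 3])
print(table)
first_col = table[:, 0]
sign_changes = np.sum(np.diff(np.sign(first_col)) != 0)
print("sign changes in first column:", sign_changes)   # 0 -> no RHP poles
```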
### Special Case: Row of All Zeros
If, while calculating our Routh-Hurwitz, we obtain a row of all zeros,
we do not stop, but can actually learn more information about our
system.
If we have a row of all zeros, the row directly above it is known as the
**Auxiliary Polynomial**, and can be very helpful. The roots of the
auxiliary polynomial give us the precise locations of complex conjugate
roots that lie on the jω axis. However, one important point to notice is
that if there are repeated roots on the jω axis, the system is actually
**unstable**. Therefore, we must use the auxiliary polynomial to
determine whether the roots are repeated or not.
The auxiliary polynomial is differentiated with respect to s, and the coefficients of the resulting equation replace the all-zero row. The Routh array can then be calculated further using these new values.
### Special Case: Zero in the First Column
In this special case, there is a zero in the first column of the Routh
Array, but the other elements of that row are non-zero. Like the above
case, we can replace the zero with a small variable epsilon (ε) and use
that variable to continue our calculations. After we have constructed
the entire array, we can take the limit as epsilon approaches zero to
get our final values. If the sign of the coefficient above the (ε) is the same as the sign of the coefficient below it, this indicates a pure imaginary root.
|
# Control Systems/Jurys Test
## Routh-Hurwitz in Digital Systems
Because of the differences in the Z and S domains, the Routh-Hurwitz
criteria can not be used directly with digital systems. This is because
digital systems and continuous-time systems have different regions of
stability. However, there are some methods that we can use to analyze
the stability of digital systems. Our first option (and arguably not a
very good option) is to convert the digital system into a
continuous-time representation using the **bilinear transform**. The
bilinear transform converts an equation in the Z domain into an equation
in the W domain, that has properties similar to the S domain. Another
possibility is to use **Jury\'s Stability Test**. Jury\'s test is a
procedure similar to the RH test, except it has been modified to analyze
digital systems in the Z domain directly.
### Bilinear Transform
One common, but time-consuming, method of analyzing the stability of a
digital system in the z-domain is to use the bilinear transform to
convert the transfer function from the z-domain to the w-domain. The
w-domain is similar to the s-domain in the following ways:
- Poles in the right-half plane are unstable
- Poles in the left-half plane are stable
- Poles on the imaginary axis are partially stable
The w-domain is warped with respect to the s domain, however, and except
for the relative position of poles to the imaginary axis, they are not
in the same places as they would be in the s-domain.
Remember, however, that the Routh-Hurwitz criterion can tell us whether
a pole is unstable or not, and nothing else. Therefore, it doesn\'t
matter where exactly the pole is, so long as it is in the correct
half-plane. Since we know that stable poles are in the left-half of the
w-plane and the s-plane, and that unstable poles are on the right-hand
side of both planes, we can use the Routh-Hurwitz test on functions in
the w domain exactly like we can use it on functions in the s-domain.
### Other Mappings
There are other methods for mapping an equation in the Z domain into an
equation in the S domain, or a similar domain. We will discuss these
different methods in the
**Appendix**.
## Jury\'s Test
Jury\'s test is a test that is similar to the Routh-Hurwitz criterion,
except that it can be used to analyze the stability of an LTI digital
system in the Z domain. To use Jury\'s test to determine if a digital
system is stable, we must check our z-domain characteristic equation
against a number of specific rules and requirements. If the function
fails any requirement, it is not stable. If the function passes all the
requirements, it is stable. Jury\'s test is a necessary and sufficient
test for stability in digital systems.
Again, we call D(z) the **characteristic polynomial** of the system. It
is the denominator polynomial of the Z-domain transfer function. Jury\'s
test will focus exclusively on the Characteristic polynomial. To perform
Jury\'s test, we must perform a number of smaller tests on the system.
If the system fails any test, it is unstable.
### Jury Tests
Given a characteristic equation in the form:
$$D(z) = a_0 + a_1z + a_2z^2 + \cdots + a_Nz^N$$
The following tests determine whether this system has any poles outside
the unit circle (the instability region). These tests will use the value
N as being the degree of the characteristic polynomial.
While you are constructing the Jury Array, you can be making the tests
of **Rule 4**. If the Array fails **Rule 4** at any point, you can stop
calculating the array: your system is unstable. We will discuss the
construction of the Jury Array below.
### The Jury Array
The Jury Array is constructed by first writing out a row of
coefficients, and then writing out another row with the same
coefficients in reverse order. For instance, if your polynomial is a
third order system, we can write the first two rows of the Jury Array as follows:
$$\overline{\underline{
\begin{matrix} z^0 & z^1 & z^2 & z^3 & \ldots & z^N
\\ a_0 & a_1 & a_2 & a_3 & \ldots& a_N
\\ a_N & \ldots & a_3 & a_2 & a_1 & a_0
\end{matrix}}}$$
Now, once we have the first row of our coefficients written out, we add
another row of coefficients (we will use **b** for this row, and **c**
for the next row, as per our previous convention), and we will calculate
the values of the lower rows from the values of the upper rows. Each new
row that we add will have one fewer coefficient than the row before it:
$$\overline{\underline{
\begin{matrix} 1) & a_0 & a_1 & a_2 & a_3 & \ldots & a_N
\\ 2) & a_N & \ldots & a_3 & a_2 & a_1 & a_0
\\ 3) & b_0 & b_1 & b_2 & \ldots & b_{N-1}
\\ 4) & b_{N-1}& \ldots & b_2 & b_1 & b_0
\\ \vdots & \vdots & \vdots & \vdots
\\ 2N-3) & v_0 & v_1 & v_2
\end{matrix}}}$$ Note: The last row is row (2N-3), and it always has 3 elements. This test doesn\'t make sense if N = 1, but in that case you already know the pole!
Once we get to the row with three members (the (2N-3) row), we can stop constructing the array.
To calculate the values of the odd-number rows, we can use the following
formulae. The even number rows are equal to the previous row in reverse
order. We will use k as an arbitrary subscript value. These formulae are
reusable for all elements in the array:
$$b_k = \begin{vmatrix} a_0 & a_{N-k}
\\ a_N & a_k
\end{vmatrix}$$
$$c_k = \begin{vmatrix} b_0 & b_{N-1-k}
\\ b_{N-1} & b_k
\end{vmatrix}$$
$$d_k = \begin{vmatrix} c_0 & c_{N-2-k}
\\ c_{N-2} & c_k
\end{vmatrix}$$ This pattern can be carried on to all
lower rows of the array, if needed.
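The construction of the odd-numbered rows can be automated with the formulas above (the even-numbered rows are simply reversals, so they are not stored). A minimal sketch, with made-up coefficients, that builds rows until the three-element row is reached; the stability rules themselves are not checked:

```python
def jury_array(coeffs):
    """Build the odd-numbered rows of the Jury array (a sketch).

    `coeffs` lists a_0 ... a_N of the characteristic polynomial D(z).
    Rows are generated until the three-element row (the (2N-3) row noted
    above) remains.  The even-numbered rows are just these rows reversed.
    """
    rows = [list(coeffs)]
    row = list(coeffs)
    while len(row) > 3:
        m = len(row)
        # next_k = | row[0]    row[m-1-k] |   (the b_k / c_k / d_k formula above)
        #          | row[m-1]  row[k]     |
        row = [row[0] * row[k] - row[m - 1 - k] * row[m - 1] for k in range(m - 1)]
        rows.append(row)
    return rows

# Illustrative third-order characteristic polynomial (made-up coefficients):
for r in jury_array([0.2, 0.3, 0.4, 1.0]):
    print(r)
```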
### Example: Calculating e~5~
## Further reading
We will discuss the bilinear transform, and other methods to convert
between the Laplace domain and the Z domain in the appendix:
- Z Transform
Mappings
|
# Control Systems/Root Locus
## The Problem
Consider a system like a radio. The radio has a \"volume\" knob, that
controls the amount of gain of the system. High volume means more power
going to the speakers, low volume means less power to the speakers. As
the volume value increases, the poles of the transfer function of the
radio change, and they might potentially become unstable. We would like
to find out *if* the radio becomes unstable, and if so, we would like to
find out what values of the volume cause it to become unstable. Our
current methods would require us to plug in each new value for the
volume (gain, \"K\"), and solve the open-loop transfer function for the
roots. This process can be a long one. Luckily, there is a method called
the **root-locus** method, that allows us to graph the locations of all
the poles of the system for all values of gain, K.
## Root-Locus
As we change gain, we notice that the system poles and zeros actually
move around in the S-plane. This fact can make
life particularly difficult, when we need to solve higher-order
equations repeatedly, for each new gain value. The solution to this
problem is a technique known as **Root-Locus** graphs. Root-Locus allows
you to graph the locations of the poles and zeros *for every value of
gain*, by following several simple rules. As we know that a fan switch
also can control the speed of the fan.
Let\'s say we have a closed-loop transfer function for a particular
system:
$$\frac{N(s)}{D(s)} = \frac{KG(s)}{1 + KG(s)H(s)}$$
Where N is the numerator polynomial and D is the denominator polynomial
of the transfer functions, respectively. Now, we know that to find the
poles of the equation, we must set the denominator to 0, and solve the
characteristic equation. In other words, the locations of the poles of a
specific equation must satisfy the following relationship:
$$D(s) = 1 + KG(s)H(s) = 0$$
from this same equation, we can manipulate the equation as such:
$$1 + KG(s)H(s) = 0$$
$$KG(s)H(s) = -1$$
And finally by converting to polar coordinates:
$$\angle KG(s)H(s) = 180^\circ$$
Now we have 2 equations that govern the locations of the poles of a
system for all gain values:
$$1 + KG(s)H(s) = 0$$
$$\angle KG(s)H(s) = 180^\circ$$
### Digital Systems
The same basic method can be used for considering digital systems in the
Z-domain:
$$\frac{N(z)}{D(z)} = \frac{KG(z)}{1 + K\overline{GH}(z)}$$
Where N is the numerator polynomial in z, D is the denominator
polynomial in z, and $\overline{GH}(z)$ is the open-loop transfer
function of the system, in the Z domain.
The denominator D(z), by the definition of the characteristic equation
is equal to:
$$D(z) = 1 + K\overline{GH}(z) = 0$$
We can manipulate this as follows:
$$1 + K\overline{GH}(z) = 0$$
$$K\overline{GH}(z) = -1$$
We can now convert this to polar coordinates, and take the angle of the
polynomial:
$$\angle K\overline{GH}(z) = 180^\circ$$
We are now left with two important equations:
$$1 + K\overline{GH}(z) = 0$$
$$\angle K\overline{GH}(z) = 180^\circ$$
If you will compare the two, the Z-domain equations are nearly identical
to the S-domain equations, and act exactly the same. For the remainder
of the chapter, we will only consider the S-domain equations, with the
understanding that digital systems operate in nearly the same manner.
## The Root-Locus Procedure
In the transform domain, when the gain is small, the
poles start at the poles of the open-loop transfer function. When gain
becomes infinity, the poles move to overlap the zeros of the system.
This means that on a root-locus graph, all the poles move towards a
zero. Only one pole may move towards one zero, and this means that there
must be the same number of poles as zeros.
If there are fewer zeros than poles in the transfer function, there are
a number of implicit zeros located at infinity, that the poles will
approach.
First thing, we need to convert the magnitude equation into a slightly
more convenient form:
$$KG(s)H(s) + 1 = 0 \to G(s)H(s) = \frac{-1}{K}$$
Now, we can assume that G(s)H(s) is a fraction of some sort, with a
numerator and a denominator that are both polynomials. We can express
this equation using arbitrary functions a(s) and b(s), as such:
$$\frac{a(s)}{b(s)} = \frac{-1}{K}$$
We will refer to these functions a(s) and b(s) later in the procedure.
We can start drawing the root-locus by first placing the roots of b(s)
on the graph with an \'X\'. Next, we place the roots of a(s) on the
graph, and mark them with an \'O\'.
Next, we examine the real axis. Starting from the right-hand side of the
graph and traveling to the left, we draw a root-locus line on the
real-axis at every point to the left of an odd number of poles or zeros
on the real-axis. This may sound tricky at first, but it becomes easier
with practice.
Now, a root-locus line starts at every pole. Therefore, any place that
two poles appear to be connected by a root locus line on the real-axis,
the two poles actually move towards each other, and then they \"break
away\", and move off the axis. The point where the poles break off the
axis is called the **breakaway point**. From here, the root locus lines
travel towards the nearest zero.
It is important to note that the s-plane is symmetrical about the real
axis, so whatever is drawn on the top-half of the S-plane, must be drawn
in mirror-image on the bottom-half plane.
Once a pole breaks away from the real axis, they can either travel out
towards infinity (to meet an implicit zero), or they can travel to meet
an explicit zero, or they can re-join the real-axis to meet a zero that
is located on the real-axis. If a pole is traveling towards infinity, it
always follows an asymptote. The number of asymptotes is equal to the
number of implicit zeros at infinity.
## Root Locus Rules
Here is the complete set of rules for drawing the root-locus graph. We
will use p and z to denote the number of poles and the number of zeros
of the open-loop transfer function, respectively. We will use P~i~ and
Z~i~ to denote the location of the *i*th pole and the *i*th zero,
respectively. Likewise, we will use ψ~i~ and ρ~i~ to denote the angle
from a given point to the *i*th pole and zero, respectively. All angles
are given in radians (π denotes π radians).
There are 11 rules that, if followed correctly, will allow you to create
a correct root-locus graph.
We will explain these rules in the rest of the chapter.
## Root Locus Equations
Here are the two major equations:
| S-Domain Equations | Z-Domain Equations |
|---|---|
| $1 + KG(s)H(s) = 0$ | $1 + K\overline{GH}(z) = 0$ |
| $\angle KG(s)H(s) = 180^\circ$ | $\angle K\overline{GH}(z) = 180^\circ$ |
Note that, for a point on the root locus, the angles from the zeros minus
the angles from the poles must sum to 180°.
### Number of Asymptotes
If the number of explicit zeros of the system is denoted by Z (uppercase
z), and the number of poles of the system is given by P, then the number
of asymptotes (N~a~) is given by:
$$N_a = P - Z$$
The angles of the asymptotes are given by:
$$\phi_k = (2k + 1)\frac{\pi}{P - Z}$$
for values of $k = [0, 1, ... N_a - 1]$.
### Asymptote Intersection Point
The asymptotes intersect the real axis at the point:
$$\sigma_0 = \frac{\sum_P - \sum_Z}{P - Z}$$
Where $\sum_P$ is the sum of all the locations of the poles, and
$\sum_Z$ is the sum of all the locations of the explicit zeros.
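As a quick illustrative calculation (the numbers here are placeholders, not a system used elsewhere in this book), consider an open-loop transfer function with poles at $s = 0, -1, -2$ and no finite zeros, so that $P = 3$ and $Z = 0$. There are $N_a = 3$ asymptotes, with angles

$$\phi_k = (2k + 1)\frac{\pi}{3} = \frac{\pi}{3},\ \pi,\ \frac{5\pi}{3}$$

and they intersect the real axis at

$$\sigma_0 = \frac{(0 - 1 - 2) - 0}{3 - 0} = -1$$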
### Breakaway Points
The breakaway points are located at the roots of the following equation:
$$\frac{dG(s)H(s)}{ds} = 0$$ or $\frac{d\overline{GH}(z)}{dz} = 0$
Once you solve for s (or z), the real roots give you the breakaway/reentry
points. Complex roots correspond to a lack of breakaway/reentry.
The breakaway point equation can be difficult to solve, so many times
the actual location is approximated.
## Root Locus and Stability
The root locus procedure should produce a graph of where the poles of
the system are for all values of gain K. When any or all of the roots of
D are in the unstable region, the system is unstable. When any of the
roots are in the marginally stable region, the system is marginally
stable (oscillatory). When all of the roots of D are in the stable
region, then the system is stable.
It is important to note that a system that is stable for gain K~1~ may
become unstable for a different gain K~2~. Some systems may have poles
that cross over from stable to unstable multiple times, giving multiple
gain values for which the system is unstable.
Here is a quick refresher:
| Region | S-Domain | Z-Domain |
|---|---|---|
| Stable Region | Left-Hand S Plane ($\sigma < 0$) | Inside the Unit Circle ($\vert z \vert < 1$) |
| Marginally Stable Region | The vertical axis ($\sigma = 0$) | The Unit Circle ($\vert z \vert = 1$) |
| Unstable Region | Right-Hand S Plane ($\sigma > 0$) | Outside the Unit Circle ($\vert z \vert > 1$) |
## Examples
### Example 1: First-Order System
### Example 2: Third Order System
### Example: Complex-Conjugate Zeros
### Example: Root-Locus Using MATLAB/Octave
Use MATLAB, Octave, or another piece of mathematical
simulation software to produce the root-locus graph for the following
system:
$$T(s) = K\frac{s^2+7s+12}{s^2 + 3s + 2}$$
First, we write out the numerator and denominator polynomials:
$$N(s) = s^2+7s+12$$
$$D(s) = s^2+3s+2$$
Now, we can generate the coefficient vectors from the numerator and
denominator:
num = [0 1 7 12];    % coefficients of N(s), highest power first
den = [0 1 3 2];     % coefficients of D(s), highest power first
Next, we can feed these vectors into the **rlocus** command:
rlocus(num, den);
**Note**: In Octave, we need to create a system structure first, by
typing:
sys = tf(num, den);
rlocus(sys);
Either way, we generate the following graph:
![](Root_Locus_diagram_.svg "Root_Locus_diagram_.svg"){width="600"}
|
# Control Systems/Nyquist Stability Criteria
## Nyquist Stability Criteria
The **Nyquist Stability Criteria** is a test for system stability, just
like the
Routh-Hurwitz
test, or the Root-Locus
Methodology. However, the Nyquist Criteria can also give us additional
information about a system. Routh-Hurwitz and Root-Locus can tell us
where the poles of the system are for particular values of gain. By
altering the gain of the system, we can determine if any of the poles
move into the RHP, and therefore become unstable. The Nyquist Criteria,
however, can tell us things about the *frequency characteristics* of the
system. For instance, some systems with constant gain might be stable
for low-frequency inputs, but become unstable for high-frequency inputs.
Also, the Nyquist Criteria can tell us things about the phase of the
input signals, the time-shift of the system, and other important
information.
## Contours
A **contour** is a complicated mathematical construct, but luckily we
only need to worry ourselves with a few points about them. We will
denote contours with the Greek letter Γ (gamma). Contours are lines,
drawn on a graph, that follow certain rules:
1. The contour must close (it must form a complete loop)
2. The contour may not cross directly through a pole of the system.
3. Contours must have a direction (clockwise or counterclockwise,
generally).
4. A contour is called \"simple\" if it has no self-intersections. We
only consider simple contours here.
Once we have such a contour, we can develop some important theorems
about them, and finally use these theorems to derive the **Nyquist
stability criterion**.
## Argument Principle
Here is the argument principle, which we will use to derive the
stability criterion. Do not worry if you do not understand all the
terminology, we will walk through it:
When we have our contour, Γ, we transform it into $\Gamma_{F(s)}$ by
plugging every point of the contour into the function F(s), and taking
the resultant value to be a point on the transformed contour.
### Example: First Order System
### Example: Second-Order System
## The Nyquist Contour
The Nyquist contour, the contour that makes the entire nyquist criterion
work, must encircle the entire unstable region of the complex plane. For
analog systems, this is the right half of the complex s plane. For
digital systems, this is the entire plane outside the unit circle.
Remember that if a pole to the closed-loop transfer function (or
equivalently a zero of the characteristic equation) lies in the unstable
region of the complex plane, the system is an unstable system.
Analog Systems:The Nyquist contour for analog systems is an infinite semi-circle that encircles the entire right-half of the s plane. The semicircle travels up the imaginary axis from negative infinity to positive infinity. From positive infinity, the contour breaks away from the imaginary axis, in the clock-wise direction, and forms a giant semicircle.\
Digital Systems:The Nyquist contour in digital systems is a counter-clockwise encirclement of the unit circle.
## Nyquist Criteria
Let us first introduce the most important equation when dealing with the
Nyquist criterion:
$$N = Z - P$$
Where:
- N is the number of encirclements of the (-1, 0) point.
- Z is the number of zeros of the characteristic equation enclosed by the
  contour.
- P is the number of poles of the open-loop transfer function enclosed by
  the contour.
With this equation stated, we can now state the **Nyquist Stability
Criterion**:

**Nyquist Stability Criterion**: When P is 0, a feedback control system is stable if and only if the contour $\Gamma_{F(s)}$ in the F(s) plane does not encircle the (-1, 0) point. When P is not 0, a feedback control system is stable if and only if the contour $\Gamma_{F(s)}$ in the F(s) plane encircles the (-1, 0) point a number of times equal to the number of poles of F(s) enclosed by Γ.
In other words, if P is zero then N must equal zero. Otherwise, the
contour must encircle the (-1, 0) point P times, so that Z remains zero.
Essentially, we are saying that Z must always equal zero,
because Z is the number of zeros of the characteristic equation (and
therefore the number of poles of the closed-loop transfer function) that
are in the right-half of the s plane.
Keep in mind that we don\'t necessarily know the locations of all the
zeros of the characteristic equation. So if we find, using the Nyquist
criterion, that the number of encirclements is not equal to the number of
open-loop poles, then we know that
there must be a zero in the right-half plane, and that therefore the
system is unstable.
## Nyquist ↔ Bode
A careful inspection of the Nyquist plot will reveal a surprising
relationship to the Bode plots of the system. If we use the Bode phase
plot as the angle θ, and the Bode magnitude plot as the distance r, then
it becomes apparent that the Nyquist plot of a system is simply the
polar representation of the Bode plots.
To obtain the Nyquist plot from the Bode plots, we take the phase angle
and the magnitude value at each frequency ω. We convert the magnitude
value from decibels back into gain ratios. Then, we plot the ordered
pairs (r, θ) on a polar graph.
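As a minimal sketch of that conversion (assuming MATLAB or Octave with the control functions `tf` and `bode` available; the system `G` below is only an illustrative placeholder), we can read the magnitude and phase at each frequency and plot them as points in the complex plane. Note that `bode` with output arguments returns the magnitude as an absolute gain ratio, so the decibel conversion is only needed when reading values off a dB-scaled plot:

    s = tf('s');
    G = 10/(s^2 + 2*s + 10);        % illustrative placeholder system
    [mag, phase, w] = bode(G);      % magnitude (absolute), phase (degrees)
    r = squeeze(mag);               % distance from the origin
    theta = squeeze(phase)*pi/180;  % angle, converted to radians
    plot(r.*cos(theta), r.*sin(theta))   % the (r, θ) pairs in the complex plane

The resulting curve is the positive-frequency half of the Nyquist plot; the built-in `nyquist(G)` command also mirrors it for negative frequencies.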
## Nyquist in the Z Domain
The Nyquist Criteria can be utilized in the digital domain in a similar
manner as it is used with analog systems. The primary difference in
using the criteria is that the shape of the Nyquist contour must change
to encompass the unstable region of the Z plane. Therefore, instead of
an infinite semicircle, the Nyquist contour for digital systems is
a counter-clockwise unit circle. By changing the shape of the contour,
the same N = Z - P equation holds true, and the resulting Nyquist graph
will typically look identical to one from an analog system, and can be
interpreted in the same way.
|
# Control Systems/Controllability and Observability
## System Interaction
In the world of control engineering, there are a slew of systems
available that need to be controlled. The task of a control engineer is
to design controller and compensator units to interact with these
pre-existing systems. However, some systems simply cannot be controlled
(or, more often, cannot be controlled in specific ways). The concept of
**controllability** refers to the ability of a controller to arbitrarily
alter the functionality of the system plant.
The state-variable of a system, *x*, represents the internal workings of
the system that can be separate from the regular input-output
relationship of the system. This also needs to be measured, or
*observed*. The term **observability** describes whether the internal
state variables of the system can be externally measured.
## Controllability
Complete state controllability (or simply controllability if no other
context is given) describes the ability of an external input to move the
internal state of a system from any initial state to any other final
state in a finite time interval.
We will start off with the definitions of the term **controllability**,
and the related terms **reachability** and **stabilizability**.
We can also write out the definition of reachability more precisely:
Similarly, we can more precisely define the concept of controllability:
### Controllability Matrix
For LTI (linear time-invariant) systems, a system is reachable if and
only if its **controllability matrix**, ζ, has a full row rank of *p*,
where *p* is the dimension of the matrix A, and *p* × *q* is the
dimension of matrix B.
$$\zeta = \begin{bmatrix}B & AB & A^2B & \cdots & A^{p-1}B\end{bmatrix} \in R^{p \times pq}$$
A system is controllable or \"Controllable to the origin\" when any
state x~1~ can be driven to the zero state *x = 0* in a finite number of
steps.
A system is controllable when the rank of the system matrix A is *p*,
and the rank of the controllability matrix is equal to:
$$Rank(\zeta) = Rank(A^{-1}\zeta) = p$$
If the second equation is not satisfied, the system is not controllable.
MATLAB allows one to easily create the controllability matrix with the
**ctrb** command. To create the controllability matrix $\zeta$ simply
type
: zeta = ctrb(A, B)
where A and B are mentioned above. Then in order to determine if the
system is controllable or not one can use the rank command to determine
if it has full rank.
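For example (a minimal sketch with placeholder matrices, assuming MATLAB or Octave with the control functions available):

    A = [0 1; -2 -3];          % illustrative placeholder system
    B = [0; 1];
    zeta = ctrb(A, B);         % controllability matrix [B, A*B]
    p = size(A, 1);            % dimension of A
    if rank(zeta) == p
        disp('the system is controllable')
    else
        disp('the system is not controllable')
    end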
If
$$Rank(A) < p$$
Then controllability does not imply reachability.
- Reachability always implies controllability.
- Controllability only implies reachability when the state transition
matrix is nonsingular.
### Determining Reachability
There are four methods that can be used to determine if a system is
reachable or not:
1. If the *p* rows of $\phi(t, \tau)B(t)$ are linearly independent over
the field of complex numbers. That is, if the rank of the product of
those two matrices is equal to *p* for all values of *t* and *τ*.
2. If the rank of the controllability matrix is the same as the rank of
the system matrix A.
3. If $\operatorname{rank}[\lambda I - A,\ B] = p$ for all
eigenvalues λ of the matrix A.
4. If the rank of the **reachability gramian** (described below) is
equal to the rank of the system matrix A.
Each one of these conditions is both necessary and sufficient. If any
one test fails, all the tests will fail, and the system is not
reachable. If any test is positive, then all the tests will be positive,
and the system is reachable.
### Gramians
**Gramians** are complicated mathematical functions that can be used to
determine specific things about a system. For instance, we can use
gramians to determine whether a system is controllable or reachable.
Gramians, because they are more complicated than other methods, are
typically only used when other methods of analyzing a system fail (or
are too difficult).
All the gramians presented on this page are all matrices with dimension
*p* × *p* (the same size as the system matrix A).
All the gramians presented here will be described using the general case
of Linear time-variant systems. To change these into LTI (time-invariant
equations), the following substitutions can be used:
$$\phi(t, \tau) \to e^{A(t-\tau)}$$
$$\phi'(t, \tau) \to e^{A'(t-\tau)}$$
Where we are using the notation X\' to denote the transpose of a matrix
X (as opposed to the traditional notation X^T^).
### Reachability Gramian
We can define the **reachability gramian** as the following integral:
$$W_r(t_0, t_1) = \int_{t_0}^{t_1}\phi(t_1, \tau)B(\tau)B'(\tau)\phi'(t_1, \tau)d\tau$$
The system is reachable if the rank of the reachability gramian is the
same as the rank of the system matrix:
$$\operatorname{rank}(W_r) = p$$
### Controllability Gramian
We can define the **controllability gramian** of a system (A, B) as:
$$W_c(t_0, t_1) = \int_{t_0}^{t_1}\phi(t_0, \tau)B(\tau)B'(\tau)\phi'(t_0, \tau)d\tau$$
The system is controllable if the rank of the controllability gramian is
the same as the rank of the system matrix:
$$\operatorname{rank}(W_c) = p$$
If the system is time-invariant, there are two important points to be
made. First, the reachability gramian and the controllability gramian
reduce to be the same equation. Therefore, for LTI systems, if we have
found one gramian, then we automatically know both gramians. Second, the
controllability gramian can also be found as the solution to the
following Lyapunov equation:
$$AW_c + W_cA' = -BB'$$
Many software packages, notably MATLAB, have functions to solve the
Lyapunov equation. By using this last relation, we can also solve for
the controllability gramian using these existing functions.
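As a minimal sketch (placeholder matrices, assuming a stable LTI system and MATLAB or Octave with `lyap` available): `lyap(A, Q)` solves $AX + XA' + Q = 0$, so passing $Q = BB'$ returns the controllability gramian.

    A = [0 1; -2 -3];          % illustrative placeholder (stable) system
    B = [0; 1];
    Wc = lyap(A, B*B');        % solves A*Wc + Wc*A' + B*B' = 0
    rank(Wc)                   % full rank indicates controllability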
## Observability
The state-variables of a system might not be able to be measured for any
of the following reasons:
1. The location of the particular state variable might not be
physically accessible (a capacitor or a spring, for instance).
2. There are no appropriate instruments to measure the state variable,
or the state-variable might be measured in units for which there
does not exist any measurement device.
3. The state-variable is a derived \"dummy\" variable that has no
physical meaning.
If things cannot be directly observed, for any of the reasons above, it
can be necessary to calculate or **estimate** the values of the internal
state variables, using only the input/output relation of the system, and
the output history of the system from the starting time. In other words,
we must ask whether or not it is possible to determine what the inside
of the system (the internal system states) is like, by only observing
the outside performance of the system (input and output)? We can provide
the following formal definition of mathematical observability:
A system state x~i~ is unobservable at a given time t~i~ if the
zero-input response of the system is zero for all time t. If a system is
observable, then the only state that produces a zero output for all time
is the zero state. We can use this concept to define the term
**state-observability**.
### Constructability
A state *x* is **unconstructable** at a time t~1~ if for every finite
time t \< t~1~ the zero input response of the system is zero for all
time t.
A system is completely **state constructable** at time t~1~ if the only
state *x* that is unconstructable at t~0~ is *x* = 0.
If a system is observable over an interval beginning at t~0~, then it is
constructable at the end of that interval, t~1~ \> t~0~.
### Observability Matrix
The observability of the system is dependent only on the system states
and the system output, so we can simplify our state equations to remove
the input terms:
$$x'(t) = Ax(t)$$
$$y(t) = Cx(t)$$
Therefore, we can show that the observability of the system is dependent
only on the coefficient matrices A and C. We can show precisely how to
determine whether a system is observable, using only these two matrices.
If we have the **observability matrix** Q:
$$Q = \begin{bmatrix}C\\CA\\CA^2\\\vdots\\CA^{p-1}\end{bmatrix}$$
we can show that the system is observable if and only if the Q matrix
has a rank of *p*. Notice that the Q matrix has the dimensions *pr* ×
*p*.
MATLAB allows one to easily create the observability matrix with the
**obsv** command. To create the observability matrix $Q$ simply type
: Q=obsv(A,C)
where A and C are mentioned above. Then in order to determine if the
system is observable or not one can use the rank command to determine if
it has full rank.
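For example (a minimal sketch with placeholder matrices):

    A = [0 1; -2 -3];          % illustrative placeholder system
    C = [1 0];
    Q = obsv(A, C);            % observability matrix [C; C*A]
    if rank(Q) == size(A, 1)
        disp('the system is observable')
    end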
### Observability Gramian
We can define an **observability gramian** as:
$$W_o(t_0, t_1) = \int_{t_0}^{t_1} \phi'(\tau, t_0)C'(\tau)C(\tau)\phi(\tau, t_0)d\tau$$
A system is completely state observable at time t~0~ \< t \< t~1~ if and
only if the rank of the observability gramian is equal to the size *p*
of the system matrix A.
If the system (A, B, C, D) is time-invariant, we can construct the
observability gramian as the solution to the Lyapunov equation:
$$A'W_o + W_oA = -C'C$$
### Constructability Gramian
We can define a **constructability gramian** as:
$$W_{cn}(t_0, t_1) = \int_{t_0}^{t_1} \phi'(\tau, t_1)C'(\tau)C(\tau)\phi(\tau, t_1)d\tau$$
A system is completely state observable at an initial time t~0~ if and
only if there exists a finite t~1~ such that:
$$\operatorname{rank} (W_o) = \operatorname{rank} (W_{cn}) = p$$
Notice that the constructability and observability gramians are very
similar, and typically they can both be calculated at the same time,
only substituting in different values into the state-transition matrix.
## Duality Principle
The concepts of controllability and observability are very similar. In
fact, there is a concrete relationship between the two. We can say that
a system (A, B) is controllable if and only if the dual system, with state
matrix A\' and output matrix B\', is observable. This fact can be proven by plugging A\' in for A, and
B\' in for C into the observability Gramian. The resulting equation will
exactly mirror the formula for the controllability gramian, implying
that the two results are the same.
|
# Control Systems/System Specifications
## System Specification
There are a number of different specifications that might need to be met
by a new system design. In this chapter we will talk about some of the
specifications that systems use, and some of the ways that engineers
analyze and quantify technical systems.
## Steady-State Accuracy
## Sensitivity
The **sensitivity** of a system is a parameter that is specified in
terms of a given output and a given input. The sensitivity measures how
much change is caused in the output by small changes to the reference
input. Sensitive systems have very large changes in output in response
to small changes in the input. The sensitivity of system H to input X is
denoted as:
$$S_H^X(s)$$
## Disturbance Rejection
All physically-realized systems have to deal with a certain amount of
noise and disturbance. The ability of a system to reject the noise is
known as the **disturbance rejection** of the system.
## Control Effort
The control effort is the amount of energy or power necessary for the
controller to perform its duty.
|
# Control Systems/State Feedback
## State Observation
The state space model of a system is the model of a single plant, not a
true feedback system. The feedback mechanism that relates *x\'* to *x*
is a representation of the mechanism internal to the plant, where the
state of the plant is related to its derivative. As such, we do not have
an *A* \"component\" in the sense that we can swap one *A* \"chip\" with
another *A* \"chip\". The entire state-space model, incorporating *A*,
*B*, *C*, and *D* are all part of one device. Frequently, these matrices
are immutable, that is that they cannot be altered by the engineer,
because they are intrinsic parts of the plant. However, these matrices
can change if the plant itself is altered, such as through thermal
effects and RF interference.
If the system can be treated as basically immutable (except for effects
out of the engineers control), then we need to find a way to modify the
system *externally*. From our studies in classical controls, we know
that the best system for such modifications is a feedback loop. What we
would like to do, ultimately, is to add an additional feedback element,
*K* that can be used to move the poles of the system to any desired
location. Using a technique called \"state feedback\" on a controllable
system, we can do just that.
## State Feedback
In **state feedback**, the value of the state vector is fed back to the
input of the system. We define a new input, *r*, and define the
following relationship:
$$u(t) = r(t) + Kx(t)$$
*K* is a constant matrix that is external to the system, and therefore
can be modified to adjust the locations of the poles of the system. This
technique can only work if the system is controllable.
### Closed-Loop System
If we have an external feedback element *K*, the system is said to be a
**closed-loop system**. Without this feedback element, the system is
said to be an **open-loop system**. Using the relationship we\'ve
outlined above between *r* and *u*, we can write the equations for the
closed-loop system:
$$x' = Ax + B(r + Kx)$$
$$x' = (A + BK)x + Br$$
Now, our closed-loop state equation appears to have the same form as our
open loop state equation, except that the sum *(A + BK)* replaces the
matrix *A*. We can define the closed-loop state matrix as:
$$A_{cl} = (A_{ol} + BK)$$
*A~cl~* is the closed-loop state matrix, and *A~ol~* is the open-loop
state matrix. By altering *K*, we can change the eigenvalues of this
matrix, and therefore change the locations of the poles of the system.
If the system is controllable, we can find the characteristic equation
of this system as:
$$\alpha(s) = |sI - A_{cl}| = |sI - (A_{ol} + BK)|$$
Computing the determinant is not a trivial task, the determinant of that
matrix can be very complicated, especially for larger systems. However,
if we transform the system into **controllable canonical form**, the
calculations become much easier. Another alternative to compute *K* is
by **Ackermann\'s Formula**.
### Controllable Canonical Form
### Ackermann\'s Formula
Consider a linear feedback system with no reference input:
$$u(t) = -Kx(t)$$
where *K* is a vector of gain elements. Systems of this form are
typically referred to as **regulators**. Notice that this system is a
simplified version of the one we introduced above, except that we are
ignoring the reference input. Substituting this into the state equation
gives us:
$$x' = Ax - BKx$$
**Ackermann\'s Formula** (by Jürgen Ackermann) gives us a way to select
these gain values *K* in order to control the location\'s of the system
poles. Using Ackermann\'s formula, if the system is controllable, we can
select arbitrary poles for our regulator system.
$$K = \begin{bmatrix}0 & 0 & \cdots & 1\end{bmatrix}\zeta^{-1}a(z)$$
where *a(z)* is the desired characteristic equation of the system and ζ
is the controllability matrix of the original system.
The gain *K* can be computed in MATLAB using Ackermann\'s formula with
the following command:
`K=acker(A, B, p);`
where K is the state feedback gain and *p* is the desired closed-loop
pole locations. The goal of this type of regulator is to drive the state
vector to zero. By using a reference input instead of a linear state
feedback, we can use the same kind of idea to drive the state vector to
any arbitrary state, and to give the system arbitrary poles.
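A minimal sketch of this (placeholder matrices and pole locations; `acker` assumes the regulator convention $u = -Kx$, so it places the eigenvalues of $A - BK$):

    A = [0 1; -2 -3];          % illustrative placeholder system
    B = [0; 1];
    p = [-4 -5];               % desired closed-loop pole locations
    K = acker(A, B, p);
    eig(A - B*K)               % returns the placed poles -4 and -5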
### Reference Inputs
The idea of the system above with a linear feedback and no reference
input is to drive the system state vector to zero. If we have a system
reference input *r*, we can define a vector *N* that is the desired
value for our state. This combined input is equal to:
$$rN = x_r$$
where *x~r~* is the reference state we want our state *x* to reach. Here
is a block diagram of a system that uses this kind of state reference:
![](State_Feedback_with_Reference.svg "State_Feedback_with_Reference.svg"){width="400"}
We have our gain matrix, *K*, and our reference input *rN*.
Mathematically, we can show that:
$$u = -K(x - x_r)$$
In this system, assuming the system is type 1 or higher, we can prove
that
$$x(\infty) = x_r$$
The state will approach the reference state as time approaches infinity.
The Reference Input is calculated in the continuous domain using the
below equations:
$\left[ \begin{matrix}N_x \\ N_u\end{matrix} \right]=\left[ \begin{matrix}A & B \\ C & D\end{matrix} \right]^{-1} \left[ \begin{matrix}0 \\ 1\end{matrix} \right]$
and $\bar{N}=N_u+KN_x$
|
# Control Systems/Estimators and Observers
## Estimators and Observers
A problem arises in which the internal states of many systems cannot be
directly observed, and therefore state feedback is not possible. What we
can do is try to design a separate system, known as an **observer** or
an **estimator** that attempts to duplicate the values of the state
vector of the plant, except in a way that is observable for use in state
feedback. Some literature calls these components \"observers\", although
they do not strictly observe the state directly. Instead, these devices
use mathematical relations to try and determine an estimate of the
state. Therefore, we will use the term \"estimator\", although the terms
may be used interchangeably.
### Creating an Estimator
There are several observer structures including Kalman\'s, sliding mode,
high gain, Tau\'s, extended, cubic and linear observers. To illustrate
the basics of observer design, consider a linear observer used to
estimate the state of a linear system. Notice that we know the *A*, *B*,
*C*, and *D* matrices of our plant, so we can use these exact values in
our estimator. We know the input to the system, we know the output of
the system, and we have the system matrices of the system. What we do
not know, necessarily, are the initial conditions of the plant. What the
estimator tries to do is make the estimated state vector approach the
actual state vector quickly, and then mirror the actual state vector. We
do this using the following system for an observer:
$$\hat{x}' = A\hat{x} + Bu + L(y - \hat{y})$$
$$\hat{y} = C\hat{x} + Du$$
*L* is a matrix that we define that will help drive the error to zero,
and therefore drive the estimate to the actual value of the state. We do
this by taking the difference between the plant output and the estimator
output.
![](State_Feedback_with_Estimator.svg "State_Feedback_with_Estimator.svg"){width="500"}
In order to make the estimator state approach the plant state, we need
to define a new additional state vector called *state error signal*
$e_x(t)$. We define this error signal as:
$$e_x(t) = x - \hat{x}$$
and its derivative:
$$e_x'(t) = x' - \hat{x}'$$
We can show that the error signal will satisfy the following
relationship:
$$e_x'(t) = Ax + Bu - (A\hat{x} + Bu + L(y - \hat{y}))$$
$$e_x'(t) = A(x- \hat{x}) - L(Cx - C\hat{x})$$
$$e_x'(t) = (A - LC)e_x(t)$$
We know that if the eigenvalues of the matrix *(A - LC)* all have
negative real parts that:
$$e_x(t) = e^{(A - LC)(t-t_0)}e_x(t_0) \to 0$$ when $t \to \infty$. This
$e_x(\infty) = 0$ means that the difference between the state of the
plant $x(t)$ and the estimated state of the observer $\hat{x}(t)$ tends
to fade as time approaches infinity.
### Separation Principle
We have two equations:
$$e_x[k + 1] = (A - LC)e_x[k]$$
$$x[k + 1] = (A - BK)x[k] + BK \cdot e_x[k]$$
We can combine them into a single system of equations to represent the
entire system:
$$\begin{bmatrix}e_x[k + 1] \\ x[k + 1]\end{bmatrix} = \begin{bmatrix}A - LC & 0 \\ +BK & A - BK\end{bmatrix} \begin{bmatrix}e_x[k] \\ x[k]\end{bmatrix}$$
We can find the characteristic equation easily using the **separation
principle**. We take the Z-Transform of this digital system, and take
the determinant of the coefficient matrix to find the characteristic
equation. The characteristic equation of the whole system is: (remember
the well known $(zI-A)^{-1}$)
$$\begin{vmatrix}zI - A + LC & 0 \\ -BK & zI - A + BK\end{vmatrix} = |zI - A + LC| |zI - A + BK|$$
Notice that the determinant of the large matrix can be broken down into
the product of two smaller determinants. The first determinant is
clearly the characteristic equation of the estimator, and the second is
clearly the characteristic equation of the plant. Notice also that we
can design the *L* and *K* matrices independently of one another.
It is worth mentioning that if the order of the system is *n*, this
characteristic equation (full-order state observer plus original system)
becomes of order *2n* and so has twice the number of roots of the
original system.
### The L Matrix
You should select the *L* matrix in such a way that the error signal is
driven towards zero as quickly as possible. The transient response of
the estimator, that is the amount of time it takes the error to
approximately reach zero, should be significantly shorter than the
transient response of the plant. The poles of the estimator should be,
by rule of thumb, at least 2-6 times faster than the poles of your
plant. As we know from our study of classical controls, to make a pole
faster we need to:
S-Domain:Move them further away from the imaginary axis (in the Left Half Plane!).\
Z-Domain:Move them closer to the origin.
Notice that in these situations, the faster poles of the estimator will
have less effect on the system, and we say that the plant poles dominate
the response of the system. The estimator gain *L* can be computed using
the dual of **Ackermann\'s formula** for selecting the gain *K* of the
state feedback controller:
$$L = \alpha_e(z)Q\begin{bmatrix}0\\0\\ \vdots \\1\end{bmatrix}$$
Where *Q* is the observability matrix of the plant, and α~e~ is the
characteristic equation of your estimator.
This can be computed in MATLAB with the following command:
`L=acker(A', C', p)';`
where L is the estimator gain and p is the vector of desired estimator
pole locations.
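A minimal sketch of the estimator-gain computation (placeholder matrices and poles; the dual form places the eigenvalues of $A - LC$):

    A = [0 1; -2 -3];          % illustrative placeholder plant
    C = [1 0];
    p_est = [-10 -12];         % desired estimator poles, faster than the plant
    L = acker(A', C', p_est)';
    eig(A - L*C)               % returns the placed estimator poles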
### Composite System
Once we have our *L* and *K* matrices, we can put these together into a
single composite system equation for the case of state-feedback and zero
input:
$$\begin{bmatrix}x[n + 1]\\ \bar{x}[n+1]\end{bmatrix} = \begin{bmatrix} A & -BK \\ LH & A - BK - LH\end{bmatrix} \begin{bmatrix}x[n] \\ \bar{x}[n]\end{bmatrix}$$
$$u[n] = -K \bar{x}[n]$$
Taking the Z-Transform of this discrete system and solving for an
input-output relation gives us:
$$\frac{U(z)}{Y(z)} = -K[zI - A + BK +L C]^{-1}L$$
Notice that this is not the same as the transfer function, because the
input is on top of the fraction and the output is on bottom. To get the
transfer function from this equation we need to take the inverse of both
sides. The determinant of this inverse will then be the characteristic
equation of the composite system.
Notice that this equation gives us the ability to derive the system
input that created the particular output. This will be valuable later.
## Reduced-Order Observers
In many systems, at least one state variable can be either measured
directly, or calculated easily from the output. This can happen in the
case where the *C* matrix has only a single non-zero entry per system
output.
If one or more state variables can be measured or observed directly, the
system only requires a **reduced-order observer**, that is an observer
that has a lower order than the plant. The reduced order observer can
estimate the unmeasurable states, and a direct feedback path can be used
to obtain the measured state values.
|
# Control Systems/Eigenvalue Assignment for MIMO Systems
The design of control laws for MIMO
systems is more involved in
comparison to SISO systems because the additional inputs ($q > 1$) offer
more options like defining the Eigenvectors or handling the activity of
inputs. This also means that the feedback matrix *K* for a set of
desired Eigenvalues of the closed-loop system is **not unique**. All
presented methods have advantages, disadvantages and certain
limitations. This means not all methods can be applied to every possible
system, and it is important to check which method can be applied to the
problem at hand.
# Parametric State Feedback
A simple approach to find the feedback matrix *K* can be derived via
parametric state feedback (in German: *vollständige modale Synthese*). A
MIMO system
$$\dot{x}(t) = A x(t) + B u(t)$$
with input vector
$$u(t) = (u_{1}(t), u_{2}(t), \cdots, u_{q}(t)) = -K ~ x(t)$$
input matrix $B \in \mathbb{R}^{p \times q}$ and feedback matrix
$K \in \mathbb{R}^{q \times p}$ is considered. The Eigenvalue problem of
the closed-loop system
$$\dot{x}(t) = (A - B~K) ~ x(t) = A_{CL} ~ x(t)$$
is noted as
$$A_{CL} ~ \tilde{v}_{i} = (A - B~K) ~ \tilde{v}_{i} = \tilde{\lambda}_{i} ~ \tilde{v}_{i}$$
where $\tilde{\lambda}_{i} \in \mathbb{C}$ denote the assigned
Eigenvalues and $\tilde{v}_{i} \in \mathbb{C}^{p}$ denote the
Eigenvectors of the closed-loop system. Next, new parameter vectors
$\phi_{i} = K \tilde{v}_{i}$ are introduced and assigned and the
Eigenvalue problem is recast as
$$B~K ~ \tilde{v}_{i} = B ~ \phi_{i} = (A - \tilde{\lambda}_{i} ~ I) ~ \tilde{v}_{i}.$$
## Controller synthesis
1\. From Equation \[1\] one defines the **Eigenvector** with
$$\tilde{v}_{i} = (A - \tilde{\lambda}_{i} ~ I)^{-1} ~ B ~ \phi_{i}$$
2\. The new parameter vectors $\phi_{i}$ are concatenated as
$$\Phi = [\phi_{1}, \phi_{2}, \cdots, \phi_{p}] = K [\tilde{v}_{1}, \tilde{v}_{2}, \cdots, \tilde{v}_{p}],$$
where the **feedback matrix** *K* can be noted as
$$K = \Phi ~ [\tilde{v}_{1}, \tilde{v}_{2}, \cdots, \tilde{v}_{p}]^{-1}.$$
3\. Finally, the Eigenvector definition is used to hold the full
description of the **feedback matrix** with
$$K = [\phi_{1}, \phi_{2}, \cdots, \phi_{p}] ~ [(A - \tilde{\lambda}_{1} ~ I)^{-1} ~ B ~ \phi_{1}, \cdots, (A - \tilde{\lambda}_{p} ~ I)^{-1} ~ B ~ \phi_{p}]^{-1}.$$
The parameter vectors are defined arbitrarily but have to be linearly
independent.
## Remarks
- The method also works for a non-square input matrix *B*
- Parameter vectors $\phi_{i}$ can be chosen arbitrarily
## Example
# Singular Value Decomposition and Diagonalization
If the state matrix $A \in \mathbb{R}^{p \times p}$ of system
$$\dot{x}(t) = A ~ x(t) + B ~ u(t)$$
is diagonalizable, which means it has *p* linearly independent
Eigenvectors, then the transform
$$x = M ~ x_{M}$$
can be used to yield
$$M ~ \dot{x}_{M}(t) = A M ~ x_{M}(t) + B ~ u(t)$$
and further
$$\dot{x}_{M}(t) = M^{-1} A M ~ x_{M}(t) + M^{-1} ~ B ~ u(t).$$
Transformation matrix *M* contains the Eigenvectors
$v_{i} \in \mathbb{C}^{p}$ as
$$M = [v_{1}, v_{2}, \cdots, v_{p}]$$
which leads to a new diagonal state matrix
$$A_{M} = M^{-1} ~ A ~ M =
\begin{bmatrix}
\lambda_{1} \\
& \lambda_{2} \\
& & \ddots \\
& & & \lambda_{p}
\end{bmatrix}$$ consisting of Eigenvalues $\lambda_{i} \in \mathbb{C}$,
and new input
$$u_{M}(t) = M^{-1} ~ B ~ u(t) =
\begin{bmatrix}
u_{M,1} \\
u_{M,2} \\
\vdots \\
u_{M,p}
\end{bmatrix}.$$
The control law for the new input $u_{M}$ is designed as
$$u_{M}(t) = -K_{M} x_{M}(t) =
-
\begin{bmatrix}
K_{M,1} \\
& K_{M,2} \\
& & \ddots \\
& & & K_{M,p}
\end{bmatrix}
~
\begin{bmatrix}
x_{M,1}(t) \\
x_{M,2}(t) \\
\vdots \\
x_{M,p}(t)
\end{bmatrix}$$
and the closed-loop system in new coordinates is noted as
$$\dot{x}_{M}(t) = A_{M} ~ x_{M}(t) + u_{M}(t) = (A_{M} - K_{M}) ~ x_{M}(t) =
\begin{bmatrix}
\lambda_{1} - K_{M,1} \\
& \lambda_{2} - K_{M,2} \\
& & \ddots \\
& & & \lambda_{p} - K_{M,p}
\end{bmatrix}
~
\begin{bmatrix}
x_{M,1}(t) \\
x_{M,2}(t) \\
\vdots \\
x_{M,p}(t)
\end{bmatrix}$$
Feedback matrix $K_{M}$ can be used to influence or shift each
Eigenvalue directly.
In the last step, the new input is transformed backwards to original
coordinates to yield the original feedback matrix *K*. The new input is
defined by
$$u_{M}(t) = M^{-1} ~ B ~ u(t)$$
and
$$u_{M}(t) = -K_{M} ~ x_{M}(t) = -K_{M} ~ M^{-1} ~ x(t).$$
From these formulas one gains the identity
$$M^{-1} ~ B ~ u(t) = -K_{M} ~ M^{-1} ~ x(t)$$
and further
$$u(t) = - B^{-1} ~ M ~ K_{M} ~ M^{-1} ~ x(t) = - K ~ x(t).$$
Therefore, the feedback matrix is found as
$$K = B^{-1} ~ M ~ K_{M} ~ M^{-1}.$$
## Requirements
This controller design is applicable only if the following requirements
are guaranteed.
- State matrix *A* is
diagonalizable.
- The number of states and inputs are equal $p=q$.
- Input matrix $B \in \mathbb{R}^{p \times p}$ is invertible.
## Example

Consider a system with state matrix

$$A =
\begin{bmatrix}
2 & 1 \\
1 & 2
\end{bmatrix}$$

and input matrix

$$B =
\begin{bmatrix}
1 & 2 \\
2 & 1
\end{bmatrix}.$$

The Eigenvalues of *A* are $\lambda_{1} = 1$ and $\lambda_{2} = 3$, with
Eigenvectors

$$\tilde{v}_{1} =
\begin{bmatrix}
-\frac{\sqrt{2}}{2}\\
\frac{\sqrt{2}}{2}
\end{bmatrix}$$
and
$$\tilde{v}_{2} =
\begin{bmatrix}
\frac{\sqrt{2}}{2}\\
\frac{\sqrt{2}}{2}
\end{bmatrix}.$$
Thus, the transformation matrix is noted as
$$M =
\begin{bmatrix}
-\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\
\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}
\end{bmatrix}$$
and the state matrix in new coordinates is derived as
$$A_{M} = M^{-1} ~ A ~ M =
\begin{bmatrix}
1 & 0 \\
0 & 3
\end{bmatrix}.$$
The desired Eigenvalues of the closed-loop system are
$\tilde{\lambda}_{1} = -5$ and $\tilde{\lambda}_{2} = -1$, so feedback
matrix is found with
$$\lambda_{1} - K_{M,1} = 1 - K_{M,1} = \tilde{\lambda}_{1} = -5 \quad \Rightarrow K_{M,1} = 1 + 5 = 6$$
and
$$\lambda_{2} - K_{M,2} = 3 - K_{M,2} = \tilde{\lambda}_{2} = -1 \quad \Rightarrow K_{M,2} = 3 + 1 = 4$$
and thus one holds
$$K_{M} =
\begin{bmatrix}
6 & 0 \\
0 & 4 \\
\end{bmatrix}.$$
Finally, the feedback matrix in original coordinates is calculated by
$$K = B^{-1} ~ M ~ K_{M} ~ M^{-1} =
\begin{bmatrix}
1 & 2 \\
2 & 1
\end{bmatrix}^{-1}
~
\begin{bmatrix}
-\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\
\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}
\end{bmatrix}
~
\begin{bmatrix}
6 & 0 \\
0 & 4 \\
\end{bmatrix}
~
\begin{bmatrix}
-\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\
\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2}
\end{bmatrix}^{-1}
= \frac{1}{3}
\begin{bmatrix}
-7 & 11 \\
11 & -7 \\
\end{bmatrix}.$$
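The result can be checked numerically (a minimal sketch; with the regulator convention $u = -Kx$ the closed-loop state matrix is $A - BK$):

    A = [2 1; 1 2];
    B = [1 2; 2 1];
    K = (1/3)*[-7 11; 11 -7];
    eig(A - B*K)               % returns the desired Eigenvalues -5 and -1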
# Sylvester Equation
This method is taken from the online resource
- KU Leuven: Chapter 3: State Feedback - Pole Placement (PDF, 308,5
kB) (does not exist
anymore).
- Similar resource: Chapter 4 Pole placement (PDF, 269,3
kB) by Zsófia Lendek
Consider the closed-loop system
$$\dot{x}(t) = A ~ x(t) + B ~ u(t) = (A - B ~ K) ~ x(t) = A_{CL} ~ x(t)$$
with input $u(t) = -K ~ x(t)$ and closed-loop state matrix
$A_{CL}=A - B ~ K$. The desired closed-loop Eigenvalues
$\tilde{\lambda}_{i} \in \mathbb{C}$ can be chosen real- or
complex-valued as $\tilde{\lambda}_{i} = \alpha_{i} \pm j \beta_{i}$ and
the matrix of the desired Eigenvalues is noted as
$$\Lambda =
\begin{bmatrix}
\alpha_{1} & \beta_{1} \\
-\beta_{1} & \alpha_{1} \\
& & \ddots \\
& & & \tilde{\lambda}_{i} \\
& & & & \ddots
\end{bmatrix}$$
The closed-loop state matrix $A_{CL}$ has to be similar to $\Lambda$ as
$$A_{CL} = A - B~K \sim \Lambda$$
which means that there exists a transformation matrix
$M \in \mathbb{R}^{p \times p}$ such that
$$M^{-1}~A_{CL}~M = M^{-1} ~(A - B~K)~M = \Lambda$$
holds and further
$$A~M - M~\Lambda = B~K~M.$$
An arbitrary matrix $G = K~M$ is introduced and Equation \[2\] is
separated into a Sylvester equation
$$A~M - M~\Lambda = B~G$$
and a feedback matrix formula
$$K = G ~ M^{-1}.$$
## Algorithm
1\. Choose an arbitrary matrix $G \in \mathbb{R}^{q \times p}$.
2\. Solve the Sylvester equation for *M* (numerically).
3\. Calculate the feedback matrix *K*.
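A minimal sketch of the algorithm (assuming MATLAB's `sylvester(A, B, C)`, which solves $AX + XB = C$; the matrices and the choice of *G* below are illustrative placeholders):

    A = [2 1; 1 2];
    B = [1 2; 2 1];
    Lambda = diag([-5 -1]);          % desired closed-loop Eigenvalues
    G = eye(2);                      % arbitrary choice of G
    M = sylvester(A, -Lambda, B*G);  % solves A*M - M*Lambda = B*G
    K = G/M;                         % feedback matrix K = G*inv(M)
    eig(A - B*K)                     % returns -5 and -1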
## Remarks
- State matrix *A* and the negative Eigenvalue matrix $-\Lambda$ shall
not have common Eigenvalues.
- For some choices of *G* the computation could fail. Then another *G*
has to be chosen.
## Example
|
# Control Systems/Controllers and Compensators
## Controllers
There are a number of different standard types of control systems that
have been studied extensively. These controllers, specifically the P,
PD, PI, and PID controllers are very common in the production of
physical systems, but as we will see they each carry several drawbacks.
## Proportional Controllers
*A Proportional controller block diagram*
Proportional controllers are simply gain values. These are essentially
multiplicative coefficients, usually denoted with a *K*. A P controller
can only force the system poles to a spot on the system\'s root locus. A
P controller cannot be used for arbitrary pole placement.
We refer to this kind of controller by a number of different names:
proportional controller, gain, and zeroth-order controller.
## Derivative Controllers
*A Proportional-Derivative controller block diagram*
In the Laplace domain, we can show the derivative of a signal using the
following notation:
$$D(s) = \mathcal{L} \left\{ f'(t) \right\} = sF(s) - f(0)$$
Since most systems that we are considering have zero initial condition,
this simplifies to:
$$D(s) = \mathcal{L} \left\{ f'(t) \right\} = sF(s)$$
The derivative controllers are implemented to account for future values,
by taking the derivative, and controlling based on where the signal is
going to be in the future. Derivative controllers should be used with
care, because even a small amount of high-frequency noise can cause very
large derivatives, which appear like amplified noise. Also, derivative
controllers are difficult to implement perfectly in hardware or
software, so frequently solutions involving only integral controllers or
proportional controllers are preferred over using derivative
controllers.
Notice that derivative controllers are not proper systems, in that the
order of the numerator of the system is greater than the order of the
denominator of the system. This quality of being a non-proper system
also makes certain mathematical analysis of these systems difficult.
### Z-Domain Derivatives
We won\'t derive this equation here, but suffice it to say that the
following equation in the Z-domain performs the same function as the
Laplace-domain derivative:
$$D(z) = \frac{z - 1}{Tz}$$
Where T is the sampling time of the signal.
## Integral Controllers
*A Proportional-Integral controller block diagram*
To implement an integral in a Laplace-domain transfer function, we use
the following:
$$\mathcal{L}\left\{ \int_0^t f(t)\, dt \right\} = {1 \over s}F(s)$$
Integral controllers of this type add up the area under the curve for
past time. In this manner, a PI controller (and eventually a PID) can
take account of the past performance of the controller, and correct
based on past errors.
### Z-Domain Integral
The integral controller can be implemented in the Z domain using the
following equation:
$$D(z) = \frac{z + 1}{z - 1}$$
## PID Controllers
*A block diagram of a PID controller*
PID controllers are combinations of the proportional, derivative, and
integral controllers. Because of this, PID controllers have large
amounts of flexibility. We will see below that there are definite
limits on PID control.
### PID Transfer Function
The transfer function for a standard PID controller is an addition of
the Proportional, the Integral, and the Differential controller transfer
functions (hence the name, PID). Also, we give each term a gain
constant, to control the weight that each factor has on the final
output:
$$D(s) = K_p + {K_i \over s} + K_d s$$
Notice that we can write the transfer function of a PID controller in a
slightly different way:
$$D(s) = \frac{A_0 + A_1s}{B_0 + B_1s}$$
This form of the equation will be especially useful to us when we look
at polynomial design.
### PID Signal flow diagram
*Signal flow diagram for a PID controller*
### PID Tuning
The process of selecting the various coefficient values to make a PID
controller perform correctly is called **PID Tuning**. There are a
number of different methods for determining these values:[^1]
1. Direct Synthesis (DS) method
2. Internal Model Control (IMC) method
3. Controller tuning relations
4. Frequency response techniques
5. Computer simulation
6. On-line tuning after the control system is installed
7. Trial and error
### Digital PID
In the Z domain, the PID controller has the following transfer function:
$$D(z) = K_p + K_i \frac{T}{2} \left[ \frac{z + 1}{z - 1} \right] + K_d \left[ \frac{z - 1}{Tz} \right]$$
And we can convert this into a canonical equation by manipulating the
above equation to obtain:
$$D(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2}}{1 + b_1 z^{-1} + b_2 z^{-2}}$$
Where:
$$a_0 = K_p + \frac{K_i T}{2} + \frac{K_d}{T}$$
$$a_1 = -K_p + \frac{K_i T}{2} + \frac{-2 K_d}{T}$$
$$a_2 = \frac{K_d}{T}$$
$$b_1 = -1$$
$$b_2 = 0$$
Once we have the Z-domain transfer function of the PID controller, we
can convert it into the digital time domain:
$$y[n] = x[n]a_0 + x[n-1]a_1 + x[n-2]a_2 - y[n-1]b_1 - y[n-2]b_2$$
And finally, from this difference equation, we can create a digital
filter structure to implement the PID.
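A minimal sketch of that difference equation in code (the gains, sampling time, and input signal are illustrative placeholders):

    Kp = 2; Ki = 1; Kd = 0.1; T = 0.01;   % placeholder gains and sample time
    a0 = Kp + Ki*T/2 + Kd/T;
    a1 = -Kp + Ki*T/2 - 2*Kd/T;
    a2 = Kd/T;
    b1 = -1; b2 = 0;
    x = [zeros(1,10) ones(1,90)];         % error signal: a step at n = 11
    y = zeros(size(x));
    for n = 3:length(x)
        y(n) = a0*x(n) + a1*x(n-1) + a2*x(n-2) - b1*y(n-1) - b2*y(n-2);
    end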
## Bang-Bang Controllers
Despite the low-brow sounding name of the Bang-Bang controller, it is a
very useful tool that is only really available using digital methods. A
better name perhaps for a bang-bang controller is an on/off controller,
where a digital system makes decisions based on target and threshold
values, and decides whether to turn the controller on and off. Bang-bang
controllers are a non-linear style of control.
Consider the example of a household furnace. The oil in a furnace burns
at a specific temperature---it can\'t burn hotter or cooler. To control
the temperature in your house then, the thermostat control unit decides
when to turn the furnace on, and when to turn the furnace off. This
on/off control scheme is a bang-bang controller.
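A minimal sketch of such an on/off decision rule, with a small hysteresis band around the target temperature (all numbers are illustrative placeholders):

    setpoint = 20; band = 0.5;        % target temperature and hysteresis (°C)
    temp = 18; furnace_on = false;
    for k = 1:100
        if temp < setpoint - band
            furnace_on = true;        % too cold: turn the furnace on
        elseif temp > setpoint + band
            furnace_on = false;       % warm enough: turn the furnace off
        end
        if furnace_on                 % crude placeholder thermal model
            temp = temp + 0.2;
        else
            temp = temp - 0.1;
        end
    end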
## Compensation
There are a number of different compensation units that can be employed
to help fix certain system metrics that are outside of a proper
operating range. Most commonly, the phase characteristics are in need of
compensation, especially if the magnitude response is to remain
constant. There are four major types of compensation:

1. Lead compensation
2. Lag compensation
3. Lead-lag compensation
4. Lag-lead compensation
## Phase Compensation
Occasionally, it is necessary to alter the phase characteristics of a
given system, without altering the magnitude characteristics. To do
this, we need to alter the frequency response in such a way that the
phase response is altered, but the magnitude response is not altered. To
do this, we implement a special variety of controllers known as **phase
compensators**. They are called compensators because they help to
improve the phase response of the system.
There are two general types of compensators: **Lead Compensators**, and
**Lag Compensators**. If we combine the two types, we can get a special
**Lag-lead Compensator** system.(lead-lag system is not practically
realisable).
When designing and implementing a phase compensator, it is important to
analyze the effects on the gain and phase margins of the system, to
ensure that compensation doesn\'t cause the system to become unstable.
Phase lead compensation is equivalent to adding a zero to the open-loop
transfer function: since the zero lies nearer to the origin than the pole,
the effect of the zero dominates.
## Phase Lead
The transfer function for a lead-compensator is as follows:
$$T_{lead}(s) = \frac{s-z}{s-p}$$
To make the compensator work correctly, the following property must be
satisfied:
$$| z | < | p |$$
And both the pole and zero location should be close to the origin, in
the LHP. Because there is only one pole and one zero, they both should
be located on the real axis.
Phase lead compensators help to shift the poles of the transfer function
to the left, which is beneficial for stability purposes.
## Phase Lag
The transfer function for a lag compensator is the same as the
lead-compensator, and is as follows:
$$T_{lag}(s) = \frac{s-z}{s-p}$$
However, in the lag compensator, the location of the pole and zero
should be swapped:
$$| p | < | z |$$
Both the pole and the zero should be close to the origin, on the real
axis.
The Phase lag compensator helps to improve the steady-state error of the
system. The pole and zero of the lag compensator should be placed very
close together, to help prevent the poles of the system from shifting to
the right and thereby reducing system stability.
## Phase Lag-lead
The transfer function of a **Lag-lead compensator** is simply a
multiplication of the lead and lag compensator transfer functions, and
is given as:
$$T_{Lag-lead}(s) = \frac{(s-z_1)(s-z_2)}{(s-p_1)(s-p_2)}.$$
Where typically the following relationship must hold true:
$$| p_1 | > | z_1 | > | z_2 | > | p_2 |$$
## External links
- Standard Controller Forms on
ControlTheoryPro.com
- PID Control on
ControlTheoryPro.com
- PI Control on
ControlTheoryPro.com
[^1]: Seborg, Dale E.; Edgar, Thomas F.; Mellichamp, Duncan A. (2003).
Process Dynamics and Control, Second Edition. John Wiley & Sons,Inc.
|
# Control Systems/Polynomial Design
## Polynomial Design
A powerful tool for the design of controller and compensator systems is
**polynomial design**. Polynomial design typically consists of two
separate stages:
1. Determine the desired response of the system
2. Adjust your system to match the desired response.
We do this by creating polynomials, such as the transform-domain
transfer functions, and equating coefficients to find the necessary
values. The goal in all this is to be able to arbitrarily place all the
poles in our system at any locations in the transform domain that we
desire. In other words, we want to arbitrarily modify the response of
our system to match any desired response. The requirements in this
chapter are that the system be fully controllable and observable. If
either of these conditions are not satisfied, the techniques in this
method cannot be directly implemented.
Through this method it is assumed that the plant is given and is not
alterable. To adjust the response of the system, a controller unit needs
to be designed that helps the system meet the specifications. Because
the controller is being custom designed, the response of the controller
can be determined arbitrarily (within physical limits, of course).
## Polynomial Representation
Let\'s say that we have a plant, *G(s)*, and a controller, *C(s)*. Both
the controller and the plant are proper systems, composed of monic
numerator and denominator polynomials. The plant, *G(s)* has an order of
*n*, is given, and cannot be altered. The task is to design the
controller *C(s)* of order *m*:
$$G(s) = \frac{b(s)}{a(s)}$$
$$C(s) = \frac{B(s)}{A(s)}$$
Our closed-loop system, *H(s)* will have a transfer function of:
$$H(s) = \frac{C(s)G(s)}{1+C(s)G(s)} = \frac{B(s)b(s)}{A(s)a(s) + B(s)b(s)}$$
Our characteristic equation then is:
$$\alpha_H(s) = A(s)a(s) + B(s)b(s)$$
Our plant is given so we know *a(s)* and *b(s)*, but *A(s)* and *B(s)*
are configurable as part of our controller. To determine values for
these, we must select a desired response, that is the response that our
system *should have*. We call our desired response *D(s)*. We can
configure our controller to have our system match the desired response
by solving the **Diophantine equation**:
$$D(s) = A(s)a(s) + B(s)b(s)$$
### Diophantine Equation
The Diophantine equation becomes a system of linear equations in terms
of the unknown coefficients of the *A(s)* and *B(s)* polynomials. There
are situations where the Diophantine equation will produce a unique
result, but there are also situations where the results will be
non-unique.
We multiply polynomials, and then combine powers of *s*:
$$D(s) = (A_0a_0 + B_0b_0) + (A_0a_1 + A_1a_0 + B_0b_1 + B_1b_0)s + \cdots + (A_ma_n + B_mb_n)s^{m + n}$$
Now we can equate the coefficients of *D(s)* and our resultant system of
equations is given as:
$$\begin{bmatrix}
a_0 & b_0 & 0 & 0 & \cdots & 0 & 0 \\
a_1 & b_1 & a_0 & b_0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
a_n & b_n & a_{n-1} & b_{n-1} & \cdots & a_0 & b_0 \\
0 & 0 & a_n & b_n & \cdots & a_1 & b_1 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & a_n & b_n \end{bmatrix}\begin{bmatrix}A_0 \\ B_0 \\ A_1 \\ B_1 \\ \vdots \\ A_m \\ B_m \end{bmatrix} = \begin{bmatrix}D_0 \\ D_1 \\ \vdots \\ D_{n + m}\end{bmatrix}$$
This matrix can be a large one, but the pattern is simple: new
coefficients are shifted in from the left, and old coefficients are
shifted off to the right on each row.
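As a minimal worked sketch (plain MATLAB/Octave; the plant $a(s) = s^2 + 3s + 2$, $b(s) = 1$, the first-order controller $C(s) = (B_1s + B_0)/(A_1s + A_0)$, and the desired polynomial are all illustrative placeholders):

    D = conv(conv([1 2], [1 4]), [1 6]);  % desired D(s) = (s+2)(s+4)(s+6)
    % Each column multiplies one unknown in [A1; A0; B1; B0];
    % the rows are the coefficients of s^3, s^2, s^1 and s^0.
    S = [1 0 0 0;
         3 1 0 0;
         2 3 1 0;
         0 2 0 1];
    x = S \ D'                            % x = [A1; A0; B1; B0]

Solving gives $A_1 = 1$, $A_0 = 9$, $B_1 = 15$, $B_0 = 30$, so that $(A_1s + A_0)(s^2+3s+2) + (B_1s + B_0) = D(s)$.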
### Conditions for Uniqueness
The diophantine matrix, which we will call *S*, has dimensions of *(n +
m + 1) × (2m + 2)*. The solution to this equation is unique if the
diophantine matrix is square, and therefore is invertible. If the matrix
has more columns than rows, the solution will be non-unique. If the
matrix has more rows than columns, the poles of the composite system
cannot be arbitrarily placed.
The condition for uniqueness can be satisfied if *m = n - 1*. The order
of the controller must be one less than the order of the plant. If the
order of the controller is higher, the solution will be non-unique. If
the order of the controller is lower, not all the poles can be
arbitrarily assigned.
### Example: Second Order System
### Example: Helicopter Control
Pole placement is the most straightforward means of controller design.
Here are the steps to designing a system using pole placement
techniques:
1. The design starts with an assumption of what form the controller
must take in order to control the given plant.
2. From that assumption a symbolic characteristic equation is formed.
3. At this point the desired closed-loop poles must be determined.
4. Typically, specifications designate overshoot, rise time, etc. This
leads to the formation of a 2nd order
equation.
Most of the time the final characteristic equation will have more
than 2 poles. So additional desired poles must be determined.
5. Once the closed loop poles are decided a desired characteristic
equation is formed.
6. The coefficients for each power of *s* are equated from the symbolic
characteristic equation to the desired.
7. Algebra is used to determine the controller coefficients necessary
to achieve the desired closed-loop poles with the assumed controller
form.
Typically, an integrator is used to drive the steady-state error towards
0. This implies that the final characteristic equation will have at
least 1 more pole than the uncontrolled system started with.
The following pole placement examples show you how to decide on the
desired closed-loop poles, determine the \"extra\" closed-loop poles,
and create a generic and PID
controller
to achieve those desired closed-loop poles.
$$M_p=e^{\left(\frac{-\zeta\pi}{\sqrt{1-\zeta^2}}\right)}$$
$$\tau_s=\frac{4.6}{\zeta\omega_n}$$
$$\tau_p=\frac{\pi}{\omega_n\sqrt{1-\zeta^2}}$$
where
$$M_p$$ is overshoot,
$$\tau_s$$ is 1% settling time, and
$$\tau_p$$ is time to peak.
Using the Overshoot equation we find that a common value,
$\zeta=\frac{1}{\sqrt{2}}$, provides an overshoot of only 4.3%.
Examination of the Time to Peak equation lets you know that a value of
$\omega_n=\sqrt{2}$ provides a peak time of $\pi$ seconds. However, a
little over 3 seconds is probably too slow. Let\'s shoot for 0.5 seconds
instead. This requires
$$\omega_n=\sqrt{2}\frac{\pi}{0.5}$$.
Recap
- $\zeta=\zeta_{desired}=\frac{1}{\sqrt{2}}$
- $\omega_{n}=\omega_{desired}=\sqrt{2}\frac{\pi}{0.5}=8.8858$
However, this leaves us with only 2 roots (poles) in our desired
characteristic equation. Since we want the above parameters to dominate
the closed loop system dynamics we choose a 3rd pole that is well above
the desired natural frequency.
$$\left(s+a\right)\left(s^2+2\zeta_{desired}\omega_{desired}s+\omega_{desired}^2\right)$$
where
- $a=10\omega_{desired}$ is our 3rd pole.
This 3rd pole is a high frequency pole that allows the desired poles to
dominate the closed-loop system response while allowing the desired
characteristic equation to have the correct number of poles.
Our desired characteristic equation, Eqn. 3, can be reduced to
$$\Phi_{desired}\left(s\right)=s^3+\left(2\zeta_{desired}\omega_{desired}+a\right)s^2+\left(\omega_{desired}^2+2\zeta_{desired}\omega_{desired}a\right)s+a\omega_{desired}^2$$
This results in
$$p_3=1$$
$$p_2=2\zeta_{desired}\omega_{desired}+a$$
$$p_1=\omega_{desired}^2+2\zeta_{desired}\omega_{desired}a$$
$$p_0=a\omega_{desired}^2$$
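A quick numeric sketch (Python, using only the values chosen above; the variable names are illustrative) evaluates these desired coefficients directly:

```python
import numpy as np

zeta_d = 1 / np.sqrt(2)            # desired damping ratio
w_d = np.sqrt(2) * np.pi / 0.5     # desired natural frequency, about 8.8858
a = 10 * w_d                       # high-frequency 3rd pole

# (s + a)(s^2 + 2*zeta_d*w_d*s + w_d^2), coefficients from highest power down
desired = np.polymul([1.0, a], [1.0, 2 * zeta_d * w_d, w_d**2])
print(desired)   # [p_3, p_2, p_1, p_0], roughly [1, 101.4, 1195.6, 7015.9]
```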
From here we go back to our characteristic equation (Eqn. **2a** or
**2b**) to determine
$$A_1=1$$
$$A_0=2\zeta_{desired}\omega_{desired}+a-2\zeta\omega_n$$
$$B_0=\frac{\left(a\omega_{desired}^2-A_0\omega_n^2\right)}{\omega_n^2}$$
$$B_1=\frac{\omega_{desired}^2+2\zeta_{desired}\omega_{desired}a-2\zeta\omega_nA_0-A_1\omega_n^2}{\omega_n^2}$$
## Caveats
There are a number of problems that can arise from this method.
### Insufficient Order
If *K(s)* has a polynomial degree *m*, and *G(s)* has a polynomial
degree *n*, then our composite system *H(s)* will have a total degree of
*m + n*. If our controller does not have a high enough order, we will
not be able to arbitrarily assign every pole in our system. From state
space, we know that poles that cannot be arbitrarily assigned are called
uncontrollable. The addition of a controller of insufficient order can
make one or more poles in our system uncontrollable.
## External links
- ControlTheoryPro.com article on Pole
Placement
|
# Control Systems/Adaptive Control
## Adaptive Controllers
What we\'ve been studying up till this point are fixed systems, that is
systems that do not change over time. However, real-world applications
and experience tells us that the environment can change over time: New
noise can be added to a signal, the signal quality can be degraded, or
the specifications of the signals can be changed at the source.
There is a distinct need then for controllers which can modify
themselves to produce the same output regardless of changes to the
input. Herein lies the problem of **adaptive control**. We will
introduce some of the concepts in this chapter, and discuss them in
greater detail in future chapters.
|
# Control Systems/State Machines
## State Machines
Digital computers have a lot more power and flexibility to offer than
processing simple difference equations like the kind that we have been
looking at so far in our discrete cases. Computer systems are capable of
handling much more complicated digital control tasks, and they are also
capable of changing their algorithms in the middle of the processing
time. For tasks like this, we will employ **state-machines** to allow us
to dynamically control several aspects of a single problem with a single
computer.
A state machine, in its simplest form, is a system that performs
different actions, depending upon the state of the machine.
## State Diagrams
A state diagram depicts the different states of a state machine, and it
uses arrows to show which states can be reached, and what are the
conditions for reaching them.
### Example
Consider a common controls example of a movable cart that is attached to
a horizontal pole. This cart, like the print-head on a printer, is free
to travel back and forth across this pole at finite speeds. Dangling
from the cart is a pendulum that is capable of freely spinning 360
degrees around its pivot point.
![](Pendulum_cart.svg "Pendulum_cart.svg")
As the cart moves back and forth across the horizontal pole, the
pendulum will swing side-to-side. In fact, if the cart moves quickly
enough, and in the correct pattern, the pendulum will actually swing up
over the top of the cart, and travel a full 360 degrees. The purpose of
this contraption is to swing the pendulum upwards so that it is standing
up vertically above the cart, and to balance it as such. There are then
two distinct stages of operation for our control system:
1. we must swing the pendulum from directly downward, to standing
vertical
2. we must balance the pendulum vertically
![](Pendulum_state_diagram.svg "Pendulum_state_diagram.svg")
## Computer Implementation
|
# Control Systems/Nonlinear Systems
## Nonlinear General Solution
A nonlinear system, in general, can be defined as follows:
$$x'(t) = f(t, t_0, x, x_0)$$
$$x(t_0) = x_0$$
Where *f* is a nonlinear function of the time, the system state, and the
initial conditions. If the initial conditions are known, we can simplify
this as:
$$x'(t) = f(t, x)$$
The general solution of this equation (or the most general form of a
solution that we can state without knowing the form of *f*) is given by:
$$x(t) = x_0 + \int_{t_0}^t f(\tau, x)d\tau$$
and we can prove that this is the general solution to the above equation
because when we differentiate both sides we get the original equation.
### Iteration Method
The general solution to a nonlinear system can be found through a method
of infinite iteration. We will define *x*~n~ as being an iterative
family of indexed variables. We can define them recursively as such:
$$x_n(t) = x_0 + \int_{t_0}^t f(\tau, x_{n-1}(\tau))d\tau$$
$$x_1(t) = x_0$$
We can show that the following relationship is true:
$$x(t) = \lim_{n \to \infty}x_n(t)$$
The *x*~n~ series of equations will converge on the solution to the
equation as n approaches infinity.
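A minimal numerical sketch of this successive-approximation (Picard) iteration is shown below. The particular nonlinear function, initial condition, and time grid are assumptions chosen so the result can be checked against a known solution.

```python
import numpy as np

f = lambda t, x: -x**2            # hypothetical nonlinear dynamics x' = -x^2
t0, x0, t_end = 0.0, 1.0, 1.0
t = np.linspace(t0, t_end, 201)

x = np.full_like(t, x0)           # x_1(t) = x_0
for _ in range(20):               # successive approximations
    integrand = f(t, x)
    # cumulative trapezoidal integral of f(tau, x_{n-1}(tau)) from t0 to t
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    x = x0 + integral             # x_n(t) = x_0 + integral

# exact solution of x' = -x^2, x(0) = 1 is 1/(1 + t); compare the endpoints
print(x[-1], 1.0 / (1.0 + t_end))
```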
### Types of Nonlinearities
Nonlinearities can be of two types:
1. **Intentional non-linearity**: Non-linear elements that are
deliberately added into a system, e.g. a relay.
2. **Incidental non-linearity**: Non-linear behavior that is already
present in the system, e.g. saturation.
## Linearization
Nonlinear systems are difficult to analyze, and for that reason one of
the best methods for analyzing those systems is to find a linear
approximation to the system. Frequently, such approximations are only
good for certain operating ranges, and are not valid beyond certain
bounds. The process of finding a suitable linear approximation to a
nonlinear system is known as **linearization**.
![](Linear_Approximation_in_2D.svg "Linear_Approximation_in_2D.svg"){width="500"}
This image shows a linear approximation (dashed line) to a non-linear
system response (solid line). This linear approximation, like most, is
accurate within a certain range, but becomes more inaccurate outside
that range. Notice how the curve and the linear approximation diverge
towards the right of the graph.
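The sketch below illustrates the idea numerically: a nonlinear function is replaced by its tangent line at an assumed operating point, and the approximation degrades away from that point. The function and operating point are arbitrary choices for illustration.

```python
import numpy as np

f = lambda x: np.sin(x)      # assumed nonlinear response
x_op = 0.5                   # assumed operating point
h = 1e-6
slope = (f(x_op + h) - f(x_op - h)) / (2 * h)   # numerical derivative at x_op

def f_linear(x):
    """First-order (tangent line) approximation about x_op."""
    return f(x_op) + slope * (x - x_op)

for x in [0.4, 0.5, 0.6, 1.5]:
    print(x, f(x), f_linear(x))   # accurate near x_op, diverges farther away
```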
|
# Control Systems/Common Nonlinearities
There are some nonlinearities that happen so frequently in physical
systems that they are called \"Common nonlinearities\". These common
nonlinearities include Hysteresis, Backlash, and Dead-zone.
## Hysteresis
Continuing with the example of a household thermostat, let\'s say that
your thermostat is set at 70 degrees (Fahrenheit). The furnace turns on,
and the house heats up to 70 degrees, and then the thermostat dutifully
turns the furnace off again. However, there is still a large amount of
residual heat left in the ducts, and the hot air from the vents on the
ground may not all have risen up to the level of the thermostat. This
means that after the furnace turns off, the house may continue to get
hotter, maybe even to uncomfortable levels.
So the furnace turns off, the house heats up to 80 degrees, and then the
air conditioner turns on. The temperature of the house cools down to 70
degrees again, and the A/C turns back off. However, the house continues
to cool down, and then it gets too cold, and the furnace needs to turn
back on.
As we can see from this example, a bang-bang controller, if poorly
designed, can cause big problems, and it can waste lots of energy. To
avoid this, we implement the idea of **Hysteresis**, which is a set of
threshold values that allow for overflow outputs. Implementing
hysteresis, our furnace now turns off when we get to 65 degrees, and the
house slowly warms up to 75 degrees, and doesn\'t turn on the A/C unit.
This is a far preferable solution.
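A minimal sketch of an on/off controller with a hysteresis band is shown below. The threshold values follow the thermostat numbers above; the state-update logic itself is an illustrative assumption, not a prescribed implementation.

```python
def furnace_command(temperature, furnace_on, low=65.0, high=75.0):
    """Bang-bang control with a hysteresis band between `low` and `high`."""
    if temperature <= low:
        return True        # too cold: turn the furnace on
    if temperature >= high:
        return False       # warm enough: turn the furnace off
    return furnace_on      # inside the band: keep the previous state

# sweep the temperature down and back up and watch where the switching happens
state = False
for temp in [72, 68, 65, 69, 73, 75, 71]:
    state = furnace_command(temp, state)
    print(temp, state)
```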
## Backlash
Backlash refers to the angle that the output shaft of a gearhead can
rotate without the input shaft moving. Backlash arises due to tolerance
in manufacturing; the gear teeth need some play to avoid jamming when
they mesh. An inexpensive gearhead may have backlash of a degree or
more, while more expensive precision gearheads have nearly zero
backlash. Backlash typically increases with the number of gear stages.
Some gear types, notably harmonic drive gears, are specifically designed
for near-zero backlash, usually by using flexible elements.
E.g.: Mechanical gear.
## Dead-Zone
A dead-zone is a kind of nonlinearity in which the system does not
respond to the given input until the input reaches a particular level,
or a condition in which the output becomes zero once the input crosses
a certain limiting value.
## Inverse Nonlinearities
### Inverse Backlash
### Inverse Dead-Zone
|
# Control Systems/Noise Driven Systems
## Noise-Driven Systems
Systems frequently have to deal with not only the control input *u*, but
also a random noise input *v*. In some disciplines, such as in a study
of electrical communication systems, the noise and the data signal can
be added together into a composite input *r = u + v*. However, in
studying control systems, we cannot combine these inputs together, for a
variety of different reasons:
1. The control input works to stabilize the system, and the noise input
works to destabilize the system.
2. The two inputs are independent random variables.
3. The two inputs may act on the system in completely different ways.
As we will show in the next example, it is frequently a good idea to
consider the noise and the control inputs separately:
## Probability Refresher
We are going to have a brief refresher here for calculus-based
probability, specifically focusing on the topics that we will use in the
rest of this chapter.
### Expectation
The expectation operator, **E**, is used to find the *expected, or mean
value* of a given random variable. The expectation operator is defined
as:
$$E[x] = \int_{-\infty}^\infty x f_x(x)dx$$
If we have two variables that are independent of one another, the
expectation of their product is the product of their individual
expectations; in particular, if either variable is zero-mean, the
expectation of the product is zero.
### Covariance
The **covariance** matrix, *Q*, is the expectation of a random vector
times its transpose:
$$E[x(t)x'(t)] = Q(t)$$
If we take the value of the *x* transpose at a different point in time,
we can calculate out the covariance as:
$$E[x(t)x'(s)] = Q(t)\delta(t-s)$$
Where δ is the impulse function.
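As a quick numerical sanity check (with an assumed 2 × 2 covariance, not a value from the text), we can estimate E[xx'] from samples of a zero-mean gaussian vector and compare it to the covariance used to generate the samples:

```python
import numpy as np

rng = np.random.default_rng(0)
Q_true = np.array([[2.0, 0.5],
                   [0.5, 1.0]])               # assumed covariance
x = rng.multivariate_normal([0.0, 0.0], Q_true, size=100_000)

Q_est = (x.T @ x) / len(x)                    # sample estimate of E[x x']
print(Q_est)                                  # close to Q_true
```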
## Noise-Driven System Description
We can define the state equation to a system incorporating a noise
vector *v*:
$$x'(t) = A(t)x(t) + H(t)u(t) +B(t)v(t)$$
For generality, we will discuss the case of a time-variant system.
Time-invariant system results will then be a simplification of the
time-variant case. Also, we will assume that *v* is a **gaussian random
variable**. We do this because physical systems frequently approximate
gaussian processes, and because there is a large body of mathematical
tools that we can use to work with these processes. We will assume our
gaussian process has zero-mean.
## Mean System Response
We would like to find out how our system will respond to the new noisy
input. Every system iteration will have a different response that varies
with the noise input, but the average of all these iterations should
converge to a single value.
For the system with zero control input, we have:
$$x'(t) = A(t)x(t) + B(t)v(t)$$
For which we know our general solution is given as:
$$x(t) = \phi(t, t_0)x_0 + \int_{t_0}^t \phi(t, \tau)B(\tau)v(\tau)d\tau$$
If we take the **expected value** of this function, it should give us
the expected value of the output of the system. In other words, we would
like to determine what the expected output of our system is going to be
by adding a new, noise input.
$$E[x(t)] = E[\phi(t, t_0)x_0] + E[\int_{t_0}^t \phi(t, \tau)B(\tau)v(\tau)d\tau]$$
In the second term of this equation, neither φ nor B are random
variables, and therefore they can come outside of the expectation
operation. Since *v* is zero-mean, the expectation of it is zero.
Therefore, the second term is zero. In the first term, φ is not a
random variable, but x~0~ does create a dependency on the output of
*x*(t), and we need to take the expectation of it. This means that:
$$E[x(t)] = \phi(t, t_0)E[x_0]$$
In other words, the expected output of the system is, on average, the
value that the output would be if there were no noise. Notice that if
our noise vector *v* was not zero-mean, and if it was not gaussian, this
result would not hold.
## System Covariance
We are now going to analyze the covariance of the system with a noisy
input. We multiply our system solution by its transpose, and take the
expectation: `<small>`{=html}(this equation is long and might break onto
multiple lines)`</small>`{=html}
$$E[x(t)x'(t)] = E\left[\left(\phi(t, t_0)x_0 + \int_{t_0}^t\phi(t, \tau)B(\tau)v(\tau)d\tau\right)\left(\phi(t, t_0)x_0 + \int_{t_0}^t\phi(t, \tau)B(\tau)v(\tau)d\tau\right)'\right]$$
If we multiply this out term by term, and cancel out the expectations
that have a zero-value, we get the following result:
$$E[x(t)x'(t)] = \phi(t, t_0)E[x_0x_0']\phi'(t, t_0) = P$$
We call this result *P*, and we can find the first derivative of P by
using the product rule:
$$P'(t) = A(t)\phi(t, t_0)P_0\phi'(t, t_0) + \phi(t, t_0)P_0\phi'(t, t_0)A'(t)$$
Where
$$P_0 = E[x_0x_0']$$
We can reduce this to:
$$P'(t) = A(t)P(t) + P(t)A'(t) + B(t)Q(t)B'(t)$$
In other words, we can analyze the system *without needing to calculate
the state-transition matrix*. This is a good thing, because it can often
be very difficult to calculate the state-transition matrix.
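For the time-invariant case, the steady-state covariance satisfies 0 = AP + PA' + BQB', which is a Lyapunov equation. The sketch below solves it with SciPy for assumed example matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                 # assumed system matrix
B = np.array([[0.0],
              [1.0]])                        # assumed noise input matrix
Q = np.array([[1.0]])                        # assumed noise intensity

# solve A P + P A' = -B Q B' for the steady-state covariance P
P = solve_continuous_lyapunov(A, -B @ Q @ B.T)
print(P)
```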
## Alternate Analysis
Let us look again at our general solution:
$$x(t) = \phi(t, t_0)x(t_0) + \int_{t_0}^t \phi(t, \tau)B(\tau)v(\tau)d\tau$$
We can run into a problem because, for a gaussian distribution,
particularly in systems with high (or even infinite) variance,
the value of *v* can momentarily become undefined (approach infinity),
which will cause the value of *x* to likewise become undefined at
certain points. This is unacceptable, and makes further analysis of this
problem difficult. Let us look again at our original equation, with zero
control input:
$$x'(t) = A(t)x(t)+B(t)v(t)$$
We can multiply both sides by *dt*, and get the following result:
$$dx = A(t)x(t)dt + B(t)v(t)dt$$
We can define a new differential, *dw(t)*, which is an infinitesimal
function of time as:
$$dw(t) = v(t)dt$$
Now, we can integrate both sides of this equation:
$$x(t) = x(t_0) + \int_{t_0}^t A(\tau)x(\tau)d\tau + \int_{t_0}^tB(\tau)dw(\tau)$$
However, this leads us to an
unusual place, and one for which we are (probably) not prepared to
continue further: in the third term on the right-hand side, we are
attempting to integrate with respect to a *function*, not a *variable*.
In this instance, the standard Riemann integrals that we are all
familiar with cannot solve this equation. There are advanced techniques
known as **Ito Calculus** however that can solve this equation, but
these methods are currently outside the scope of this book.
|
# Control Systems/Digital Control Systems
## Digital Systems
Digital systems, expressed previously as difference equations or
Z-Transform transfer functions can also be used with the state-space
representation. Also, all the same techniques for dealing with analog
systems can be applied to digital systems, with only minor changes.
## Digital Systems
For digital systems, we can write similar equations, using discrete data
sets:
$$x[k + 1] = Ax[k] + Bu[k]$$
$$y[k] = Cx[k] + Du[k]$$
### Zero-Order Hold Derivation
If we have a continuous-time state equation:
$$x'(t) = Ax(t) + Bu(t)$$
We can derive the digital version of this equation that we discussed
above. We take the Laplace transform of our equation:
$$X(s) = (sI - A)^{-1}Bu(s) + (sI - A)^{-1}x(0)$$
Now, taking the inverse Laplace transform gives us our time-domain
system, keeping in mind that the inverse Laplace transform of the *(sI -
A)*^-1^ term is our state-transition matrix, Φ:
$$x(t) = \mathcal{L}^{-1}(X(s)) = \Phi(t - t_0)x(0) + \int_{t_0}^t\Phi(t - \tau)Bu(\tau)d\tau$$
Now, we apply a zero-order hold on our input, to make the system
digital. Notice that we set our start time *t~0~ = kT*, because we are
only interested in the behavior of our system during a single sample
period:
$$u(t) = u(kT), kT \le t \le (k+1)T$$
$$x(t) = \Phi(t, kT)x(kT) + \int_{kT}^t \Phi(t, \tau)Bd\tau u(kT)$$
We were able to remove *u(kT)* from the integral because it did not rely
on τ. We now define a new function, Γ, as follows:
$$\Gamma(t, t_0) = \int_{t_0}^t \Phi(t, \tau)Bd\tau$$
Inserting this new expression into our equation, and setting *t = (k +
1)T* gives us:
$$x((k + 1)T) = \Phi((k+1)T, kT)x(kT) + \Gamma((k+1)T, kT)u(kT)$$
Now Φ(T) and Γ(T) are constant matrices, and we can give them new names.
The *d* subscript denotes that they are digital versions of the
coefficient matrices:
$$A_d = \Phi((k+1)T, kT)$$
$$B_d = \Gamma((k+1)T, kT)$$
We can use these values in our state equation, converting to our bracket
notation instead:
$$x[k + 1] = A_dx[k] + B_du[k]$$
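In practice the matrices A~d~ = Φ and B~d~ = Γ are computed numerically. A minimal sketch using SciPy's zero-order-hold discretization, with assumed example matrices and sample time, is:

```python
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # assumed continuous-time matrices
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.1                                      # assumed sample period

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method='zoh')
print(Ad)    # Phi((k+1)T, kT)
print(Bd)    # Gamma((k+1)T, kT)
```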
## Relating Continuous and Discrete Systems
Continuous and discrete systems that perform similarly can be related
together through a set of relationships. It should come as no surprise
that a discrete system and a continuous system will have different
characteristics and different coefficient matrices. If we consider that
a discrete system is the same as a continuous system, except that it is
sampled with a sampling time T, then the relationships below will hold.
The process of converting an analog system for use with digital hardware
is called **discretization**. We\'ve given a basic introduction to
discretization already, but we will discuss it in more detail here.
### Discrete Coefficient Matrices
Of primary importance in discretization is the computation of the
associated coefficient matrices from the continuous-time counterparts.
If we have the continuous system *(A, B, C, D)*, we can use the
relationship *t = kT* to transform the state-space solution into a
sampled system:
$$x(kT) = e^{AkT}x(0) + \int_0^{kT} e^{A(kT - \tau)}Bu(\tau)d\tau$$
$$x[k] = e^{AkT}x[0] + \int_0^{kT} e^{A(kT - \tau)}Bu(\tau)d\tau$$
Now, if we want to analyze the *k+1* term, we can solve the equation
again:
$$x[k+1] = e^{A(k+1)T}x[0] + \int_0^{(k+1)T} e^{A((k+1)T - \tau)}Bu(\tau)d\tau$$
Separating out the variables, and breaking the integral into two parts
gives us:
$$x[k+1] = e^{AT}e^{AkT}x[0] + \int_0^{kT}e^{AT}e^{A(kT - \tau)}Bu(\tau)d\tau + \int_{kT}^{(k+1)T} e^{A(kT + T - \tau)}Bu(\tau)d\tau$$
If we substitute in a new variable *α = (k + 1)T - τ*, and if we use the
following relationship:
$$e^{AkT}x[0] = x[k]$$
We get our final result:
$$x[k+1] = e^{AT}x[k] + \left(\int_0^T e^{A\alpha}d\alpha\right)Bu[k]$$
Comparing this equation to our regular solution gives us a set of
relationships for converting the continuous-time system into a
discrete-time system. Here, we will use \"d\" subscripts to denote the
system matrices of a discrete system, and we will use a \"c\" subscript
to denote the system matrices of a continuous system.
$$A_d = e^{A_cT}$$
$$B_d = \int_0^Te^{A\tau}d\tau B_c$$
$$C_d = C_c$$
$$D_d = D_c$$
If the A~c~ matrix is nonsingular, and we can find its inverse, we can
instead define B~d~ as:
$$B_d = A_c^{-1}(A_d - I)B_c$$
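These closed-form relationships are easy to check numerically. The sketch below uses the matrix exponential and the nonsingular-A~c~ shortcut for the same assumed matrices as in the earlier discretization example, and reproduces the zero-order-hold result:

```python
import numpy as np
from scipy.linalg import expm

Ac = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed continuous matrices
Bc = np.array([[0.0], [1.0]])
T = 0.1

Ad = expm(Ac * T)                                # A_d = e^{A_c T}
Bd = np.linalg.inv(Ac) @ (Ad - np.eye(2)) @ Bc   # valid because A_c is nonsingular
print(Ad)
print(Bd)
```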
The differences in the discrete and continuous matrices are due to the
fact that the underlying equations that describe our systems are
different. Continuous-time systems are represented by linear
differential equations, while the digital systems are described by
difference equations. High order terms in a difference equation are
delayed copies of the signals, while high order terms in the
differential equations are derivatives of the analog signal.
If we have a complicated analog system, and we would like to implement
that system in a digital computer, we can use the above transformations
to make our matrices conform to the new paradigm.
### Notation
Because the coefficient matrices for the discrete systems are computed
differently from the continuous-time coefficient matrices, and because
the matrices technically represent different things, it is not uncommon
in the literature to denote these matrices with different variables. For
instance, the following variables are used in place of *A* and *B*
frequently:
$$\Omega = A_d$$
$$R = B_d$$
These substitutions would give us a system defined by the ordered
quadruple *(Ω, R, C, D)* for representing our equations.
As a matter of notational convenience, we will use the letters *A* and
*B* to represent these matrices throughout the rest of this book.
## Converting Difference Equations
## Solving for x\[n\]
We can find a general time-invariant solution for the discrete time
difference equations. Let us start working up a pattern. We know the
discrete state equation:
$$x[n+1] = Ax[n] + Bu[n]$$
Starting from time *n = 0*, we can start to create a pattern:
$$x[1] = Ax[0] + Bu[0]$$
$$x[2] = Ax[1] + Bu[1] = A^2x[0] + ABu[0] + Bu[1]$$
$$x[3] = Ax[2] + Bu[2] = A^3x[0] + A^2Bu[0] + ABu[1] + Bu[2]$$
With a little algebraic trickery, we can reduce this pattern to a single
equation:
$$x[n] = A^nx[n_0] + \sum_{m=0}^{n-1}A^{n-1-m}Bu[m]$$
Substituting this result into the output equation gives us:
$$y[n] = CA^nx[n_0] + \sum_{m=0}^{n-1}CA^{n-1-m}Bu[m] + Du[n]$$
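A short numerical check of this closed-form solution against direct iteration of the state equation (with assumed matrices and an arbitrary input sequence) looks like this:

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # assumed discrete system matrix
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
u = [1.0, 0.5, -0.25, 0.0, 2.0]          # arbitrary input sequence

# direct iteration of x[n+1] = A x[n] + B u[n]
x = x0.copy()
for un in u:
    x = A @ x + B * un

# closed form: x[n] = A^n x[0] + sum_m A^{n-1-m} B u[m]
n = len(u)
x_closed = np.linalg.matrix_power(A, n) @ x0
for m, um in enumerate(u):
    x_closed = x_closed + np.linalg.matrix_power(A, n - 1 - m) @ B * um

print(x.ravel(), x_closed.ravel())       # identical
```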
## Time Variant Solutions
If the system is time-variant, we have a general solution that is
similar to the continuous-time case:
$$x[n] = \phi[n, n_0]x[n_0] + \sum_{m = n_0}^{n-1} \phi[n, m+1]B[m]u[m]$$
$$y[n] = C[n]\phi[n, n_0]x[n_0] + C[n]\sum_{m = n_0}^{n-1} \phi[n, m+1]B[m]u[m] + D[n]u[n]$$
Where φ, the **state transition matrix**, is defined in a similar manner
to the state-transition matrix in the continuous case. However, some of
the properties in the discrete time are different. For instance, the
inverse of the state-transition matrix does not need to exist, and in
many systems it does not exist.
### State Transition Matrix
The discrete time state transition matrix is the unique solution of the
equation:
$$\phi[k+1, k_0] = A[k] \phi[k, k_0]$$
Where the following restriction must hold:
$$\phi[k_0, k_0] = I$$
From this definition, an obvious way to calculate this state transition
matrix presents itself:
$$\phi[k, k_0] = A[k - 1]A[k-2]A[k-3]\cdots A[k_0]$$
Or,
$$\phi[k, k_0] = \prod_{m = 1}^{k-k_0}A[k-m]$$
## MATLAB Calculations
MATLAB is a computer program, and therefore calculates all systems using
digital methods. The MATLAB function **lsim** is used to simulate a
continuous system with a specified input. This function works by calling
the **c2d** function, which converts a system *(A, B, C, D)* into the
equivalent discrete system. Once the system model is discretized, the function
passes control to the **dlsim** function, which is used to simulate
discrete-time systems with the specified input.
Because of this, simulation programs like MATLAB are subjected to
round-off errors associated with the discretization process.
## Sampler Systems
Let\'s say that we introduce a sampler into our system:
![](System_With_Sampler.png "System_With_Sampler.png"){width="600"}
Notice that after the sampler, we must introduce a reconstruction
circuit (described elsewhere) so that we may continue to keep the input,
output, and plant in the laplace domain. Notice that we denote the
reconstruction circuit with the symbol: Gr(s).
The preceding was a particularly simple example. However, the reader is
encouraged to solve for the transfer function for a system with a
sampler (and it\'s associated reconstructor) in the following places:
1. Before the feedback system
2. In the forward path, after the plant
3. In the reverse path
4. After the feedback loop
|
# Control Systems/Discrete-Time Stability
## Discrete-Time Stability
The stability analysis of a discrete-time or digital system is similar
to the analysis for a continuous time system. However, there are enough
differences that it warrants a separate chapter.
## Input-Output Stability
### Uniform Stability
An LTI causal system is uniformly BIBO stable if there exists a positive
constant L such that the following conditions:
$$x[n_0] = 0$$
$$\|u[n]\| \le k$$
$$k \ge 0$$
imply that
$$\|y[n]\| \le L$$
### Impulse Response Matrix
We can define the **impulse response matrix** of a discrete-time system
as:
$$G[n] = \left\{\begin{matrix}CA^{n-1}B & \mbox{ if } n > 0 \\ 0 & \mbox{ if } n \le 0\end{matrix}\right.$$
Or, in the general time-varying case:
$$G[n] = \left\{\begin{matrix}C\phi[n, n_0]B & \mbox{ if } n > 0 \\ 0 & \mbox{ if } n \le 0\end{matrix}\right.$$
A digital system is BIBO stable if and only if there exists a positive
constant *L* such that for all non-negative *k*:
$$\sum_{n = 0}^{k}\|G[n]\| \le L$$
## Stability of Transfer Function
A MIMO discrete-time system is BIBO stable if and only if every pole of
every transfer function in the transfer function matrix has a magnitude
less than 1. All poles of all transfer functions must exist inside the
unit circle on the Z plane.
## Lyapunov Stability
There is a discrete version of the Lyapunov stability theorem that
applies to digital systems. Given the **discrete Lyapunov equation**:
$$A^TMA - M = -N$$
We can use this version of the Lyapunov equation to define a condition
for stability in discrete-time systems:
## Poles and Eigenvalues
Every pole of G(z) is an eigenvalue of the system matrix A. Not every
eigenvalue of A is a pole of G(z). Like the poles of the transfer
function, all the eigenvalues of the system matrix must have magnitudes
less than 1. Mathematically:
$$\sqrt{\operatorname{Re}(z)^2 + \operatorname{Im}(z)^2} < 1$$
If the magnitude of the eigenvalues of the system matrix A, or the poles
of the transfer functions are greater than 1, the system is unstable.
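A one-line numerical check of this condition, for an assumed system matrix, is:

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [-0.1, 0.7]])                 # assumed discrete system matrix
mags = np.abs(np.linalg.eigvals(A))
print(mags, bool(np.all(mags < 1.0)))       # all magnitudes < 1 => stable
```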
## Finite Wordlengths
Digital computer systems have an inherent problem because implementable
computer systems have finite wordlengths to deal with. Some of the
issues are:
1. Real numbers can only be represented with a finite precision.
Typically, a computer system can only accurately represent a number
to a finite number of decimal points.
2. Because of the fact above, computer systems with feedback can
compound errors with each program iteration. Small errors in one
step of an algorithm can lead to large errors later in the program.
3. Integer numbers in computer systems have finite lengths. Because of
this, integer numbers will either **roll-over**, or **saturate**,
depending on the design of the computer system. Both situations can
create inaccurate results.
|
# Control Systems/System Delays
## Delays
A system can be built with an inherent **delay**. Delays are units that
cause a time-shift in the input signal, but that don\'t affect the
signal characteristics. An **ideal delay** is a delay system that
doesn\'t affect the signal characteristics at all, and that delays the
signal for an exact amount of time. Some delays, like processing delays
or transmission delays, are unintentional. Other delays however, such as
synchronization delays, are an integral part of a system. This chapter
will talk about how delays are utilized and represented in the Laplace
Domain. Once we represent a delay in the Laplace domain, it is an easy
matter, through change of variables, to express delays in other domains.
### Ideal Delays
An ideal delay causes the input function to be shifted forward in time
by a certain specified amount of time. Systems with an ideal delay cause
the system output to be delayed by a finite, predetermined amount of
time.
![](Ideal_Delay.svg "Ideal_Delay.svg"){width="400"}
## Time Shifts
Let\'s say that we have a function in time that is time-shifted by a
certain constant time period *T*. For convenience, we will denote this
function as *x(t - T)*. Now, we can show that the Laplace transform of
*x(t - T)* is the following:
$$\mathcal{L}\{x(t - T)\} \Leftrightarrow e^{-sT}X(s)$$
What this demonstrates is that time-shifts in the time-domain become
exponentials in the complex Laplace domain.
### Shifts in the Z-Domain
Since we know the following general relationship between the Z Transform
and the Star Transform:
$$z \Leftrightarrow e^{sT}$$
We can show what a time shift in a discrete time domain becomes in the Z
domain:
$$x((n-n_s)\cdot T)\equiv x[n - n_s] \Leftrightarrow z^{-n_s}X(z)$$
## Delays and Stability
A time-shift in the time domain becomes an exponential increase in the
Laplace domain. This would seem to show that a time shift can have an
effect on the stability of a system, and occasionally can cause a system
to become unstable. We define a new parameter called the **time margin**
as the amount of time that we can shift an input function before the
system becomes unstable. If the system can survive any arbitrary time
shift without going unstable, we say that the time margin of the system
is infinite.
## Delay Margin
When speaking of sinusoidal signals, it doesn\'t make sense to talk
about \"time shifts\", so instead we talk about \"phase shifts\".
Therefore, it is also common to refer to the time margin as the **phase
margin** of the system. The phase margin denotes the amount of phase
shift that we can apply to the system input before the system goes
unstable.
We denote the phase margin for a system with a lowercase Greek letter φ
(phi). Phase margin is defined as such for a second-order system:
$$\phi_m = \tan^{-1} \left[ \frac{2 \zeta}{(\sqrt{4 \zeta^4 + 1} - 2\zeta^2)^{1/2}}\right]$$
Oftentimes, the phase margin is approximated by the following
relationship:
$$\phi_m \approx 100\zeta$$
The Greek letter zeta (ζ) is a quantity called the **damping ratio**,
and we discuss this quantity in more detail in the next chapter.
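We can compare the exact expression against the 100ζ rule of thumb for an assumed damping ratio:

```python
import numpy as np

zeta = 0.5                                   # assumed damping ratio
phi_m = np.degrees(np.arctan(
    2 * zeta / np.sqrt(np.sqrt(4 * zeta**4 + 1) - 2 * zeta**2)))
print(phi_m, 100 * zeta)                     # about 51.8 degrees vs. 50
```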
## Transform-Domain Delays
The ordinary Z-Transform does not account for a system which experiences
an arbitrary time delay, or a processing delay. The Z-Transform can,
however, be modified to account for an arbitrary delay. This new version
of the Z-transform is frequently called the **Modified Z-Transform**,
although in some literature (notably in Wikipedia), it is known as the
**Advanced Z-Transform**.
### Delayed Star Transform
To demonstrate the concept of an ideal delay, we will show how the star
transform responds to a time-shifted input with a specified delay of
time *T*. The function $X^*(s, \Delta)$ is the delayed star transform
with a delay parameter Δ. The delayed star transform is defined in terms
of the star transform as such:
$$X^*(s, \Delta)
= \mathcal{L}^* \left\{ x(t - \Delta) \right\}
= X(s)e^{-\Delta T s}$$
As we can see, in the star transform, a time-delayed signal is
multiplied by a decaying exponential value in the transform domain.
### Delayed Z-Transform
Since we know that the Star Transform is related to the Z Transform
through the following change of variables:
$$z = e^{sT}$$
We can interpret the above result to show how the Z Transform responds
to a delay:
$$\mathcal{Z}(x[t - T]) = X(z)z^{-T}$$
This result is expected.
Now that we know how the Z transform responds to time shifts, it is
often useful to generalize this behavior into a form known as the
**Delayed Z-Transform**. The Delayed Z-Transform is a function of two
variables, *z* and Δ, and is defined as such:
$$X(z, \Delta)
= \mathcal{Z} \left\{ x(t - \Delta) \right\}
= \mathcal{Z} \left\{ X(s)e^{-\Delta T s} \right\}$$
And finally:
$$\mathcal{Z}(x[n], \Delta) = X(z, \Delta) = \sum_{n=-\infty}^\infty x[n - \Delta]z^{-n}$$
## Modified Z-Transform
The Delayed Z-Transform has some uses, but mathematicians and engineers
have decided that a more useful version of the transform was needed. The
new version of the Z-Transform, which is similar to the Delayed
Z-transform with a change of variables, is known as the **Modified
Z-Transform**. The Modified Z-Transform is defined in terms of the
delayed Z transform as follows:
$$X(z, m)
= X(z, \Delta)\big|_{\Delta \to 1 - m}
= \mathcal{Z} \left\{ X(s)e^{-\Delta T s} \right\}\big|_{\Delta \to 1 - m}$$
And it is defined explicitly:
$$X(z, m) = \mathcal{Z}(x[n], m) = \sum_{n = -\infty}^{\infty} x[n + m - 1]z^{-n}$$
|
# Control Systems/Sampled Data Systems
## Ideal Sampler
In this chapter, we are going to introduce the ideal sampler and the
**Star Transform**. First, we need to introduce (or review) the
**Geometric Series** infinite sum. The results of this sum will be very
useful in calculating the Star Transform, later.
Consider a sampler device that operates as follows: every *T* seconds,
the sampler reads the current value of the input signal at that exact
moment. The sampler then holds that value on the output for *T* seconds,
before taking the next sample. We have a generic input to this system,
*f(t)*, and our sampled output will be denoted *f\*(t)*. We can then
show the following relationship between the two signals:
$$f^{\,*}(t)=f(0)\big(\mathrm{u}(t\,-\,0)\,-\,\mathrm{u}(t\,-\,T)\big)\,+\,f(T)\big(\mathrm{u}(t\,-\,T)\,-\,\mathrm{u}(t\,-\,2T)\big)\,+\;\cdots\;+\,f(nT)\big(\mathrm{u}(t\,-\,nT)\,-\,\mathrm{u}(t\,-\,(n\,+\,1)T)\big)\,+\;\cdots$$
Note that the value of *f^\*^* at time *t* = 1.5 *T* is the same as at
time *t = T*. This relationship works for any fractional value.
Taking the Laplace Transform of this infinite sequence will yield us
with a special result called the **Star Transform**. The Star Transform
is also occasionally called the \"Starred Transform\" in some texts.
## Geometric Series
Before we talk about the Star Transform or even the Z-Transform, it is
useful for us to review the mathematical background behind solving
infinite series. Specifically, because of the nature of these
transforms, we are going to look at methods to solve for the sum of a
**geometric series**.
A geometric series is a sum of values with increasing exponents, as
such:
$$\sum_{k=0}^{n} ar^k = ar^0+ar^1+ar^2+ar^3+\cdots+ar^n \,$$
In the equation above, notice that each term in the series has a
coefficient value, a. We can optionally factor out this coefficient, if
the resulting equation is easier to work with:
$$a \sum_{k=0}^{n} r^k = a \left(r^0+r^1+r^2+r^3+\cdots+r^n \,\right)$$
Once we have a series in either of these formats, we can
conveniently solve for its total sum using the following
equation:
$$a \sum_{k=0}^{n} r^k = a\frac{1-r^{n+1}}{1-r}$$
Let\'s say that we start our series off at a number that isn\'t zero.
Let\'s say for instance that we start our series off at *n = 1* or *n =
100*. Let\'s see:
$$\sum_{k=m}^{n} ar^k = ar^m+ar^{m+1}+ar^{m+2}+ar^{m+3}+\cdots+ar^n \,$$
We can generalize the sum to this series as follows:
$$\sum_{k=m}^n ar^k=\frac{a(r^m-r^{n+1})}{1-r}$$
With that result out of the way, now we need to worry about making this
series converge. In the above sum, we know that n is approaching
infinity (because this is an *infinite sum*). Therefore, any term that
contains the variable n is a matter of worry when we are trying to make
this series converge. If we examine the above equation, we see that
there is one term in the entire result with an *n* in it, and from that,
we can set a fundamental inequality to govern the geometric series.
$$r^{n+1} < \infty$$
To satisfy this equation, we must satisfy the following condition:
$$|r| < 1$$
Therefore, we come to the final result: **The geometric series converges
if and only if the magnitude of *r* is less than one.**
## The Star Transform
The **Star Transform** is defined as such:
$$F^*(s) = \mathcal{L}^*[f(t)] = \sum_{k = 0}^\infty f(kT)e^{-skT}$$
The Star Transform depends on the sampling time *T* and is different for
a single signal depending on the frequency at which the signal is
sampled. Since the Star Transform is defined as an infinite series, it
is important to note that some inputs to the Star Transform will not
converge, and therefore some functions do not have a valid Star
Transform. Also, it is important to note that the Star Transform may
only be valid under a particular **region of convergence**. We will
cover this topic more when we discuss the Z-transform.
### Star ↔ Laplace
The Laplace Transform and the Star Transform are clearly related,
because we obtained the Star Transform by using the Laplace Transform on
a time-domain signal. However, the method to convert between the two
results can be a slightly difficult one. To find the Star Transform of a
Laplace function, we must take the residues of the Laplace equation, as
such:
$$X^*(s) = \sum \bigg[ \text{residues of } X(\lambda)\frac{1}{1-e^{-T(s-\lambda)}}\bigg]_{\text{at poles of } X(\lambda)}$$
This math is advanced for most readers, so we can also use an alternate
method, as follows:
$$X^*(s)=\frac{1}{T}\sum_{n=-\infty}^\infty X(s+jn\omega_s)+\frac{x(0)}{2}$$
Neither one of these methods are particularly easy, however, and
therefore we will not discuss the relationship between the Laplace
transform and the Star Transform any more than is absolutely necessary
in this book. Suffice it to say, however, that the Laplace transform and
the Star Transform *are related* mathematically.
### Star + Laplace
In some systems, we may have components that are both continuous and
discrete in nature. For instance, our feedback loop may consist of an
Analog-To-Digital converter, followed by a computer (for processing),
and then a Digital-To-Analog converter. In this case, the computer is
acting on a digital signal, but the rest of the system is acting on
continuous signals. Star transforms can interact with Laplace transforms
in some of the following ways:
### Convergence of the Star Transform
The Star Transform is defined as being an infinite series, so it is
critically important that the series converge (not reach infinity), or
else the result will be nonsensical. Since the Star Transform is a
geometric series (for many input signals), we can use geometric series
analysis to show whether the series converges, and even under what
particular conditions the series converges. The restrictions on the star
transform that allow it to converge are known as the **region of
convergence** (ROC) of the transform. Typically a transform must be
accompanied by the explicit mention of the ROC.
## The Z-Transform
Let us say now that we have a discrete data set that is sampled at
regular intervals. We can call this set *x\[n\]*:
`x[n] = [ x[0] x[1] x[2] x[3] x[4] ... ]`
we can utilize a special transform, called the Z-transform, to make
dealing with this set easier:
$$X(z) = \mathcal{Z}\left\{x[n]\right\} = \sum_{n = -\infty}^\infty x[n] z^{-n}$$
Like the Star Transform the Z Transform is defined as an infinite series
and therefore we need to worry about convergence. In fact, there are a
number of instances that have identical Z-Transforms, but different
regions of convergence (ROC). Therefore, when talking about the Z
transform, you must include the ROC, or you are missing valuable
information.
{{-}}
### Z Transfer Functions
Like the Laplace Transform, in the Z-domain we can use the input-output
relationship of the system to define a **transfer function**.
![](Z_Block.svg "Z_Block.svg"){width="400"}
The transfer function in the Z domain operates exactly the same as the
transfer function in the S Domain:
$$H(z) = \frac{Y(z)}{X(z)}$$
$$\mathcal{Z}\{h[n]\} = H(z)$$
Similarly, the value *h\[n\]* which represents the response of the
digital system is known as the **impulse response** of the system. It is
important to note, however, that the definition of an \"impulse\" is
different in the analog and digital domains.
### Inverse Z Transform
The **inverse Z Transform** is defined by the following path integral:
$$x[n] = \mathcal{Z}^{-1} \{X(z)\} = \frac{1}{2 \pi j} \oint_{C} X(z) z^{n-1} dz$$
Where *C* is a counterclockwise closed path encircling the origin and
entirely in the region of convergence (ROC). The contour or path, *C*,
must encircle all of the poles of *X(z)*.
This math is relatively advanced compared to some other material in this
book, and therefore little or no further attention will be paid to
solving the inverse Z-Transform in this manner. Z transform pairs are
heavily tabulated in reference texts, so many readers can consider that
to be the primary method of solving for inverse Z transforms. There are
a number of Z-transform pairs available in table form in **The
Appendix**.
### Final Value Theorem
Like the Laplace Transform, the Z Transform also has an associated final
value theorem:
$$\lim_{ n\to \infty} x[n] = \lim_{z \to 1} (z - 1) X(z)$$
This equation can be used to find the steady-state response of a system,
and also to calculate the steady-state error of the system.
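As a rough numerical illustration (the first-order system below is an assumption, not an example from the text), we can compare a long step-response simulation against the value predicted by the theorem:

```python
import numpy as np
from scipy.signal import dlti, dstep

# assumed discrete system H(z) = 0.2 / (z - 0.8), driven by a unit step
sys = dlti([0.2], [1.0, -0.8], dt=1.0)
t, y = dstep(sys, n=200)
print(np.asarray(y).squeeze()[-1])   # simulated steady state, approaches 1.0

# final value theorem: with X(z) = H(z) z/(z-1), (z-1)X(z) at z -> 1 is H(1)
print(0.2 / (1.0 - 0.8))             # 1.0
```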
## Star ↔ Z
The Z transform is related to the Star transform though the following
change of variables:
$$z = e^{sT}$$
Notice that in the Z domain, we don\'t maintain any information on the
sampling period, so converting to the Z domain from a Star Transformed
signal loses that information. When converting back to the star domain
however, the value for *T* can be re-insterted into the equation, if it
is still available.
Also of some importance is the fact that the Z transform as defined here
is bilateral (two-sided), while the Star Transform is unilateral
(one-sided). This means that we can only convert between the two
transforms if the sampled signal is zero for all values of *n \< 0*.
Because the two transforms are so closely related, it can be said that
the Z transform is simply a notational convenience for the Star
Transform. With that said, this book could easily use the Star Transform
for all problems, and ignore the added burden of Z transform notation
entirely. A common example of this is Richard Hamming\'s book
\"Numerical Methods for Scientists and Engineers\" which uses the
Fourier Transform for all problems, considering the Laplace, Star, and
Z-Transforms to be merely notational conveniences. However, the Control
Systems wikibook is under the impression that the correct utilization of
different transforms can make problems more easy to solve, and we will
therefore use a multi-transform approach.
### Z plane
*z* is a complex variable with a real part and an imaginary part. In
other words, we can define *z* as such:
$$z = \operatorname{Re}(z) + j\operatorname{Im}(z)$$
Since *z* can be broken down into two independent components, it often
makes sense to graph the variable *z* on the **Z-plane**. In the
Z-plane, the horizontal axis represents the real part of *z*, and the
vertical axis represents the imaginary part of *z*.
Notice also that if we define *z* in terms of the star-transform
relation:
$$z = e^{sT}$$
we can separate out *s* into real and imaginary parts:
$$s = \sigma + j\omega$$
We can plug this into our equation for *z*:
$$z = e^{(\sigma + j\omega)T} = e^{\sigma T} e^{j\omega T}$$
Through **Euler\'s formula**, we can separate out the complex
exponential as such:
$$z = e^{\sigma T} (\cos(\omega T) + j\sin(\omega T))$$
If we define two new variables, *M* and φ:
$$M = e^{\sigma T}$$
$$\phi = \omega T$$
We can write *z* in terms of *M* and φ. Notice that it is Euler\'s
equation:
$$z = M\cos(\phi) + jM\sin(\phi)$$
Which is clearly a polar representation of *z*, with the magnitude of
the polar function (*M*) based on the real-part of *s*, and the angle of
the polar function (φ) is based on the imaginary part of *s*.
### Region of Convergence
To best teach the region of convergence (ROC) for the Z-transform, we
will do a quick example.
Consider, as our example, the signal $x[n] = e^{-2n}$ for $n \ge 0$. Its
Z-transform is a geometric series with $a = 1$ and $r = (e^2z)^{-1}$:
$$X(z) = \sum_{n=0}^{\infty}\left((e^2z)^{-1}\right)^n = \lim_{n \to \infty} 1 \cdot \frac{1 - ((e^2z)^{-1})^{n+1}}{1 - (e^2z)^{-1}}$$
Again, we know that to make this series converge, we need to make the r
value less than 1:
$$|(e^2z)^{-1}| = \left|\frac{1}{e^2z}\right| < 1$$
$$|e^2z| > 1$$
And finally we obtain the region of convergence for this Z-transform:
$$|z| > \frac{1}{e^2}$$
### Laplace ↔ Z
There are no easy, direct ways to convert between the Laplace transform
and the Z transform directly. Nearly all methods of conversions
reproduce some aspects of the original equation faithfully, and
incorrectly reproduce other aspects. For some of the main mapping
techniques between the two, see the Z Transform Mappings
Appendix.
However, there are some topics that we need to discuss. First and
foremost, conversions between the Laplace domain and the Z domain *are
not linear*, this leads to some of the following problems:
1. $\mathcal{L}[G(z)H(z)] \ne G(s)H(s)$
2. $\mathcal{Z}[G(s)H(s)] \ne G(z)H(z)$
This means that when we combine two functions in one domain
multiplicatively, we must find a combined transform in the other domain.
Here is how we denote this combined transform:
$$\mathcal{Z}[G(s)H(s)] = \overline{GH}(z)$$
Notice that we use a horizontal bar over top of the multiplied
functions, to denote that we took the transform of the product, not of
the individual pieces. However, if we have a system that incorporates a
sampler, we can show a simple result. If we have the following format:
$$Y(s) = X^*(s)H(s)$$
Then we can put everything in terms of the Star Transform:
$$Y^*(s) = X^*(s)H^*(s)$$
and once we are in the star domain, we can do a direct change of
variables to reach the Z domain:
$$Y^*(s) = X^*(s)H^*(s) \to Y(z) = X(z)H(z)$$
Note that we can only make this equivalence relationship if the system
incorporates an ideal sampler, and therefore one of the multiplicative
terms is in the star domain.
### Example
## Z ↔ Fourier
By substituting variables, we can relate the Star transform to the
Fourier Transform as well:
$$e^{sT} = e^{j \omega}$$
$$e^{(\sigma + j \omega)T} = e^{j \omega}$$
If we assume that *T = 1*, we can relate the two equations together by
setting the real part of *s* to zero. Notice that the relationship
between the Laplace and Fourier transforms is mirrored here, where the
Fourier transform is the Laplace transform with no real-part to the
transform variable.
There are a number of discrete-time variants to the Fourier transform as
well, which are not discussed in this book. For more information about
these variants, see Digital Signal
Processing.
## Reconstruction
Some of the easiest reconstruction circuits are called \"Holding
circuits\". Once a signal has been transformed using the Star Transform
(passed through an ideal sampler), the signal must be \"reconstructed\"
using one of these hold systems (or an equivalent) before it can be
analyzed in a Laplace-domain system.
If we have a sampled signal denoted by the Star Transform $X^*(s)$, we
want to **reconstruct** that signal into a continuous-time waveform, so
that we can manipulate it using Laplace-transform techniques.
Let\'s say that we have the sampled input signal, a reconstruction
circuit denoted *G(s)*, and an output denoted with the Laplace-transform
variable *Y(s)*. We can show the relationship as follows:
$$Y(s) = X^*(s)G(s)$$
Reconstruction circuits then, are physical devices that we can use to
convert a digital, sampled signal into a continuous-time domain, so that
we can take the Laplace transform of the output signal.
### Zero order Hold
!Zero-Order Hold impulse
response
A **zero-order hold** circuit is a circuit that essentially inverts the
sampling process: The value of the sampled signal at time *t* is held on
the output for *T* time. The output waveform of a zero-order hold
circuit therefore looks like a staircase approximation to the original
waveform.
The transfer function for a zero-order hold circuit, in the Laplace
domain, is written as such:
$$G_{h0} = \frac{1 - e^{-Ts}}{s}$$
The Zero-order hold is the simplest reconstruction circuit, and (like
the rest of the circuits on this page) assumes zero processing delay in
converting between digital to analog.
center\|framed\|A continuous input signal (gray) and the sampled signal
with a zero-order hold (red)
### First Order Hold
!Impulse response of a first-order
hold.
The zero-order hold creates a step output waveform, but this isn\'t
always the best way to reconstruct the circuit. Instead, the
**First-Order Hold** circuit takes the derivative of the waveform at the
time *t*, and uses that derivative to make a guess as to where the
output waveform is going to be at time *(t + T)*. The first-order hold
circuit then \"draws a line\" from the current position to the expected
future position, as the output of the waveform.
$$G_{h1} = \frac{1 + Ts}{T} \left[ \frac{1 - e^{-Ts}}{s}\right]^2$$
Keep in mind, however, that the next value of the signal will probably
not be the same as the expected value of the next data point, and
therefore the first-order hold may have a number of discontinuities.
center\|framed\|An input signal (grey) and the first-order hold circuit
output (red)
### Fractional Order Hold
The Zero-Order hold outputs the current value onto the output, and keeps
it level throughout the entire bit time. The first-order hold uses the
function derivative to predict the next value, and produces a series of
ramp outputs to produce a fluctuating waveform. Sometimes however,
neither of these solutions are desired, and therefore we have a
compromise: **Fractional-Order Hold**. Fractional order hold acts like a
mixture of the other two holding circuits, and takes a fractional number
*k* as an argument. Notice that *k* must be between 0 and 1 for this
circuit to work correctly.
$$G_{hk} = (1 - ke^{-Ts}) \frac{1 - e^{-Ts}}{s} + \frac {k}{Ts^2} (1 - e^{-Ts})^2$$
This circuit is more complicated than either of the other hold circuits,
but sometimes added complexity is worth it if we get better performance
from our reconstruction circuit.
### Other Reconstruction Circuits
!Impulse response to a linear-approximation
circuit.
Another type of circuit that can be used is a **linear approximation**
circuit.
center\|framed\|An input signal (grey) and the output signal through a
linear approximation
circuit
## Further reading
- Hamming, Richard. \"Numerical Methods for Scientists and Engineers\"
- Digital Signal Processing/Z
Transform
- Complex Analysis/Residue
Theory
- Analog and Digital
Conversion
|
# Control Systems/Z Transform Mappings
## Z Transform Mappings
There are a number of different mappings that can be used to convert a
system from the complex Laplace domain into the Z-Domain. None of these
mappings are perfect, and every mapping requires a specific starting
condition, and focuses on a specific aspect to reproduce faithfully. One
such mapping that has already been discussed is the **bilinear
transform**, which, along with prewarping, can faithfully map the
various regions in the s-plane into the corresponding regions in the
z-plane. We will discuss some other potential mappings in this chapter,
and we will discuss the pros and cons of each.
## Bilinear Transform
The Bilinear transform converts from the Z-domain to the complex W
domain. The W domain is not the same as the Laplace domain, although
there are some similarities. Here are some of the similarities between
the Laplace domain and the W domain:
1. Stable poles are in the Left-Half Plane
2. Unstable poles are in the right-half plane
3. Marginally stable poles are on the vertical, imaginary axis
With that said, the bilinear transform can be defined as follows:
$$w = \frac{2}{T} \frac{z - 1}{z + 1}$$
$$z = \frac{1+(Tw/2)}{1-(Tw/2)}$$
Graphically, we can show that the bilinear transform operates as
follows:
![](Bilinear_Transform_Unwarped2.svg "Bilinear_Transform_Unwarped2.svg"){width="400"}
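SciPy applies the same mapping in the s-to-z direction as its 'bilinear' (Tustin) discretization method. The sketch below, with assumed matrices and sample time, simply confirms that stable left-half-plane poles land inside the unit circle, consistent with the properties listed above:

```python
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # assumed continuous-time system
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.1

Ad, *_ = cont2discrete((A, B, C, D), T, method='bilinear')
print(np.linalg.eigvals(A))                 # continuous poles: -1, -2 (stable)
print(np.abs(np.linalg.eigvals(Ad)))        # mapped poles: magnitudes < 1
```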
### Prewarping
The W domain is not the same as the Laplace domain, but if we employ the
process of **prewarping** before we take the bilinear transform, we can
make our results match more closely to the desired Laplace Domain
representation.
Using prewarping, we can show the effect of the bilinear transform
graphically:
![](Bilinear_Transform_Mapping.svg "Bilinear_Transform_Mapping.svg"){width="400"}
The shape of the graph before and after prewarping is the same as it is
without prewarping. However, the destination domain is the S-domain, not
the W-domain.
## Matched Z-Transform
If we have a function in the laplace domain that has been decomposed
using partial fraction expansion, we generally have an equation in the
form:
$$Y(s) = \frac{A}{s + \alpha_1} + \frac{B}{s + \alpha_2} + \frac{C}{s + \alpha_3} + ...$$
And once we are in this form, we can make a direct conversion between
the s and z planes using the following mapping:
$$s + \alpha = 1 - z^{-1}e^{-\alpha T}$$
Pro: A good direct mapping in terms of s and a single coefficient\
Con: Requires the Laplace-domain function to be decomposed using partial fraction expansion.
## Simpson\'s Rule
$$s = \frac{3}{T} \frac{z^2-1}{z^2+4z+1}$$
Con: Essentially multiplies the order of the transfer function by a factor of 2. This makes things difficult when you are trying to physically implement the system. It has been shown that this transform produces unstable roots (outside of the unit circle).
## (w, v) Transform
Given the following system:
$$Y(s) = G(s, z, z^\alpha)X(s)$$
Then:
$$w = \frac{2}{T} \frac{z-1}{z+1}$$
$$v(\alpha) = 1 - \alpha(1 - z^{-1}) + \frac{\alpha(\alpha - 1)}{z}(1-z^{-1})^2$$
And:
$$Y(z) = G(w, z, v(\alpha))\left[X(z) - \frac{x(0)}{1 + z^{-1}}\right]$$
Pro: Directly maps a function in terms of z and s, into a function in terms of only z.\
Con: Requires a function that is already in terms of s, z and α.
## Z-Forms
|
# Control Systems/Physical Models
## Physical Models
This page will serve as a refresher for various different engineering
disciplines on how physical devices are modeled. Models will be
displayed in both time-domain and Laplace-domain input/output
characteristics. The only information displayed here is that which has
been contributed by knowledgeable contributors.
## Electrical Systems
| Component | Time-Domain          | Laplace               | Fourier                            |
|-----------|----------------------|-----------------------|------------------------------------|
| Resistor  | $R$                  | $R$                   | $R$                                |
| Capacitor | $i = C\frac{dv}{dt}$ | $G(s) = \frac{1}{sC}$ | $G(j\omega) = \frac{1}{j\omega C}$ |
| Inductor  | $v = L\frac{di}{dt}$ | $G(s) = sL$           | $G(j\omega) = j\omega L$           |
## Mechanical Systems
## Civil/Construction Systems
## Chemical Systems
|
# Control Systems/Transforms Appendix
## Laplace Transform
When we talk about the Laplace transform, we are actually talking about
the version of the Laplace transform known as the **unilateral Laplace
Transform**. The other version, the **bilateral Laplace Transform** (not
related to the Bilinear Transform, below) is not used in this book.
The Laplace Transform is defined as:
$$F(s)
= \mathcal{L}[f(t)]
= \int_{0}^\infty f(t)e^{-st}dt$$
And the Inverse Laplace Transform is defined as:
$$f(t)
= \mathcal{L}^{-1} \left\{F(s)\right\}
= \frac{1}{2\pi j} \int_{\sigma-j\infty}^{\sigma+j\infty} F(s)e^{st}ds$$
### Table of Laplace Transforms
This is a table of common Laplace Transforms.
### Properties of the Laplace Transform
This is a table of the most important properties of the laplace
transform.
### Convergence of the Laplace Integral
### Properties of the Laplace Transform
## Fourier Transform
The Fourier Transform is used to break a time-domain signal into its
frequency domain components. The Fourier Transform is very closely
related to the Laplace Transform, and is only used in place of the
Laplace transform when the system is being analyzed in a frequency
context.
The Fourier Transform is defined as:
$$F(j\omega)
= \mathcal{F}[f(t)]
= \int_{-\infty}^\infty f(t) e^{-j\omega t} dt$$
And the Inverse Fourier Transform is defined as:
$$f(t)
= \mathcal{F}^{-1}\left\{F(j\omega)\right\}
= \frac{1}{2\pi}\int_{-\infty}^\infty F(j\omega) e^{j\omega t} d\omega$$
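As a brief illustration of the angular-frequency convention used above, the transform of an assumed unit-width rectangular pulse can be computed directly as an integral in SymPy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
w = sp.symbols('omega', positive=True)

# F(jω) = ∫ f(t) e^{-jωt} dt for a rectangular pulse f(t) = 1 on [-1, 1], 0 elsewhere
F = sp.integrate(sp.exp(-sp.I * w * t), (t, -1, 1))
print(sp.simplify(F))   # equivalent to 2*sin(omega)/omega
```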
### Table of Fourier Transforms
This is a table of common Fourier transforms.
### Table of Fourier Transform Properties
This is a table of common properties of the Fourier transform.
### Convergence of the Fourier Integral
### Properties of the Fourier Transform
## Z-Transform
The Z-transform is used primarily to give discrete data sets a
continuous representation in terms of the complex variable z. The
Z-transform is notationally very similar to the Star Transform, except
that the Z-transform does not explicitly account for the sampling
period. The Z-transform has a number of uses in the field of digital
signal processing and in the study of discrete signals in general, and
is useful because Z-transform results are extensively tabulated, whereas
Star-Transform results are not.
The Z Transform is defined as:
$$X(z)
= \mathcal{Z}[x[n]]
= \sum_{n = -\infty}^\infty x[n] z^{-n}$$
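A minimal numerical sketch of this definition (the sequence, its length, and the evaluation point are all assumed for illustration) simply evaluates the truncated sum:

```python
import numpy as np

def z_transform(x, z):
    """Evaluate X(z) = sum_n x[n] * z**(-n) for a finite causal sequence."""
    n = np.arange(len(x))
    return np.sum(np.asarray(x) * z ** (-n))

# Assumed test case: x[n] = 0.5**n truncated to 50 samples; for |z| > 0.5
# the infinite sum is 1/(1 - 0.5/z), so the truncated sum should be close.
x = 0.5 ** np.arange(50)
z = 2.0
print(z_transform(x, z))   # ~1.3333
print(1 / (1 - 0.5 / z))   # exact limit: 4/3
```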
### Inverse Z Transform
The inverse Z Transform is a highly complex transformation, and might be
inaccessible to students without a sufficient background in complex
analysis. However, students who are familiar with such contour integrals
are encouraged to perform some inverse Z transform calculations, to
verify that the formula produces the tabulated results.
$$x[n] = \frac{1}{2 \pi j} \oint_C X(z) z^{n-1} dz$$
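One way to carry out such a verification numerically, as a sketch rather than a rigorous evaluation, is to parametrize the contour $C$ as a circle inside the region of convergence and approximate the integral by a finite sum; the pair $x[n] = 0.5^n \leftrightarrow X(z) = z/(z-0.5)$ is an assumed test case:

```python
import numpy as np

def inverse_z(X, n, r=1.5, num_points=4096):
    """Numerically evaluate x[n] = (1/2πj) ∮ X(z) z^{n-1} dz
    on a circle of radius r lying inside the region of convergence."""
    theta = np.linspace(0, 2 * np.pi, num_points, endpoint=False)
    z = r * np.exp(1j * theta)
    dz = 1j * z * (2 * np.pi / num_points)   # dz along the circular contour
    return np.sum(X(z) * z ** (n - 1) * dz) / (2j * np.pi)

# Assumed test pair: x[n] = 0.5^n  <->  X(z) = z/(z - 0.5), ROC |z| > 0.5
X = lambda z: z / (z - 0.5)
for n in range(5):
    print(n, inverse_z(X, n).real)   # ≈ 1, 0.5, 0.25, 0.125, 0.0625
```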
### Z-Transform Tables
## Modified Z-Transform
The Modified Z-Transform is similar to the Z-transform, except that the
modified version allows for the system to be subjected to any arbitrary
delay, by design. The Modified Z-Transform is very useful when talking
about digital systems for which the processing time of the system is not
negligible. For instance, a slow computer system can be modeled as being
an instantaneous system with an output delay.
The modified Z transform is based on the delayed Z transform:
$$X(z, m)
= X(z, \Delta)|_{\Delta \to 1 - m}
= \mathcal{Z} \left\{ X(s)e^{-\Delta T s} \right\}|_{\Delta \to 1 - m}$$
## Star Transform
The Star Transform is a discrete transform that has similarities to both
the Z transform and the Laplace Transform. In fact, the Star Transform
can be said to be nearly analogous to the Z transform, except that the
Star transform explicitly accounts for the sampling time of the sampler.
The Star Transform is defined as:
$$F^*(s)
= \mathcal{L}^*[f(t)]
= \sum_{k = 0}^\infty f(kT)e^{-skT}$$
Star transform pairs can be obtained by plugging $z = e^{sT}$ into the
Z-transform pairs, above.
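The following sketch (with an assumed decay rate, sampling period, and test point $s$) checks this numerically: a truncated Star-transform sum for $f(t) = e^{-at}$ is compared against the Z-transform pair $1/(1 - e^{-aT}z^{-1})$ evaluated at $z = e^{sT}$:

```python
import numpy as np

a, T = 1.0, 0.1          # hypothetical decay rate and sampling period
s = 0.5 + 2j             # arbitrary test point with Re(s) > -a, so the sum converges

# Star transform as a truncated sum: F*(s) = sum_k f(kT) e^{-skT}
k = np.arange(2000)
star_sum = np.sum(np.exp(-a * k * T) * np.exp(-s * k * T))

# Z-transform pair of f(kT) = e^{-akT} is 1/(1 - e^{-aT} z^{-1});
# substituting z = e^{sT} should give the same value.
z = np.exp(s * T)
pair = 1 / (1 - np.exp(-a * T) / z)

print(star_sum, pair)    # the two values should agree closely
```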
## Bilinear Transform
The bilinear transform is used to convert an equation in the Z domain
into the arbitrary W domain, with the following properties:
1. roots inside the unit circle in the Z-domain will be mapped to roots
   on the left-half of the W plane.
2. roots outside the unit circle in the Z-domain will be mapped to
   roots on the right-half of the W plane.
3. roots on the unit circle in the Z-domain will be mapped onto the
   vertical axis in the W domain.
The bilinear transform can therefore be used to convert a Z-domain
equation into a form that can be analyzed using the Routh-Hurwitz
criteria. However, it is important to note that the W-domain is not the
same as the complex Laplace S-domain. To make the output of the bilinear
transform equal to the S-domain, the signal must be prewarped, to
account for the non-linear nature of the bilinear transform.
The Bilinear transform can also be used to convert an S-domain system
into the Z domain. Again, the input system must be prewarped prior to
applying the bilinear transform, or else the results will not be
correct.
The Bilinear transform is governed by the following variable
transformations:
$$z = \frac{(2/T) + w}{(2/T) - w},\quad w = \frac{2}{T} \frac{z - 1}{z + 1}$$
Where T is the sampling time of the discrete signal.
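In practice this substitution rarely has to be carried out by hand; as one sketch, `scipy.signal.bilinear` applies the $s \to \frac{2}{T}\frac{z-1}{z+1}$ mapping to a continuous-time transfer function (the first-order system below is an assumed example):

```python
from scipy.signal import bilinear

# Hypothetical continuous-time system H(s) = 1 / (s + 1)
num_s, den_s = [1.0], [1.0, 1.0]

T = 0.1                                   # sampling period
num_z, den_z = bilinear(num_s, den_s, fs=1.0 / T)
print(num_z, den_z)                       # numerator/denominator of H(z)
```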
Frequencies in the w domain are related to frequencies in the s domain
through the following relationship:
$$\omega_w = \frac{2}{T} \tan \left( \frac{ \omega_s T}{2} \right)$$
This relationship is called the **frequency warping characteristic** of
the bilinear transform. To counter-act the effects of frequency warping,
we can **pre-warp** the Z-domain equation using the inverse warping
characteristic. If the equation is prewarped before it is transformed,
the resulting poles of the system will line up more faithfully with
those in the s-domain.
$$\omega_s = \frac{2}{T} \arctan \left( \omega_w \frac{T}{2} \right)$$
Applying these transformations before applying the bilinear transform
actually enables direct conversions between the S-Domain and the
Z-Domain. The act of applying one of these frequency warping
characteristics to a function before transforming is called
**prewarping**.
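As a short sketch of prewarping in practice (the sampling period and corner frequency are assumed values), the critical frequency is warped with the tangent characteristic before the analog prototype is designed and transformed, so that the discrete-time response matches the desired frequency exactly at that point:

```python
import numpy as np
from scipy.signal import bilinear

T = 0.01                       # assumed sampling period
w_c = 2 * np.pi * 10.0         # desired corner frequency (rad/s), hypothetical

# Pre-warp the corner frequency so it lands exactly at w_c after the transform
w_pre = (2.0 / T) * np.tan(w_c * T / 2.0)

# First-order low-pass designed at the prewarped frequency: H(s) = w_pre / (s + w_pre)
num_z, den_z = bilinear([w_pre], [1.0, w_pre], fs=1.0 / T)
print(num_z, den_z)
```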
## Wikipedia Resources
- w:Laplace transform
- w:Fourier transform
- w:Z-transform
- w:Star transform
- w:Bilinear transform